Microsoft Windows [Version 10.0.18363.720]
(c) 2019 Microsoft Corporation. All rights reserved.
C:\Users\liuyupeng11>d:
D:\>cd D:\liuyupeng11\kafka_2.12-2.7.0\bin
D:\liuyupeng11\kafka_2.12-2.7.0\bin>cd windows
D:\liuyupeng11\kafka_2.12-2.7.0\bin\windows>dir
 Volume in drive D is data
 Volume Serial Number is 2ED9-9FC2
 Directory of D:\liuyupeng11\kafka_2.12-2.7.0\bin\windows
2021/01/19 16:33 <DIR> .
2021/01/19 16:33 <DIR> ..
2020/12/16 21:58 1,241 connect-distributed.bat
2020/12/16 21:58 1,239 connect-standalone.bat
2020/12/16 21:58 873 kafka-acls.bat
2020/12/16 21:58 885 kafka-broker-api-versions.bat
2020/12/16 21:58 876 kafka-configs.bat
2020/12/16 21:58 925 kafka-console-consumer.bat
2020/12/16 21:58 925 kafka-console-producer.bat
2020/12/16 21:58 883 kafka-consumer-groups.bat
2020/12/16 21:58 938 kafka-consumer-perf-test.bat
2020/12/16 21:58 885 kafka-delegation-tokens.bat
2020/12/16 21:58 883 kafka-delete-records.bat
2020/12/16 21:58 878 kafka-dump-log.bat
2020/12/16 21:58 884 kafka-leader-election.bat
2020/12/16 21:58 877 kafka-log-dirs.bat
2020/12/16 21:58 874 kafka-mirror-maker.bat
2020/12/16 21:58 900 kafka-preferred-replica-election.bat
2020/12/16 21:58 940 kafka-producer-perf-test.bat
2020/12/16 21:58 888 kafka-reassign-partitions.bat
2020/12/16 21:58 886 kafka-replica-verification.bat
2020/12/16 21:58 5,274 kafka-run-class.bat
2020/12/16 21:58 1,377 kafka-server-start.bat
2020/12/16 21:58 997 kafka-server-stop.bat
2020/12/16 21:58 972 kafka-streams-application-reset.bat
2020/12/16 21:58 875 kafka-topics.bat
2020/12/16 21:58 1,192 zookeeper-server-start.bat
2020/12/16 21:58 905 zookeeper-server-stop.bat
2020/12/16 21:58 1,026 zookeeper-shell.bat
              27 File(s)          30,298 bytes
               2 Dir(s)  989,896,036,352 bytes free
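(The listing above also contains the scripts for bringing up a local test environment; this walkthrough talks to an already-running broker instead, so the two commands below are only a sketch of that optional step, assuming the stock config files shipped with the distribution:
zookeeper-server-start.bat ..\..\config\zookeeper.properties
kafka-server-start.bat ..\..\config\server.properties)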
D:\liuyupeng11\kafka_2.12-2.7.0\bin\windows>kafka-console-producer.bat --help
Option                                   Description
------                                   -----------
--batch-size <Integer: size>             Number of messages to send in a single
                                         batch if they are not being sent
                                         synchronously. (default: 200)
--bootstrap-server <String: server to    REQUIRED unless --broker-list
  connect to>                            (deprecated) is specified. The server
                                         (s) to connect to. The broker list
                                         string in the form HOST1:PORT1,HOST2:
                                         PORT2.
--broker-list <String: broker-list>      DEPRECATED, use --bootstrap-server
                                         instead; ignored if --bootstrap-
                                         server is specified. The broker
                                         list string in the form HOST1:PORT1,
                                         HOST2:PORT2.
--compression-codec [String:             The compression codec: either 'none',
  compression-codec]                     'gzip', 'snappy', 'lz4', or 'zstd'.
                                         If specified without value, then it
                                         defaults to 'gzip'
--help                                   Print usage information.
--line-reader <String: reader_class>     The class name of the class to use for
                                         reading lines from standard in. By
                                         default each line is read as a
                                         separate message. (default: kafka.
                                         tools.
                                         ConsoleProducer$LineMessageReader)
--max-block-ms <Long: max block on       The max time that the producer will
  send>                                  block for during a send request
                                         (default: 60000)
--max-memory-bytes <Long: total memory   The total memory used by the producer
  in bytes>                              to buffer records waiting to be sent
                                         to the server. (default: 33554432)
--max-partition-memory-bytes <Long:      The buffer size allocated for a
  memory in bytes per partition>         partition. When records are received
                                         which are smaller than this size the
                                         producer will attempt to
                                         optimistically group them together
                                         until this size is reached.
                                         (default: 16384)
--message-send-max-retries <Integer>     Brokers can fail receiving the message
                                         for multiple reasons, and being
                                         unavailable transiently is just one
                                         of them. This property specifies the
                                         number of retries before the
                                         producer give up and drop this
                                         message. (default: 3)
--metadata-expiry-ms <Long: metadata     The period of time in milliseconds
  expiration interval>                   after which we force a refresh of
                                         metadata even if we haven't seen any
                                         leadership changes. (default: 300000)
--producer-property <String:             A mechanism to pass user-defined
  producer_prop>                         properties in the form key=value to
                                         the producer.
--producer.config <String: config file>  Producer config properties file. Note
                                         that [producer-property] takes
                                         precedence over this config.
--property <String: prop>                A mechanism to pass user-defined
                                         properties in the form key=value to
                                         the message reader. This allows
                                         custom configuration for a user-
                                         defined message reader. Default
                                         properties include:
                                         parse.key=true|false
                                         key.separator=<key.separator>
                                         ignore.error=true|false
--request-required-acks <String:         The required acks of the producer
  request required acks>                 requests (default: 1)
--request-timeout-ms <Integer: request   The ack timeout of the producer
  timeout ms>                            requests. Value must be non-negative
                                         and non-zero (default: 1500)
--retry-backoff-ms <Integer>             Before each retry, the producer
                                         refreshes the metadata of relevant
                                         topics. Since leader election takes
                                         a bit of time, this property
                                         specifies the amount of time that
                                         the producer waits before refreshing
                                         the metadata. (default: 100)
--socket-buffer-size <Integer: size>     The size of the tcp RECV size.
                                         (default: 102400)
--sync                                   If set message send requests to the
                                         brokers are synchronously, one at a
                                         time as they arrive.
--timeout <Integer: timeout_ms>          If set and the producer is running in
                                         asynchronous mode, this gives the
                                         maximum amount of time a message
                                         will queue awaiting sufficient batch
                                         size. The value is given in ms.
                                         (default: 1000)
--topic <String: topic>                  REQUIRED: The topic id to produce
                                         messages to.
--version                                Display Kafka version.
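(As a quick illustration of a few of these options, not part of the original session: the sketch below, reusing the same broker and topic as the next step, sends records synchronously with gzip compression and keyed input, so each line typed at the prompt would then be expected in the form key:value because of parse.key and key.separator.
kafka-console-producer.bat --bootstrap-server 192.168.7.126:9092 --topic jmc_category_upd_inst --sync --compression-codec gzip --property parse.key=true --property key.separator=:)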
D:\liuyupeng11\kafka_2.12-2.7.0\bin\windows>kafka-console-producer.bat --bootstrap-server 192.168.7.126:9092 --topic jmc_category_upd_inst
>{"result":0,"msg":"success","solution":null,"body":{"ts":1610430513,"type":"update","old":{"name":"22","code":"01","pid":99,"isLeaf":1,"status":0},"data":{"name":"12","code":"02","pid":88,"isLeaf":0,"status":1}}}
>
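(To confirm the JSON record above actually reached the topic, the matching console consumer from the same directory can be pointed at it; a quick check, assuming the broker at 192.168.7.126:9092 is reachable from this machine:
kafka-console-consumer.bat --bootstrap-server 192.168.7.126:9092 --topic jmc_category_upd_inst --from-beginning)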