Kafka-manager, a web-based Kafka management tool: detailed build and deployment guide (supports Kafka 0.8, 0.9, and 0.10+) (illustrated) (default port or any custom port)

Why write this post and install Kafka-manager in the first place?

 

 

Problem

  Unfortunately, Kafka does not ship with a decent built-in web UI: after you start the brokers there is nothing to look at, which is not very friendly. So we install a third-party Kafka management tool.

 

 

 

Features

  To simplify the work of developers and service engineers who maintain Kafka clusters, Yahoo built a web-based management tool called Kafka Manager. With it you can easily spot topics that are unevenly distributed across the cluster, or partitions that are unevenly spread across brokers.

  It supports managing multiple clusters, preferred replica election, replica reassignment, and topic creation. It is also a very handy tool for quickly browsing the state of a cluster.

Its features include:

  1. Manage multiple Kafka clusters
  2. Convenient inspection of cluster state (topics, brokers, replica distribution, partition distribution)
  3. Run preferred replica election
  4. Generate partition assignments based on the current state of the cluster
  5. Create topics with optional topic configs (0.8.1.1 has different configs from 0.8.2+)
  6. Delete topics (only supported on 0.8.2+, and delete.topic.enable=true must be set in the broker config)
  7. The topic list indicates which topics are marked for deletion (0.8.2+ only)
  8. Add partitions to an existing topic
  9. Update the configuration of an existing topic
  10. Batch-run partition reassignment across multiple topics
  11. Batch-run partition reassignment across multiple topics (with optional choice of which brokers to place partitions on)

  For the compile-and-build steps, you can refer to:

Deploying and installing the Kafka manager (kafka-manager)

or

Getting Started with Apache Kafka - Basic configuration and running of Kafka-manager

   I will not repeat the details here; just follow either of those walkthroughs.

    Download link:  https://pan.baidu.com/s/1jIE3YL4

     If this link stops working, leave a comment at the bottom of this post and I will send you the file free of charge.

Steps:

  1. Unzip kafka-manager-1.3.2.1.zip

lrwxrwxrwx.  1 hadoop hadoop       12 Apr 12 11:27 hadoop -> hadoop-2.6.0
drwxr-xr-x. 10 hadoop hadoop     4096 Apr 12 16:33 hadoop-2.6.0
lrwxrwxrwx.  1 hadoop hadoop       13 Apr 12 11:28 hbase -> hbase-0.98.19
drwxrwxr-x.  8 hadoop hadoop     4096 Apr 12 17:27 hbase-0.98.19
lrwxrwxrwx.  1 hadoop hadoop       10 Apr 12 11:28 hive -> hive-1.0.0
drwxrwxr-x.  8 hadoop hadoop     4096 Jul 26 08:14 hive-1.0.0
lrwxrwxrwx.  1 hadoop hadoop       11 Apr 12 10:18 jdk -> jdk1.7.0_79
drwxr-xr-x.  8 hadoop hadoop     4096 Apr 11  2015 jdk1.7.0_79
drwxr-xr-x.  8 hadoop hadoop     4096 Aug  5  2015 jdk1.8.0_60
lrwxrwxrwx   1 hadoop hadoop       19 Jul 27 17:27 kafka -> kafka_2.10-0.9.0.1/
drwxr-xr-x   7 hadoop hadoop     4096 Jul 28 19:55 kafka_2.10-0.9.0.1
drwxr-xr-x   6 hadoop hadoop     4096 May  3 22:01 kafka_2.11-0.8.2.2
-rw-r--r--   1 hadoop hadoop 60984831 Jul 28 20:47 kafka-manager-1.3.2.1.zip
lrwxrwxrwx   1 hadoop hadoop       26 Apr 21 22:18 kibana -> kibana-4.6.3-linux-x86_64/
drwxrwxr-x  11 hadoop hadoop     4096 Nov  4  2016 kibana-4.6.3-linux-x86_64
lrwxrwxrwx   1 hadoop hadoop       37 Jul  3 22:18 phoenix -> apache-phoenix-4.10.0-HBase-0.98-bin/
lrwxrwxrwx   1 hadoop hadoop       13 Jun  8 09:44 scala -> scala-2.11.8/
drwxrwxr-x   9 hadoop hadoop     4096 Feb 27  2015 scala-2.10.5
drwxrwxr-x   6 hadoop hadoop     4096 Mar  4  2016 scala-2.11.8
lrwxrwxrwx   1 hadoop hadoop       12 May  1 19:35 snappy -> snappy-1.1.3
drwxr-xr-x   6 hadoop hadoop     4096 May  1 19:40 snappy-1.1.3
lrwxrwxrwx   1 hadoop hadoop       26 Jun  8 00:39 spark -> spark-1.6.1-bin-hadoop2.6/
drwxr-xr-x  12 hadoop hadoop     4096 Feb 27  2016 spark-1.6.1-bin-hadoop2.6
lrwxrwxrwx.  1 hadoop hadoop       11 Apr 12 11:28 sqoop -> sqoop-1.4.6
drwxr-xr-x.  9 hadoop hadoop     4096 May 19 10:31 sqoop-1.4.6
lrwxrwxrwx   1 hadoop hadoop       19 May 21 17:21 storm -> apache-storm-1.0.2/
lrwxrwxrwx   1 hadoop hadoop       34 Jun  8 11:20 zeppelin -> zeppelin-0.5.6-incubating-bin-all/
drwxr-xr-x  11 hadoop hadoop     4096 Jun  8 11:33 zeppelin-0.5.6-incubating-bin-all
lrwxrwxrwx.  1 hadoop hadoop       15 Apr 12 11:28 zookeeper -> zookeeper-3.4.6
drwxr-xr-x. 10 hadoop hadoop     4096 Apr 12 17:13 zookeeper-3.4.6
[hadoop@master app]$ unzip kafka-manager-1.3.2.1.zip 

   2. cd into kafka-manager-1.3.2.1

lrwxrwxrwx.  1 hadoop hadoop       10 Apr 12 11:28 hive -> hive-1.0.0
drwxrwxr-x.  8 hadoop hadoop     4096 Jul 26 08:14 hive-1.0.0
lrwxrwxrwx.  1 hadoop hadoop       11 Apr 12 10:18 jdk -> jdk1.7.0_79
drwxr-xr-x.  8 hadoop hadoop     4096 Apr 11  2015 jdk1.7.0_79
drwxr-xr-x.  8 hadoop hadoop     4096 Aug  5  2015 jdk1.8.0_60
lrwxrwxrwx   1 hadoop hadoop       19 Jul 27 17:27 kafka -> kafka_2.10-0.9.0.1/
drwxr-xr-x   7 hadoop hadoop     4096 Jul 28 19:55 kafka_2.10-0.9.0.1
drwxr-xr-x   6 hadoop hadoop     4096 May  3 22:01 kafka_2.11-0.8.2.2
drwxrwxr-x   6 hadoop hadoop     4096 Jul 28 20:51 kafka-manager-1.3.2.1
-rw-r--r--   1 hadoop hadoop 60984831 Jul 28 20:47 kafka-manager-1.3.2.1.zip
lrwxrwxrwx   1 hadoop hadoop       26 Apr 21 22:18 kibana -> kibana-4.6.3-linux-x86_64/
drwxrwxr-x  11 hadoop hadoop     4096 Nov  4  2016 kibana-4.6.3-linux-x86_64
lrwxrwxrwx   1 hadoop hadoop       37 Jul  3 22:18 phoenix -> apache-phoenix-4.10.0-HBase-0.98-bin/
lrwxrwxrwx   1 hadoop hadoop       13 Jun  8 09:44 scala -> scala-2.11.8/
drwxrwxr-x   9 hadoop hadoop     4096 Feb 27  2015 scala-2.10.5
drwxrwxr-x   6 hadoop hadoop     4096 Mar  4  2016 scala-2.11.8
lrwxrwxrwx   1 hadoop hadoop       12 May  1 19:35 snappy -> snappy-1.1.3
drwxr-xr-x   6 hadoop hadoop     4096 May  1 19:40 snappy-1.1.3
lrwxrwxrwx   1 hadoop hadoop       26 Jun  8 00:39 spark -> spark-1.6.1-bin-hadoop2.6/
drwxr-xr-x  12 hadoop hadoop     4096 Feb 27  2016 spark-1.6.1-bin-hadoop2.6
lrwxrwxrwx.  1 hadoop hadoop       11 Apr 12 11:28 sqoop -> sqoop-1.4.6
drwxr-xr-x.  9 hadoop hadoop     4096 May 19 10:31 sqoop-1.4.6
lrwxrwxrwx   1 hadoop hadoop       19 May 21 17:21 storm -> apache-storm-1.0.2/
lrwxrwxrwx   1 hadoop hadoop       34 Jun  8 11:20 zeppelin -> zeppelin-0.5.6-incubating-bin-all/
drwxr-xr-x  11 hadoop hadoop     4096 Jun  8 11:33 zeppelin-0.5.6-incubating-bin-all
lrwxrwxrwx.  1 hadoop hadoop       15 Apr 12 11:28 zookeeper -> zookeeper-3.4.6
drwxr-xr-x. 10 hadoop hadoop     4096 Apr 12 17:13 zookeeper-3.4.6
[hadoop@master app]$ rm kafka-manager-1.3.2.1.zip 
[hadoop@master app]$ cd kafka-manager-1.3.2.1/

   3. Edit the conf/application.conf file, in particular the kafka-manager.zkhosts setting

[hadoop@master kafka-manager-1.3.2.1]$ pwd
/home/hadoop/app/kafka-manager-1.3.2.1
[hadoop@master kafka-manager-1.3.2.1]$ ll
total 32
drwxrwxr-x 2 hadoop hadoop  4096 Jul 28 20:51 bin
drwxrwxr-x 2 hadoop hadoop  4096 Jul 28 20:51 conf
drwxrwxr-x 2 hadoop hadoop 12288 Jul 28 20:51 lib
-rw-r--r-- 1 hadoop hadoop  6323 Feb 22 08:55 README.md
drwxrwxr-x 3 hadoop hadoop  4096 Jul 28 20:51 share
[hadoop@master kafka-manager-1.3.2.1]$ cd conf/
[hadoop@master conf]$ 
[hadoop@master conf]$ pwd
/home/hadoop/app/kafka-manager-1.3.2.1/conf
[hadoop@master conf]$ ll
total 24
-rw-r--r-- 1 hadoop hadoop 1277 Feb 22 11:07 application.conf
-rw-r--r-- 1 hadoop hadoop   27 Feb 22 08:55 consumer.properties
-rw-r--r-- 1 hadoop hadoop 2108 Feb 22 08:55 logback.xml
-rw-r--r-- 1 hadoop hadoop 1367 Feb 22 08:55 logger.xml
-rw-r--r-- 1 hadoop hadoop 7167 Feb 22 08:55 routes
[hadoop@master conf]$ vim application.conf 

  Below is the default configuration file, pasted here for reference:

# Copyright 2015 Yahoo Inc. Licensed under the Apache License, Version 2.0
# See accompanying LICENSE file.

# This is the main configuration file for the application.
# ~~~~~

# Secret key
# ~~~~~
# The secret key is used to secure cryptographics functions.
# If you deploy your application to several instances be sure to use the same key!
play.crypto.secret="^<csmm5Fx4d=r2HEX8pelM3iBkFVv?k[mc;IZE<_Qoq8EkX_/7@Zt6dP05Pzea3U"
play.crypto.secret=${?APPLICATION_SECRET}

# The application languages
# ~~~~~
play.i18n.langs=["en"]

play.http.requestHandler = "play.http.DefaultHttpRequestHandler"
play.http.context = "/"
play.application.loader=loader.KafkaManagerLoader

kafka-manager.zkhosts="localhost:2181"
kafka-manager.zkhosts=${?ZK_HOSTS}
pinned-dispatcher.type="PinnedDispatcher"
pinned-dispatcher.executor="thread-pool-executor"
application.features=["KMClusterManagerFeature","KMTopicManagerFeature","KMPreferredReplicaElectionFeature","KMReassignPartitionsFeature"]

akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "INFO"
}


basicAuthentication.enabled=false
basicAuthentication.username="admin"
basicAuthentication.password="password"
basicAuthentication.realm="Kafka-Manager"


kafka-manager.consumer.properties.file=${?CONSUMER_PROPERTIES_FILE}

  Now modify kafka-manager.zkhosts.

  My ZooKeeper nodes are master, slave1 and slave2; adjust the host names to match your own machines.


kafka-manager.zkhosts="master:2181,slave1:2181,slave2:2181"
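
  Note that the default configuration also reads this value from the ZK_HOSTS environment variable (the kafka-manager.zkhosts=${?ZK_HOSTS} line above), so as an alternative sketch — assuming the same three ZooKeeper nodes — you could export that variable before starting instead of editing the file:

export ZK_HOSTS="master:2181,slave1:2181,slave2:2181"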


4. Run Kafka Manager

  Note: the default HTTP port is 9000.

  Run the command from your kafka-manager installation directory.

  In my case that is /home/hadoop/app/kafka-manager-1.3.2.1.

bin/kafka-manager -Dconfig.file=conf/application.conf

 

  or

nohup bin/kafka-manager -Dconfig.file=conf/application.conf &    (run it in the background)
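
  When started with nohup, the output is appended to nohup.out in the directory you started from (as the "appending output to nohup.out" message below shows), so you can follow the startup log with, for example:

tail -f nohup.out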

 

 

  Of course you can stick with the default port, or open a different one at startup; here I use port 10000.

   It is best to use absolute paths here.

  Again, run the command from your kafka-manager installation directory.

  In my case that is /home/hadoop/app/kafka-manager-1.3.2.1.

nohup bin/kafka-manager  -Dconfig.file=/home/hadoop/app/kafka-manager-1.3.2.1/conf/application.conf -Dhttp.port=10000 &
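
  If you prefer a small wrapper, here is a minimal sketch of a start script (a hypothetical helper; the path assumes my install directory). It removes a stale RUNNING_PID file left by a previous run — the cause of the "This application is already running" error shown further below — and then starts the manager on port 10000 in the background:

KM_HOME=/home/hadoop/app/kafka-manager-1.3.2.1
cd "$KM_HOME"
# a leftover RUNNING_PID from an earlier run prevents startup
[ -f RUNNING_PID ] && rm -f RUNNING_PID
nohup bin/kafka-manager -Dconfig.file="$KM_HOME/conf/application.conf" -Dhttp.port=10000 &
echo "kafka-manager started with PID $!"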


 5. Open a browser and go to http://IP:10000

[hadoop@master kafka-manager-1.3.2.1]$ pwd
/home/hadoop/app/kafka-manager-1.3.2.1
[hadoop@master kafka-manager-1.3.2.1]$ ll
total 32
drwxrwxr-x 2 hadoop hadoop  4096 Jul 28 20:51 bin
drwxrwxr-x 2 hadoop hadoop  4096 Jul 28 21:00 conf
drwxrwxr-x 2 hadoop hadoop 12288 Jul 28 20:51 lib
-rw-r--r-- 1 hadoop hadoop  6323 Feb 22 08:55 README.md
drwxrwxr-x 3 hadoop hadoop  4096 Jul 28 20:51 share
[hadoop@master kafka-manager-1.3.2.1]$ nohup bin/kafka-manager  -Dconfig.file=/home/hadoop/app/kafka-manager-1.3.2.1/conf/application.conf -Dhttp.port=10000 &
[1] 5930
[hadoop@master kafka-manager-1.3.2.1]$ nohup: ignoring input and appending output to `nohup.out'

[1]+  Exit 1                  nohup bin/kafka-manager -Dconfig.file=/home/hadoop/app/kafka-manager-1.3.2.1/conf/application.conf -Dhttp.port=10000
[hadoop@master kafka-manager-1.3.2.1]$ 
[hadoop@master kafka-manager-1.3.2.1]$ cat nohup.out 
This application is already running (Or delete /home/hadoop/app/kafka-manager-1.3.2.1/RUNNING_PID file).
21:08:31,607 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
21:08:31,608 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
21:08:31,640 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/home/hadoop/app/kafka-manager-1.3.2.1/conf/logback.xml]
21:08:32,822 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
21:08:32,898 |-INFO in ch.qos.logback.core.joran.action.ConversionRuleAction - registering conversion word coloredLevel with class [play.api.Logger$ColoredLevel]
21:08:32,902 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender]
21:08:32,967 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [FILE]
21:08:33,311 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
21:08:33,634 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@18:27 - no applicable action for [totalSizeCap], current ElementPath  is [[configuration][appender][rollingPolicy][totalSizeCap]]
21:08:33,669 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy - No compression will be used
21:08:33,681 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy - Will use the pattern application.home_IS_UNDEFINED/logs/application.%d{yyyy-MM-dd}.log for the active file
21:08:33,708 |-INFO in c.q.l.core.rolling.DefaultTimeBasedFileNamingAndTriggeringPolicy - The date pattern is 'yyyy-MM-dd' from file name pattern 'application.home_IS_UNDEFINED/logs/application.%d{yyyy-MM-dd}.log'.
21:08:33,708 |-INFO in c.q.l.core.rolling.DefaultTimeBasedFileNamingAndTriggeringPolicy - Roll-over at midnight.
21:08:33,770 |-INFO in c.q.l.core.rolling.DefaultTimeBasedFileNamingAndTriggeringPolicy - Setting initial period to Fri Jul 28 21:08:33 CST 2017
21:08:33,796 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - Active log file name: application.home_IS_UNDEFINED/logs/application.log
21:08:33,796 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - File property is set to [application.home_IS_UNDEFINED/logs/application.log]
21:08:33,834 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
21:08:33,838 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
21:08:33,852 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
21:08:33,890 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.classic.AsyncAppender]
21:08:33,948 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [ASYNCFILE]
21:08:33,949 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [FILE] to ch.qos.logback.classic.AsyncAppender[ASYNCFILE]
21:08:33,949 |-INFO in ch.qos.logback.classic.AsyncAppender[ASYNCFILE] - Attaching appender named [FILE] to AsyncAppender.
21:08:33,950 |-INFO in ch.qos.logback.classic.AsyncAppender[ASYNCFILE] - Setting discardingThreshold to 51
21:08:33,953 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.classic.AsyncAppender]
21:08:33,954 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [ASYNCSTDOUT]
21:08:33,954 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to ch.qos.logback.classic.AsyncAppender[ASYNCSTDOUT]
21:08:33,954 |-INFO in ch.qos.logback.classic.AsyncAppender[ASYNCSTDOUT] - Attaching appender named [STDOUT] to AsyncAppender.
21:08:33,954 |-INFO in ch.qos.logback.classic.AsyncAppender[ASYNCSTDOUT] - Setting discardingThreshold to 51
21:08:33,956 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [play] to INFO
21:08:33,956 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [application] to INFO
21:08:33,956 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [kafka.manager] to INFO
21:08:33,960 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.avaje.ebean.config.PropertyMapLoader] to OFF
21:08:33,960 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.avaje.ebeaninternal.server.core.XmlConfigLoader] to OFF
21:08:33,960 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.avaje.ebeaninternal.server.lib.BackgroundThread] to OFF
21:08:33,960 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.gargoylesoftware.htmlunit.javascript] to OFF
21:08:33,961 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.apache.zookeeper] to INFO
21:08:33,961 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to WARN
21:08:33,962 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [ASYNCFILE] to Logger[ROOT]
21:08:33,962 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [ASYNCSTDOUT] to Logger[ROOT]
21:08:33,966 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
21:08:33,970 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@18cf1e03 - Registering current configuration as safe fallback point

[warn] o.a.c.r.ExponentialBackoffRetry - maxRetries too large (100). Pinning to 29
[info] k.m.a.KafkaManagerActor - Starting curator...
[info] o.a.z.ZooKeeper - Client environment:zookeeper.version=3.4.8--1, built on 02/06/2016 03:18 GMT
[info] o.a.z.ZooKeeper - Client environment:host.name=master
[info] o.a.z.ZooKeeper - Client environment:java.version=1.8.0_60
[info] o.a.z.ZooKeeper - Client environment:java.vendor=Oracle Corporation
[info] o.a.z.ZooKeeper - Client environment:java.home=/home/hadoop/app/jdk1.8.0_60/jre
[info] o.a.z.ZooKeeper - Client environment:java.class.path=/home/hadoop/app/kafka-manager-1.3.2.1/lib/../conf/:/home/hadoop/app/kafka-manager-1.3.2.1/lib/kafka-manager.kafka-manager-1.3.2.1-sans-externalized.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.scala-lang.scala-library-2.11.8.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.typesafe.play.twirl-api_2.11-1.1.1.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.apache.commons.commons-lang3-3.4.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.typesafe.play.play-server_2.11-2.4.6.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.typesafe.play.play_2.11-2.4.6.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.typesafe.play.build-link-2.4.6.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.typesafe.play.play-exceptions-2.4.6.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.javassist.javassist-3.19.0-GA.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.typesafe.play.play-iteratees_2.11-2.4.6.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.scala-stm.scala-stm_2.11-0.7.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.typesafe.config-1.3.0.jar:/home/hadoop/app/
kafka-manager-1.3.2.1/lib/com.typesafe.play.play-json_2.11-2.4.6.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.typesafe.play.play-functional_2.11-2.4.6.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.typesafe.play.play-datacommons_2.11-2.4.6.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/joda-time.joda-time-2.8.1.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.joda.joda-convert-1.7.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.fasterxml.jackson.datatype.jackson-datatype-jdk8-2.5.4.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.fasterxml.jackson.datatype.jackson-datatype-jsr310-2.5.4.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.typesafe.play.play-netty-utils-2.4.6.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.slf4j.jul-to-slf4j-1.7.12.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.slf4j.jcl-over-slf4j-1.7.12.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/ch.qos.logback.logback-core-1.1.3.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/ch.qos.logback.logback-classic-1.1.3.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.typesafe.akka.akka-actor_2.11-2.3.14.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.typesafe.akka.akka-slf4j_2.11-2.3.14.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/commons-codec.commons-codec-1.10.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/xerces.xercesImpl-2.11.0.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/xml-apis.xml-apis-1.4.01.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/javax.transaction.jta-1.1.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.google.inject.guice-4.0.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/javax.inject.javax.inject-1.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/aopalliance.aopalliance-1.0.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.google.guava.guava-16.0.1.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.google.inject.extensions.guice-assistedinject-4.0.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.typesafe.play.play-netty-server_2.11-2.4.6.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/io.netty.netty-3.10.4.Final.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.typesafe.netty.netty-http-pipelining-1.1.4.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.google.code.findbugs.jsr305-2.0.1.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.webjars.webjars-play_2.11-2.4.0-2.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.webjars.requirejs-2.1.20.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.webjars.webjars-locator-0.28.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.webjars.webjars-locator-core-0.27.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.apache.commons.commons-compress-1.9.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.webjars.npm.validate.js-0.8.0.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.webjars.bootstrap-3.3.5.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.webjars.jquery-2.1.4.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.webjars.backbonejs-1.2.3.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.webjars.underscorejs-1.8.3.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.webjars.dustjs-linkedin-2.6.1-1.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.webjars.json-20121008-1.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.apache.curator.curator-framework-2.10.0.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.apache.curator.curator-client-2.10.0.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/jline.jline-0.9.94.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.apache.curator.curator-recipes-2.10.0.jar:/home/hadoop/app/kafk
a-manager-1.3.2.1/lib/org.json4s.json4s-jackson_2.11-3.4.0.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.json4s.json4s-core_2.11-3.4.0.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.json4s.json4s-ast_2.11-3.4.0.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.json4s.json4s-scalap_2.11-3.4.0.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.thoughtworks.paranamer.paranamer-2.8.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.scala-lang.modules.scala-xml_2.11-1.0.5.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.fasterxml.jackson.core.jackson-databind-2.6.7.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.fasterxml.jackson.core.jackson-annotations-2.6.0.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.fasterxml.jackson.core.jackson-core-2.6.7.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.json4s.json4s-scalaz_2.11-3.4.0.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.scalaz.scalaz-core_2.11-7.2.4.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.slf4j.log4j-over-slf4j-1.7.12.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.adrianhurt.play-bootstrap3_2.11-0.4.5-P24.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.clapper.grizzled-slf4j_2.11-1.0.2.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.apache.kafka.kafka_2.11-0.10.1.1.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.apache.kafka.kafka-clients-0.10.1.1.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/net.jpountz.lz4.lz4-1.3.0.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.xerial.snappy.snappy-java-1.1.2.6.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.slf4j.slf4j-api-1.7.21.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/net.sf.jopt-simple.jopt-simple-4.9.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.yammer.metrics.metrics-core-2.2.0.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.101tec.zkclient-0.9.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.apache.zookeeper.zookeeper-3.4.8.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.scala-lang.modules.scala-parser-combinators_2.11-1.0.4.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.beachape.enumeratum_2.11-1.4.4.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/com.beachape.enumera
tum-macros_2.11-1.4.4.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/org.scala-lang.scala-reflect-2.11.8.jar:/home/hadoop/app/kafka-manager-1.3.2.1/lib/kafka-manager.kafka-manager-1.3.2.1-assets.jar
[info] o.a.z.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[info] o.a.z.ZooKeeper - Client environment:java.io.tmpdir=/tmp
[info] o.a.z.ZooKeeper - Client environment:java.compiler=<NA>
[info] o.a.z.ZooKeeper - Client environment:os.name=Linux
[info] o.a.z.ZooKeeper - Client environment:os.arch=amd64
[info] o.a.z.ZooKeeper - Client environment:os.version=2.6.32-431.el6.x86_64
[info] o.a.z.ZooKeeper - Client environment:user.name=hadoop
[info] o.a.z.ZooKeeper - Client environment:user.home=/home/hadoop
[info] o.a.z.ZooKeeper - Client environment:user.dir=/home/hadoop/app/kafka-manager-1.3.2.1
[info] o.a.z.ZooKeeper - Initiating client connection, connectString=master:2181,slave1:2181,slave2:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@1dc0b758
[info] o.a.z.ClientCnxn - Opening socket connection to server slave1/192.168.80.146:2181. Will not attempt to authenticate using SASL (unknown error)
[info] k.m.a.KafkaManagerActor - zk=master:2181,slave1:2181,slave2:2181
[info] k.m.a.KafkaManagerActor - baseZkPath=/kafka-manager
[info] o.a.z.ClientCnxn - Socket connection established to slave1/192.168.80.146:2181, initiating session
[info] o.a.z.ClientCnxn - Session establishment complete on server slave1/192.168.80.146:2181, sessionid = 0x25d88012f990002, negotiated timeout = 40000
[info] k.m.a.DeleteClusterActor - Started actor akka://kafka-manager-system/user/kafka-manager/delete-cluster
[info] k.m.a.DeleteClusterActor - Starting delete clusters path cache...
[info] k.m.a.KafkaManagerActor - Started actor akka://kafka-manager-system/user/kafka-manager
[info] k.m.a.KafkaManagerActor - Starting delete clusters path cache...
[info] k.m.a.DeleteClusterActor - Adding kafka manager path cache listener...
[info] k.m.a.DeleteClusterActor - Scheduling updater for 10 seconds
[info] k.m.a.KafkaManagerActor - Starting kafka manager path cache...
[info] k.m.a.KafkaManagerActor - Adding kafka manager path cache listener...
[info] k.m.a.KafkaManagerActor - Updating internal state...
[info] play.api.Play - Application started (Prod)
[info] p.c.s.NettyServer - Listening for HTTP on /0:0:0:0:0:0:0:0:10000
[info] k.m.a.KafkaManagerActor - Updating internal state...
[info] k.m.a.KafkaManagerActor - Updating internal state...

   You may hit an error at this step. In my case the first nohup attempt above exited with code 1, and nohup.out shows "This application is already running (Or delete /home/hadoop/app/kafka-manager-1.3.2.1/RUNNING_PID file)", which means an earlier instance is still running or a stale RUNNING_PID file was left behind.

  The Kafka-manager process shows up in jps as:

[hadoop@master ~]$ jps
6037 ProdServerStart

   If you need to restart, kill that process first; a sketch of the commands is shown below.
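
  A sketch of the check-and-kill sequence (6037 is just the PID from my jps output; use whatever jps shows on your machine):

jps | grep ProdServerStart                                   # kafka-manager runs as the ProdServerStart process
kill 6037                                                    # replace 6037 with the PID shown by jps
rm -f /home/hadoop/app/kafka-manager-1.3.2.1/RUNNING_PID     # clear the stale PID file before restarting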

   Then open the page again and it works:

http://192.168.80.145:10000/


  or

http://master:10000/
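
  To double-check from the shell that the manager is really listening (a quick sanity check, assuming the host and port used above):

curl -sI http://master:10000/ | head -n 1     # should print an HTTP status line
netstat -lntp 2>/dev/null | grep 10000        # or: ss -lntp | grep 10000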

This article was reposted from the 大数据躺过的坑 blog on cnblogs. Original link: http://www.cnblogs.com/zlslch/p/7252484.html. If you want to repost it, please contact the original author.

