Once Kafka goes into production, you need some management (monitoring) tools to go with it. There are quite a few such tools today; the table below compares the common ones:
| Monitoring tool | Features | Notes |
| --- | --- | --- |
| Kafka Web Console | Monitors brokers, the topic list, and more; displays producer and consumer traffic charts. | Has a bug: it opens a large number of connections to producers, consumers, and ZooKeeper, which can cause network congestion. |
| Kafka Manager | Provides common broker-level JMX monitoring, tracks consumer progress (offsets), and manages multiple clusters via a web UI. | Compiling and installing it is time-consuming; no access control; no alert configuration; very memory-hungry. |
| Kafka Eagle | Provides common broker-level JMX monitoring, tracks consumer progress (offsets), and manages multiple clusters via a web UI. | Easy to install (extract the binary package and run); supports alerting (DingTalk, WeChat, or email); requires a database (MySQL or SQLite). |
| Kafka Offset Monitor | Not a good choice if the focus is on cluster management. | The project has not been maintained for nearly 2 years. |
| JmxTool | Used together with InfluxDB and Grafana. | Fairly cumbersome to set up. |
Here we choose Kafka Eagle.
Base environment preparation:
Adjust the time zone and sync the clock:
# date -R
# timedatectl set-timezone Asia/Shanghai
# yum -y install ntp
# ntpdate ntp1.aliyun.com
Install wget:
# yum install -y wget
Disable SELinux:
# vi /etc/sysconfig/selinux
SELINUX=disabled
Reboot the server and verify:
# getenforce
Disabled
Configure the firewall:
[root@node1 ~]# firewall-cmd --zone=public --add-port=8048/tcp --permanent
success
[root@node1 ~]# firewall-cmd --reload
success
Raise the open-file (handle) limit and similar kernel/user limits (see the sketch below).
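The original only mentions raising the file-handle limit without listing commands. A minimal sketch, assuming CentOS 7 with pam_limits enabled; the value 65536 is illustrative, not from the original:
[root@node1 ~]# vi /etc/security/limits.conf
*    soft    nofile    65536
*    hard    nofile    65536
Log out and back in (or reboot), then verify with:
[root@node1 ~]# ulimit -n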
1. Install the JDK
[root@node1 opt]# yum -y install java-1.8.0-openjdk java-1.8.0-openjdk-devel
[root@node1 opt]# java -version
openjdk version "1.8.0_262"
OpenJDK Runtime Environment (build 1.8.0_262-b10)
OpenJDK 64-Bit Server VM (build 25.262-b10, mixed mode)
[root@node1 opt]# vim /etc/profile
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.262.b10-0.el7_8.x86_64
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
[root@node1 opt]# source /etc/profile
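As a quick sanity check (not part of the original steps), confirm that the exports in /etc/profile took effect and that JAVA_HOME points at a real JDK:
[root@node1 opt]# echo $JAVA_HOME
[root@node1 opt]# $JAVA_HOME/bin/java -version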
2. Install MySQL
Note: Kafka Eagle uses SQLite storage by default; here we switch it to MySQL.
[root@node1 ~]# yum -y install wget lrzsz net-tools
[root@node1 ~]# wget http://dev.mysql.com/get/mysql57-community-release-el7-10.noarch.rpm
[root@node1 ~]# yum -y install mysql57-community-release-el7-10.noarch.rpm
[root@node1 ~]# yum -y install mysql-community-server
[root@node1 ~]# systemctl start mysqld.service
[root@node1 ~]# systemctl status mysqld.service
[root@node1 ~]# systemctl enable mysqld.service
[root@node1 ~]# netstat -anpt | grep 3306
Look up the temporary root password:
[root@node1 ~]# grep "password" /var/log/mysqld.log
2019-02-14T22:34:02.620689Z 1 [Note] A temporary password is generated for root@localhost: &5E%TRUAzrDL
[root@node1 ~]# mysql -u root -p    # the temporary root password is &5E%TRUAzrDL
Change the root password:
mysql> alter user 'root'@'localhost' identified by '<new root password>';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
Create the eagle database:
mysql> create database eagle character set utf8 collate utf8_bin;
Query OK, 1 row affected (0.01 sec)
mysql> use eagle;
Database changed
Create the regular user eagle, grant it the eagle database, and allow remote access:
mysql> create user 'eagle'@'%' identified by 'Mysql@123';
mysql> grant all privileges on eagle.* TO 'eagle'@'%' identified by 'Mysql@123';
mysql> flush privileges;
Remove the MySQL yum repository package:
[root@node1 ~]# yum -y remove mysql57-community-release-el7-10.noarch
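Before wiring Kafka Eagle to this database, it can be worth verifying that the eagle account and the remote grant actually work (an optional check; it assumes the mysql client is available on the host you test from):
[root@node1 ~]# mysql -h node1 -u eagle -p'Mysql@123' -e 'show databases;'
The output should include the eagle database.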
3. Install Kafka Eagle
Download URL: https://codeload.github.com/smartloli/kafka-eagle-bin/tar.gz/v2.0.1
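The download command itself is not shown in the original; assuming the server has Internet access, something like the following fetches the tarball under the filename used in the next step (the -O name is chosen to match):
[root@node1 ~]# wget -O /opt/kafka-eagle-bin-2.0.1.tar.gz https://codeload.github.com/smartloli/kafka-eagle-bin/tar.gz/v2.0.1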
[root@node1 opt]# cd /opt
[root@node1 opt]# tar -zxvf kafka-eagle-bin-2.0.1.tar.gz
kafka-eagle-bin-2.0.1/
kafka-eagle-bin-2.0.1/kafka-eagle-web-2.0.1-bin.tar.gz
[root@node1 opt]# mv kafka-eagle-bin-2.0.1 kafka-eagle
[root@node1 opt]# cd kafka-eagle
[root@node1 kafka-eagle]# tar zxvf kafka-eagle-web-2.0.1-bin.tar.gz
[root@node1 kafka-eagle]# mv kafka-eagle-web-2.0.1 kafka-eagle-web
[root@node1 opt]# vi /etc/profile
export KE_HOME=/opt/kafka-eagle/kafka-eagle-web
export PATH=$PATH:$KE_HOME/bin
[root@node1 opt]# source /etc/profile
4. Edit the Kafka Eagle configuration file
[root@node1 ~]# vim /opt/kafka-eagle/kafka-eagle-web/conf/system-config.properties
######################################
# multi zookeeper & kafka cluster list
######################################
kafka.eagle.zk.cluster.alias=cluster1,cluster2
cluster1.zk.list=node1:2181,node2:2181,node3:2181
#cluster2.zk.list=xdn10:2181,xdn11:2181,xdn12:2181

######################################
# zookeeper enable acl
######################################
cluster1.zk.acl.enable=false
cluster1.zk.acl.schema=digest
cluster1.zk.acl.username=test
cluster1.zk.acl.password=test123

######################################
# broker size online list
######################################
cluster1.kafka.eagle.broker.size=20

######################################
# zk client thread limit
######################################
kafka.zk.limit.size=25

######################################
# kafka eagle webui port
######################################
kafka.eagle.webui.port=8048

######################################
# kafka offset storage
######################################
cluster1.kafka.eagle.offset.storage=kafka
cluster2.kafka.eagle.offset.storage=zk

######################################
# kafka metrics, 15 days by default
######################################
kafka.eagle.metrics.charts=true
kafka.eagle.metrics.retain=15

######################################
# kafka sql topic records max
######################################
kafka.eagle.sql.topic.records.max=5000
kafka.eagle.sql.fix.error=true

######################################
# delete kafka topic token
######################################
kafka.eagle.topic.token=keadmin

######################################
# kafka sasl authenticate
######################################
cluster1.kafka.eagle.sasl.enable=false
cluster1.kafka.eagle.sasl.protocol=SASL_PLAINTEXT
cluster1.kafka.eagle.sasl.mechanism=SCRAM-SHA-256
cluster1.kafka.eagle.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="kafka" password="kafka-eagle";
cluster1.kafka.eagle.sasl.client.id=
cluster1.kafka.eagle.blacklist.topics=
cluster1.kafka.eagle.sasl.cgroup.enable=false
cluster1.kafka.eagle.sasl.cgroup.topics=
cluster2.kafka.eagle.sasl.enable=false
cluster2.kafka.eagle.sasl.protocol=SASL_PLAINTEXT
cluster2.kafka.eagle.sasl.mechanism=PLAIN
cluster2.kafka.eagle.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="kafka-eagle";
cluster2.kafka.eagle.sasl.client.id=
cluster2.kafka.eagle.blacklist.topics=
cluster2.kafka.eagle.sasl.cgroup.enable=false
cluster2.kafka.eagle.sasl.cgroup.topics=

######################################
# kafka ssl authenticate
######################################
cluster3.kafka.eagle.ssl.enable=false
cluster3.kafka.eagle.ssl.protocol=SSL
cluster3.kafka.eagle.ssl.truststore.location=
cluster3.kafka.eagle.ssl.truststore.password=
cluster3.kafka.eagle.ssl.keystore.location=
cluster3.kafka.eagle.ssl.keystore.password=
cluster3.kafka.eagle.ssl.key.password=
cluster3.kafka.eagle.ssl.cgroup.enable=false
cluster3.kafka.eagle.ssl.cgroup.topics=

######################################
# kafka sqlite jdbc driver address
######################################
#kafka.eagle.driver=org.sqlite.JDBC
#kafka.eagle.url=jdbc:sqlite:/hadoop/kafka-eagle/db/ke.db
#kafka.eagle.username=root
#kafka.eagle.password=www.kafka-eagle.org

######################################
# kafka mysql jdbc driver address
######################################
kafka.eagle.driver=com.mysql.jdbc.Driver
kafka.eagle.url=jdbc:mysql://node1:3306/eagle
kafka.eagle.username=eagle
kafka.eagle.password=Mysql@123
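One prerequisite the configuration above implies but does not spell out: the metrics charts enabled by kafka.eagle.metrics.charts=true are collected from the brokers over JMX, so each Kafka broker needs to expose a JMX port. A minimal sketch, assuming Kafka is installed under /opt/kafka and using the illustrative port 9999 (both are assumptions, adjust to your deployment). Run this on every broker before starting Kafka:
[root@node1 ~]# export JMX_PORT=9999
[root@node1 ~]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties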
5. Start Kafka Eagle
[root@node1 ~]# cd /opt/kafka-eagle/kafka-eagle-web/bin/
[root@node1 bin]# chmod +x ke.sh
[root@node1 bin]# ke.sh start
……
Welcome to
    (KAFKA EAGLE ASCII-art banner)
Version 2.0.1 -- Copyright 2016-2020
*******************************************************************
* Kafka Eagle Service has started success.
* Welcome, Now you can visit 'http://192.168.146.199:8048'
* Account:admin ,Password:123456
*******************************************************************
* <Usage> ke.sh [start|status|stop|restart|stats] </Usage>
* <Usage> https://www.kafka-eagle.org/ </Usage>
*******************************************************************
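If the web UI does not come up, two quick checks (optional; not part of the original steps) are the script's own status command, which the usage line above advertises, and the listening port:
[root@node1 bin]# ke.sh status
[root@node1 bin]# netstat -anpt | grep 8048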
6. Access Kafka Eagle
Open http://192.168.146.199:8048 in a browser; the default username is admin and the password is 123456.
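A browser-free check (optional, not part of the original) is to hit the UI port with curl and confirm an HTTP response comes back:
[root@node1 ~]# curl -I http://192.168.146.199:8048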
Reference: http://www.kafka-eagle.org/articles/docs/installation/linux-macos.html
Reference: https://github.com/smartloli/kafka-eagle
Reference: http://download.kafka-eagle.org/