https://www.cnblogs.com/xbblogs/p/10775908.html
https://www.liuyixiang.com/post/69592.html
Deploying elasticsearch, logstash, and kibana with Docker
Pinned version: 6.7.1 (using the same version for all three components is recommended, as it avoids incompatibilities between them)
Pull the images:
docker pull elasticsearch:6.7.1
docker pull logstash:6.7.1
docker pull kibana:6.7.1
Adjust vm.max_map_count
Edit the sysctl configuration:
vim /etc/sysctl.conf
Add the setting (the exact value can be tuned to the server):
vm.max_map_count=262144
Then run the following command to make it take effect:
sysctl -p
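To confirm the setting is active before starting any es container, read it back (a quick check, not part of the original steps):

sysctl vm.max_map_count
# should print: vm.max_map_count = 262144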
es cluster
The es cluster here uses 3 nodes, with the configuration files placed under the /root/es/config/ directory.
Note: es refuses to start as the root user, so it is best to change the /root/es/config/ directory to 777 permissions.
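A minimal preparation sketch for the directory layout used below; the post only mentions the config directory, so the chmod on the data directories is an assumption:

mkdir -p /root/es/config /root/es/data1 /root/es/data2 /root/es/data3
chmod -R 777 /root/es/config
# assumption: the data directories must also be writable by the container's elasticsearch user
chmod -R 777 /root/es/data1 /root/es/data2 /root/es/data3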
Contents of es1.yml:
cluster.name: elasticsearch-cluster
node.name: es-node1
network.bind_host: 0.0.0.0
network.publish_host: 10.90.101.48
http.port: 9200
transport.tcp.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["10.90.101.48:9300","10.90.101.48:9301","10.90.101.48:9302"]
discovery.zen.minimum_master_nodes: 2
xpack.ml.enabled: false
xpack.monitoring.enabled: false
xpack.security.enabled: false
xpack.watcher.enabled: false
Command to start es with this configuration file:
docker run -e ES_JAVA_OPTS="-Xms512m -Xmx512m" -d -p 9200:9200 -p 9300:9300 -v /root/es/config/es1.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /root/es/data1:/usr/share/elasticsearch/data --name ES01 elasticsearch:6.7.1
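If the container exits right away or keeps restarting, its log output is the first place to look (a troubleshooting step, not from the original post):

docker logs -f ES01
# typical failure causes: vm.max_map_count too low, or the mounted data directory not writable by the container user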
Contents of es2.yml:

cluster.name: elasticsearch-cluster
node.name: es-node2
network.bind_host: 0.0.0.0
network.publish_host: 10.90.101.48
http.port: 9201
transport.tcp.port: 9301
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["10.90.101.48:9300","10.90.101.48:9301","10.90.101.48:9302"]
discovery.zen.minimum_master_nodes: 2
xpack.ml.enabled: false
xpack.monitoring.enabled: false
xpack.security.enabled: false
xpack.watcher.enabled: false
Command to start es with this configuration file:
docker run -e ES_JAVA_OPTS="-Xms512m -Xmx512m" -d -p 9201:9201 -p 9301:9301 -v /root/es/config/es2.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /root/es/data2:/usr/share/elasticsearch/data --name ES02 elasticsearch:6.7.1

Contents of es3.yml:
cluster.name: elasticsearch-cluster
node.name: es-node3
network.bind_host: 0.0.0.0
network.publish_host: 10.90.101.48
http.port: 9202
transport.tcp.port: 9302
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["10.90.101.48:9300","10.90.101.48:9301","10.90.101.48:9302"]
discovery.zen.minimum_master_nodes: 2
xpack.ml.enabled: false
xpack.monitoring.enabled: false
xpack.security.enabled: false
xpack.watcher.enabled: false
Command to start es with this configuration file:
docker run -e ES_JAVA_OPTS="-Xms512m -Xmx512m" -d -p 9202:9202 -p 9302:9302 -v /root/es/config/es3.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /root/es/data3:/usr/share/elasticsearch/data --name ES03 elasticsearch:6.7.1

Notes on the docker run options:
1. -e ES_JAVA_OPTS="-Xms512m -Xmx512m" sets the JVM heap for this es instance; setting it too low leads to very high CPU usage, and on a server with plenty of RAM you can raise it to 2~4g.
2. -p 9200:9200 -p 9300:9300: 9200 is the HTTP port es exposes to clients, 9300 is the port the es nodes use to communicate with each other.
3. -v /root/es/config/es1.yml:/usr/share/elasticsearch/config/elasticsearch.yml mounts the configuration file the node starts with.
4. -v /root/es/data1:/usr/share/elasticsearch/data mounts the es data directory onto the host.
5. --name ES01 sets the container name.
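With all three containers running, the cluster state can be checked through any node's HTTP port (a verification step, not in the original post):

curl http://10.90.101.48:9200/_cluster/health?pretty
# "number_of_nodes" should be 3 and "status" should be green
curl http://10.90.101.48:9200/_cat/nodes?v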
kibana
Contents of kibana.yml (placed in /root/kibana/config/):

server.name: kibana
server.host: "0"
elasticsearch.url: http://10.90.101.48:9200
xpack.monitoring.ui.container.elasticsearch.enabled: false
xpack.security.enabled: false
xpack.ml.enabled: false
xpack.monitoring.enabled: false

Note that elasticsearch.url must point at the es HTTP port (9200), not the transport port (9300).
Start kibana
docker run --name kibana -v /root/kibana/config:/usr/share/kibana/config -p 5601:5601 -d kibana:6.7.1
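To confirm kibana started and can reach the cluster, you can hit its status endpoint or simply open it in a browser (a quick check, not from the original post):

curl http://10.90.101.48:5601/api/status
# or open http://<server-ip>:5601 in a browser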
logstash
Copy the default configuration files out of the image:
1. Start a logstash container.
2. Copy its config and pipeline directories to the host:
docker cp <container id>:/usr/share/logstash/config /root/logstash/config
docker cp <container id>:/usr/share/logstash/pipeline /root/logstash/pipeline
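Putting those two steps together, a minimal sketch (the temporary container name logstash-tmp is just an example):

# throwaway container used only to copy the default config out of the image
mkdir -p /root/logstash
docker run -d --name logstash-tmp logstash:6.7.1
docker cp logstash-tmp:/usr/share/logstash/config /root/logstash/config
docker cp logstash-tmp:/usr/share/logstash/pipeline /root/logstash/pipeline
docker rm -f logstash-tmp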
pipeline/logstash.conf
input {
  http {
    host => "0.0.0.0"
    port => 5050
    additional_codecs => {"application/json" => "json"}
    codec => "plain"
    threads => 4
    ssl => false
  }
}
output {
  elasticsearch {
    hosts => ["http://10.90.101.48:9200","http://10.90.101.48:9201","http://10.90.101.48:9202"]
    index => "log_%{logtype}_%{+YYYY.MM.dd}"
  }
}
The pipeline above defines how logs are received (over HTTP on port 5050) and where they are written (the es cluster).
config/logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.enabled: false
Start logstash
docker run -d --name logstash -p 5050:5050 -v /root/logstash/config:/usr/share/logstash/config -v /root/logstash/pipeline:/usr/share/logstash/pipeline logstash:6.7.1
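To check the whole pipeline end to end, post a test event to the HTTP input and then look for the resulting index in es; the logtype value here is only an example, picked up by the index pattern in logstash.conf (a verification step, not from the original post):

curl -H "Content-Type: application/json" -X POST -d '{"logtype":"test","message":"hello elk"}' http://10.90.101.48:5050
# an index such as log_test_<date> should then appear
curl http://10.90.101.48:9200/_cat/indices?v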
cerebro, a tool for managing es
docker pull lmenezes/cerebro
Start cerebro
docker run --name es-head -p 9000:9000 -d lmenezes/cerebro

Open ip:9000 in a browser and connect it to http://ip:9200 to see the state of the es cluster.