Preparation
1. Create a docker network
This article puts everything on a dedicated docker network, so in what follows container names are used instead of IP addresses.
docker network create zxt-net
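To confirm the network was created, and later to see which containers have joined it, you can inspect it:
# lists the network's settings and the containers attached to it
docker network inspect zxt-net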
2. Write logstash.conf
input {
  jdbc {
    # JDBC driver jar, mounted into the Logstash container at this path (see section III)
    jdbc_driver_library => "/app/mysql-connector-java-8.0.13.jar"
    # Connector/J 8.x driver class (com.mysql.jdbc.Driver is its deprecated alias)
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    # "mysql" is the MySQL container name on the zxt-net network
    jdbc_connection_string => "jdbc:mysql://mysql:3306/synctest?useSSL=false"
    jdbc_user => "root"
    jdbc_password => "root"
    # incremental sync: remember the last createtime seen, as a unix timestamp
    tracking_column => "unix_ts_in_secs"
    use_column_value => true
    # poll MySQL every 5 seconds
    schedule => "*/5 * * * * *"
    statement => "SELECT *, UNIX_TIMESTAMP(createtime) AS unix_ts_in_secs FROM student WHERE (UNIX_TIMESTAMP(createtime) > :sql_last_value AND createtime < NOW()) ORDER BY createtime ASC"
  }
}
filter {
  mutate {
    # reuse the MySQL primary key as the ES document id, then drop the helper fields
    copy => { "id" => "[@metadata][_id]" }
    remove_field => ["id", "@version", "unix_ts_in_secs"]
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "student"
    timeout => 300
    document_id => "%{[@metadata][_id]}"
  }
}
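Note that the pipeline above assumes a MySQL container named mysql on the same zxt-net network, holding a synctest database with a student table that has at least an id primary key and a createtime column, reachable as root/root. The article does not cover setting that up; the sketch below is one way to do it for testing, where the mysql:5.7 tag and the name column are assumptions:
docker run -dit --name mysql --network zxt-net -e MYSQL_ROOT_PASSWORD=root mysql:5.7
# wait for MySQL to finish initializing, then create the schema the pipeline reads from
docker exec -i mysql mysql -uroot -proot <<'SQL'
CREATE DATABASE IF NOT EXISTS synctest;
CREATE TABLE IF NOT EXISTS synctest.student (
  id BIGINT PRIMARY KEY,
  name VARCHAR(64),
  createtime DATETIME DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO synctest.student (id, name) VALUES (1, 'test');
SQL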
3. Prepare the JDBC driver jar
Download mysql-connector-java-8.0.13.jar yourself, e.g. from Maven Central as sketched below.
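If you prefer to fetch it from the command line, the jar is published on Maven Central under the usual coordinates; the /data/ target below matches the volume mounts used in section III:
curl -LO https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.13/mysql-connector-java-8.0.13.jar
mv mysql-connector-java-8.0.13.jar /data/
# logstash.conf from step 2 should likewise be saved as /data/logstash.conf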
I. Setting up Elasticsearch with Docker
1. Pull the image
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.6.2
2. Start the container
# discovery.type=single-node runs the node in single-node discovery mode, which bypasses the ES bootstrap checks
docker run -dit --name elasticsearch --network zxt-net -p 5002:9200 -p 5003:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.6.2
3. Verify access
http://ip:5002
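From the command line, the cluster health endpoint gives a quick yes/no answer (replace ip with your host's address):
curl -s "http://ip:5002/_cluster/health?pretty"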
4. Possible issues
# If Kibana (set up in section II below) shows the page "Kibana server is not ready yet", it could not find the ES node
docker exec -it kibana bash
# edit elasticsearch.hosts in the config file so it points at your own server's IP (or at the elasticsearch container name, since everything is on zxt-net)
vi config/kibana.yml
# restart the kibana container
docker restart kibana
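As a quick check that the elasticsearch name actually resolves on zxt-net, you can curl it from any throwaway container on the same network; curlimages/curl is just a convenient image for this, not something the setup depends on:
docker run --rm --network zxt-net curlimages/curl -s http://elasticsearch:9200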
II. Setting up Kibana with Docker
1. Pull the image
docker pull kibana:7.6.2
2. Start the container
docker run --name kibana --network zxt-net -p 5001:5601 -d kibana:7.6.2
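If Kibana later reports "Kibana server is not ready yet" (section I.4), an alternative to editing kibana.yml inside the container is to recreate it with the address passed at startup; the official Kibana image maps the ELASTICSEARCH_HOSTS environment variable onto elasticsearch.hosts, and the value below relies on the container name resolving on zxt-net:
docker run --name kibana --network zxt-net -p 5001:5601 -e ELASTICSEARCH_HOSTS=http://elasticsearch:9200 -d kibana:7.6.2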
3. Verify access
http://ip:5001/
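For a headless check, Kibana also exposes a status API (replace ip with your host's address):
curl -s "http://ip:5001/api/status"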
III. Syncing data with Logstash
1. Pull the image
docker pull logstash:6.7.1
2. Start the container
docker run -dit --network zxt-net --name=logstash -v /data/logstash.conf:/usr/share/logstash/pipeline/logstash.conf -v /data/mysql-connector-java-8.0.13.jar:/app/mysql-connector-java-8.0.13.jar logstash:6.7.1
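A quick sanity check that both mounts landed where the pipeline expects them (the paths are the ones used in the run command above):
docker exec logstash ls -l /usr/share/logstash/pipeline/logstash.conf /app/mysql-connector-java-8.0.13.jar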
3. Verify the service
# check the logs to confirm the sync is running
docker logs logstash
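Once the */5 * * * * * schedule has fired, the synced rows should be visible in the student index (replace ip with your host's address):
curl -s "http://ip:5002/student/_search?pretty"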
Summary
Today we got a basic ELK stack running with Docker and used Logstash to sync MySQL data into ES.
The next post will cover integrating ES with Spring Boot.