Elasticsearch deployment
Run the container with Docker, mounting the data and log directories onto the host's data disk.
docker run -d --name es -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -v /data1/containers/es/data/:/usr/share/elasticsearch/data/ -v /data1/containers/es/logs/:/usr/share/elasticsearch/logs/ elasticsearch:7.13.2
Add cross-origin (CORS) configuration:
docker exec -it es /bin/bash
vi config/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"
docker restart es
Kibana deployment
docker run --name kibana -p 5601:5601 -d -e ELASTICSEARCH_HOSTS=http://192.168.21.102:9200 -v /data1/containers/kibana/data/:/usr/share/kibana/data/ kibana:7.13.2
Edit the yml configuration:
docker exec -it kibana /bin/bash
vi config/kibana.yml
Change the Elasticsearch URL (elasticsearch.hosts) to point at your ES instance.
Restart the container:
docker restart kibana
Logstash deployment
Start the container with Docker, mapping in the logs to collect (nginx access logs as the example) and the Logstash pipeline configuration directory.
docker run --name logstash -d -v /data1/log:/var/log2 -v /data1/elk/logstash_aiops:/usr/share/logstash/pipeline logstash:7.13.2
The Logstash configuration directory is laid out as follows:
logstash_aiops
├── aiops_patterns
│ └── nginx_pattern
└── logstash.conf
bash-4.2$ vi nginx_pattern
NGINXACCESS %{IP:remote_ip} \- \- \[%{HTTPDATE:timestamp}\] %{QS:request} %{NUMBER:status} %{NUMBER:bytes} %{QS:referer} %{QS:agent}
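To sanity-check the pattern before wiring it into Logstash, here is a simplified Python regex equivalent run against a sample access-log line. Assumptions: IPv4-only in place of grok's %{IP}, and %{QS} captured without its surrounding quotes; grok's real definitions are broader.

```python
import re

# Simplified Python equivalent of the NGINXACCESS grok pattern above.
NGINXACCESS = re.compile(
    r'(?P<remote_ip>\d{1,3}(?:\.\d{1,3}){3}) - - '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<request>[^"]*)" '
    r'(?P<status>\d+) (?P<bytes>\d+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

# A hypothetical nginx access-log line in the default combined format.
sample = ('192.168.21.1 - - [14/Jun/2021:08:30:00 +0000] '
          '"GET /index.html HTTP/1.1" 200 612 "-" "curl/7.64.1"')

m = NGINXACCESS.match(sample)
print(m.groupdict())
```

If the match comes back empty for your real log lines, adjust the pattern before debugging it through a full Logstash restart cycle.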
bash-4.2$ vi logstash.conf
input {
  file {
    path => "/var/log2/access.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    # Point grok at the custom pattern directory so NGINXACCESS resolves;
    # without patterns_dir, grok only knows the built-in patterns.
    patterns_dir => ["/usr/share/logstash/pipeline/aiops_patterns"]
    match => { "message" => "%{NGINXACCESS}" }
  }
}
output {
  elasticsearch {
    # localhost inside the Logstash container is not the ES container;
    # use the host address, as in the Kibana setup above.
    hosts => "http://192.168.21.102:9200"
    index => "logstash"
  }
}
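With the filter above, each matched line becomes a JSON document in the `logstash` index. A minimal sketch of the document body for one hypothetical line, using the field names defined in nginx_pattern (real events also carry Logstash metadata such as `@timestamp` and the raw `message`, omitted here):

```python
import json

# Hypothetical grok captures for one access-log line; field names come
# from nginx_pattern. Note grok's %{NUMBER} still captures strings --
# status and bytes stay text unless you add a mutate/convert step.
doc = {
    "remote_ip": "192.168.21.1",
    "timestamp": "14/Jun/2021:08:30:00 +0000",
    "request": "GET /index.html HTTP/1.1",
    "status": "200",
    "bytes": "612",
    "referer": "-",
    "agent": "curl/7.64.1",
}
body = json.dumps(doc)
print(body)
```

Keeping the numeric fields as strings limits what Kibana can aggregate on, so converting `status` and `bytes` in the filter stage is usually worthwhile.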
Edit the yml configuration inside the container:
docker exec -it logstash /bin/bash
vi logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.21.102:9200" ]
Restart the container:
docker restart logstash