05. ELK Stack + Redis Log Collection Platform

Environment List

IP              Role / hostname          Software             Specs                 Network        Notes
192.168.43.176  ES / data storage        elasticsearch-7.2    2GB RAM / 40GB disk   NAT, internal
192.168.43.215  Kibana / UI              kibana-7.2           2GB RAM / 40GB disk   NAT, internal
192.168.43.164  Filebeat / collection    filebeat-7.2/nginx   2GB RAM / 40GB disk   NAT, internal
192.168.43.86   Kibana / UI              kibana-7.2           2GB RAM / 40GB disk   NAT, internal
192.168.43.30   Logstash / pipeline      logstash-7.2         2GB RAM / 40GB disk   NAT, internal
192.168.43.47   Redis / message queue    redis-4.0            2GB RAM / 40GB disk   NAT, internal
192.168.43.205  nginx                    nginx-1.14           2GB RAM / 40GB disk   NAT, internal

Configure Nginx on the Collection Side

Modify the Nginx Log Format

By default Nginx writes logs in plain log format. Shipping those to ES would require the grok plugin to parse them and convert them into JSON, a process that consumes a lot of Logstash resources, and the resulting fields are not easy to analyze either. So we convert the logs to JSON on the collection side first and then ship them to ES; that way the ingested fields are also easier to analyze.

Edit the Nginx configuration file
log_format json '{ "@timestamp": "$time_iso8601", '
                '"time": "$time_iso8601", '
                '"clientip": "$remote_addr", '
                '"remote_user": "$remote_user", '
                '"body_bytes_sent": "$body_bytes_sent", '
                '"request_time": "$request_time", '
                '"status": "$status", '
                '"host": "$host", '
                '"request": "$request", '
                '"request_method": "$request_method", '
                '"uri": "$uri", '
                '"http_referrer": "$http_referer", '
                '"http_x_forwarded_for": "$http_x_forwarded_for", '
                '"http_user_agent": "$http_user_agent" '
                '}';

access_log /var/log/nginx/access.log json;
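Before restarting, it is worth validating the configuration syntax first (a standard Nginx check, not part of the original steps):

nginx -t   # test the configuration file for syntax errors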
Restart Nginx
systemctl restart nginx
Make a request to generate a log entry
curl localhost -I
Verify the data
tailf /var/log/nginx/access.log
{ "@timestamp": "2020-07-21T19:54:27+08:00", "time": "2020-07-21T19:54:27+08:00", "clientip": "192.168.43.45", "remote_user": "-", "body_bytes_sent": "0", "request_time": "0.000", "status": "304", "host": "192.168.43.205", "request": "GET / HTTP/1.1", "request_method": "GET", "uri": "/index.html", "http_referrer": "-", "body_bytes_sent":"0", "http_x_forwarded_for": "-", "http_user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36" }
{ "@timestamp": "2020-07-21T21:53:54+08:00", "time": "2020-07-21T21:53:54+08:00", "clientip": "127.0.0.1", "remote_user": "-", "body_bytes_sent": "0", "request_time": "0.000", "status": "200", "host": "localhost", "request": "HEAD / HTTP/1.1", "request_method": "HEAD", "uri": "/index.html", "http_referrer": "-", "body_bytes_sent":"0", "http_x_forwarded_for": "-", "http_user_agent": "curl/7.29.0" }

Configure Redis on the Collection Side

Notes on Using Redis as a Message Queue

The Redis server is one of the broker choices officially recommended by Logstash. Playing the broker role means both an input and an output plugin are involved at the same time: the side that produces data is called the producer, and the side that consumes data is called the consumer.

# 1. Prevents log loss when Logstash and ES cannot communicate normally.
# 2. Prevents log loss when the log volume is too large for ES to keep up with the write load.
# 3. Applications (PHP, Java) can write their logs straight to the message queue, completing log collection.
# Note: if Redis becomes a scaling bottleneck as the message queue, it can be replaced by the more powerful Kafka or Flume.
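A minimal sketch of the producer/consumer pattern on a Redis list (illustrative only: in this platform Filebeat is the producer and Logstash the consumer, and the key name and payload below are just examples):

# producer side: push a log line onto the head of the list
redis-cli LPUSH www_access '{"status":"200","uri":"/index.html"}'
# consumer side: pop a log line from the tail of the list (LPUSH + RPOP behaves as a FIFO queue)
redis-cli RPOP www_access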
Compile and Install Redis
wget http://download.redis.io/releases/redis-4.0.11.tar.gz    # download the Redis source
tar -zxf redis-4.0.11.tar.gz -C /usr/local                     # extract the Redis source
cd /usr/local/redis-4.0.11                                     # enter the source directory before building
make && make install PREFIX=/usr/local/redis                   # compile and install Redis
cp redis.conf /usr/local/redis/bin/                            # copy the sample config next to the binaries
echo 'export PATH=$PATH:/usr/local/redis/bin' >> /etc/profile  # add Redis to the PATH
source /etc/profile
Start Redis
# 1. Foreground mode
# Running bin/redis-server directly starts Redis in the foreground; the drawback is that redis-server
# exits as soon as the SSH session is closed, so this method is not recommended.
# 2. Background (daemon) mode
# Edit redis.conf and set daemonize yes to start Redis as a daemon
vim /usr/local/redis/bin/redis.conf
daemonize yes
redis-server /usr/local/redis/bin/redis.conf
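One detail worth noting (an assumption about this environment, not spelled out in the original steps): Filebeat and Logstash connect to Redis from other hosts, so redis.conf must accept connections on 192.168.43.47 rather than only on 127.0.0.1. For example:

bind 0.0.0.0          # or bind 192.168.43.47; the default of binding 127.0.0.1 only allows local connections
protected-mode no     # or keep it on and set requirepass, then configure the password in Filebeat/Logstash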
Connect to and Shut Down Redis
# Connect to redis
redis-cli
# Forcibly killing the redis process may cause loss of persisted data.
# The correct way to stop Redis is to send it the SHUTDOWN command:
redis-cli shutdown

Install and Configure Filebeat

Filebeat is a lightweight log shipper. Because Logstash is relatively resource-hungry, it is not suitable to deploy Logstash on every host, so Filebeat is used on the collection side instead.

Install Filebeat via RPM
rpm -vi filebeat-7.2.0-x86_64.rpm
Configure Filebeat to Collect the Nginx Logs

Configure the inputs to collect the Nginx logs. Based on the custom type field, each kind of log is written to a different Redis key, which makes the later processing easier.

cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access*.log
  fields:
    type: www_access
  fields_under_root: true
- type: log
  paths:
    - /var/log/nginx/error*.log
  fields:
    type: www_error
  fields_under_root: true
- type: log
  paths:
    - /var/log/nginx/doc.access.log
  fields:
    type: doc_access
  fields_under_root: true

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.redis:
  hosts: ["192.168.43.47:6379"]
  key: "nginx"
  keys:
    - key: "%{[type]}"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
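After writing the configuration, Filebeat has to be started; since it was installed from the RPM, a systemd unit is available. Optionally, you can validate the file and the Redis output first with Filebeat's built-in test commands:

filebeat test config -c /etc/filebeat/filebeat.yml   # check the configuration syntax
filebeat test output -c /etc/filebeat/filebeat.yml   # check that the Redis output is reachable
systemctl enable filebeat
systemctl start filebeat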
Verify the Data on the Redis Side
[root@redis ~]# redis-cli
127.0.0.1:6379> keys *
1) "a"
2) "www_access"
3) "www_error"
127.0.0.1:6379> LINDEX www_access 3
"{\"@timestamp\":\"2020-07-22T06:26:49.662Z\",\"@metadata\":{\"beat\":\"filebeat\",\"type\":\"_doc\",\"version\":\"7.2.0\"},\"input\":{\"type\":\"log\"},\"type\":\"www_access\",\"ecs\":{\"version\":\"1.0.0\"},\"host\":{\"hostname\":\"nginx\",\"architecture\":\"x86_64\",\"os\":{\"name\":\"CentOS Linux\",\"kernel\":\"3.10.0-514.el7.x86_64\",\"codename\":\"Core\",\"platform\":\"centos\",\"version\":\"7 (Core)\",\"family\":\"redhat\"},\"id\":\"b029c3ce28374f7db698c050e342457f\",\"containerized\":false,\"name\":\"nginx\"},\"agent\":{\"ephemeral_id\":\"76f6177a-dd5b-4c40-9b61-8f406507c6cb\",\"hostname\":\"nginx\",\"id\":\"cf47c715-17f2-48d5-9f10-866f10eba0cf\",\"version\":\"7.2.0\",\"type\":\"filebeat\"},\"log\":{\"file\":{\"path\":\"/var/log/nginx/access.log\"},\"offset\":8995},\"message\":\"192.168.43.45 [21/Jul/2020:19:20:16 +0800] \\\"GET / HTTP/1.1\\\" 304 0 \\\"-\\\" 0.000 \\\"-\\\" - \\\"Mozilla/5.0 (Windows NT 10.0

Configure Logstash on the Processing Side

Install Logstash
rpm -ivh jdk-8u121-linux-x64.rpm          # install the JDK required by Logstash
tar xvf logstash-7.2.0.tar.gz -C /opt/    # extract Logstash to /opt
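A quick, optional sanity check that the JDK and Logstash are both usable:

java -version
/opt/logstash-7.2.0/bin/logstash --version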
Write the Log Processing Configuration

In the input section, define the Redis list (or channel) name and the Redis data type, and set type to distinguish the different kinds of logs. The json plugin parses the message field into JSON and then removes the message field. The date plugin sets a new timestamp. The geoip plugin roughly locates the client from its IP address; by default it uses the GeoLite2 City database, which is updated on the official site every two weeks, so if you need accurate IP geolocation you can set up a scheduled job that downloads the new database every two weeks. The mutate plugin changes a field's data type: without it the coordinates sub-field would default to keyword, which may not be supported when building a coordinate map in Kibana.
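A hypothetical crontab entry for the biweekly GeoLite2 update mentioned above. update_geoip.sh is a placeholder for a script you would write yourself; note that downloading GeoLite2 now also requires a free MaxMind licence key, and the geoip filter can be pointed at the downloaded file via its database option:

# run at 03:00 on the 1st and 15th of each month
0 3 1,15 * * /usr/local/bin/update_geoip.sh >> /var/log/update_geoip.log 2>&1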

test.conf

cat /opt/logstash-7.2.0/test.conf
input {
  redis {
    host      => "192.168.43.47"
    port      => 6379
    type      => "www_access"
    data_type => "list"
    key       => "www_access"
    codec     => "json"
  }
  redis {
    host      => "192.168.43.47"
    port      => 6379
    type      => "nginx_error"   # events shipped by Filebeat already carry a type field, which is not overwritten
    data_type => "list"
    key       => "www_error"
  }
  redis {
    host      => "192.168.43.47"
    port      => 6379
    type      => "doc_access"
    data_type => "list"
    key       => "doc_access"
    codec     => "json"
  }
}

filter {
  if [type] == "www_access" {
    json {
      source       => "message"
      remove_field => "message"
    }
    date {
      # the nginx JSON log carries the request time in the "time" field in ISO8601 format
      match => [ "time", "ISO8601" ]
    }
    geoip {
      source    => "clientip"
      fields    => ["city_name", "country_code2", "country_name", "region_name", "longitude", "latitude", "ip"]
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
  else if [type] =~ /error/ {
    grok {
      # grok patterns for the nginx error log can be added here
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
  else {
    json {
      source       => "message"
      remove_field => "message"
    }
    date {
      match => [ "time", "ISO8601" ]
    }
    geoip {
      source    => "clientip"
      fields    => ["city_name", "country_code2", "country_name", "region_name", "longitude", "latitude", "ip"]
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}

output {
  if [type] == "www_access" {
    elasticsearch {
      hosts => ["192.168.43.176:9200"]
      index => "nginx_access-%{+YYYY.MM.dd}"
    }
    stdout {
      codec => rubydebug
    }
  }
  else if [type] == "www_error" {
    elasticsearch {
      hosts => ["192.168.43.215:9200"]
      index => "nginx_error-%{+YYYY.MM.dd}"
    }
  }
  else {
    elasticsearch {
      hosts => ["192.168.43.164:9200"]
      index => "nginx_doc-%{+YYYY.MM.dd}"
    }
    stdout {
      codec => rubydebug
    }
  }
}
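Before starting the pipeline, the configuration can be syntax-checked with Logstash's standard test flag:

cd /opt/logstash-7.2.0
./bin/logstash -f test.conf --config.test_and_exit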
Start Logstash
./bin/logstash -f test.conf
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /opt/logstash-7.2.0/logs which is now configured via log4j2.properties
[2020-07-22T15:26:51,015][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-07-22T15:26:51,110][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.2.0"}
[2020-07-22T15:27:02,202][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.43.176:9200/]}}
Verify That the Redis Data Has Been Consumed
# We can see that the logs have all been consumed
127.0.0.1:6379> LLEN www_access
(integer) 0
127.0.0.1:6379> LINDEX www_access 1
(nil)

Configure the Storage and Analysis Side

Installing the Elasticsearch cluster and Kibana is not covered here; see my earlier posts:

https://www.cnblogs.com/you-men/p/12761738.html

https://www.cnblogs.com/you-men/p/13167801.html
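Assuming the cluster from those posts is already running, a quick health check on the ES node can be done before continuing:

curl -XGET "http://127.0.0.1:9200/_cluster/health?pretty"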

Verify the Nginx Indices Shipped to Elasticsearch
[root@es1 ~]#  curl -XGET "http://127.0.0.1:9200/_cat/indices?v"
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .kibana 6LXFwoifQFWS7JUbWsSmkw 1 1 3 0 29.1kb 14.5kb
green open .monitoring-es-7-2020.07.22 SPa4aMKzRDSiv9EoUj-dkw 1 1 8707 7870 27.5mb 8.6mb
green open nginx_error-2020.07.22 8-ka5wT2T962BSmm1IkUoQ 1 1 7 0 44.3kb 22.1kb
green open .monitoring-es-7-2020.07.21 XmwuVQkyQfeyR2Jq6J4EmA 1 1 36700 18108 35mb 16.8mb
green open .monitoring-kibana-7-2020.07.22 fdi9l1vLRFOsVKoMeImlUQ 1 1 1337 0 744kb 372.5kb
green open nginx_access-2020.07.22 bN-5RWhBTfW4rPgTSPIJNg 1 1 9 0 46.6kb 23.3kb
green open nginx_access-2020.07.21 cO7Us2FRR8KAhgp2AP228g 1 1 5 0 39.4kb 19.7kb
green open .monitoring-kibana-7-2020.07.21 hGIIp9KST0Kq60owKMWLmg 1 1 4080 0 1.7mb 887.2kb
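To see an actual document rather than just the index list, you can pull one record from the access index (index name taken from the listing above):

curl -XGET "http://127.0.0.1:9200/nginx_access-2020.07.22/_search?size=1&pretty"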
Configure Kibana

(Screenshots of the Kibana configuration from the original post are not reproduced here.)
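In Kibana 7.2 the index patterns are created under Management → Index Patterns (for example nginx_access-* with @timestamp as the time field), after which the logs show up in Discover. The same step can also be scripted against Kibana's saved-objects API; a minimal sketch, assuming Kibana listens on 192.168.43.215:5601 with no authentication:

curl -X POST "http://192.168.43.215:5601/api/saved_objects/index-pattern/nginx_access" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"attributes":{"title":"nginx_access-*","timeFieldName":"@timestamp"}}'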

You can also set up Grafana to bring everything together on a single platform.
