1 Setting up Elasticsearch
Pull the image
docker pull elasticsearch:6.4.3
Configuration file
elasticsearch.yml:
network.host: 0.0.0.0
xpack:
ml.enabled: false
monitoring.enabled: false
security.enabled: false
watcher.enabled: false
Run with Docker
docker run -d -p 9200:9200 -p 9300:9300 -v /home/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml --name elasticsearch elasticsearch:6.4.3
2 Setting up Logstash
Pull the image
docker pull elastic/logstash:6.4.3
Configuration file
logstash.conf
Configuration with a TCP input
input {
tcp {
mode => "server"
host => "0.0.0.0"
port => 4560
codec => json_lines
}
}
output {
elasticsearch {
hosts => "192.168.1.245:9200"
index => "intelligent-logistics-platform-%{+YYYY.MM.dd}"
}
}
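Applications ship events to this TCP input as newline-delimited JSON, which is what the json_lines codec expects. As a rough sketch of what one event looks like on the wire (the field names and the helper functions here are illustrative, not part of the config above):

```python
import json
import socket

def encode_event(message, app_name, level="INFO"):
    """Serialize one log event the way the json_lines codec expects:
    a JSON object followed by a newline."""
    event = {"message": message, "app_name": app_name, "level": level}
    return (json.dumps(event) + "\n").encode("utf-8")

def ship(events, host="192.168.1.245", port=4560):
    """Send a batch of encoded events over a single TCP connection
    to the Logstash TCP input defined above."""
    with socket.create_connection((host, port), timeout=5) as sock:
        for ev in events:
            sock.sendall(ev)

line = encode_event("order created", "base-logistics")
print(line)
```

In practice a logging framework appender (e.g. logstash-logback-encoder for Java services) plays the role of `ship` and writes one such line per log record.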
Configuration with a Filebeat (beats) input
input {
beats {
port => 5044
codec => json # parse the JSON string that Filebeat stores in the message field
}
}
output {
elasticsearch {
hosts => ["192.168.48.17:9200","192.168.48.22:9200","192.168.48.18:9200"]
index => "%{app_name}-%{+YYYY.MM.dd}"
manage_template => true
}
stdout{
codec=>rubydebug
}
}
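The index pattern %{app_name}-%{+YYYY.MM.dd} combines the app_name field (set by Filebeat in section 4, with fields_under_root: true) with the event's @timestamp, so each application gets one index per day. A small sketch of how a daily index name resolves (the function name is illustrative):

```python
from datetime import date

def daily_index(app_name, day):
    """Mimic how Logstash expands %{app_name}-%{+YYYY.MM.dd}:
    the date part uses dots, not dashes, between year, month, and day."""
    return "{}-{}".format(app_name, day.strftime("%Y.%m.%d"))

print(daily_index("base-uaa", date(2019, 1, 2)))  # base-uaa-2019.01.02
```

Note that Logstash evaluates the date sprintf against the event's @timestamp in UTC, so indices roll over at UTC midnight rather than local midnight.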
logstash.yml
http.host: "0.0.0.0"
Run with Docker
docker run -d -p 4560:4560 -v /home/logstash/config:/usr/share/logstash/config --name logstash elastic/logstash:6.4.3 -f /usr/share/logstash/config/logstash.conf
Note that this command only publishes port 4560 (the TCP input); when using the beats input, also publish port 5044 (add -p 5044:5044).
3 Setting up Kibana
Pull the image
docker pull docker.elastic.co/kibana/kibana:6.4.3
Configuration file
kibana.yml
server.host: "0.0.0.0"
elasticsearch.url: http://192.168.1.245:9200 # single-instance connection; Kibana talks to Elasticsearch over HTTP, so use port 9200, not the transport port 9300
#elasticsearch.hosts: ["192.168.48.17:9200","192.168.48.22:9200","192.168.48.18:9200"] # cluster connection; elasticsearch.hosts only exists from Kibana 6.6 onward, so 6.4.3 must use elasticsearch.url
xpack:
apm.ui.enabled: false
graph.enabled: false
ml.enabled: false
monitoring.enabled: false
reporting.enabled: false
security.enabled: false
grokdebugger.enabled: false
searchprofiler.enabled: false
4 Filebeat configuration on Kubernetes
Configuration file: filebeat-k8s.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: platform
data:
  filebeat.yml: |-
    filebeat.prospectors:
    - type: log
      paths:
      - /log/intelligent-logistics-platform/base-activiti/out*.log  # path inside the container
      #- /root/log/intelligent-logistics-platform/base-app-message/*.log
      #- /root/log/intelligent-logistics-platform/base-app-region/*.log
      #- /root/log/intelligent-logistics-platform/base-data-dict/*.log
      #- /root/log/intelligent-logistics-platform/base-gateway/*.log
      #- /root/log/intelligent-logistics-platform/base-id-center/*.log
      #- /root/log/intelligent-logistics-platform/base-logistics/*.log
      #- /root/log/intelligent-logistics-platform/base-message/*.log
      #- /root/log/intelligent-logistics-platform/base-third-party/*.log
      #- /root/log/intelligent-logistics-platform/base-uaa/*.log
      #- /root/log/intelligent-logistics-platform/base-upload-file/*.log
      tags: ["base-activiti"]
      fields_under_root: true
      fields:
        level: info
        app_name: base-activiti

    - type: log
      paths:
      - /log/intelligent-logistics-platform/base-app-message/out*.log
      tags: ["base-app-message"]
      fields_under_root: true
      fields:
        level: info
        app_name: base-app-message

    - type: log
      paths:
      - /log/intelligent-logistics-platform/base-app-region/out*.log
      tags: ["base-app-region"]
      fields_under_root: true
      fields:
        level: info
        app_name: base-app-region

    - type: log
      paths:
      - /log/intelligent-logistics-platform/base-data-dict/out*.log
      tags: ["base-data-dict"]
      fields_under_root: true
      fields:
        level: info
        app_name: base-data-dict

    - type: log
      paths:
      - /log/intelligent-logistics-platform/base-gateway/out*.log
      tags: ["base-gateway"]
      fields_under_root: true
      fields:
        level: info
        app_name: base-gateway

    - type: log
      paths:
      - /log/intelligent-logistics-platform/base-id-center/out*.log
      tags: ["base-id-center"]
      fields_under_root: true
      fields:
        level: info
        app_name: base-id-center

    - type: log
      paths:
      - /log/intelligent-logistics-platform/base-logistics/out*.log
      tags: ["base-logistics"]
      fields_under_root: true
      fields:
        level: info
        app_name: base-logistics

    - type: log
      paths:
      - /log/intelligent-logistics-platform/base-message/out*.log
      tags: ["base-message"]
      fields_under_root: true
      fields:
        level: info
        app_name: base-message

    - type: log
      paths:
      - /log/intelligent-logistics-platform/base-third-party/out*.log
      tags: ["base-third-party"]
      fields_under_root: true
      fields:
        level: info
        app_name: base-third-party

    - type: log
      paths:
      - /log/intelligent-logistics-platform/base-uaa/out*.log
      tags: ["base-uaa"]
      fields_under_root: true
      fields:
        level: info
        app_name: base-uaa

    - type: log
      paths:
      - /log/intelligent-logistics-platform/base-upload-file/out*.log
      tags: ["base-upload-file"]
      fields_under_root: true
      fields:
        level: info
        app_name: base-upload-file

    - type: log
      paths:
      - /log/intelligent-logistics-platform/nginx/*.log
      tags: ["nginx"]
      fields_under_root: true
      fields:
        level: info
        app_name: nginx

    # processors:
    # - drop_fields:
    #     fields: ["beat.hostname","beat.name","beat.version","offset","prospector.type"]

    output.logstash:
      hosts: ['logstash:5044']

---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: platform
  labels:
    logs: filebeat
spec:
  selector:
    matchLabels:
      logs: filebeat
  template:
    metadata:
      labels:
        logs: filebeat
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.4.3
        args: [
          "-c", "/usr/share/filebeat.yml",
          "-e",
        ]
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: config
          mountPath: /usr/share/filebeat.yml
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: app-logs
          mountPath: /log
        - name: timezone
          mountPath: /etc/localtime

      volumes:
      - name: config
        configMap:
          name: filebeat-config
      - name: data
        emptyDir: {}
      - name: app-logs
        hostPath:
          path: /root/log
          type: DirectoryOrCreate
      - name: timezone
        hostPath:
          path: /etc/localtime
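The ConfigMap above repeats an almost identical prospector block for each service. When the service list grows, a small generator script can produce those blocks from a name list; this is just a convenience sketch whose service list and path prefix mirror the config above (the nginx entry, whose path pattern differs, is left out):

```python
# Services that log to /log/intelligent-logistics-platform/<name>/out*.log
SERVICES = [
    "base-activiti", "base-app-message", "base-app-region", "base-data-dict",
    "base-gateway", "base-id-center", "base-logistics", "base-message",
    "base-third-party", "base-uaa", "base-upload-file",
]

TEMPLATE = """\
- type: log
  paths:
  - /log/intelligent-logistics-platform/{name}/out*.log
  tags: ["{name}"]
  fields_under_root: true
  fields:
    level: info
    app_name: {name}
"""

def prospectors(services=SERVICES):
    """Render one filebeat prospector block per service."""
    return "\n".join(TEMPLATE.format(name=s) for s in services)

print(prospectors())
```

Paste the generated blocks into the ConfigMap's filebeat.yml, then deploy with kubectl apply -f filebeat-k8s.yml.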
Run Kibana with Docker
docker run -d -p 5601:5601 -v /home/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml --name kibana docker.elastic.co/kibana/kibana:6.4.3
Log collection approaches
Logging is an indispensable part of any system. The official Kubernetes documentation describes several ways to collect logs, which boil down to the following three: the native approach, the DaemonSet approach, and the sidecar approach.
- Native: view the logs kept on the node directly with kubectl logs, or redirect them to files, syslog, fluentd, and similar systems through the Docker engine's log driver.
- DaemonSet: deploy a log agent on every Kubernetes node; the agent ships the logs of all containers on that node to the backend.
- Sidecar: run a sidecar log-agent container inside each pod to collect the logs produced by the pod's main container.
Comparison of collection approaches
| | Native | DaemonSet | Sidecar |
| --- | --- | --- | --- |
| Log types collected | stdout | stdout + some files | Files |
| Deployment and operations | Low effort; supported natively | Moderate; a DaemonSet must be maintained | High; every pod that needs log collection must carry a sidecar container |
| Per-application log separation | Not possible | Moderate; achievable by mapping containers/paths | Each pod is configured individually; very flexible |
| Multi-tenant isolation | Weak | Moderate; isolation only through configuration | Strong; isolated per container, with dedicated resources |
| Supported cluster size | No limit with local storage; syslog or fluentd introduce a single point of failure | Small to medium; at most on the order of hundreds of applications | No limit |
| Resource usage | Low; provided by the Docker engine | Fairly low; one container per node | Fairly high; one container per pod |
| Query convenience | Low | Fairly high; custom queries and statistics | High; can be tailored to the application |
| Customizability | Low | Low | High; each pod is configured separately |
| Suitable scenarios | Testing, POCs, and other non-production scenarios | Single-purpose clusters | Large, mixed, or PaaS-style clusters |
The table above shows that:
- The native approach is too limited and is generally not recommended for production systems; troubleshooting and statistics would otherwise be very hard to carry out.
- The DaemonSet approach runs only one log agent per node, so it uses far fewer resources, but its extensibility and tenant isolation are limited; it suits single-purpose clusters or clusters without many applications.
- The sidecar approach deploys a separate log agent for every pod, which uses more resources, but offers high flexibility and strong multi-tenant isolation; it is recommended for large Kubernetes clusters and for clusters that serve multiple teams as a PaaS platform.
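For comparison with the DaemonSet manifest in section 4, a sidecar deployment pairs each application container with its own Filebeat container sharing a log volume. A minimal sketch, in which the pod name, application image, and ConfigMap name are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: base-uaa
  namespace: platform
spec:
  containers:
  - name: app                        # main container writes its logs under /log
    image: registry.example.com/base-uaa:latest   # hypothetical image
    volumeMounts:
    - name: logs
      mountPath: /log
  - name: filebeat-sidecar           # sidecar reads the same volume
    image: docker.elastic.co/beats/filebeat:6.4.3
    args: ["-c", "/usr/share/filebeat.yml", "-e"]
    volumeMounts:
    - name: logs
      mountPath: /log
    - name: config
      mountPath: /usr/share/filebeat.yml
      subPath: filebeat.yml
  volumes:
  - name: logs
    emptyDir: {}
  - name: config
    configMap:
      name: filebeat-sidecar-config  # a per-application ConfigMap, not shown here
```

The per-pod ConfigMap is what gives this approach its flexibility and isolation, at the cost of one extra container per pod.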