Container Cloud Platform No.9 ~ Kubernetes Log Collection System: EFK

Introduction to EFK

EFK, short for Elasticsearch, Fluentd, and Kibana, is one of the most common log collection solutions for Kubernetes, and the one recommended by the official addons.
With EFK, all of the cluster's logs are collected into Elasticsearch, where they can be analyzed, typically for troubleshooting and data analysis.

Data flow diagram
(image: data flow from Fluentd on each node into Elasticsearch, visualized in Kibana)

Official project

https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch

Tip: if you only want a single directory from a GitHub repository, you can check it out with svn. Here we only need the fluentd-elasticsearch directory.

For example, the subdirectory we want is:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch

Replace /tree/master/ with /trunk/ and check it out with svn:
svn co https://github.com/kubernetes/kubernetes/trunk/cluster/addons/fluentd-elasticsearch
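
If svn is not available, a similar partial download can be done with git sparse checkout (git 2.25 or newer); this is just an alternative sketch, not what the original article uses:

git clone --filter=blob:none --sparse https://github.com/kubernetes/kubernetes.git
cd kubernetes
git sparse-checkout set cluster/addons/fluentd-elasticsearch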

Because this is a learning exercise, everything is installed step by step below; if you are interested, have a look at the official project.

Deploying Elasticsearch

The storage layer has to be deployed first, because the other two components need to connect to Elasticsearch when they start.
1. Create efk-es-statefulset.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: efk
---
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-logging
  namespace: efk
  labels:
    app: elasticsearch-logging
spec:
  selector:
    app: elasticsearch-logging
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: efk
spec:
  serviceName: elasticsearch-logging
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch-logging
  template:
    metadata:
      labels:
        app: elasticsearch-logging
    spec:
      initContainers:
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-logging
        image: docker.elastic.co/elasticsearch/elasticsearch:7.9.1
        ports:
        - name: rest
          containerPort: 9200
        - name: inter
          containerPort: 9300
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 1000m
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: cluster.initial_master_nodes
          value: "elasticsearch-logging-0,elasticsearch-logging-1,elasticsearch-logging-2"
        - name: discovery.zen.minimum_master_nodes
          value: "2"
        - name: discovery.seed_hosts
          value: "elasticsearch-logging"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
        - name: network.host
          value: "0.0.0.0"
      volumes:
      - name: elasticsearch-logging
        emptyDir: {}
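
Note that emptyDir means the Elasticsearch data disappears whenever a pod is rescheduled, which is acceptable for this walkthrough but not for real use. If the cluster has a StorageClass, the volumes section above could be replaced with volumeClaimTemplates on the StatefulSet; the sketch below is only an example, and managed-nfs-storage is a placeholder StorageClass name:

  # sits at the same indentation level as serviceName/replicas/template
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-logging        # keeps the existing volumeMounts working
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: managed-nfs-storage   # placeholder: use your own StorageClass
      resources:
        requests:
          storage: 10Gi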

2. Run the deployment
All of the components in this article go into the efk namespace.
Note: if the image pulls take too long, pull the images onto the nodes yourself first (a sketch follows below); otherwise the pods may never start successfully.
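Assuming the nodes use Docker as the container runtime (the Fluentd DaemonSet later in this article mounts /var/lib/docker/containers, so that matches), the images from the manifests could be pulled in advance on each node:

docker pull docker.elastic.co/elasticsearch/elasticsearch:7.9.1
docker pull docker.elastic.co/kibana/kibana:7.9.1
docker pull registry.cn-qingdao.aliyuncs.com/up2cloud/fluentd:v3.0.2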

[root@k8s-master001 EFK]# kubectl  apply -f efk-es-statefulset.yaml

[root@k8s-node001 EFK]# kubectl  get po -n efk
NAME                      READY   STATUS    RESTARTS   AGE
elasticsearch-logging-0   1/1     Running   0          10m
elasticsearch-logging-1   1/1     Running   0          10m
elasticsearch-logging-2   1/1     Running   0          9m42s
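
If any pod gets stuck in Pending or CrashLoopBackOff, the usual first checks are the pod events and the Elasticsearch logs:

kubectl describe po elasticsearch-logging-0 -n efk    # scheduling and image-pull events
kubectl logs elasticsearch-logging-0 -n efk           # Elasticsearch startup errors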

3. Verify that Elasticsearch is running
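The check below is run from a shell inside one of the Elasticsearch pods; any of the three replicas will do, for example:

kubectl exec -it elasticsearch-logging-0 -n efk -- sh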

sh-4.2# curl http://localhost:9200/_cluster/state?pretty
{
  "cluster_name" : "k8s-logs",
  "cluster_uuid" : "OLzzi6sbSZG11bqBFM9z5Q",
  "version" : 38,
  "state_uuid" : "uFa1_QKgRAK_NJ33SArGDw",
  "master_node" : "XiShXS0DSGmx0Dxp1r9vEw",
  "blocks" : { },
  "nodes" : {
    "XN-vHccLRkaEgr9Q1cctNA" : {
      "name" : "elasticsearch-logging-2",
      "ephemeral_id" : "WBEY2tGNRzmc3cBDJAEP9Q",
      "transport_address" : "100.108.163.2:9300",
      "attributes" : {
        "ml.machine_memory" : "16630661120",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true",
        "transform.node" : "true"
      }
    },
.................
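
Besides the full cluster state, the _cluster/health endpoint gives a quicker summary; with three nodes and default settings the status should eventually report "green":

sh-4.2# curl http://localhost:9200/_cluster/health?pretty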

That's it for Elasticsearch. Next up is Kibana.

Deploying Kibana

1. Create efk-kibana.yaml

apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: efk
  labels:
    app: kibana-logging
spec:
  ports:
  - port: 5601
  type: NodePort
  selector:
    app: kibana-logging

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: efk
  labels:
    app: kibana-logging
spec:
  selector:
    matchLabels:
      app: kibana-logging
  template:
    metadata:
      labels:
        app: kibana-logging
    spec:
      containers:
      - name: kibana-logging
        image: docker.elastic.co/kibana/kibana:7.9.1
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 1000m
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch-logging:9200
        ports:
        - containerPort: 5601
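
Kubernetes picks a random port from the NodePort range (32352 in the output further down). If you would rather have a fixed, predictable port, it could be pinned in the Service spec; 30601 below is just an arbitrary value inside the default 30000-32767 range:

  ports:
  - port: 5601
    nodePort: 30601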

2. Run the deployment

[root@k8s-node001 EFK]# kubectl  apply -f efk-kibana.yaml
service/kibana-logging created
deployment.apps/kibana-logging created

[root@k8s-node001 EFK]# kubectl  get po,svc -n efk
NAME                              READY   STATUS    RESTARTS   AGE
kibana-logging-6b5f984c44-7ljjn   1/1     Running   0          8m16s

NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/elasticsearch-logging   ClusterIP   None            <none>        9200/TCP,9300/TCP   28m
service/kibana-logging          NodePort    10.105.208.90   <none>        5601:32352/TCP      13m

3. Verify Kibana
The service is already exposed via NodePort, so the Kibana web UI can be reached at <node IP>:32352, as shown in the screenshot.
(screenshot: Kibana web UI)
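
If you do not want to copy the port from the table above, it can also be read straight from the Service:

kubectl get svc kibana-logging -n efk -o jsonpath='{.spec.ports[0].nodePort}'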

Next, deploy the log collection agent, Fluentd.

Deploying Fluentd

1. Create the Fluentd configuration as a ConfigMap
The configuration is quite long, so it is not reproduced here; see the link:

https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml
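
One thing to watch for: the upstream manifest may declare namespace: kube-system in its metadata, while everything in this article lives in the efk namespace (and the DaemonSet below looks for the ConfigMap there). If that is the case for the copy you downloaded, change the namespace before applying, for example:

sed -i 's/namespace: kube-system/namespace: efk/' fluentd-es-configmap.yaml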

2. Apply it

[root@k8s-node001 EFK]# kubectl apply -f fluentd-es-configmap.yaml

3. Create fluentd-es-ds.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: efk
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: efk
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es-v3.0.2
  namespace: efk
  labels:
    k8s-app: fluentd-es
    version: v3.0.2
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v3.0.2
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        version: v3.0.2
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-node-critical
      serviceAccountName: fluentd-es
      containers:
      - name: fluentd-es
        image: registry.cn-qingdao.aliyuncs.com/up2cloud/fluentd:v3.0.2
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
        ports:
        - containerPort: 24231
          name: prometheus
          protocol: TCP
        livenessProbe:
          tcpSocket:
            port: prometheus
          initialDelaySeconds: 5
          timeoutSeconds: 10
        readinessProbe:
          tcpSocket:
            port: prometheus
          initialDelaySeconds: 5
          timeoutSeconds: 10
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-es-config-v0.2.0
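
As written, the DaemonSet is not scheduled onto nodes that carry NoSchedule taints, which typically includes control-plane/master nodes. If logs from those nodes are wanted as well, tolerations can be added under the pod spec; a minimal sketch, assuming the usual master taint keys:

      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule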

4. Apply it

[root@k8s-node001 EFK]# kubectl apply -f fluentd-es-ds.yaml

5. Check the result

[root@k8s-node001 EFK]# kubectl  get po -n efk 
NAME                              READY   STATUS    RESTARTS   AGE
elasticsearch-logging-0           1/1     Running   0          3h34m
elasticsearch-logging-1           1/1     Running   0          3h33m
elasticsearch-logging-2           1/1     Running   0          3h33m
fluentd-es-v3.0.2-24lbr           1/1     Running   0          26m
fluentd-es-v3.0.2-5qcsv           1/1     Running   0          26m
fluentd-es-v3.0.2-gnp58           1/1     Running   0          26m
fluentd-es-v3.0.2-gtx4s           1/1     Running   0          26m
fluentd-es-v3.0.2-mxz9t           1/1     Running   0          26m
kibana-logging-6b5f984c44-7ljjn   1/1     Running   0          3h19m

As the output shows, the whole log collection stack is up and running, and Kibana can now be used to browse the collected logs.
(screenshot: collected logs in Kibana)
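
In Kibana, open Stack Management -> Index Patterns and create a pattern for the Fluentd indices. Assuming the upstream fluentd-es config keeps logstash_format enabled, the indices are named logstash-YYYY.MM.DD, so a pattern of logstash-* matches them; which indices actually exist can be checked from inside an Elasticsearch pod:

sh-4.2# curl http://localhost:9200/_cat/indices?v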

That completes the log collection system. More uses for EFK will be covered in later articles; you can also explore the official documentation on your own.

