Deploying ELFK 7.x + X-Pack on Kubernetes

Deploy an Elasticsearch cluster on Kubernetes as a StatefulSet, with Filebeat running as a sidecar. Latest version at the time of writing: 7.10.1.


namespace

mkdir -p /home/k8s/elfk/{elasticsearch-head,elasticsearch,logstash,kibana,filebeat}

cd /home/k8s/elfk

vim namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: log


pv

mount -t nfs -o vers=4,minorversion=0,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport xxx.cn-hangzhou.nas.aliyuncs.com:/ /mnt

mkdir -p /mnt/elfk-data/{elasticsearch-0,elasticsearch-1,elasticsearch-2}

vim alicloud-nas-elfk-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: alicloud-nas-elfk-pv0
  labels:
    alicloud-pvname: alicloud-nas-elfk-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: nasplugin.csi.alibabacloud.com
    volumeHandle: alicloud-nas-elfk-pv0
    volumeAttributes:
      server: "xxx.cn-hangzhou.nas.aliyuncs.com"
      path: "/elfk-data/elasticsearch-0"
  mountOptions:
  - nolock,tcp,noresvport
  - vers=4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: alicloud-nas-elfk-pv1
  labels:
    alicloud-pvname: alicloud-nas-elfk-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: nasplugin.csi.alibabacloud.com
    volumeHandle: alicloud-nas-elfk-pv1
    volumeAttributes:
      server: "xxx.cn-hangzhou.nas.aliyuncs.com"
      path: "/elfk-data/elasticsearch-1"
  mountOptions:
  - nolock,tcp,noresvport
  - vers=4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: alicloud-nas-elfk-pv2
  labels:
    alicloud-pvname: alicloud-nas-elfk-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: nasplugin.csi.alibabacloud.com
    volumeHandle: alicloud-nas-elfk-pv2
    volumeAttributes:
      server: "xxx.cn-hangzhou.nas.aliyuncs.com"
      path: "/elfk-data/elasticsearch-2"
  mountOptions:
  - nolock,tcp,noresvport
  - vers=4
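After applying this manifest, a quick check (a sketch; assumes kubectl access to the cluster) is to list the PVs by their shared label, which is the same label the StatefulSet's volumeClaimTemplates selector later matches on:

```shell
# List the three ELFK PVs via the label referenced by the PVC selector;
# all three should show up as Available before the StatefulSet is created.
kubectl get pv -l alicloud-pvname=alicloud-nas-elfk-pv
```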


elasticsearch-head

vim elasticsearch-head/head.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: head
  namespace: log
spec:
  rules:
  - host: head.lzxlinux.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: head
          servicePort: 9100
---
apiVersion: v1
kind: Service
metadata:
  name: head
  namespace: log
  labels:
    app: head
spec:
  selector:
    app: head
  ports:
  - port: 9100
    protocol: TCP
    targetPort: 9100
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: head
  namespace: log
  labels:
    app: head
spec:
  replicas: 1
  selector:
    matchLabels:
      app: head
  template:
    metadata:
      labels:
        app: head
    spec:
      containers:
      - name: head
        image: mobz/elasticsearch-head:5
        resources:
          limits:
            cpu: 200m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9100


elasticsearch

vim elasticsearch/elasticsearch.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: elasticsearch
  namespace: log
spec:
  rules:
  - host: elasticsearch.lzxlinux.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: elasticsearch
          servicePort: 9200
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: log
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  ports:
  - name: api
    port: 9200
  - name: discovery
    port: 9300
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: log
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch
        image: elasticsearch:7.10.1
        env:
        - name: "cluster.name"
          value: "elk"
        - name: "node.master"
          value: "true"
        - name: "node.data"
          value: "true"
        - name: "http.host"
          value: "0.0.0.0"
        - name: "network.host"
          value: "_eth0_"
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: "bootstrap.memory_lock"
          value: "false"
        - name: "http.port"
          value: "9200"
        - name: "transport.tcp.port"
          value: "9300"
        - name: "discovery.seed_hosts"
          value: "elasticsearch"
        - name: "cluster.initial_master_nodes"
          value: "elasticsearch-0,elasticsearch-1,elasticsearch-2"
        - name: "discovery.seed_resolver.timeout"
          value: "10s"
        - name: "discovery.zen.minimum_master_nodes"
          value: "2"
        - name: "gateway.recover_after_nodes"
          value: "3"
        - name: "http.cors.enabled"
          value: "true"
        - name: "http.cors.allow-origin"
          value: "*"
        - name: "http.cors.allow-headers"
          value: "Authorization,X-Requested-With,Content-Length,Content-Type"
        - name: "xpack.security.enabled"
          value: "true"
        - name: "xpack.security.transport.ssl.enabled"
          value: "true"
        - name: "xpack.security.transport.ssl.verification_mode"
          value: "certificate"
        - name: "xpack.security.transport.ssl.keystore.path"
          value: "elastic-certificates.p12"
        - name: "xpack.security.transport.ssl.truststore.path"
          value: "elastic-certificates.p12"
        - name: "ES_JAVA_OPTS"
          value: "-Xms512m -Xmx512m"
        ports:
        - containerPort: 9200
          name: api
          protocol: TCP
        - containerPort: 9300
          name: discovery
          protocol: TCP
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 500m
        volumeMounts:
        - name: cert
          mountPath: /usr/share/elasticsearch/config/elastic-certificates.p12
          subPath: elastic-certificates.p12
          readOnly: true
        - name: data
          mountPath: /usr/share/elasticsearch/data
      terminationGracePeriodSeconds: 30
      volumes:
      - name: cert
        configMap:
          name: elastic-certificates
  volumeClaimTemplates:
  - metadata:
      name: data
      namespace: log
    spec:
      selector:
        matchLabels:
          alicloud-pvname: alicloud-nas-elfk-pv
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 500Gi


kibana

vim kibana/config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-config
  namespace: log
data:
  kibana.yml: |
    server.port: 5601
    server.host: "0"
    kibana.index: ".kibana"
    elasticsearch.hosts: ["http://elasticsearch:9200"]
    elasticsearch.username: kibana_system
    elasticsearch.password: elk-2021
    i18n.locale: "zh-CN"

vim kibana/kibana.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: log
spec:
  rules:
  - host: kibana.lzxlinux.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana
          servicePort: 5601
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: log
  labels:
    app: kibana
spec:
  selector:
    app: kibana
  ports:
  - port: 5601
    protocol: TCP
    targetPort: 5601
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: log
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.10.1
        ports:
        - containerPort: 5601
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 500m
            memory: 500Mi
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config
      volumes:
      - name: config
        configMap:
          name: kibana-config


logstash

vim logstash/config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: log
data:
  logstash.yml: |
    http.host: "0"
    http.port: 9600
    path.config: /usr/share/logstash/pipeline
    config.reload.automatic: true
    xpack.monitoring.enabled: true
    xpack.monitoring.elasticsearch.username: logstash_system
    xpack.monitoring.elasticsearch.password: elk-2021
    xpack.monitoring.elasticsearch.hosts: ["http://elasticsearch:9200"]
    xpack.monitoring.collection.interval: 10s

  logstash.conf: |
    input {
      beats {
        port => 5040
      }
    }

    filter {
      if [type] == "nginx_access" {
        grok {
            match => { "message" => "%{COMBINEDAPACHELOG}" }
        }
      }
      if [type] == "tomcat_access" {
        grok {
            match => { "message" => "(?<timestamp>%{MONTHDAY}-%{MONTH}-%{YEAR} %{TIME}) %{LOGLEVEL:level} \[%{DATA:exception_info}\] %{GREEDYDATA:message}" }
        }
      }
    }

    output {
      if [type] == "nginx_access" {
        elasticsearch {
          hosts => ["elasticsearch:9200"]
          user => "elastic"
          password => "elk-2021"
          index => "nginx-log"
        }
      }
      if [type] == "tomcat_access" {
        elasticsearch {
          hosts => ["elasticsearch:9200"]
          user => "elastic"
          password => "elk-2021"
          index => "tomcat-log"
        }
      }
    }
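As a rough sanity check of the timestamp layout the Tomcat grok pattern above expects (`%{MONTHDAY}-%{MONTH}-%{YEAR} %{TIME}`, i.e. something like `13-Jan-2021 10:15:30.123`), a plain extended regex approximation can be tried against a sample catalina line. This is illustrative only; grok's MONTHDAY/MONTH/TIME patterns accept more variants than this approximation:

```shell
# Approximate the "%{MONTHDAY}-%{MONTH}-%{YEAR} %{TIME}" prefix with a plain ERE
# and confirm a sample Tomcat log line starts with it. The sample line is made up.
line='13-Jan-2021 10:15:30.123 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 1234 ms'
echo "$line" | grep -Eq '^[0-9]{1,2}-[A-Za-z]{3}-[0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2}' && echo matched
# prints: matched
```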

vim logstash/logstash.yaml

apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: log
spec:
  selector:
    app: logstash
  ports:
  - protocol: TCP
    port: 5040
    nodePort: 30040
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: log
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: logstash:7.10.1
        ports:
        - containerPort: 9600
        - containerPort: 5040
        resources:
          limits:
            cpu: 300m
            memory: 1000Mi
          requests:
            cpu: 300m
            memory: 500Mi
        volumeMounts:
          - name: config
            mountPath: /usr/share/logstash/config
          - name: pipeline
            mountPath: /usr/share/logstash/pipeline
      volumes:
      - name: config
        configMap:
          name: logstash-config
          items:
          - key: logstash.yml
            path: logstash.yml
      - name: pipeline
        configMap:
          name: logstash-config
          items:
          - key: logstash.conf
            path: logstash.conf


filebeat

Filebeat is deployed as a sidecar to collect nginx and Tomcat logs.

vim filebeat/filebeat-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: default
  labels:
    app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        path: ${path.config}/inputs.d/*.yml
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false

    filebeat.inputs:
    - type: log
      enabled: true
      tail_files: true
      backoff: "1s"
      paths:
        - /nginxlog/*.log
      fields:
        pod_name: '${pod_name}'
        POD_IP: '${POD_IP}'
        type: nginx_access
      fields_under_root: true
      multiline.pattern: '\d+\.\d+\.\d+\.\d+'
      multiline.negate: true
      multiline.match: after
    - type: log
      enabled: true
      tail_files: true
      backoff: "1s"
      paths:
        - /tomcatlog/*.log
      fields:
        pod_name: '${pod_name}'
        POD_IP: '${POD_IP}'
        type: tomcat_access
      fields_under_root: true
      multiline.pattern: '\d+\-\w+\-\d+ \d+:\d+:\d+\.\d+'
      multiline.negate: true
      multiline.match: after

    output.logstash:
      hosts: ["logstash.log:5040"]
      enabled: true
      worker: 1
      compression_level: 3
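Note how the multiline settings work: multiline.pattern marks the start of a new event, and with negate: true plus match: after, any line that does not match is appended to the preceding event. A rough grep illustration of the nginx start-of-event pattern, using made-up sample lines:

```shell
# Count lines that would start a new nginx event under the multiline pattern:
# only the first line (starting with an IPv4-like token) matches; the second
# would be folded into the previous event by Filebeat.
printf '%s\n' \
  '192.168.30.1 - - [01/Jan/2021:00:00:00 +0800] "GET / HTTP/1.1" 200 612' \
  'java.lang.RuntimeException: a continuation line without an address' \
  | grep -Ec '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+'
# prints: 1
```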

vim filebeat/filebeat-nginx.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    nodePort: 30080
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 1
  minReadySeconds: 15
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.0
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-log
          mountPath: /var/log/nginx
      - name: filebeat
        image: elastic/filebeat:7.10.1
        args: [
          "-c", "/etc/filebeat/filebeat.yml",
          "-e",
        ]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: pod_name
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: nginx-log
          mountPath: /nginxlog
      volumes:
      - name: config
        configMap:
          name: filebeat-config
          items:
          - key: filebeat.yml
            path: filebeat.yml
      - name: data
        emptyDir: {}
      - name: nginx-log
        emptyDir: {}

vim filebeat/filebeat-tomcat.yaml

apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: default
  labels:
    app: tomcat
spec:
  selector:
    app: tomcat
  ports:
  - port: 8080
    nodePort: 30880
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  namespace: default
spec:
  replicas: 1
  minReadySeconds: 15
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: tomcat
        image: tomcat:8.0.51-alpine
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: tomcat-log
          mountPath: /usr/local/tomcat/logs
      - name: filebeat
        image: elastic/filebeat:7.10.1
        args: [
          "-c", "/etc/filebeat/filebeat.yml",
          "-e",
        ]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: pod_name
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: tomcat-log
          mountPath: /tomcatlog
      volumes:
      - name: config
        configMap:
          name: filebeat-config
          items:
          - key: filebeat.yml
            path: filebeat.yml
      - name: data
        emptyDir: {}
      - name: tomcat-log
        emptyDir: {}


Deployment

tree .

.
├── alicloud-nas-elfk-pv.yaml
├── elasticsearch
│   ├── elastic-certificates.p12
│   └── elasticsearch.yaml
├── elasticsearch-head
│   └── head.yaml
├── filebeat
│   ├── filebeat-config.yaml
│   ├── filebeat-nginx.yaml
│   └── filebeat-tomcat.yaml
├── kibana
│   ├── config.yaml
│   └── kibana.yaml
├── logstash
│   ├── config.yaml
│   └── logstash.yaml
└── namespace.yaml

5 directories, 12 files

kubectl apply -f namespace.yaml

kubectl create configmap -n log elastic-certificates --from-file=elastic-certificates.p12=elasticsearch/elastic-certificates.p12
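The elastic-certificates.p12 referenced here has to be generated beforehand. One common way (a sketch, not the only option) is Elasticsearch's bundled elasticsearch-certutil, run inside any Elasticsearch 7.x installation or container:

```shell
# Sketch: create a CA, then a node certificate bundle signed by it.
# An empty --pass produces passwordless PKCS#12 files, matching the manifest
# above, which configures no keystore/truststore password.
/usr/share/elasticsearch/bin/elasticsearch-certutil ca --out elastic-stack-ca.p12 --pass ""
/usr/share/elasticsearch/bin/elasticsearch-certutil cert \
  --ca elastic-stack-ca.p12 --ca-pass "" \
  --out elastic-certificates.p12 --pass ""
```

Copy the resulting elastic-certificates.p12 into the elasticsearch/ directory before creating the ConfigMap.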

kubectl apply -f alicloud-nas-elfk-pv.yaml

kubectl apply -f elasticsearch-head/

kubectl apply -f elasticsearch/

kubectl exec -it -n log elasticsearch-0 -- bash

/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive              # set custom passwords

Enter password for [elastic]: elk-2021
Reenter password for [elastic]: elk-2021
Enter password for [apm_system]: elk-2021
Reenter password for [apm_system]: elk-2021
Enter password for [kibana]: elk-2021
Reenter password for [kibana]: elk-2021
Enter password for [logstash_system]: elk-2021
Reenter password for [logstash_system]: elk-2021
Enter password for [beats_system]: elk-2021
Reenter password for [beats_system]: elk-2021
Enter password for [remote_monitoring_user]: elk-2021
Reenter password for [remote_monitoring_user]: elk-2021
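Once the passwords are set, they can be verified against the cluster health API (a sketch; assumes the elastic password chosen above and curl being available in the Elasticsearch image):

```shell
# Query cluster health with the newly set elastic credentials;
# a "status" of green or yellow means the cluster is up and auth works.
kubectl exec -n log elasticsearch-0 -- \
  curl -s -u elastic:elk-2021 http://localhost:9200/_cluster/health?pretty
```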

Custom passwords only need to be set the first time the cluster is created. As long as the backend storage behind the ES cluster's PVs remains intact, there is no need to set the passwords again even if the ES cluster is later rebuilt, unless of course you want to change them.

kubectl apply -f kibana/
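If Kibana fails to come up, a quick sanity check (a sketch) is to confirm that the kibana_system credentials from the ConfigMap actually authenticate against Elasticsearch:

```shell
# _authenticate echoes back the effective user when the credentials are valid;
# an authentication error here means the kibana.yml password is wrong.
kubectl exec -n log elasticsearch-0 -- \
  curl -s -u kibana_system:elk-2021 http://localhost:9200/_security/_authenticate?pretty
```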

kubectl apply -f logstash/

kubectl apply -f filebeat/

kubectl get all -n log

NAME                            READY   STATUS    RESTARTS   AGE
pod/elasticsearch-0             1/1     Running   0          6h5m
pod/elasticsearch-1             1/1     Running   0          6h4m
pod/elasticsearch-2             1/1     Running   0          6h4m
pod/head-5c85b8d699-t9d4w       1/1     Running   0          3d8h
pod/kibana-5cfb7767fd-5x9vb     1/1     Running   0          3h45m
pod/logstash-746ddb77cc-bch5j   1/1     Running   0          25m
pod/logstash-746ddb77cc-mzl8c   1/1     Running   0          25m
pod/logstash-746ddb77cc-t9hvp   1/1     Running   0          25m

NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/elasticsearch   ClusterIP   None             <none>        9200/TCP,9300/TCP   6h5m
service/head            ClusterIP   10.103.231.228   <none>        9100/TCP            3d8h
service/kibana          ClusterIP   10.102.83.26     <none>        5601/TCP            32h
service/logstash        NodePort    10.111.98.56     <none>        5040:30040/TCP      25m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/head       1/1     1            1           3d8h
deployment.apps/kibana     1/1     1            1           32h
deployment.apps/logstash   3/3     3            3           25m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/head-5c85b8d699       1         1         1       3d8h
replicaset.apps/kibana-5cfb7767fd     1         1         1       3h45m
replicaset.apps/logstash-746ddb77cc   3         3         3       25m

NAME                             READY   AGE
statefulset.apps/elasticsearch   3/3     6h5m


Testing

  • Access head:

Pick any node IP and add the following hosts entries locally:

192.168.30.130 elasticsearch.lzxlinux.cn
192.168.30.130 head.lzxlinux.cn
192.168.30.130 kibana.lzxlinux.cn

With X-Pack enabled, accessing head requires appending the username and password:

http://head.lzxlinux.cn/?auth_user=elastic&auth_password=elk-2021

Connecting to ES directly via http://elasticsearch:9200/ returns an error, while connecting via http://elasticsearch.lzxlinux.cn/ works fine.


  • Access Kibana:


  • Collect nginx logs:

Access the nginx page to generate some logs: curl ip:30080


Collecting nginx logs works without issues.

  • Collect Tomcat logs:

Access the Tomcat page to generate some logs: curl ip:30880


Collecting Tomcat logs works without issues.

  • Kibana @timestamp timezone issue:

When Kibana displays logs, the @timestamp timezone may not match the timestamp in the log lines. In Stack Management, under Advanced Settings, change "Timezone for date formatting" from Browser to UTC.

If that still has no effect, you can adjust the Logstash filter section:

filter {
    grok {
        match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level}" ]
    }

    date {
        match => [ "timestamp", "YYYY-MM-dd HH:mm:ss Z" ]
    }
}

After that, @timestamp in Kibana reflects the current timezone.
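The underlying issue is purely one of display: the same instant renders differently in UTC and UTC+8. For example (GNU date assumed):

```shell
# The same instant shown in UTC is 8 hours behind its +08:00 representation,
# which is why @timestamp can look "off" when timezones are mixed.
TZ=UTC date -d '2021-01-13 10:15:30 +0800' '+%Y-%m-%d %H:%M:%S'
# prints: 2021-01-13 02:15:30
```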

In addition, if Filebeat is not deployed via Kubernetes, it can connect to Logstash at the public IP on port 30040, while port 5040 in the Logstash configuration stays unchanged.

This completes the ELFK 7.x + X-Pack deployment on Kubernetes. The manifests are stored in my personal GitHub repo: kubernetes

