[Single-master k8s deployment] 24. Building an EFK log collection platform (Part 3)

Kibana

Kibana is the visualization UI for Elasticsearch.

First, create the Service for Kibana; the YAML file is shown below. Kubernetes Services come in four types. ClusterIP assigns the Service only a virtual IP inside the cluster, used for pod-to-pod communication, and exposes nothing externally; the Elasticsearch Service is a ClusterIP (a headless one here, since its CLUSTER-IP shows None). NodePort additionally opens a port on every node to the outside world, so Kibana uses NodePort. The remaining two types are LoadBalancer and ExternalName.

Kibana deployment code
[root@master 31efk]# kubectl get svc -n kube-logging31
NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
elasticsearch31   ClusterIP   None          <none>        9200/TCP,9300/TCP   37d
kibana31          NodePort    10.96.72.47   <none>        5601:30159/TCP      33m
[root@master 31efk]# cat kibana_svc.yaml 
apiVersion: v1 
kind: Service 
metadata: 
 name: kibana31 
 namespace: kube-logging31 
 labels: 
   app: kibana 
spec: 
 type: NodePort
 ports: 
 - port: 5601 
 selector: 
   app: kibana
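With the Service applied, Kibana is reachable on any node's IP at the randomly assigned NodePort (30159 in the output above). A quick sketch of applying and checking it; the node IP below is a placeholder you would replace with one of your own:

```shell
# Apply the Service and confirm the NodePort Kubernetes assigned to it.
kubectl apply -f kibana_svc.yaml
kubectl get svc kibana31 -n kube-logging31

# A NodePort is opened on every node, so any node IP works.
# 192.168.1.10 is a placeholder for one of your node IPs.
curl -I http://192.168.1.10:30159
```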

The Kibana pod connects to Elasticsearch through the ELASTICSEARCH_URL environment variable, which points at the Elasticsearch Service; the Service then routes traffic to whichever Elasticsearch pods are actually running. This communication stays inside the cluster via the ClusterIP: Kibana never needs to know any Elasticsearch pod's IP address, only the Service's DNS name.
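The in-cluster DNS name follows the pattern `<service>.<namespace>.svc.cluster.local`, which you can verify from inside any pod. A sketch, assuming the pod name from the earlier output and that curl is available in the Kibana image:

```shell
# Resolve and query the Elasticsearch Service by its cluster DNS name
# from inside the Kibana pod (substitute your own pod name).
kubectl exec -it kibana31-859c9dcfb7-ptl9z -n kube-logging31 -- \
  curl -s http://elasticsearch31.kube-logging31.svc.cluster.local:9200
```

A successful response is the usual Elasticsearch JSON banner with the cluster name and version.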

[root@master 31efk]# cat kibana_deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana31
  namespace: kube-logging31
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.2.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 200m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch31.kube-logging31.svc.cluster.local:9200
        ports:
        - containerPort: 5601
Troubleshooting

During deployment we found that Kibana could not connect to the Elasticsearch Service. Inspecting the Kibana pod's logs showed that Kibana was not using the Elasticsearch service name defined in our YAML file, but the default name instead:

elasticsearch:9200

[root@master 31efk]# kubectl exec -it kibana31-859c9dcfb7-ptl9z -n kube-logging31 -- cat /usr/share/kibana/config/kibana.yml
#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true

So in the YAML file, add another environment variable below ELASTICSEARCH_URL (Kibana 7.x reads ELASTICSEARCH_HOSTS to populate elasticsearch.hosts):

        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch31.kube-logging31.svc.cluster.local:9200
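After editing the manifest, re-apply it so the Deployment rolls out a new pod, then confirm the variable actually landed in the container. A sketch; the pod name is a placeholder:

```shell
# Roll out the corrected Deployment.
kubectl apply -f kibana_deploy.yaml
kubectl get pods -n kube-logging31

# Verify both variables are set inside the new pod
# (replace <kibana-pod> with the actual pod name).
kubectl exec -it <kibana-pod> -n kube-logging31 -- env | grep ELASTICSEARCH
```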
Kibana UI

Kibana and Elasticsearch are now connected successfully.

fluentd

Fluentd deployment code
Part 1

First, set up the RBAC binding for fluentd. It needs a ServiceAccount and a ClusterRole that can read pods and namespaces (fluentd uses this to enrich log records with Kubernetes metadata):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging31
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-logging31
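The binding can be checked before deploying the DaemonSet. A sketch, assuming the RBAC manifest above is saved as fluentd_rbac.yaml (filename is an assumption):

```shell
# Apply the ServiceAccount, ClusterRole, and ClusterRoleBinding.
kubectl apply -f fluentd_rbac.yaml

# Verify the ServiceAccount is allowed to list pods cluster-wide;
# this should print "yes".
kubectl auth can-i list pods \
  --as=system:serviceaccount:kube-logging31:fluentd
```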
Part 2

Deploy the fluentd pods with a DaemonSet, and mount each node's system logs and container logs into the pod via hostPath volumes:
path: /var/log
path: /var/lib/docker/containers

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging31
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
        imagePullPolicy: IfNotPresent
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch31.kube-logging31.svc.cluster.local"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENTD_SYSTEMD_CONF
            value: disable
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
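A DaemonSet schedules exactly one fluentd pod per node, and the toleration above lets it run on the master as well. A sketch of applying and checking it, assuming the manifest is saved as fluentd_daemonset.yaml (filename is an assumption):

```shell
# Apply the DaemonSet and confirm one fluentd pod lands on each node.
kubectl apply -f fluentd_daemonset.yaml
kubectl get daemonset fluentd -n kube-logging31
kubectl get pods -n kube-logging31 -o wide | grep fluentd
```

DESIRED, CURRENT, and READY in the DaemonSet output should all equal your node count.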
Deployment result
[root@master 31efk]# kubectl get pods -n kube-logging31
NAME                        READY   STATUS    RESTARTS   AGE
elastic-cluster31-0         1/1     Running   0          3h31m
elastic-cluster31-1         1/1     Running   0          3h31m
elastic-cluster31-2         1/1     Running   0          3h31m
fluentd-8255m               1/1     Running   0          3m58s
fluentd-8rwbz               1/1     Running   0          3m58s
fluentd-sffhh               1/1     Running   0          3m58s
kibana31-8669689dbf-mvntd   1/1     Running   0          101m
Kibana UI

After some configuration, the Kibana UI now displays the log data.

The index here is named logstash-* because, by default, fluentd writes logs to Elasticsearch using the same index naming convention as Logstash (logstash-YYYY.MM.DD).
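This is the logstash_format behavior of fluentd's Elasticsearch output plugin, which this DaemonSet image enables by default. The daily indices can be listed from inside the cluster; a sketch, assuming the Elasticsearch pod name from the earlier output:

```shell
# List the daily logstash-* indices fluentd has created,
# querying Elasticsearch from inside one of its own pods.
kubectl exec -it elastic-cluster31-0 -n kube-logging31 -- \
  curl -s 'http://localhost:9200/_cat/indices/logstash-*?v'
```

These index names are what you enter as the index pattern when configuring Kibana.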
