ECK, short for Elastic Cloud on Kubernetes, builds on the Kubernetes operator pattern to automate the deployment, management, and orchestration of Elasticsearch, Kibana, and APM Server on a Kubernetes cluster.
ECK currently offers the following features:
- Deployment of Elasticsearch, Kibana, and APM Server;
- TLS certificate management;
- Safe Elasticsearch cluster configuration and topology change management;
- Use of PVs and PVCs;
- Custom system configuration and settings on K8s nodes.
We will deploy the ES components using the operator approach, with NFS as the backing storage.
The steps are as follows:
1. First, deploy the ECK operator using its all-in-one.yaml file.
File location: https://download.elastic.co/downloads/eck/1.0.1/all-in-one.yaml
kubectl apply -f all-in-one.yaml
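After applying the manifest, the operator runs in the elastic-system namespace (the default created by the 1.0.1 all-in-one manifest). A quick check that it is up, assuming those default names:

```bash
# The elastic-operator pod should reach the Running state
kubectl -n elastic-system get pods

# Follow the operator logs to watch it reconcile resources
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
```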
2. Use nfs-client-provisioner to give the cluster dynamic PVC provisioning.
kubectl apply -f nfs-dynamic-pvc.yaml
(https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client)
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: demo
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: demo
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: demo
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: demo
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: demo
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: demo
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: harbor.demo.cn/3rd_part/nfs-client-provisioner:v5.5.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 1.2.3.4
            - name: NFS_PATH
              value: /nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 1.2.3.4
            path: /nfs
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: demo-es-nfs-storage
  namespace: demo
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
reclaimPolicy: Retain
parameters:
  archiveOnDelete: "false"
```
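Before handing the StorageClass to Elasticsearch, it can be sanity-checked with a throwaway claim. The manifest below is only an illustration (the name test-nfs-claim is made up for this test); if dynamic provisioning works, the claim becomes Bound and a matching directory shows up under /nfs:

```yaml
# Hypothetical test claim: verifies that demo-es-nfs-storage can provision volumes
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-claim
  namespace: demo
spec:
  storageClassName: demo-es-nfs-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Check it with `kubectl -n demo get pvc test-nfs-claim` and delete the claim afterwards.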
3. Deploy ES using the deploy.yaml file.
kubectl apply -f deploy.yaml
```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: eck-es-demo
  namespace: demo
  labels:
    elasticsearch.k8s.elastic.co/cluster-name: es1
spec:
  version: 7.5.2
  image: harbor.demo.cn/official_hub/elasticsearch:7.5.2
  http:
    tls:
      selfSignedCertificate:
        disabled: true
    service:
      spec:
        ports:
          - name: http
            port: 9200
            targetPort: 9200
          - name: transport
            port: 9300
            targetPort: 9300
  nodeSets:
    - name: eck-es
      count: 3
      config:
        node.master: true
        node.data: true
        node.ingest: true
        node.store.allow_mmap: false
        xpack.security.authc:
          anonymous:
            username: anonymous
            roles: superuser
            authz_exception: false
      podTemplate:
        spec:
          # volumes:
          #   - name: elasticsearch-data
          #     emptyDir: {}
          containers:
            - name: elasticsearch
              env:
                - name: ES_JAVA_OPTS
                  value: -Xms2g -Xmx2g
              resources:
                limits:
                  memory: 4Gi
                  cpu: 2
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
            storageClassName: demo-es-nfs-storage
```
If everything works, kubectl will show the PV and PVC objects, and the corresponding data directories will appear under the NFS export.
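A few verification commands, assuming the names in deploy.yaml above. ECK exposes HTTP through a Service named `<cluster-name>-es-http`, here eck-es-demo-es-http; since TLS is disabled and anonymous requests are mapped to superuser in the config above, plain unauthenticated HTTP works from inside the cluster (curlimages/curl is just an example client image):

```bash
# Health of the Elasticsearch custom resource (HEALTH should turn green)
kubectl -n demo get elasticsearch eck-es-demo

# Pods, claims, and the dynamically provisioned volumes
kubectl -n demo get pods,pvc
kubectl get pv

# Query the cluster from a temporary pod inside Kubernetes
kubectl -n demo run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -s "http://eck-es-demo-es-http:9200/_cluster/health?pretty"
```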
4. TodoList.
If a unified username and password needs to be provided for ES later, there is no such setting in the yaml above; the suggested approach is to modify the ES image directly and bake the username and password into it.
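As a side note (not a replacement for the approach above): ECK generates a random password for the built-in elastic user and stores it in a secret named `<cluster-name>-es-elastic-user`. Assuming the eck-es-demo cluster from deploy.yaml, it can be read like this:

```bash
# Print the auto-generated password of the built-in "elastic" user
kubectl -n demo get secret eck-es-demo-es-elastic-user \
  -o go-template='{{.data.elastic | base64decode}}'
```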
Appendix:
Also attached is a yaml that uses NFS as a static PV (not used in this deployment, because ES under ECK does not support static PVCs).
nfs-static-pvc.yaml (the PV and PVC are matched automatically; once the PVC exists it can be mounted into a Pod, as sketched after the yaml below)
```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-pv-nfs-pv01
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /nfs
    server: 1.2.3.4
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elasticsearch-data
  namespace: demo-eck
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```
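A minimal sketch of the mounting step mentioned above; the pod name, image, and mount path are only illustrative:

```yaml
# Example only: a throwaway pod that mounts the statically bound PVC
apiVersion: v1
kind: Pod
metadata:
  name: nfs-static-pvc-test
  namespace: demo-eck
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: elasticsearch-data
```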