Creating LVM Volumes from Local Disks in an ACK Cluster

The CSI storage plugin provided by the Alibaba Cloud Container Service team supports many storage services in container scenarios, including cloud disks, NAS, OSS, LVM, and memory volumes. This article describes how to use the CSI plugin to automatically provision LVM volumes on Alibaba Cloud local disks.

Introduction:

LVM volumes are one type of local storage. The CSI plugin manages several kinds of local volumes through a single driver, localplugin.csi.alibabacloud.com, and LVM is one of the storage types handled by this local plugin. The plugin consists of two parts: a ControllerServer and a NodeServer.

ControllerServer: implements creation and deletion of LVM volumes.

NodeServer: implements mounting, unmounting, formatting, LVM creation, and cleanup of LVM volumes.

The LVM features currently supported include:

Full lifecycle management of LVM volumes;

Basic operations such as formatting, mounting, and deleting LVM volumes;

Automatic VG creation;

Automatic expansion of LVM volumes;

Prerequisites:

A Kubernetes cluster; the Alibaba Cloud ACK service is recommended;

ECS nodes with local disks added to the cluster;

Cluster version 1.14 or later;
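
Before deploying the plugin, a quick check with standard kubectl commands (nothing plugin-specific is assumed here) confirms the cluster version and that the nodes have joined:

# kubectl version
# kubectl get node -o wide    # the VERSION column should report v1.14 or later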

Deployment:

Deploy the CSI NodeServer (csi-local-plugin):

apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: localplugin.csi.alibabacloud.com
spec:
  attachRequired: false
  podInfoOnMount: true
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-local-plugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-local-plugin
  template:
    metadata:
      labels:
        app: csi-local-plugin
    spec:
      tolerations:
        - operator: Exists
      serviceAccount: admin
      priorityClassName: system-node-critical
      hostNetwork: true
      hostPID: true
      containers:
        - name: driver-registrar
          image: registry.cn-hangzhou.aliyuncs.com/acs/csi-node-driver-registrar:v1.1.0
          imagePullPolicy: Always
          args:
            - "--v=5"
            - "--csi-address=/csi/csi.sock"
            - "--kubelet-registration-path=/var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com/csi.sock"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration

        - name: csi-localplugin
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: registry.cn-hangzhou.aliyuncs.com/plugins/csi-plugin:v1.14-9fa71837c
          imagePullPolicy: "Always"
          args:
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--v=5"
            - "--nodeid=$(KUBE_NODE_NAME)"
            - "--driver=localplugin.csi.alibabacloud.com"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix://var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com/csi.sock
          volumeMounts:
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet
              mountPropagation: "Bidirectional"
            - mountPath: /dev
              mountPropagation: "HostToContainer"
              name: host-dev
            - mountPath: /var/log/
              name: host-log
      volumes:
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry
            type: DirectoryOrCreate
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet
            type: Directory
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-log
          hostPath:
            path: /var/log/
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 10%
    type: RollingUpdate
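
Assuming the CSIDriver and DaemonSet manifests above are saved to a file named csi-local-plugin.yaml (the file name is only an example), apply them and check that a plugin pod starts on every node:

# kubectl apply -f csi-local-plugin.yaml
# kubectl -n kube-system get ds csi-local-plugin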

Deploy the ControllerServer (csi-local-provisioner):

kind: Deployment
apiVersion: apps/v1
metadata:
  name: csi-local-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-local-provisioner
  replicas: 2
  template:
    metadata:
      labels:
        app: csi-local-provisioner
    spec:
      tolerations:
      - operator: "Exists"
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
      priorityClassName: system-node-critical
      serviceAccount: admin
      hostNetwork: true
      containers:
        - name: external-local-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/acs/csi-provisioner:v1-5f99079e0-ack
          args:
            - "--csi-address=$(ADDRESS)"
            - "--feature-gates=Topology=True"
            - "--volume-name-prefix=local"
            - "--strict-topology=true"
            - "--timeout=150s"
            - "--extra-create-metadata=true"
            - "--enable-leader-election=true"
            - "--leader-election-type=leases"
            - "--retry-interval-start=500ms"
            - "--v=5"
          env:
            - name: ADDRESS
              value: /socketDir/csi.sock
          imagePullPolicy: "Always"
          volumeMounts:
            - name: socket-dir
              mountPath: /socketDir
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com
            type: DirectoryOrCreate
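
Likewise, assuming the Deployment above is saved as csi-local-provisioner.yaml (again, an example file name), apply it and wait for the rollout to finish:

# kubectl apply -f csi-local-provisioner.yaml
# kubectl -n kube-system rollout status deployment/csi-local-provisioner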

Once both components are deployed, the plugin pods should all be Running:

# kubectl get pod -n kube-system | grep local
csi-local-plugin-54zvk                             2/2     Running   0          48m
csi-local-plugin-dxxq9                             2/2     Running   0          48m
csi-local-plugin-f5gr4                             2/2     Running   0          48m
csi-local-plugin-fq88g                             2/2     Running   0          48m
csi-local-plugin-tn5vh                             2/2     Running   0          48m
csi-local-provisioner-759699c8d4-6srrq             1/1     Running   3          3h42m
csi-local-provisioner-759699c8d4-fxlb5             1/1     Running   3          3h42m
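
Optionally, confirm that the CSIDriver object defined earlier is registered in the cluster:

# kubectl get csidriver localplugin.csi.alibabacloud.com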

Usage:

1. Add a node with local disks to the cluster:

# kubectl get node
cn-shanghai.192.168.2.70   Ready    <none>   86s     v1.16.6-aliyun.1

# ls /dev/vd*
/dev/vda  /dev/vda1  /dev/vdb  /dev/vdc

Running vgdisplay on the node shows no volume groups yet:
# vgdisplay
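
If you want to see which block devices are the local data disks before any VG exists, lsblk gives a quick overview (in this example /dev/vdb and /dev/vdc are the disks that will later back the volume group):

# lsblk /dev/vdb /dev/vdc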

2. Create a StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
    name: csi-local
provisioner: localplugin.csi.alibabacloud.com
parameters:
    volumeType: LVM
    vgName: volumegroup1
    fsType: ext4
    pvType: "localdisk"
    LvmType: "striping"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
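
Assuming the StorageClass above is saved as storageclass.yaml (example file name), create it and verify it is registered; the parameters are explained below:

# kubectl apply -f storageclass.yaml
# kubectl get sc csi-local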

provisioner: the driver used to provision the volume, i.e. the local storage driver;

volumeType: which local storage type to use; LVM is selected here;

vgName: the name of the volume group (VG) that the LVM volume is created in; if it does not yet exist on the node, the plugin creates it automatically;

fsType: the file system type used to format the LVM volume;

pvType: the type of underlying data disk used to build the VG; localdisk indicates Alibaba Cloud local disks;

LvmType: the LVM allocation policy; linear and striping are supported;

volumeBindingMode: WaitForFirstConsumer enables delayed binding, i.e. the PV is only provisioned once a pod actually uses the PVC;

allowVolumeExpansion: whether automatic volume expansion is allowed;
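
Because allowVolumeExpansion is enabled, a bound LVM volume can later be grown by simply raising the PVC's storage request. A minimal sketch, using the lvm-pvc created in step 3 below and an example target size of 4Gi:

# kubectl patch pvc lvm-pvc -p '{"spec":{"resources":{"requests":{"storage":"4Gi"}}}}'
# kubectl get pvc lvm-pvc    # CAPACITY updates once the plugin has expanded the LV and its file system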

3. Create a PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: csi-local

# kubectl create -f pvc.yaml
persistentvolumeclaim/lvm-pvc created

Because of delayed binding, the PVC stays Pending until a pod consumes it; only then is the PV provisioned:
# kubectl get pvc
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvm-pvc   Pending                                      csi-local      11s
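
To see why the PVC is still Pending, describe it; the Events section normally shows a WaitForFirstConsumer message saying the claim is waiting for a pod before binding:

# kubectl describe pvc lvm-pvc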

4. Create a pod that consumes the PVC:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-lvm
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        volumeMounts:
          - name: lvm-pvc
            mountPath: "/data"
      volumes:
        - name: lvm-pvc
          persistentVolumeClaim:
            claimName: lvm-pvc

Create the pod:

# kubectl create -f deploy.yaml
deployment.apps/deployment-lvm created

# kubectl get pvc
NAME      STATUS   VOLUME                                       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvm-pvc   Bound    local-d451fbc2-3e13-4105-9e58-8ffaf5837fbf   2Gi        RWO            csi-local      47s

# kubectl get pod
NAME                             READY   STATUS    RESTARTS   AGE
deployment-lvm-9f798687c-4h2bs   1/1     Running   0          56s
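
The mount can also be verified from inside the pod (the pod name is taken from the output above; substitute your own):

# kubectl exec deployment-lvm-9f798687c-4h2bs -- df -h /data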

The LVM volume (PV) details are as follows:
# kubectl get pv local-d451fbc2-3e13-4105-9e58-8ffaf5837fbf -oyaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: localplugin.csi.alibabacloud.com
  name: local-d451fbc2-3e13-4105-9e58-8ffaf5837fbf
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  csi:
    driver: localplugin.csi.alibabacloud.com
    fsType: ext4
    volumeAttributes:
      LvmTypeTag: striping
      csi.storage.k8s.io/pv/name: local-d451fbc2-3e13-4105-9e58-8ffaf5837fbf
      csi.storage.k8s.io/pvc/name: lvm-pvc
      csi.storage.k8s.io/pvc/namespace: default
      fsType: ext4
      pvType: localdisk
      storage.kubernetes.io/csiProvisionerIdentity: 1584985469710-8081-localplugin.csi.alibabacloud.com
      vgName: volumegroup1
      volume.beta.kubernetes.io/storage-provisioner: localplugin.csi.alibabacloud.com
      volume.kubernetes.io/selected-node: cn-shanghai.192.168.2.70
      volumeType: LVM
    volumeHandle: local-d451fbc2-3e13-4105-9e58-8ffaf5837fbf
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - cn-shanghai.192.168.2.70
  persistentVolumeReclaimPolicy: Delete
  storageClassName: csi-local

5. Log in to the node and verify the LVM volume:

The VG information is as follows:

# vgdisplay
  --- Volume group ---
  VG Name               volumegroup1
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               207.99 GiB
  PE Size               4.00 MiB
  Total PE              53246
  Alloc PE / Size       512 / 2.00 GiB
  Free  PE / Size       52734 / 205.99 GiB
  VG UUID               04A5WZ-6dtd-LVPe-EvUh-n9uG-4ndy-fvRfaf

The PV (physical volume) information is as follows:

# pvdisplay
  --- Physical volume ---
  PV Name               /dev/vdb
  VG Name               volumegroup1
  PV Size               104.00 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              26623
  Free PE               26111
  Allocated PE          512
  PV UUID               nBDn6O-7lf7-333d-PjvR-mezx-js6Y-pOOd57

  --- Physical volume ---
  PV Name               /dev/vdc
  VG Name               volumegroup1
  PV Size               104.00 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              26623
  Free PE               26623
  Allocated PE          0
  PV UUID               9ELK0x-Gw6e-VAUq-DcP2-kYgR-Mxva-pAUB48  

The LV (logical volume) information is as follows:

# lvdisplay
  --- Logical volume ---
  LV Path                /dev/volumegroup1/local-d451fbc2-3e13-4105-9e58-8ffaf5837fbf
  LV Name                local-d451fbc2-3e13-4105-9e58-8ffaf5837fbf
  VG Name                volumegroup1
  LV UUID                ZHb6N9-XN7t-DxER-OWi0-rUXi-XcbL-YoY1aK
  LV Write Access        read/write
  LV Creation host, time iZuf667ifeuz5zz5j9oteeZ, 2020-03-24 01:44:31 +0800
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:0

6. Delete the Pod and PVC:

# kubectl delete -f deploy.yaml
deployment.apps "deployment-lvm" deleted

# kubectl delete -f pvc.yaml
persistentvolumeclaim "lvm-pvc" deleted


On the node, check the volume group again; the space allocated to the LVM volume has been released:
# vgdisplay
  --- Volume group ---
  VG Name               volumegroup1
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               207.99 GiB
  PE Size               4.00 MiB
  Total PE              53246
  Alloc PE / Size       0 / 0
  Free  PE / Size       53246 / 207.99 GiB
  VG UUID               04A5WZ-6dtd-LVPe-EvUh-n9uG-4ndy-fvRfaf

The LVM logical volume itself has been deleted:
# lvdisplay
#