Upgrading the Alicloud-Nas-Controller Plugin

If you are running an older version of alicloud-nas-controller in your ACK cluster, you can upgrade the component as follows.

Current cluster state:

Suppose you are using an early version of the NAS controller, such as:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: alicloud-nas-controller
  name: alicloud-nas-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: alicloud-nas-controller
  template:
    metadata:
      labels:
        app: alicloud-nas-controller
    spec:
      containers:
      - env:
        - name: PROVISIONER_NAME
          value: alicloud/nas
        - name: NFS_SERVER
          value: 2564f49129-**.cn-shenzhen.nas.aliyuncs.com
        - name: NFS_PATH
          value: /
        image: registry.cn-hangzhou.aliyuncs.com/acs/alicloud-nas-controller:v3.1.0-k8s1.11
        imagePullPolicy: IfNotPresent
        name: alicloud-nas-controller
        volumeMounts:
        - mountPath: /persistentvolumes
          name: nfs-client-root
      serviceAccount: admin
      serviceAccountName: admin
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      - effect: NoSchedule
        key: node.cloudprovider.kubernetes.io/uninitialized
        operator: Exists
      volumes:
      - flexVolume:
          driver: alicloud/nas
          options:
            path: /
            server: 2564f49129-**.cn-shenzhen.nas.aliyuncs.com
            vers: "4.0"
        name: nfs-client-root

The StorageClass is configured as follows:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas
mountOptions:
- vers=4.0
provisioner: alicloud/nas
reclaimPolicy: Retain

An application in the cluster has created a PVC/PV through the controller configuration above and mounted it into a pod:

# kubectl describe pod web-nas-1 | grep ClaimName
    ClaimName:  html-web-nas-1

# kubectl get pvc html-web-nas-1
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
html-web-nas-1   Bound    pvc-2612b272-14e7-11ea-a9b7-00163e084110   2Gi        RWO            alicloud-nas   85m

At this point the PV is configured with NFS v4 mount options:
# kubectl get pv pvc-2612b272-14e7-11ea-a9b7-00163e084110 -oyaml | grep mountOptions -A 6
  mountOptions:
  - vers=4.0
  nfs:
    path: /default-html-web-nas-1-pvc-2612b272-14e7-11ea-a9b7-00163e084110
    server: 2564f49129-**.cn-shenzhen.nas.aliyuncs.com
  persistentVolumeReclaimPolicy: Retain
  storageClassName: alicloud-nas

New mount requirements for the PV

Log in to the node where the pod runs and check the NAS mount options:

# mount | grep nfs
2564f49129-**.cn-shenzhen.nas.aliyuncs.com:/default-html-web-nas-1-pvc-2612b272-14e7-11ea-a9b7-00163e084110 on /var/lib/kubelet/pods/32222e36-14f2-11ea-a9b7-00163e084110/volumes/kubernetes.io~nfs/pvc-2612b272-14e7-11ea-a9b7-00163e084110 type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.25,local_lock=none,addr=192.168.1.152)

If you want to change the NAS mount options, modify the PV's mountOptions field, for example to:

- nolock,tcp,noresvport
- vers=3

Run the edit command:

# kubectl edit pv pvc-2612b272-14e7-11ea-a9b7-00163e084110

Update the mountOptions field to:

  mountOptions:
  - nolock,tcp,noresvport
  - vers=3
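The same change can also be made non-interactively. A sketch using kubectl patch (the PV name is the one from the example above; verify the name in your own cluster first):

```shell
# JSON merge patch that sets the new mount options on the PV
patch='{"spec":{"mountOptions":["nolock,tcp,noresvport","vers=3"]}}'

# Apply it from a machine with cluster access (PV name from the example above):
#   kubectl patch pv pvc-2612b272-14e7-11ea-a9b7-00163e084110 --type merge -p "$patch"
```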

Restart the pod:

# kubectl delete pod web-nas-1

Log in to the node where the pod runs and check the NAS mount options again; noresvport and the other new options are now in place:
# mount | grep nfs
2564f49129-**.cn-shenzhen.nas.aliyuncs.com:/default-html-web-nas-1-pvc-2612b272-14e7-11ea-a9b7-00163e084110 on /var/lib/kubelet/pods/ba374a37-14f3-11ea-a9b7-00163e084110/volumes/kubernetes.io~nfs/pvc-2612b272-14e7-11ea-a9b7-00163e084110 type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.152,mountvers=3,mountport=4002,mountproto=tcp,local_lock=all,addr=192.168.1.152)
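Rather than reading the raw mount output by eye, a small shell function can report whether every NAS mount on the node already carries noresvport. This is a hypothetical helper, not part of the plugin:

```shell
# check_noresvport: read "mount" output on stdin and print OK when every
# aliyun NAS mount carries the noresvport option, MISSING otherwise
check_noresvport() {
  if grep 'nas.aliyuncs.com' | grep -v 'noresvport' | grep -q .; then
    echo "MISSING"
  else
    echo "OK"
  fi
}

# usage on the node:  mount | check_noresvport
```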

With the method above, you can update the mount options of existing NAS PVs and make the changes take effect by restarting the pods. (Note: the noresvport option only takes effect under certain conditions; consult NAS technical support for details.)

Upgrade the NAS Controller:

The steps above only update the mount options of PVs that the old controller already provisioned; PVs created later will pick up the new mount options automatically only after the NAS controller itself is upgraded.

New controller template:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: alicloud-nas-controller
  namespace: kube-system
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: alicloud-nas-controller
    spec:
      tolerations:
      - operator: Exists
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
      priorityClassName: system-node-critical
      serviceAccount: admin
      hostNetwork: true
      containers:
        - name: nfs-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/acs/alicloud-nas-controller:v1.14.3.8-58bf821-aliyun
          env:
          - name: PROVISIONER_NAME
            value: alicloud/nas
          securityContext:
            privileged: true
          volumeMounts:
          - mountPath: /var/log
            name: log
      volumes:
      - hostPath:
          path: /var/log
        name: log

Recreate the NAS controller with the following commands:

# kubectl delete deploy alicloud-nas-controller -n kube-system
# kubectl create -f controller.yaml

Update the StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas
mountOptions:
- nolock,tcp,noresvport
- vers=3
parameters:
  server: "2564f49129-**.cn-shenzhen.nas.aliyuncs.com:/"
  driver: nfs
provisioner: alicloud/nas
reclaimPolicy: Retain

Recreate the StorageClass with the following commands:

# kubectl delete sc alicloud-nas
# kubectl create -f storageclass.yaml

Verification:

Scale up the application so a new PVC and PV are created:

# kubectl get pv default-html-web-nas-5-pvc-91f37aa0-14f6-11ea-a9b7-00163e084110 -oyaml| grep mountOptions -A 6
  mountOptions:
  - nolock,tcp,noresvport
  - vers=3
  nfs:
    path: /default-html-web-nas-5-pvc-91f37aa0-14f6-11ea-a9b7-00163e084110
    server: 2564f49129-**.cn-shenzhen.nas.aliyuncs.com
  persistentVolumeReclaimPolicy: Retain
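As a quick programmatic check of the same thing, a helper (the name is made up) can scan a PV manifest from stdin for both new option strings:

```shell
# has_new_opts: read a PV manifest on stdin and print OK when both of the
# new mount options are present (a simple text match, a sketch)
has_new_opts() {
  manifest=$(cat)
  if printf '%s\n' "$manifest" | grep -q 'nolock,tcp,noresvport' &&
     printf '%s\n' "$manifest" | grep -q 'vers=3'; then
    echo "OK"
  else
    echo "MISSING"
  fi
}

# usage:  kubectl get pv <pv-name> -oyaml | has_new_opts
```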

Check the pod's mount information; noresvport and the other options have been applied successfully:

2564f49129-**.cn-shenzhen.nas.aliyuncs.com:/default-html-web-nas-5-pvc-91f37aa0-14f6-11ea-a9b7-00163e084110 on /var/lib/kubelet/pods/4bc4bb3e-14f7-11ea-a9b7-00163e084110/volumes/kubernetes.io~nfs/default-html-web-nas-5-pvc-91f37aa0-14f6-11ea-a9b7-00163e084110 type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.152,mountvers=3,mountport=4002,mountproto=tcp,local_lock=all,addr=192.168.1.152)