Deploying NFS storage for Kubernetes

Kubernetes can consume NFS shared storage in two ways:

  • Static: manually create the required PV and PVC.
  • Dynamic: create a PVC and let the corresponding PV be provisioned automatically, with no manual PV creation.

For testing purposes, we temporarily run the NFS server on the master node.

On the master-1 node:

# Install NFS on the master node

yum -y install nfs-utils

 

# Create the NFS directory

mkdir -p /nfs/data/

 

# Open up permissions on the share

chmod -R 777 /nfs/data

 

# Edit the exports file, NFS's default configuration file

vim /etc/exports

/nfs/data *(rw,no_root_squash,sync)
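For reference, the export options used above mean the following:

# rw              clients may read and write
# no_root_squash  remote root keeps root privileges on the share (convenient for testing, unsafe in production)
# sync            the server replies only after writes are committed to stable storage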

# Make the configuration take effect

exportfs -r

 

# Verify the active exports

exportfs

 

# Start the rpcbind and nfs services and enable them at boot

systemctl restart rpcbind && systemctl enable rpcbind

systemctl restart nfs && systemctl enable nfs

 

# Check the RPC service registration

rpcinfo -p localhost

# Test with showmount

showmount -e 192.168.0.248
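If the export is working, the output should look roughly like this:

Export list for 192.168.0.248:
/nfs/data *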

Install the NFS client on all worker nodes and enable it at boot:

yum -y install nfs-utils
systemctl start nfs && systemctl enable nfs
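Optionally, verify from any node that the export can actually be mounted (using a hypothetical mount point /mnt/nfstest):

# Mount the export, confirm it is visible, then unmount
mkdir -p /mnt/nfstest
mount -t nfs 192.168.0.248:/nfs/data /mnt/nfstest
df -h /mnt/nfstest
umount /mnt/nfstest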

With this preparation done, we have an NFS server on the k8s-master node (192.168.0.248) exporting the directory /nfs/data.

Static PV provisioning

Create a directory for each PV to serve as its export point. Here we create one PV, so we add one directory.

# Create the directory backing the PV

mkdir -p /nfs/data/pv001

# Update /etc/exports (arguably unnecessary, since the parent directory /nfs/data is already exported)

vim /etc/exports

/nfs/data *(rw,no_root_squash,sync)

/nfs/data/pv001 *(rw,no_root_squash,sync)

# Make the configuration take effect

exportfs -r

# Restart the rpcbind and nfs services

systemctl restart rpcbind && systemctl restart nfs

 

Create the PV

vim nfs-pv001.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv001
  labels:
    pv: nfs-pv001
spec:
  capacity:
    storage: 200M              # volume size: 200M
  accessModes:
    - ReadWriteOnce            # read-write, mountable by a single node
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    server: 192.168.0.248
    path: /nfs/data/pv001

Notes on this configuration:

  • capacity sets the PV's size, here 200M.
  • accessModes sets the access mode to ReadWriteOnce. The supported modes are:
    • ReadWriteOnce – the PV can be mounted read-write by a single node.
    • ReadOnlyMany – the PV can be mounted read-only by many nodes.
    • ReadWriteMany – the PV can be mounted read-write by many nodes.
  • persistentVolumeReclaimPolicy sets the reclaim policy to Recycle. The supported policies are:
    • Retain – the administrator must reclaim the volume manually.
    • Recycle – the PV's data is scrubbed, equivalent to running rm -rf /thevolume/*.
    • Delete – the corresponding resource on the storage provider (AWS EBS, GCE PD, Azure Disk, OpenStack Cinder volume, etc.) is deleted.
  • storageClassName sets the PV's class to nfs. This effectively categorizes the PV; a PVC can request a specific class and will only bind to PVs of that class.
  • nfs specifies the NFS server and the PV's directory on it.

Apply it:

[root@sam-master-1 ~]# kubectl apply -f nfs-pv001.yaml

persistentvolume/nfs-pv001 created

[root@sam-master-1 ~]# kubectl get pv

NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv001   200M       RWO            Recycle          Available           nfs                     17s

STATUS is Available, meaning the PV is ready and can be claimed by a PVC.

Create the PVC

Next, create a PVC named nfs-pvc001. The manifest nfs-pvc001.yaml is as follows:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc001
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200M
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nfs-pv001

[root@sam-master-1 ~]# kubectl apply -f nfs-pvc001.yaml

persistentvolumeclaim/nfs-pvc001 created

[root@sam-master-1 ~]#

[root@sam-master-1 ~]# kubectl get pvc

NAME         STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc001   Bound    nfs-pv001   200M       RWO            nfs            6s

[root@sam-master-1 ~]#

[root@sam-master-1 ~]# kubectl get pv

NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
nfs-pv001   200M       RWO            Recycle          Bound    default/nfs-pvc001   nfs                     7m44s

The kubectl get pvc and kubectl get pv output shows that nfs-pvc001 is bound to nfs-pv001: the claim succeeded. Note that the PVC is pinned to this particular PV through the label selector; without a selector, the PVC binds to an arbitrary PV that satisfies the request.

Verification

Now use the storage in a Pod. The Pod manifest nfs-pod001.yaml is as follows:

kind: Pod
apiVersion: v1
metadata:
  name: nfs-pod001
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: nfs-pv001
  volumes:
    - name: nfs-pv001
      persistentVolumeClaim:
        claimName: nfs-pvc001

The format is similar to using an ordinary volume: in volumes, persistentVolumeClaim points at the volume claimed by nfs-pvc001.

Apply the manifest to create nfs-pod001:

[root@sam-master-1 ~]# kubectl apply -f nfs-pod001.yaml

pod/nfs-pod001 created

[root@sam-master-1 ~]# kubectl get pod |grep nfs

nfs-pod001   1/1   Running   0   26s

Verify that the PV is usable:

[root@sam-master-1 ~]# kubectl exec nfs-pod001 touch /var/www/html/index001.html

kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.

[root@sam-master-1 ~]#

[root@sam-master-1 ~]# ls /nfs/data/pv001/

index001.html

[root@sam-master-1 ~]#

Enter the pod and check the mount.
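A quick way to do this (the volume mount should point at the NFS export and contain the index001.html created above):

kubectl exec -it nfs-pod001 -- df -h /var/www/html    # mount should show 192.168.0.248:/nfs/data/pv001
kubectl exec -it nfs-pod001 -- ls /var/www/html       # should list index001.html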

 

 

Delete the Pod

Deleting the Pod does not delete the PV or the PVC, and the data on the NFS share is not deleted.

[root@sam-master-1 ~]# kubectl delete -f nfs-pod001.yaml

pod "nfs-pod001" deleted

[root@sam-master-1 ~]# kubectl get pv

NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
nfs-pv001   200M       RWO            Recycle          Bound    default/nfs-pvc001   nfs                     17m

[root@sam-master-1 ~]#

[root@sam-master-1 ~]# kubectl get pvc

NAME         STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc001   Bound    nfs-pv001   200M       RWO            nfs            12m

[root@sam-master-1 ~]#

[root@sam-master-1 ~]# ls /nfs/data/pv001/

index001.html

[root@sam-master-1 ~]#

Delete the PVC

After the PVC is deleted, the PV is released and returns to the Available state, and because the reclaim policy is Recycle, the data on the NFS share is scrubbed.
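The deletion step itself (not shown in the transcript below) is simply:

kubectl delete -f nfs-pvc001.yaml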

[root@sam-master-1 ~]# kubectl get pv

NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv001   200M       RWO            Recycle          Available           nfs                     22m

[root@sam-master-1 ~]#

[root@sam-master-1 ~]# ls /nfs/data/pv001/

[root@sam-master-1 ~]#

[root@sam-master-1 ~]#

Finally, delete the PV itself:

[root@sam-master-1 ~]# kubectl delete -f nfs-pv001.yaml

persistentvolume "nfs-pv001" deleted

[root@sam-master-1 ~]#

Dynamic PV provisioning

External NFS provisioners for Kubernetes fall into two categories, depending on whether the driver acts as an NFS server itself or as an NFS client against an existing server.

This article covers the second kind, nfs-client-provisioner, which uses an existing NFS server as a persistent-storage backend for Kubernetes and provisions PVs dynamically. The prerequisites are an NFS server that is already installed and reachable over the network from the Kubernetes worker nodes. The nfs-client driver is deployed into the cluster as a Deployment and then serves storage requests.

nfs-client-provisioner is a simple external provisioner for Kubernetes: it does not provide NFS itself and relies on an existing NFS server for storage.

Deploy nfs-client-provisioner

Run the following on the master, i.e. 192.168.0.248.

Upstream: https://github.com/kubernetes-incubator/external-storage.git

Edit deployment.yaml

Two values need to be changed: the IP address of the NFS server (192.168.0.248) and the path it exports (/nfs/data). Replace both with your actual NFS server and share.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.0.248
            - name: NFS_PATH
              value: /nfs/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.248
            path: /nfs/data

Apply the manifest:

kubectl apply -f deployment.yaml
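Once the ServiceAccount and RBAC objects shown further below are also in place, check that the provisioner pod comes up (the pod name suffix will differ in your cluster):

kubectl get pods -l app=nfs-client-provisioner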

Create the StorageClass

When defining the StorageClass, note that the provisioner field must equal the value of the PROVISIONER_NAME environment variable passed to the driver; otherwise the driver will not know how to serve the StorageClass. You can leave it unchanged, or rename the provisioner, as long as it matches PROVISIONER_NAME in the Deployment above.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match the Deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"

Apply the manifest:

[root@sam-master-1 ~]# kubectl apply -f class.yaml

[root@sam-master-1 ~]# kubectl get sc

NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  141m
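Optionally, mark this class as the cluster default so that PVCs without an explicit class use it:

kubectl patch storageclass managed-nfs-storage -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'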

Configure RBAC

If the cluster has RBAC enabled (the default since Kubernetes 1.6), the provisioner must be granted the following permissions. Nothing needs to be changed here.

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Apply the manifest:

kubectl create -f rbac.yaml

Testing

Reference: https://github.com/kubernetes-retired/external-storage/blob/master/nfs-client/deploy/test-claim.yaml

Create test-claim.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

kubectl create -f test-claim.yaml

Check the created PVC

The PVC status is Bound, and the bound volume is pvc-c6a546cd-68a9-4d73-8eec-f0426df62a9d:

[root@sam-master-1 ~]# kubectl get pvc |grep test-claim

test-claim   Bound   pvc-c6a546cd-68a9-4d73-8eec-f0426df62a9d   1Mi   RWX   managed-nfs-storage   18s

[root@sam-master-1 ~]#
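On the NFS server, nfs-client-provisioner creates one directory per provisioned volume, named ${namespace}-${pvcName}-${pvName}. For this claim you should see something like:

ls /nfs/data/
# default-test-claim-pvc-c6a546cd-68a9-4d73-8eec-f0426df62a9d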

Create a test Pod

test-pod.yaml:

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: willdockerhub/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
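Apply it and verify that the SUCCESS file lands on the NFS share (directory name pattern as noted above):

kubectl create -f test-pod.yaml
kubectl get pod test-pod                 # should finish with status Completed
ls /nfs/data/default-test-claim-*/       # SUCCESS should be listed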

Clean up the test environment

Delete the test Pod:

kubectl delete -f test-pod.yaml

Delete the test PVC:

kubectl delete -f test-claim.yaml

Problems encountered

[root@sam-master-1 ~]# kubectl describe pvc test-claim
Name:          test-claim
Namespace:     default
StorageClass:  managed-nfs-storage
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-class: managed-nfs-storage
               volume.beta.kubernetes.io/storage-provisioner: fuseim.pri/ifs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                Age               From                         Message
  ----    ------                ---               ----                         -------
  Normal  ExternalProvisioning  8s (x3 over 35s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "fuseim.pri/ifs" or manually created by system administrator
[root@sam-master-1 ~]#
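When a PVC hangs in Pending like this, a good first step is to check the provisioner's own logs for errors:

kubectl logs -l app=nfs-client-provisioner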

The problem is discussed at https://github.com/kubernetes-incubator/external-storage/issues/754.

One workaround suggested there is to apply the following ClusterRoleBinding. Note that it grants cluster-admin to the default service account, which is acceptable in a lab but far too broad for production:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-admin-rbac   # or any name you like
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
