An emptyDir Volume shares its lifecycle with its Pod: when the Pod is destroyed, the Volume is destroyed with it. (hostPath and network-backed volumes, covered below, outlive the Pod.)
Create a volume.yaml file containing one Volume and two containers. One container uses the centos image and appends "hello" to /pod-data/index.html; the trailing sleep 3000 keeps the container from exiting immediately. The other container uses the nginx image, so requests served from /usr/share/nginx/html read the files the centos container put under /pod-data. The two containers share the same Volume.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: muti-container
  name: muti-container
spec:
  containers:
  - image: nginx:1.9
    name: nginx
    volumeMounts:
    - name: task-pv-storage
      mountPath: "/usr/share/nginx/html"
    resources: {}
  - image: centos
    name: centos
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "echo hello >> /pod-data/index.html && sleep 3000"
    volumeMounts:
    - name: task-pv-storage
      mountPath: "/pod-data"
    resources: {}
  volumes:
  - name: task-pv-storage
    emptyDir: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
kubectl get pod -o wide
kubectl exec -it muti-container -c centos -- sh
kubectl exec -it muti-container -c nginx -- sh
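Instead of an interactive shell, the sharing can also be confirmed non-interactively. A quick sketch, assuming the manifest above was saved as volume.yaml:

```shell
kubectl apply -f volume.yaml
# read, through the nginx container's mount, the file the centos container wrote
kubectl exec muti-container -c nginx -- cat /usr/share/nginx/html/index.html
# nginx also serves the same file over HTTP at the Pod IP shown by kubectl get pod -o wide
```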
If volumes points at a host path instead, the data is persisted on the node: whatever the container writes under its mountPath appears in the host directory and survives deletion of the Pod.
volumes:
- name: task-pv-storage
  hostPath:
    path: /etc/yum.repos.d/yaml/volumn
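One caveat: if the directory does not yet exist on the node, behavior depends on the optional hostPath type field. A variant of the snippet above using the standard DirectoryOrCreate type (my addition, not part of the original setup):

```yaml
volumes:
- name: task-pv-storage
  hostPath:
    path: /etc/yum.repos.d/yaml/volumn
    type: DirectoryOrCreate  # create an empty directory on the host if it is missing
```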
NFS
Install the NFS server side; in my setup the server and the client are the same machine.
yum install nfs-utils rpcbind -y
systemctl enable rpcbind nfs-server
systemctl start rpcbind nfs-server
systemctl status rpcbind nfs-server
mkdir /nfs-server
vi /etc/exports
Add the following line (as written this export defaults to read-only; options such as *(rw,sync,no_root_squash) would grant write access):
/nfs-server *
exportfs -a
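Before moving on to the client, it is worth verifying that the export is active; both commands below ship with nfs-utils:

```shell
# list the directories this host currently exports
showmount -e localhost
# show the exports together with their effective options
exportfs -v
```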
On the client:
mount -t nfs iZwz903eefgw1nuwzx28cdZ:/nfs-server /home
Mount NFS: this mounts the /nfs-server directory of the NFS server iZwz903eefgw1nuwzx28cdZ onto the local /home directory.
Create the PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/nfs-server"
    server: 192.168.1.1  # replace with your own server's IP
Create the PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
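After applying both manifests, the claim should bind to pv01, since the requested 3Gi fits within the 5Gi capacity and the ReadWriteMany access modes match. A quick check:

```shell
kubectl get pv,pvc
# both pv01 and nfs-pvc should report STATUS Bound
```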
Create the Pod
kind: Pod
apiVersion: v1
metadata:
  name: nfs-test
spec:
  containers:
  - name: my-busybox
    image: busybox
    volumeMounts:
    - mountPath: "/data"
      name: sample-volume
    command: ["sleep", "1000000"]
    imagePullPolicy: IfNotPresent
  volumes:
  - name: sample-volume
    persistentVolumeClaim:
      claimName: nfs-pvc
In the /data directory of nfs-test, write hello.world into a file 1.txt. The file is synced to the /nfs-server directory on the NFS server. Conversely, a file created in the /nfs-server directory shows up in the container's /data directory as well.
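The round trip just described can be driven from outside the container as well (paths per the manifests above; file names are illustrative):

```shell
# write a file from inside the Pod ...
kubectl exec nfs-test -- sh -c 'echo hello.world > /data/1.txt'
# ... and read it back on the NFS server
cat /nfs-server/1.txt
# create a file on the server and list it from the Pod
touch /nfs-server/from-server.txt
kubectl exec nfs-test -- ls /data
```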
kubectl logs nfs-client-provisioner-699b6c8d99-c85x5
View the logs of the nfs-client-provisioner Pod.
Problem 1: github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:668: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:default:nfs-provisioner" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
References:
NFS dynamic storage provisioning
Creating a dynamic StorageClass backed by NFS in k8s
Configuring a storage class in k8s
volumn-nfs
Kubernetes nfs provider selfLink was empty
After going through these articles, I found that the namespace in my ClusterRoleBinding was wrong. The Role, RoleBinding, ServiceAccount, and ClusterRole all had no namespace specified and therefore landed in the default namespace; only the ClusterRoleBinding pointed to kube-system. Changing it to default resolved the error above.
Problem 2: provision "default/nfs1" class "nfs": unexpected error getting claim reference: selfLink was empty, can't make reference
There is a GitHub issue for this problem:
Using Kubernetes v1.20.0, getting "unexpected error getting claim reference: selfLink was empty, can't make reference" #25
Following the suggestion in that issue solved my problem.
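For reference, the workaround discussed in that issue is to re-enable selfLink via a feature gate on the API server. The flag goes into the command list of the static Pod manifest, typically /etc/kubernetes/manifests/kube-apiserver.yaml, and only applies to the v1.20/v1.21 era (the gate was removed in later releases):

```yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=RemoveSelfLink=false  # workaround from issue #25
    # ...existing flags stay unchanged...
```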
After modifying the kube-apiserver.yaml file, I ran kubectl apply -f kube-apiserver.yaml and got:
The connection to the server 172.17.66.104:6443 was refused - did you specify the right host or port?
This is expected rather than a new failure: kube-apiserver.yaml under /etc/kubernetes/manifests is a static Pod manifest, so the kubelet picks up the change and restarts the API server on its own; the kubectl apply is unnecessary, and the connection is refused only while the API server is restarting. Wait a moment and the cluster becomes reachable again.
Log output after the problem was resolved:
Go into the container's /data directory and write hello,world into hello.txt.
On the NFS server, a folder named default-nfs1-pvc-49efab86-9453-4623-aa9e-7421f133879a has been generated automatically; the name follows the pattern namespace-PVCname-volumeName.
Inside that folder you can see the hello.txt file that was created in the container's /data directory.
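As a side note, the naming pattern observed above can be sketched as a tiny helper (my illustration of the pattern, not the provisioner's actual code):

```python
def backing_dir_name(namespace: str, pvc_name: str, pv_name: str) -> str:
    """Compose the backing directory name seen on the NFS server:
    <namespace>-<PVC name>-<volume name>, joined with '-'."""
    return f"{namespace}-{pvc_name}-{pv_name}"

# the folder observed on the NFS server in this walkthrough
print(backing_dir_name("default", "nfs1", "pvc-49efab86-9453-4623-aa9e-7421f133879a"))
# default-nfs1-pvc-49efab86-9453-4623-aa9e-7421f133879a
```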