一、Why Kubernetes needs storage
A replication controller in Kubernetes keeps a Pod running at all times, but it cannot protect the data inside the Pod: once a replacement Pod is started, the data in the previous Pod is lost along with the deleted container.
二、The shared storage mechanism
For stateful container applications, or applications whose data must be persisted, it is not enough to mount a container directory onto the host or into an emptyDir ephemeral volume; more reliable storage is needed to keep the important data the application produces, so that the data is still available after the container is rebuilt. Kubernetes introduces two resource objects, PV and PVC, to implement this storage management subsystem.
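For contrast, a minimal sketch (the Pod name is illustrative, the image is the local-registry nginx image used later in this article) of the emptyDir volume mentioned above: the volume is created when the Pod is scheduled and deleted together with the Pod, so it is only suitable for scratch data.

apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo                        # hypothetical Pod name
spec:
  containers:
  - name: app
    image: 192.168.0.212:5000/nginx:1.13    # image from the examples below
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}                            # lives and dies with the Pod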
三、PV and PVC concepts
PersistentVolume (PV): an abstraction of the underlying networked shared storage that defines the shared storage as a "resource". A PV is a cluster-wide (global) resource created by an administrator; it describes a piece of storage, including its type, size, and access modes. Its lifecycle is independent of any Pod; for example, deleting a Pod that uses a PV has no effect on the PV itself. A PV is tied to a concrete shared-storage implementation, such as GlusterFS, iSCSI, RBD, or shared storage provided by public clouds (GCE/AWS), and connects to that storage through a plugin mechanism so that applications can access and use it.
PersistentVolumeClaim (PVC): a user's "claim" on storage resources. A PVC is a namespaced resource that describes a request for a PV. Just as a Pod consumes Node resources, a PVC consumes PV resources. A PVC can request a specific amount of storage and specific access modes.
四、Creating a PV
As a storage resource, a PV mainly specifies key information such as storage capacity, access modes, storage class, reclaim policy, and the type of backend storage.
1. Install the NFS server and clients
Server: 192.168.0.212
Clients: 192.168.0.184 / 192.168.0.208
[root@kub_master ~]# yum install nfs-utils -y
On the server:
[root@kub_master ~]# vim /etc/exports
[root@kub_master ~]# cat /etc/exports
/data 192.168.0.0/24(rw,async,no_root_squash,no_all_squash)
[root@kub_master ~]# mkdir /data
[root@kub_master ~]# mkdir /data/k8s
[root@kub_master ~]# systemctl restart rpcbind
[root@kub_master ~]# systemctl restart nfs
Check from the clients:
[root@kub_node1 ~]# showmount -e 192.168.0.212
Export list for 192.168.0.212:
/data 192.168.0.0/24
[root@kub_node2 ~]# showmount -e 192.168.0.212
Export list for 192.168.0.212:
/data 192.168.0.0/24
2. Create the PV
[root@kub_master ~]# cd k8s/
[root@kub_master k8s]# mkdir volume
[root@kub_master k8s]# cd volume/
[root@kub_master volume]# vim test-pv.yaml
[root@kub_master volume]# cat test-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test
  labels:
    type: test
spec:
  capacity:
    storage: 10Gi                            # storage capacity
  accessModes:
    - ReadWriteMany                          # access mode
  persistentVolumeReclaimPolicy: Recycle     # reclaim policy
  nfs:
    path: "/data/k8s"
    server: 192.168.0.212
    readOnly: false
[root@kub_master volume]# kubectl create -f test-pv.yaml
persistentvolume "test" created
[root@kub_master volume]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
test      10Gi       RWX           Recycle         Available                       11s
# Create a second, 5Gi PV
[root@kub_master volume]# vim test-pv.yaml
[root@kub_master volume]# cat test-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test1
  labels:
    type: test
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/data/k8s"
    server: 192.168.0.212
    readOnly: false
[root@kub_master volume]# kubectl create -f test-pv.yaml
persistentvolume "test1" created
[root@kub_master volume]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
test      10Gi       RWX           Recycle         Available                       2m
test1     5Gi        RWX           Recycle         Available                       5s
3. Key PV configuration parameters
1) Capacity
Describes the capability of the storage device. Currently only the storage size can be set (storage=xx); settings such as IOPS and throughput may be added in the future.
2) Access modes
Access modes describe the permissions a user application has when accessing the storage resource. The supported modes are:
ReadWriteOnce (RWO): read-write, and the volume can be mounted by only a single Node
ReadOnlyMany (ROX): read-only, and the volume can be mounted by multiple Nodes
ReadWriteMany (RWX): read-write, and the volume can be mounted by multiple Nodes
3) Storage class (Class)
A PV can declare its storage class by specifying the name of a StorageClass resource object in the storageClassName field. A PV with a particular class can only be bound to a PVC that requests that class; a PV with no class set can only be bound to a PVC that requests no class. A sketch follows.
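A minimal sketch of class-based binding, assuming a StorageClass named slow-nfs already exists; the object names and the class name are illustrative, not from the original:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-classed                   # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  storageClassName: slow-nfs         # assumed StorageClass name
  nfs:
    path: "/data/k8s"
    server: 192.168.0.212
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-classed                  # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: slow-nfs         # must match the PV's class for binding
  resources:
    requests:
      storage: 5Gi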
4) Reclaim policy
Three reclaim policies are currently supported:
Retain: keep the data; it must be handled manually
Recycle: a basic scrub that simply deletes the files on the volume
Delete: the backend storage associated with the PV performs the deletion of the volume.
Currently only NFS and HostPath volumes support the Recycle policy, while AWS EBS, GCE PD, Azure Disk, and Cinder volumes support the Delete policy.
4. Phases of the PV lifecycle
During its lifecycle, a PV is in one of the following four phases:
Available: free for use, not yet bound to any PVC
Bound: bound to a PVC
Released: the bound PVC has been deleted and the resource is released, but it has not yet been reclaimed by the cluster
Failed: automatic reclamation of the volume failed
5. PV mount options
When a PV is mounted on a Node, additional mount options may be needed depending on the characteristics of the backend storage. Currently this can be done by setting an annotation named "volume.beta.kubernetes.io/mount-options" in the PV definition.
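A minimal sketch, assuming NFS as the backend; only the annotation key comes from the text above, the PV name and the option string itself are illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-mount-options                                             # hypothetical name
  annotations:
    volume.beta.kubernetes.io/mount-options: "hard,nolock,nfsvers=3"   # assumed option values
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/data/k8s"
    server: 192.168.0.212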
五、Creating a PVC
1. A PVC is a user's request for storage resources; it mainly specifies the requested storage size, access modes, PV selection criteria, and storage class.
[root@kub_master volume]# vim test-pvc.yaml
[root@kub_master volume]# cat test-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi        # request 1Gi of storage
[root@kub_master volume]# kubectl create -f test-pvc.yaml
persistentvolumeclaim "myclaim" created
[root@kub_master volume]# kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
myclaim   Bound     test1     5Gi        RWX           6s
[root@kub_master volume]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM             REASON    AGE
test      10Gi       RWX           Recycle         Available                               7m
test1     5Gi        RWX           Recycle         Bound       default/myclaim             5m
2. Key PVC configuration parameters:
Resource request: describes the requested storage. Currently only requests.storage, i.e. the storage size, can be set.
Access modes: a PVC also sets access modes to describe the permissions the application needs; the three available modes are the same as for a PV.
PV selection criteria: by setting a Label Selector, a PVC can filter the PVs that already exist in the system, and the system binds the PVC to a suitable PV based on the labels. The selector can be set with matchLabels and matchExpressions; if both fields are set, both sets of conditions must be satisfied for the match to succeed.
Storage class: when a PVC is defined it can specify the class of backend storage it needs (via the storageClassName field), which reduces its dependence on the detailed characteristics of the backend storage. Only PVs with that class can be selected by the system and bound to the PVC.
Note: a PVC is a namespaced resource (a PV, as noted above, is cluster-wide). A Pod is restricted to its own namespace when referencing a PVC: only a PVC in the same namespace as the Pod can be mounted into it.
When both a selector and a class are set, the system selects a PV that satisfies both conditions, as in the sketch below.
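A minimal sketch of a PVC that filters PVs by label, reusing the type: test label carried by the PVs created earlier; the claim name is illustrative and the commented class line shows where a class requirement would be added:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: labeled-claim                # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  selector:
    matchLabels:
      type: test                     # only PVs carrying this label are candidates
  # storageClassName: slow-nfs       # uncomment to additionally require a class (assumed name)
  resources:
    requests:
      storage: 1Gi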
六、Persistent storage in practice
1. Create the Tomcat + MySQL project
[root@kub_master ~]# cd k8s/tomcat_demo_volume/
[root@kub_master tomcat_demo_volume]# ll
total 16
-rw-r--r-- 1 root root 420 Oct 4 16:47 mysql-rc.yaml
-rw-r--r-- 1 root root 145 Oct 4 16:47 mysql-svc.yaml
-rw-r--r-- 1 root root 487 Oct 4 16:48 tomcat-rc.yaml
-rw-r--r-- 1 root root 162 Sep 26 17:03 tomcat-svc.yaml
[root@kub_master tomcat_demo_volume]# kubectl create -f .
replicationcontroller "mysql" created
service "mysql" created
replicationcontroller "myweb" created
service "myweb" created
[root@kub_master tomcat_demo_volume]# kubectl get all
NAME       DESIRED   CURRENT   READY     AGE
rc/mysql   1         1         1         6s
rc/myweb   2         2         2         6s

NAME             CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
svc/kubernetes   192.168.0.1       <none>        443/TCP          13d
svc/mysql        192.168.213.147   <none>        3306/TCP         6s
svc/myweb        192.168.58.131    <nodes>       8080:30001/TCP   5s

NAME             READY     STATUS    RESTARTS   AGE
po/mysql-t8qmt   1/1       Running   0          6s
po/myweb-48t21   1/1       Running   0          5s
po/myweb-znzkb   1/1       Running   0          5s
Test access.
Insert some data.
The data is stored inside the Pod mysql-t8qmt. Delete that Pod and check whether the data survives.
[root@kub_master tomcat_demo_volume]# kubectl delete pod mysql-t8qmt
pod "mysql-t8qmt" deleted
[root@kub_master tomcat_demo_volume]# kubectl get all
NAME       DESIRED   CURRENT   READY     AGE
rc/mysql   1         1         1         5m
rc/myweb   2         2         2         5m

NAME             CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
svc/kubernetes   192.168.0.1       <none>        443/TCP          13d
svc/mysql        192.168.213.147   <none>        3306/TCP         5m
svc/myweb        192.168.58.131    <nodes>       8080:30001/TCP   5m

NAME             READY     STATUS        RESTARTS   AGE
po/mysql-b6833   1/1       Running       0          2s
po/mysql-t8qmt   1/1       Terminating   0          5m
po/myweb-48t21   1/1       Running       0          5m
po/myweb-znzkb   1/1       Running       0          5m
After the deletion a new Pod is started, but the data inserted earlier is gone.
2. Create the corresponding PV and PVC
To make the data persistent, create a PV and a PVC.
[root@kub_master tomcat_demo_volume]# vim mysql-pv.yaml
[root@kub_master tomcat_demo_volume]# cat mysql-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql
  labels:
    type: mysql
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/data/mysql"
    server: 192.168.0.212
    readOnly: false
[root@kub_master tomcat_demo_volume]# vim mysql-pvc.yaml
[root@kub_master tomcat_demo_volume]# cat mysql-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
# Create the storage directory
[root@kub_master tomcat_demo_volume]# mkdir /data/mysql
[root@kub_master tomcat_demo_volume]# kubectl create -f mysql-pv.yaml
persistentvolume "mysql" created
[root@kub_master tomcat_demo_volume]# kubectl create -f mysql-pvc.yaml
persistentvolumeclaim "mysql" created
[root@kub_master tomcat_demo_volume]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM           REASON    AGE
mysql     10Gi       RWX           Recycle         Bound     default/mysql             9s
[root@kub_master tomcat_demo_volume]# kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
mysql     Bound     mysql     10Gi       RWX           8s
3. Use the PVC in the Pod
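The mysql-rc-pvc.yaml applied below is not listed in the original; a minimal sketch of what it could look like, assuming the local-registry MySQL image and root password of the original mysql-rc.yaml, with the claim mysql created above mounted at MySQL's data directory:

apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: 192.168.0.212:5000/mysql:5.7       # assumed image from the local registry
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"                          # assumed password
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql                # MySQL data directory
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql                         # the PVC created above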
[root@kub_master tomcat_demo_volume]# kubectl apply -f mysql-rc-pvc.yaml
replicationcontroller "mysql" configured
[root@kub_master tomcat_demo_volume]# kubectl get all
NAME       DESIRED   CURRENT   READY     AGE
rc/mysql   1         1         1         23m
rc/myweb   2         2         2         23m

NAME             CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
svc/kubernetes   192.168.0.1       <none>        443/TCP          13d
svc/mysql        192.168.213.147   <none>        3306/TCP         23m
svc/myweb        192.168.58.131    <nodes>       8080:30001/TCP   23m

NAME             READY     STATUS    RESTARTS   AGE
po/mysql-b6833   1/1       Running   0          18m
po/myweb-48t21   1/1       Running   0          23m
po/myweb-znzkb   1/1       Running   0          23m
[root@kub_master tomcat_demo_volume]# kubectl delete pod mysql-b6833
pod "mysql-b6833" deleted
[root@kub_master tomcat_demo_volume]# kubectl get all
NAME       DESIRED   CURRENT   READY     AGE
rc/mysql   1         1         1         23m
rc/myweb   2         2         2         23m

NAME             CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
svc/kubernetes   192.168.0.1       <none>        443/TCP          13d
svc/mysql        192.168.213.147   <none>        3306/TCP         23m
svc/myweb        192.168.58.131    <nodes>       8080:30001/TCP   23m

NAME             READY     STATUS    RESTARTS   AGE
po/mysql-6rpwq   1/1       Running   0          2s
po/myweb-48t21   1/1       Running   0          23m
po/myweb-znzkb   1/1       Running   0          23m
# Check the PV and PVC usage
[root@kub_master tomcat_demo_volume]# kubectl get pvc -o wide
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
mysql     Bound     mysql     10Gi       RWX           8m
[root@kub_master tomcat_demo_volume]# kubectl get pv -o wide
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM           REASON    AGE
mysql     10Gi       RWX           Recycle         Bound     default/mysql             8m
[root@kub_master tomcat_demo_volume]# kubectl get pod mysql-6rpwq -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-6rpwq   1/1       Running   0          1m        172.16.46.3   192.168.0.184
# Check the mount on the node
[root@kub_node1 ~]# df -h |grep mysql
192.168.0.212:/data/mysql   99G   12G   83G  12% /var/lib/kubelet/pods/c31f3bdd-0621-11eb-8a8e-fa163e38ad0d/volumes/kubernetes.io~nfs/mysql
# Check the storage directory on the NFS server; the MySQL files are already there
[root@kub_master tomcat_demo_volume]# ll /data/mysql/
total 188448
-rw-r----- 1 polkitd input       56 Oct 4 17:12 auto.cnf
-rw-r----- 1 polkitd input     1329 Oct 4 17:12 ib_buffer_pool
-rw-r----- 1 polkitd input 79691776 Oct 4 17:12 ibdata1
-rw-r----- 1 polkitd input 50331648 Oct 4 17:12 ib_logfile0
-rw-r----- 1 polkitd input 50331648 Oct 4 17:12 ib_logfile1
-rw-r----- 1 polkitd input 12582912 Oct 4 17:13 ibtmp1
drwxr-x--- 2 polkitd input     4096 Oct 4 17:12 mysql
drwxr-x--- 2 polkitd input     4096 Oct 4 17:12 performance_schema
drwxr-x--- 2 polkitd input    12288 Oct 4 17:12 sys
4. Test access, add data, and check whether the data is lost
# Delete the current Pod
[root@kub_master tomcat_demo_volume]# kubectl delete pod mysql-6rpwq
pod "mysql-6rpwq" deleted
[root@kub_master tomcat_demo_volume]# kubectl get all
NAME       DESIRED   CURRENT   READY     AGE
rc/mysql   1         1         1         29m
rc/myweb   2         2         2         29m

NAME             CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
svc/kubernetes   192.168.0.1       <none>        443/TCP          13d
svc/mysql        192.168.213.147   <none>        3306/TCP         29m
svc/myweb        192.168.58.131    <nodes>       8080:30001/TCP   29m

NAME             READY     STATUS        RESTARTS   AGE
po/mysql-6rpwq   1/1       Terminating   0          6m
po/mysql-79gfq   1/1       Running       0          2s
po/myweb-48t21   1/1       Running       0          29m
po/myweb-znzkb   1/1       Running       0          29m
# A new Pod has been started
[root@kub_master tomcat_demo_volume]# kubectl get all
NAME       DESIRED   CURRENT   READY     AGE
rc/mysql   1         1         1         29m
rc/myweb   2         2         2         29m

NAME             CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
svc/kubernetes   192.168.0.1       <none>        443/TCP          13d
svc/mysql        192.168.213.147   <none>        3306/TCP         29m
svc/myweb        192.168.58.131    <nodes>       8080:30001/TCP   29m

NAME             READY     STATUS    RESTARTS   AGE
po/mysql-79gfq   1/1       Running   0          7s
po/myweb-48t21   1/1       Running   0          29m
po/myweb-znzkb   1/1       Running   0          29m
Check whether the inserted data still exists; the HPE_APP database directory is present in the listing below, so the data survived the Pod replacement.
[root@kub_master tomcat_demo_volume]# ll /data/mysql/
total 188452
-rw-r----- 1 polkitd input       56 Oct 4 17:12 auto.cnf
drwxr-x--- 2 polkitd input     4096 Oct 4 17:17 HPE_APP
-rw-r----- 1 polkitd input      698 Oct 4 17:18 ib_buffer_pool
-rw-r----- 1 polkitd input 79691776 Oct 4 17:18 ibdata1
-rw-r----- 1 polkitd input 50331648 Oct 4 17:18 ib_logfile0
-rw-r----- 1 polkitd input 50331648 Oct 4 17:12 ib_logfile1
-rw-r----- 1 polkitd input 12582912 Oct 4 17:19 ibtmp1
drwxr-x--- 2 polkitd input     4096 Oct 4 17:12 mysql
drwxr-x--- 2 polkitd input     4096 Oct 4 17:12 performance_schema
drwxr-x--- 2 polkitd input    12288 Oct 4 17:12 sys
七、The GlusterFS distributed file system
GlusterFS is an open-source distributed file system with strong horizontal scalability. It can support petabytes of capacity and thousands of clients, aggregating storage servers over the network into a single parallel network file system, and it offers scalability, high performance, and high availability.
1. Install GlusterFS (on all nodes)
[root@kub_master ~]# yum install centos-release-gluster -y
[root@kub_master ~]# yum install glusterfs-server -y
[root@kub_master ~]# systemctl start glusterd.service
[root@kub_master ~]# systemctl enable glusterd.service
[root@kub_master ~]# systemctl status glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-10-04 17:29:16 CST; 44s ago
     Docs: man:glusterd(8)
 Main PID: 22388 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─22388 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Oct 04 17:29:16 kub_master systemd[1]: Starting GlusterFS, a clustered file-system server...
Oct 04 17:29:16 kub_master systemd[1]: Started GlusterFS, a clustered file-system server.
# Create the storage bricks for the Gluster cluster
[root@kub_master ~]# mkdir -p /gfs/test1
[root@kub_master ~]# mkdir -p /gfs/test2
2. Add nodes to the storage pool
# On the master node
[root@kub_master ~]# gluster pool list
UUID                                    Hostname        State
165199a9-1e89-47f4-97f1-6c0a54376ba9    localhost       Connected
[root@kub_master ~]# gluster peer probe 192.168.0.184
peer probe: success.
[root@kub_master ~]# gluster peer probe 192.168.0.208
peer probe: success.
[root@kub_master ~]# gluster pool list
UUID                                    Hostname        State
2dd3e723-ec1b-404a-8ba3-eb78eabcf0cd    192.168.0.184   Connected
9e6c240c-0564-4aaf-861b-3ddfad6bf614    192.168.0.208   Connected
165199a9-1e89-47f4-97f1-6c0a54376ba9    localhost       Connected
3. GlusterFS volume management
1) Create a distributed replicated volume
[root@kub_master ~]# gluster volume create test replica 2 192.168.0.212:/gfs/test1 192.168.0.212:/gfs/test2 192.168.0.184:/gfs/test1 192.168.0.184:/gfs/test2 force
volume create: test: success: please start the volume to access data
2) Start the volume
[root@kub_master ~]# gluster volume start test
volume start: test: success
3) Inspect the volume
[root@kub_master ~]# gluster volume info test

Volume Name: test
Type: Distributed-Replicate
Volume ID: 879fd8dc-bb14-4231-9169-42440edcc950
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.0.212:/gfs/test1
Brick2: 192.168.0.212:/gfs/test2
Brick3: 192.168.0.184:/gfs/test1
Brick4: 192.168.0.184:/gfs/test2
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
4) Mount the volume
# Mount it from any node
[root@kub_node1 ~]# mount -t glusterfs 192.168.0.212:/test /mnt
[root@kub_node1 ~]# df -h /mnt
Filesystem           Size  Used Avail Use% Mounted on
192.168.0.212:/test   99G  9.2G   86G  10% /mnt
4. Expand the distributed replicated volume
# Expansion command
[root@kub_master ~]# gluster volume add-brick test 192.168.0.208:/gfs/test1 192.168.0.208:/gfs/test2 force
volume add-brick: success
# After the expansion, the capacity seen by the client has clearly increased
[root@kub_node1 ~]# df -h /mnt
Filesystem           Size  Used Avail Use% Mounted on
192.168.0.212:/test  148G   14G  130G  10% /mnt
5. Upload some files to /mnt and see how the data is distributed
[root@kub_node1 ~]# cd /mnt
[root@kub_node1 mnt]# rz
[root@kub_node1 mnt]# ^C
[root@kub_node1 mnt]# ll
total 89
-rw-r--r-- 1 root root 91014 Oct 4 18:07 xiaoniaofeifei.zip
[root@kub_node1 mnt]# unzip xiaoniaofeifei.zip
Archive:  xiaoniaofeifei.zip
  inflating: sound1.mp3
   creating: img/
  inflating: img/bg1.jpg
  inflating: img/bg2.jpg
  inflating: img/number1.png
  inflating: img/number2.png
  inflating: img/s1.png
  inflating: img/s2.png
  inflating: 21.js
  inflating: 2000.png
  inflating: icon.png
  inflating: index.html
[root@kub_node1 mnt]# tree /gfs/ /gfs/ ├── test1 │ └── img │ ├── bg1.jpg │ └── s2.png └── test2 └── img ├── bg1.jpg └── s2.png 4 directories, 4 files
[root@kub_node2 ~]# tree /gfs/
/gfs/
├── test1
│   └── img
│       └── bg2.jpg
└── test2
    └── img
        └── bg2.jpg

4 directories, 2 files
[root@kub_master ~]# tree /gfs/
/gfs/
├── test1
│   ├── 2000.png
│   ├── 21.js
│   ├── icon.png
│   ├── img
│   │   ├── number1.png
│   │   ├── number2.png
│   │   └── s1.png
│   ├── index.html
│   ├── sound1.mp3
│   └── xiaoniaofeifei.zip
└── test2
    ├── 2000.png
    ├── 21.js
    ├── icon.png
    ├── img
    │   ├── number1.png
    │   ├── number2.png
    │   └── s1.png
    ├── index.html
    ├── sound1.mp3
    └── xiaoniaofeifei.zip

4 directories, 18 files
八、Using GlusterFS as Kubernetes backend storage
1. Create the Endpoints
[root@kub_master ~]# cd k8s/
[root@kub_master k8s]# mkdir glusterfs-volume
[root@kub_master k8s]# cd glusterfs-volume/
[root@kub_master glusterfs-volume]# vi glusterfs-ep.yaml
[root@kub_master glusterfs-volume]# cat glusterfs-ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  namespace: default
subsets:
- addresses:
  - ip: 192.168.0.212
  - ip: 192.168.0.208
  - ip: 192.168.0.184
  ports:
  - port: 49152        # default brick port
    protocol: TCP
[root@kub_master glusterfs-volume]# kubectl create -f glusterfs-ep.yaml
endpoints "glusterfs" created
[root@kub_master glusterfs-volume]# kubectl get ep
NAME         ENDPOINTS                                                      AGE
glusterfs    192.168.0.184:49152,192.168.0.208:49152,192.168.0.212:49152   8s
kubernetes   192.168.0.212:6443                                             13d
mysql        172.16.46.5:3306                                               1h
myweb        172.16.46.4:8080,172.16.66.2:8080                              1h
2. Create the Service
[root@kub_master glusterfs-volume]# vi glusterfs-svc.yaml
[root@kub_master glusterfs-volume]# cat glusterfs-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: glusterfs
  namespace: default
spec:
  ports:
  - port: 49152
    protocol: TCP
    targetPort: 49152
  sessionAffinity: None
  type: ClusterIP
[root@kub_master glusterfs-volume]# kubectl create -f glusterfs-svc.yaml
service "glusterfs" created
[root@kub_master glusterfs-volume]# kubectl get svc
NAME         CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
glusterfs    192.168.19.6      <none>        49152/TCP        6s
kubernetes   192.168.0.1       <none>        443/TCP          13d
mysql        192.168.213.147   <none>        3306/TCP         1h
myweb        192.168.58.131    <nodes>       8080:30001/TCP   1h
[root@kub_master glusterfs-volume]# kubectl describe svc glusterfs
Name:             glusterfs        # linked to the Endpoints object of the same name
Namespace:        default
Labels:           <none>
Selector:         <none>
Type:             ClusterIP
IP:               192.168.19.6
Port:             <unset> 49152/TCP
Endpoints:        192.168.0.184:49152,192.168.0.208:49152,192.168.0.212:49152
Session Affinity: None
No events.
3. Create a GlusterFS PV
[root@kub_master glusterfs-volume]# gluster volume list
test
[root@kub_master glusterfs-volume]# vim glusterfs-pv.yaml
[root@kub_master glusterfs-volume]# cat glusterfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster
  labels:
    type: glusterfs
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs"
    path: "test"
    readOnly: false
[root@kub_master glusterfs-volume]# kubectl create -f glusterfs-pv.yaml
persistentvolume "gluster" created
[root@kub_master glusterfs-volume]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM           REASON    AGE
gluster   50Gi       RWX           Retain          Available                             14s
mysql     10Gi       RWX           Recycle         Bound       default/mysql             1h
4. Create a GlusterFS PVC
[root@kub_master glusterfs-volume]# vim glusterfs-pvc.yaml
[root@kub_master glusterfs-volume]# cat glusterfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gluster
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 15Gi
[root@kub_master glusterfs-volume]# kubectl create -f glusterfs-pvc.yaml
persistentvolumeclaim "gluster" created
[root@kub_master glusterfs-volume]# kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
gluster   Bound     gluster   50Gi       RWX           6s
mysql     Bound     mysql     10Gi       RWX           1h
[root@kub_master glusterfs-volume]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
gluster   50Gi       RWX           Retain          Bound     default/gluster             3m
mysql     10Gi       RWX           Recycle         Bound     default/mysql               1h
5. Use the GlusterFS volume in a Pod
[root@kub_master glusterfs-volume]# vim nginx-pod-gluster.yaml
[root@kub_master glusterfs-volume]# cat nginx-pod-gluster.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: web
spec:
  containers:
    - name: test
      image: 192.168.0.212:5000/nginx:1.13
      ports:
        - containerPort: 80
      volumeMounts:
        - name: nfs-vol2
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs-vol2
      persistentVolumeClaim:
        claimName: gluster
[root@kub_master glusterfs-volume]# kubectl create -f nginx-pod-gluster.yaml
pod "test" created
[root@kub_master glusterfs-volume]# kubectl get pod
NAME          READY     STATUS    RESTARTS   AGE
mysql-79gfq   1/1       Running   0          1h
myweb-48t21   1/1       Running   0          1h
myweb-znzkb   1/1       Running   0          1h
test          1/1       Running   0          5s
[root@kub_master glusterfs-volume]# kubectl get pod -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-79gfq   1/1       Running   0          1h        172.16.46.5   192.168.0.184
myweb-48t21   1/1       Running   0          1h        172.16.66.2   192.168.0.208
myweb-znzkb   1/1       Running   0          1h        172.16.46.4   192.168.0.184
test          1/1       Running   0          20s       172.16.46.3   192.168.0.184
[root@kub_master glusterfs-volume]# curl 172.16.46.3
<!DOCTYPE HTML>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta id="viewport" name="viewport" content="width=device-width,user-scalable=no" />
<script type="text/javascript" src="21.js"></script>
<title>小鸟飞飞飞-文章库小游戏</title>
<style type="text/css">
body { margin:0px; }
</style>
<script language=javascript>
var mebtnopenurl = 'http://www.wenzhangku.com/weixin/';
window.shareData = {
    "imgUrl": "http://www.wenzhangku.com/weixin/xiaoniaofeifei/icon.png",
    "timeLineLink": "http://www.wenzhangku.com/weixin/xiaoniaofeifei/",
    "tTitle": "小鸟飞飞飞-文章库小游戏",
    "tContent": "从前有一只鸟,飞着飞着就死了。"
};
document.addEventListener('WeixinJSBridgeReady', function onBridgeReady() {
    WeixinJSBridge.on('menu:share:appmessage', function(argv) {
        WeixinJSBridge.invoke('sendAppMessage', {
            "img_url": window.shareData.imgUrl,
            "link": window.shareData.timeLineLink,
            "desc": window.shareData.tContent,
            "title": window.shareData.tTitle
        }, function(res) {
            document.location.href = mebtnopenurl;
        })
    });
    WeixinJSBridge.on('menu:share:timeline', function(argv) {
        WeixinJSBridge.invoke('shareTimeline', {
            "img_url": window.shareData.imgUrl,
            "img_width": "640",
            "img_height": "640",
            "link": window.shareData.timeLineLink,
            "desc": window.shareData.tContent,
            "title": window.shareData.tTitle
        }, function(res) {
            document.location.href = mebtnopenurl;
        });
    });
}, false);
function dp_submitScore(a,b){
    if(a&&b>=a&&b>10){
        //alert("新纪录哦!你过了"+b+"关!")
        dp_share(b)
    }
}
function dp_Ranking(){
    document.location.href = mebtnopenurl;
}
function dp_share(t){
    document.title = "我玩小鸟飞飞飞过了"+t+"关!你能超过洒家我吗?";
    document.getElementById("share").style.display="";
    window.shareData.tTitle = document.title;
}
</script>
</head>
<body>
<div style="text-align:center;">
<canvas id="linkScreen">
    很遗憾,您的浏览器不支持HTML5,请使用支持HTML5的浏览器。
</canvas>
</div>
<div id="mask_container" align="center" style="width: 100%; height: 100%; position: absolute; left: 0px; top: 0px; display: none; z-index: 100000; background-color: rgb(255, 255, 255);">
    <img id="p2l" src="img/p2l.jpg" style="position: absolute;left: 50%;top: 50%;-webkit-transform:translateX(-50%) translateY(-50%);transform:translateX(-50%) translateY(-50%)" >
</div>
<div id=share style="display:none">
<img width=100% src="2000.png" style="position:absolute;top:0;left:0;display:" onclick="document.getElementById('share').style.display='none';">
</div>
<div style="display:none;">
这里加统计
</div>
</body>
</html>
Add a host port mapping and access it from a browser.
[root@kub_master glusterfs-volume]# vim nginx-pod-gluster.yaml
[root@kub_master glusterfs-volume]# cat nginx-pod-gluster.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 192.168.0.212:5000/nginx:1.13
      ports:
        - containerPort: 80
          hostPort: 81
      volumeMounts:
        - name: nfs-vol2
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs-vol2
      persistentVolumeClaim:
        claimName: gluster
[root@kub_master glusterfs-volume]# kubectl create -f nginx-pod-gluster.yaml
pod "nginx" created
[root@kub_master glusterfs-volume]# kubectl get pod
NAME          READY     STATUS    RESTARTS   AGE
mysql-79gfq   1/1       Running   0          1h
myweb-48t21   1/1       Running   0          2h
myweb-znzkb   1/1       Running   0          2h
nginx         1/1       Running   0          4s
test          1/1       Running   0          5m
[root@kub_master glusterfs-volume]# kubectl get pod -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-79gfq   1/1       Running   0          1h        172.16.46.5   192.168.0.184
myweb-48t21   1/1       Running   0          2h        172.16.66.2   192.168.0.208
myweb-znzkb   1/1       Running   0          2h        172.16.46.4   192.168.0.184
nginx         1/1       Running   0          11s       172.16.81.3   192.168.0.212
test          1/1       Running   0          5m        172.16.46.3   192.168.0.184
# Test access
[root@kub_master glusterfs-volume]# curl 192.168.0.212:81
<!DOCTYPE HTML>
<html>
... (the same index.html shown in the previous curl output, served from the Gluster volume)