Tried and Tested: Mounting CephFS Across Nodes in a K8s Cluster (Part 2)

Since CephFS and Ceph RBD come from the same family, CephFS is the natural next candidate once Ceph RBD cannot meet the cross-node mounting requirement. Continuing from Part 1, let's carry on with the evaluation:


3


Creating a Filesystem and Testing the Mount


Let's create a filesystem on Ceph:


# ceph osd pool create cephfs_data 128

pool 'cephfs_data' created


# ceph osd pool create cephfs_metadata 128

pool 'cephfs_metadata' created


# ceph fs new test_fs cephfs_metadata cephfs_data

new fs with metadata pool 2 and data pool 1


# ceph fs ls

name: test_fs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
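
Note that a CephFS filesystem also needs an active MDS daemon to serve it; here we assume one was deployed along with the cluster in Part 1. If the fs does not come up as expected, checking the MDS state is a reasonable first step (a sketch; the output format varies by release):

# ceph mds stat    // should report an MDS in the up:active state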


Note, however, that the current stable Ceph release supports only a single filesystem; support for multiple filesystems is still an experimental feature:


# ceph osd pool create cephfs1_data 128

# ceph osd pool create cephfs1_metadata 128

# ceph fs new test_fs1 cephfs1_metadata cephfs1_data

Error EINVAL: Creation of multiple filesystems is disabled.  To enable this experimental feature, use 'ceph fs flag set enable_multiple true'
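
If you really do need more than one filesystem, the error message itself shows the switch to flip. Since this is an experimental feature, enable it at your own risk (a sketch):

# ceph fs flag set enable_multiple true    // some releases additionally require --yes-i-really-mean-it
# ceph fs new test_fs1 cephfs1_metadata cephfs1_data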



On a physical host, CephFS can be mounted with the mount command, with mount.ceph (apt-get install ceph-fs-common), or with ceph-fuse (apt-get install ceph-fuse). Let's start with the mount command.


Mount the CephFS created above at /mnt on the host:

# mount -t ceph ceph_mon_host:6789:/ /mnt -o name=admin,secretfile=admin.secret

# cat admin.secret    // the key from ceph.client.admin.keyring

AQDITghZD+c/DhAArOiWWQqyMAkMJbWmHaxjgQ==
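
The admin.secret file contains just the bare key, so instead of copying it by hand you can extract it straight from the keyring (a sketch):

# ceph auth get-key client.admin > admin.secret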


Check the CephFS mount info:


# df -h

ceph_mon_host:6789:/   79G   45G   35G  57% /mnt


As you can see, CephFS exposes the entire disk capacity of both physical nodes as its own space.
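
To see how that raw capacity breaks down per pool (including cephfs_data and cephfs_metadata), ceph df is handy:

# ceph df    // GLOBAL shows raw capacity, POOLS shows per-pool usage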


Mounting with ceph-fuse also lets us restrict access permissions on the mounted path. Let's create a user foo that has only read-only access to the /ceph-volume1-test path:


# ceph auth get-or-create client.foo mon 'allow *' mds 'allow r path=/ceph-volume1-test' osd 'allow *'

# ceph-fuse -n client.foo -m 10.47.217.91:6789 /mnt -r /ceph-volume1-test

ceph-fuse[10565]: starting ceph client
2017-05-03 16:07:25.958903 7f1a14fbff00 -1 init, newargv = 0x557e350defc0 newargc=11

ceph-fuse[10565]: starting fuse


Look at the mount path and try to create a file:


# cd /mnt

root@yypdnode:/mnt# ls

1.txt

root@yypdnode:/mnt# touch 2.txt

touch: cannot touch '2.txt': Permission denied


Because foo only has read-only permission on /ceph-volume1-test, creating the file fails!
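
You can double-check the caps granted to foo, and unmount the fuse mount once you are done (a sketch):

# ceph auth get client.foo    // shows the mds 'allow r path=/ceph-volume1-test' cap
# fusermount -u /mnt          // unmount the ceph-fuse mount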



4


Mounting CephFS Across Nodes in Kubernetes


In K8s there are at least two ways to mount CephFS: mounting it directly in a Pod, or mounting it through a PV and a PVC. Let's look at each in turn.


1. Mounting CephFS directly in a Pod


//ceph-pod2-with-secret.yaml

apiVersion: v1

kind: Pod

metadata:

  name: ceph-pod2-with-secret

spec:

  containers:

  - name: ceph-ubuntu2

    image: ubuntu:14.04

    command: ["tail", "-f", "/var/log/bootstrap.log"]

    volumeMounts:

    - name: ceph-vol2

      mountPath: /mnt/cephfs/data

      readOnly: false

  volumes:

  - name: ceph-vol2

    cephfs:

      monitors:

      - ceph_mon_host:6789

      user: admin

      secretFile: "/etc/ceph/admin.secret"

      readOnly: false


Note: make sure the /etc/ceph/admin.secret file exists on every node.
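
If shipping admin.secret to every node is inconvenient, the cephfs volume type also accepts a Kubernetes Secret through secretRef instead of secretFile. A sketch, assuming a Secret named ceph-secret that stores the admin key under the key field (this is also the Secret the PV in the next section refers to):

# kubectl create secret generic ceph-secret --from-literal=key='AQDITghZD+c/DhAArOiWWQqyMAkMJbWmHaxjgQ=='

  volumes:
  - name: ceph-vol2
    cephfs:
      monitors:
      - ceph_mon_host:6789
      user: admin
      secretRef:
        name: ceph-secret
      readOnly: false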


Check what the Pod has mounted:


# docker ps|grep pod

bc96431408c7        ubuntu:14.04                                                  "tail -f /var/log/boo"   About a minute ago   Up About a minute                                                                        k8s_ceph-ubuntu2.66c44128_ceph-pod2-with-secret_default_3d8a05f8-33c3-11e7-bcd9-6640d35a0e90_fc483b8a

bcc65ab82069        gcr.io/google_containers/pause-amd64:3.0                      "/pause"                 About a minute ago   Up About a minute                                                                        k8s_POD.d8dbe16c_ceph-pod2-with-secret_default_3d8a05f8-33c3-11e7-bcd9-6640d35a0e90_02381204


root@yypdnode:~# docker exec bc96431408c7 ls /mnt/cephfs/data

1.txt

apps

ceph-volume1-test

test1.txt


Next, start a Pod on another node that mounts the same CephFS, to verify that cross-node mounting works:
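
The manifest of this second Pod is not shown in the capture; presumably it is identical to ceph-pod2-with-secret apart from the name and the node it is pinned to, roughly as follows (a sketch, with nodeName taken from the node column below):

//ceph-pod2-with-secret-on-master.yaml (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod2-with-secret-on-master
spec:
  nodeName: iz25beglnhtz    # schedule onto the master node
  containers:
  - name: ceph-ubuntu2
    image: ubuntu:14.04
    command: ["tail", "-f", "/var/log/bootstrap.log"]
    volumeMounts:
    - name: ceph-vol2
      mountPath: /mnt/cephfs/data
      readOnly: false
  volumes:
  - name: ceph-vol2
    cephfs:
      monitors:
      - ceph_mon_host:6789
      user: admin
      secretFile: "/etc/ceph/admin.secret"
      readOnly: false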


# kubectl get pods --all-namespaces -o wide


NAMESPACE                    NAME                                    READY     STATUS    RESTARTS   AGE       IP             NODE

default                      ceph-pod2-with-secret                   1/1       Running   0          3m        172.30.192.2   iz2ze39jeyizepdxhwqci6z

default                      ceph-pod2-with-secret-on-master         1/1       Running   0          3s        172.30.0.51    iz25beglnhtz

... ...


# kubectl exec ceph-pod2-with-secret-on-master ls /mnt/cephfs/data

1.txt

apps

ceph-volume1-test

test1.txt


As you can see, different nodes can mount the same CephFS. Let's write to the mounted CephFS from one of the Pods:


# kubectl exec ceph-pod2-with-secret-on-master -- bash -c "for i in {1..10}; do sleep 1; echo 'pod2-with-secret-on-master: Hello, World'>> /mnt/cephfs/data/foo.txt ; done "

root@yypdmaster:~/k8stest/cephfstest/footest# kubectl exec ceph-pod2-with-secret-on-master cat /mnt/cephfs/data/foo.txt

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World


2. Mounting CephFS via a PV and PVC


The PV and PVC for mounting CephFS are written much like the ones used earlier for RBD:


//ceph-pv.yaml

apiVersion: v1

kind: PersistentVolume

metadata:

  name: foo-pv

spec:

  capacity:

    storage: 512Mi

  accessModes:

    - ReadWriteMany

  cephfs:

    monitors:

      - ceph_mon_host:6789

    path: /

    user: admin

    secretRef:

      name: ceph-secret

    readOnly: false

  persistentVolumeReclaimPolicy: Recycle


//ceph-pvc.yaml


kind: PersistentVolumeClaim

apiVersion: v1

metadata:

  name: foo-claim

spec:

  accessModes:

    - ReadWriteMany

  resources:

    requests:

      storage: 512Mi


A Pod that uses the PVC:


//ceph-pod2.yaml

apiVersion: v1

kind: Pod

metadata:

  name: ceph-pod2

spec:

  containers:

  - name: ceph-ubuntu2

    image: ubuntu:14.04

    command: ["tail", "-f", "/var/log/bootstrap.log"]

    volumeMounts:

    - name: ceph-vol2

      mountPath: /mnt/cephfs/data

      readOnly: false

  volumes:

  - name: ceph-vol2

    persistentVolumeClaim:

      claimName: foo-claim


Create the PV and PVC:


# kubectl create -f ceph-pv.yaml

persistentvolume "foo-pv" created

# kubectl create -f ceph-pvc.yaml

persistentvolumeclaim "foo-claim" created


# kubectl get pvc

NAME        STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE

foo-claim   Bound     foo-pv    512Mi      RWX           4s

# kubectl get pv

NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM               REASON    AGE

foo-pv    512Mi      RWX           Recycle         Bound     default/foo-claim             24s


Start the Pod and check the mount with exec:
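
The create step for this Pod is not in the capture; presumably it is simply:

# kubectl create -f ceph-pod2.yaml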


# docker ps|grep pod

a6895ec0274f        ubuntu:14.04                                                  "tail -f /var/log/boo"   About a minute ago   Up About a minute                                                                        k8s_ceph-ubuntu2.66c44128_ceph-pod2_default_4e4fc8d4-33c6-11e7-bcd9-6640d35a0e90_1b37ed76

52b6811a6584        gcr.io/google_containers/pause-amd64:3.0                      "/pause"                 About a minute ago   Up About a minute                                                                        k8s_POD.d8dbe16c_ceph-pod2_default_4e4fc8d4-33c6-11e7-bcd9-6640d35a0e90_27e5f988

55b96edbf4bf        ubuntu:14.04                                                  "tail -f /var/log/boo"   14 minutes ago       Up 14 minutes                                                                            k8s_ceph-ubuntu2.66c44128_ceph-pod2-with-secret_default_9d383b0c-33c4-11e7-bcd9-6640d35a0e90_1656e5e0

f8b699bc0459        gcr.io/google_containers/pause-amd64:3.0                      "/pause"                 14 minutes ago       Up 14 minutes                                                                            k8s_POD.d8dbe16c_ceph-pod2-with-secret_default_9d383b0c-33c4-11e7-bcd9-6640d35a0e90_effdfae7

root@yypdnode:~# docker exec a6895ec0274f ls /mnt/cephfs/data

1.txt

apps

ceph-volume1-test

foo.txt

test1.txt


# docker exec a6895ec0274f cat /mnt/cephfs/data/foo.txt

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World

pod2-with-secret-on-master: Hello, World



5



The PV's Status


As long as you do not delete the PVC, everything is fine:


# kubectl get pv

NAME                CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                        REASON    AGE

foo-pv              512Mi      RWX           Recycle         Bound     default/foo-claim                      1h


# kubectl get pvc

NAME                 STATUS    VOLUME              CAPACITY   ACCESSMODES   AGE

foo-claim            Bound     foo-pv              512Mi      RWX           1h


But if you delete the PVC, the PV's status becomes Failed:


Delete the PVC:
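
The delete command itself is not in the capture; it would be:

# kubectl delete pvc foo-claim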

# kubectl get pv

NAME                CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                        REASON    AGE

foo-pv              512Mi      RWX           Recycle         Failed    default/foo-claim                      2h


# kubectl describe pv/foo-pv

Name:        foo-pv

Labels:        <none>

Status:        Failed

Claim:        default/foo-claim

Reclaim Policy:    Recycle

Access Modes:    RWX

Capacity:    512Mi

Message:    No recycler plugin found for the volume!

Source:

    Type:        RBD (a Rados Block Device mount on the host that shares a pod's lifetime)

    CephMonitors:    [xx.xx.xx.xx:6789]

    RBDImage:        foo1

    FSType:        ext4

    RBDPool:        rbd

    RadosUser:        admin

    Keyring:        /etc/ceph/keyring

    SecretRef:        &{ceph-secret}

    ReadOnly:        false

Events:

  FirstSeen    LastSeen    Count    From                SubobjectPath    Type        Reason            Message

  ---------    --------    -----    ----                -------------    --------    ------            -------

  29s        29s        1    {persistentvolume-controller }            Warning        VolumeFailedRecycle    No recycler plugin found for the volume!


The persistentVolumeReclaimPolicy we specified for the PV is Recycle, but neither Ceph RBD nor CephFS has a corresponding recycler plugin, so the PV's status turns Failed and the PV can only be deleted and recreated by hand.
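
If you would rather avoid the Failed state altogether, one option (a sketch, not from the original walkthrough) is the Retain reclaim policy: when its PVC is deleted, a Retain PV goes to Released instead of Failed and keeps its data. The policy can be set in the manifest or patched onto an existing PV, and a PV that has already gone Failed can simply be deleted and recreated:

# kubectl patch pv foo-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# kubectl delete pv foo-pv
# kubectl create -f ceph-pv.yaml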

