k8s Integration with Ceph (the StorageClass approach)

  1. Create a storage pool on the Ceph cluster

    ceph osd pool create k8s 128 128
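
    If the pool will only host RBD images, it can also be tagged with the rbd
    application (available since Ceph Luminous). This is an optional, hedged
    extra step, not required by the rest of this guide:

    # optional: mark the pool as an RBD pool (Luminous and later)
    ceph osd pool application enable k8s rbd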
    
  2. Get the key

    $ ceph auth get-key client.admin | base64
    QVFEMjVxVmhiVUNJRHhBQUxwdmVHbUdNTWtXZjB6VXovbWlBY3c9PQ==
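
    Using client.admin works, but it hands the k8s cluster full admin rights on
    Ceph. A more restricted key can be created instead; a sketch, assuming a
    dedicated client.k8s user limited to the k8s pool (adjust the caps to your
    needs, and update adminId/userId and the Secret in step 4 if you use it):

    # optional: dedicated, less-privileged key instead of client.admin
    ceph auth get-or-create client.k8s mon 'allow r' osd 'allow rwx pool=k8s'
    ceph auth get-key client.k8s | base64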
    
  3. Install ceph-common on the k8s cluster nodes; the version must match the Ceph cluster

    rpm -ivh http://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm
    sed -i 's#download.ceph.com#mirrors.aliyun.com/ceph#g' /etc/yum.repos.d/ceph.repo
    yum install epel-release -y
    yum install -y ceph-common
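
    A quick way to check that the client version matches the cluster (standard
    ceph commands; run the second one on a Ceph cluster node):

    # client version on the k8s node
    ceph --version
    # versions of all daemons, run on a ceph cluster node
    ceph versions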
    
  4. Write the YAML manifests

    $ vi ceph-sc.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-storageclass-secret
      namespace: default
    data:
      key: QVFEMjVxVmhiVUNJRHhBQUxwdmVHbUdNTWtXZjB6VXovbWlBY3c9PQ==
    type: kubernetes.io/rbd
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ceph-storageclass
      annotations:
        storageclass.kubernetes.io/is-default-class: "false"
    provisioner: kubernetes.io/rbd
    parameters:
      #monitors: 10.10.10.51:6789,10.10.10.52:6789,10.10.10.53:6789
      monitors: ceph01:6789,ceph02:6789,ceph03:6789
      adminId: admin
      adminSecretName: ceph-storageclass-secret
      pool: k8s
      userId: admin
      userSecretName: ceph-storageclass-secret
      imageFormat: "2"
      imageFeatures: "layering"
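
    If you would rather not paste the base64 key into a manifest, the Secret can
    also be created straight from the cluster key; a hedged alternative to the
    Secret above (kubectl base64-encodes --from-literal values itself, so the
    raw key is passed in):

    kubectl create secret generic ceph-storageclass-secret \
        --namespace=default \
        --type=kubernetes.io/rbd \
        --from-literal=key="$(ceph auth get-key client.admin)"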
    

    Test YAML:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: ceph-pvc-test1
      namespace: default
      annotations:
        volume.beta.kubernetes.io/storage-class: ceph-storageclass
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    
    # or
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: ceph-pvc-test2
      namespace: default
    spec:
      storageClassName: ceph-storageclass
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
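
    To confirm a provisioned volume can actually be mounted, a throwaway pod
    like the following can be used (the pod name and image are only examples):

    kind: Pod
    apiVersion: v1
    metadata:
      name: ceph-pvc-mount-test
      namespace: default
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: ceph-pvc-test1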
    
  5. Apply the manifests

     kubectl apply -f .
    
  6. Verify

    $ kubectl get sc
    NAME                PROVISIONER         RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    ceph-storageclass   kubernetes.io/rbd   Delete          Immediate           false                  28s
    
    $ kubectl get pvc -A
    NAMESPACE   NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
    default     ceph-pvc-test1   Bound    pvc-069bd7d7-cb5c-4f70-a760-691c64330dda   1Gi        RWO            ceph-storageclass   34s
    default     ceph-pvc-test2   Bound    pvc-9adb2d07-e72c-4bda-9012-1fc8e5389d1c   1Gi        RWO            ceph-storageclass   34s
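
    The images backing the PVCs can also be checked on the Ceph side; with the
    in-tree rbd provisioner they typically show up named
    kubernetes-dynamic-pvc-<uuid>:

    # run on a ceph node: one image per bound PVC should exist in the pool
    rbd ls -p k8s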
    

Note: the steps above only work for k8s clusters installed from binaries. If kube-controller-manager itself runs as a pod (for example, clusters set up with kubeadm), you will hit the following error:

rbd: create volume failed, err: failed to create rbd image: executable file not found in $PATH:

The cause of this error is simple: the kube-controller-manager image from gcr.io does not ship with the rbd command.

The workaround is to deploy an external provisioner:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccountName: persistent-volume-binder
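
The Deployment above references a ServiceAccount named persistent-volume-binder, which must exist and be allowed to manage PVs, PVCs, Secrets and events. A minimal RBAC sketch is shown below; the exact rules may need adjusting for your cluster (see the rbd-provisioner docs in the kubernetes-incubator/external-storage repo):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: persistent-volume-binder
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rbd-provisioner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: persistent-volume-binder
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io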

Then, when defining the StorageClass, simply set the provisioner to ceph.com/rbd:

provisioner: ceph.com/rbd

Reference: Error creating rbd image: executable file not found in $PATH · Issue #38923 · kubernetes/kubernetes (github.com)
