OpenStack Administration 23 - Connecting nova compute to a Ceph Cluster

Prerequisites

Configure nova compute so that it connects to the Ceph cluster.
The end goal is to let instances use Ceph RBD as external volumes.
nova compute is assumed to already be working normally; this post only adds the Ceph connection pieces.

Install the software

yum install -y python-ceph ceph

Extra directories

Create the directories used by the admin socket and the qemu guest log that the [client] section of ceph.conf (below) points at:

mkdir -p /var/run/ceph/guests/ /var/log/qemu/
chown qemu:qemu /var/run/ceph/guests /var/log/qemu/
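
Note that /var/run is a tmpfs on CentOS 7, so the guests directory vanishes on reboot. A minimal sketch to recreate it at boot with systemd-tmpfiles (the file name ceph-qemu.conf is an arbitrary choice):

cat > /etc/tmpfiles.d/ceph-qemu.conf <<'EOF'
# recreate the qemu admin-socket directory at every boot
d /var/run/ceph/guests 0755 qemu qemu -
EOF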

Verify the versions

Make sure qemu supports the rbd protocol. Use ldd to check that qemu-img is linked against the librbd.so shared library:

[root@hh-yun-compute-130133 ~]# ldd /usr/bin/qemu-img  | grep rbd
        librbd.so.1 => /lib64/librbd.so.1 (0x00007fa708216000)

It can also be checked from the command line:

[root@hh-yun-compute-130133 ~]# qemu-img  -h | grep rbd
Supported formats: vvfat vpc vmdk vhdx vdi sheepdog rbd raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd iscsi gluster dmg cloop bochs blkverify blkdebug

The versions in use here:

qemu-img-1.5.3-86.el7_1.2.x86_64
qemu-kvm-1.5.3-86.el7_1.2.x86_64
qemu-kvm-common-1.5.3-86.el7_1.2.x86_64
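
The list above is presumably rpm output; to reproduce it on your own node:

rpm -qa | grep qemu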

Configuration

Edit /etc/ceph/ceph.conf on the nova compute node:

[global]
fsid = dc4f91c1-8792-4948-b68f-2fcea75f53b9
mon initial members = XXX.XXXX.XXXXX
mon host = XXX.XXX.XXX
public network = XXX.XXX.XXX.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
filestore xattr use omap = true
osd pool default size = 3
osd pool default min size = 1
osd pool default pg num = 10240
osd pool default pgp num = 10240
osd crush chooseleaf type = 1

[client]
    rbd cache = true
    rbd cache writethrough until flush = true
    admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/qemu/qemu-guest-$pid.log
    rbd concurrent management ops = 20
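
With the admin socket in place you can query a running guest's librbd client at runtime, for example to confirm that the cache really is enabled. The socket name below is illustrative; list the directory to find the real one:

ls /var/run/ceph/guests/
ceph --admin-daemon /var/run/ceph/guests/ceph-client.cinder.3412.140593483.asok config get rbd_cache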

Configure how nova compute connects to Ceph, in the [libvirt] section of /etc/nova/nova.conf:

[libvirt]
libvirt_images_type = rbd
libvirt_images_rbd_pool = volumes
libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf
libvirt_disk_cachemodes="network=writeback"
rbd_user = cinder
rbd_secret_uuid = dc4f91c1-8792-4948-b68f-2fcea75f53b9
libvirt_inject_password = false
libvirt_inject_key = false
libvirt_inject_partition = -2
libvirt_live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
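
If you would rather script these settings than edit nova.conf by hand, a sketch using crudini (from the crudini package; repeat for the remaining keys):

crudini --set /etc/nova/nova.conf libvirt libvirt_images_type rbd
crudini --set /etc/nova/nova.conf libvirt libvirt_images_rbd_pool volumes
crudini --set /etc/nova/nova.conf libvirt rbd_user cinder
crudini --set /etc/nova/nova.conf libvirt rbd_secret_uuid dc4f91c1-8792-4948-b68f-2fcea75f53b9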

Copy ceph.client.cinder.keyring from the Ceph server into /etc/ceph on the compute node (this matches rbd_user = cinder above).
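
A sketch of the copy plus a quick sanity check (ceph-mon-host is a placeholder for your Ceph node):

scp ceph-mon-host:/etc/ceph/ceph.client.cinder.keyring /etc/ceph/
ceph --id cinder -s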

Add the Ceph secret to libvirt

/etc/libvirt/secrets/dc4f91c1-8792-4948-b68f-2fcea75f53b9.base64 (the key of the user that connects to Ceph)

AQADxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==

/etc/libvirt/secrets/dc4f91c1-8792-4948-b68f-2fcea75f53b9.xml (the secret definition of the user that connects to Ceph)

<secret ephemeral="no" private="no">
  <uuid>dc4f91c1-8792-4948-b68f-2fcea75f53b9</uuid>
  <usage type="ceph">
    <name>client.volumes secret</name>
  </usage>
</secret>
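
Rather than writing the two files under /etc/libvirt/secrets by hand, the same result is normally achieved through virsh, which generates them for you and takes effect without waiting for the libvirtd restart below; a sketch using the files above:

virsh secret-define --file dc4f91c1-8792-4948-b68f-2fcea75f53b9.xml
virsh secret-set-value --secret dc4f91c1-8792-4948-b68f-2fcea75f53b9 --base64 "$(cat /etc/libvirt/secrets/dc4f91c1-8792-4948-b68f-2fcea75f53b9.base64)"
virsh secret-list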

Restart the services

Restart the following two services:

systemctl restart libvirtd
systemctl restart openstack-nova-compute
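
Afterwards, confirm that both services are active and that the secret is visible to libvirt (the log path is the CentOS 7 default):

systemctl status libvirtd openstack-nova-compute
virsh secret-list
tail /var/log/nova/nova-compute.log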