1. Environment preparation
· Prepare three VMs, each with three data disks and two NICs: one NIC on the external network and one on the internal network.
· Network configuration files (adjust for each node as appropriate):
- Edit /etc/sysconfig/network-scripts/ifcfg-eth0 (external NIC) and add:
ONBOOT=yes
BOOTPROTO=dhcp
- Edit /etc/sysconfig/network-scripts/ifcfg-eth1 (internal NIC) and add:
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.20.180
NETMASK=255.255.255.0
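After editing the two ifcfg files, restart networking so the addresses take effect; a minimal sketch, assuming the legacy network-scripts service that these ifcfg files imply:
# systemctl restart network
# ip addr show eth1    (verify the static address, e.g. 192.168.20.180/24 on ceph-node1)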
- Set the hostname on each node:
[root@ceph-node-1 ceph]# cat /etc/sysconfig/network
HOSTNAME=ceph-node1
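On CentOS 7 hosts the same result can also be made persistent with hostnamectl, as an alternative to editing the file above (assumption: systemd is in use):
# hostnamectl set-hostname ceph-node1    (use ceph-node2/ceph-node3 on the other nodes)
# hostname                               (verify)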
- Edit /etc/hosts and add:
192.168.20.180 ceph-node1
192.168.20.40 ceph-node2
192.168.20.81 ceph-node3
· Configure passwordless SSH between the three nodes (this must be done on every node; the example below is run on ceph-node1):
# ssh-keygen
# ssh-copy-id ceph-node2
# ssh-copy-id ceph-node3
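A quick verification that key-based login works (an extra check, not in the original steps):
# ssh ceph-node2 hostname
# ssh ceph-node3 hostname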
· On all nodes, add the yum repository file ceph.repo (under /etc/yum.repos.d/):
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
priority=1
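One way to create the file on ceph-node1 and copy it to the other nodes, relying on the passwordless SSH set up earlier (the scp step is an illustrative shortcut):
# vi /etc/yum.repos.d/ceph.repo    (paste the three sections above)
# scp /etc/yum.repos.d/ceph.repo ceph-node2:/etc/yum.repos.d/
# scp /etc/yum.repos.d/ceph.repo ceph-node3:/etc/yum.repos.d/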
2. Install ceph-deploy
On ceph-node1, run yum install -y ceph-deploy
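As run on ceph-node1; installing epel-release first is an assumption, only needed if ceph-deploy's Python dependencies are not already available from the configured repos:
# yum install -y epel-release    (assumption: provides extra dependencies)
# yum install -y ceph-deploy
# ceph-deploy --version          (verify the installation)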
3. Create the monitor service on the admin node
mkdir /etc/ceph && cd /etc/ceph
ceph-deploy new ceph-node1
After the command succeeds, the following files appear in that directory:
[root@ceph-node-1 ceph]# ls
ceph.conf ceph-deploy-ceph.log ceph.log ceph.mon.keyring rbdmap
Set the replica count in the Ceph configuration file (osd_pool_default_size):
[root@ceph-node-1 ceph]# cat ceph.conf
[global]
fsid = a28ff2de-6ea9-459f-aca1-e7ee21efc075
mon_initial_members = ceph-node1
mon_host = 192.168.20.180
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2
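A sketch of adding the replica setting to the generated ceph.conf; the value 2 matches the listing above, and appending with echo is just one way to do it:
# echo "osd_pool_default_size = 2" >> /etc/ceph/ceph.conf
# grep osd_pool_default_size /etc/ceph/ceph.conf    (confirm the setting)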
4. Install Ceph on all nodes
[root@ceph-node-1 ceph]# ceph-deploy install ceph-node1 ceph-node2 ceph-node3
When it finishes, run ceph --version on each node to confirm the installed Ceph version (cluster health is checked later with ceph status).
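A sketch of checking the version on every node over SSH in one pass (the loop itself is illustrative, not part of the original steps):
# for n in ceph-node1 ceph-node2 ceph-node3; do ssh $n ceph --version; done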
5. Create the first monitor
ceph-deploy mon create-initial
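Optionally, the admin keyring and ceph.conf can be pushed to all nodes so that ceph commands also work from ceph-node2 and ceph-node3; this is a common extra step, not part of the original write-up:
# ceph-deploy admin ceph-node1 ceph-node2 ceph-node3
# chmod +r /etc/ceph/ceph.client.admin.keyring    (on each node, so non-root clients can read the keyring)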
After it succeeds, run ceph status:
[root@ceph-node-1 ceph]# ceph status
cluster a28ff2de-6ea9-459f-aca1-e7ee21efc075
health HEALTH_ERR
64 pgs are stuck inactive for more than 300 seconds
64 pgs stuck inactive
64 pgs stuck unclean
no osds
monmap e1: 1 mons at {ceph-node1=192.168.20.180:6789/0}
election epoch 3, quorum 0 ceph-node1
osdmap e1: 0 osds: 0 up, 0 in
flags sortbitwise,require_jewel_osds
pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
64 creating
6. Create OSDs
List the available disks:
ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3
Wipe any existing partitions and data from the disks:
ceph-deploy disk zap ceph-node1:vdb ceph-node1:vdc ceph-node1:vdd
[root@ceph-node-1 ceph]# ceph status
cluster a28ff2de-6ea9-459f-aca1-e7ee21efc075
health HEALTH_ERR
64 pgs are stuck inactive for more than 300 seconds
64 pgs stuck inactive
64 pgs stuck unclean
no osds
monmap e1: 1 mons at {ceph-node1=192.168.20.180:6789/0}
election epoch 3, quorum 0 ceph-node1
osdmap e1: 0 osds: 0 up, 0 in
flags sortbitwise,require_jewel_osds
pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
64 creating
Create the OSDs (ceph-deploy builds a new filesystem on each disk):
ceph-deploy osd create ceph-node1:vdb ceph-node1:vdc ceph-node1:vdd
[root@ceph-node-1 ceph]# ceph status
cluster a28ff2de-6ea9-459f-aca1-e7ee21efc075
health HEALTH_WARN
24 pgs degraded
64 pgs stuck unclean
24 pgs undersized
too few PGs per OSD (21 < min 30)
monmap e1: 1 mons at {ceph-node1=192.168.20.180:6789/0}
election epoch 3, quorum 0 ceph-node1
osdmap e15: 3 osds: 3 up, 3 in; 40 remapped pgs
flags sortbitwise,require_jewel_osds
pgmap v28: 64 pgs, 1 pools, 0 bytes data, 0 objects
322 MB used, 15004 MB / 15326 MB avail
40 active
24 active+undersized+degraded
Create OSDs on the other nodes in the same way; all of these commands are run from ceph-node1. With OSDs on only one node the cluster remains unhealthy; redundant OSDs must be created on the other nodes before it becomes healthy (a sketch of the equivalent commands follows).
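For example, the equivalent zap and create commands for the other two nodes, assuming their data disks use the same vdb/vdc/vdd device names:
# ceph-deploy disk zap ceph-node2:vdb ceph-node2:vdc ceph-node2:vdd
# ceph-deploy osd create ceph-node2:vdb ceph-node2:vdc ceph-node2:vdd
# ceph-deploy disk zap ceph-node3:vdb ceph-node3:vdc ceph-node3:vdd
# ceph-deploy osd create ceph-node3:vdb ceph-node3:vdc ceph-node3:vdd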
7. Scale the Ceph cluster: add monitors and OSDs
For OSDs, simply repeat step 6 against additional disks, or against disks on the other nodes.
To add monitors, first stop the firewall on each node that will run a monitor:
# service iptables stop
# chkconfig iptables off
# ssh ceph-node2 service iptables stop
# ssh ceph-node2 chkconfig iptables off
# ssh ceph-node3 service iptables stop
# ssh ceph-node3 chkconfig iptables off
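The service iptables commands assume the older iptables init service; on a CentOS 7 host running firewalld instead (an assumption about the environment), an equivalent would be to open the monitor port or stop firewalld:
# firewall-cmd --permanent --add-port=6789/tcp && firewall-cmd --reload    (allow the monitor port)
# systemctl stop firewalld && systemctl disable firewalld                  (or disable the firewall entirely)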
Deploy monitors on the other nodes.
First add the public network setting to the configuration file:
[root@ceph-node-1 ceph]# cat /etc/ceph/ceph.conf
[global]
fsid = a28ff2de-6ea9-459f-aca1-e7ee21efc075
mon_initial_members = ceph-node1
mon_host = 192.168.20.180
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2
public network = 10.100.3.0/24
Push the configuration to the other nodes:
ceph-deploy --overwrite-conf config push ceph-node1 ceph-node2 ceph-node3
Add the monitors:
ceph-deploy mon create ceph-node2 ceph-node3
Run ceph status to check the result:
[root@ceph-node-1 ceph]# ceph status
cluster a28ff2de-6ea9-459f-aca1-e7ee21efc075
health HEALTH_WARN
too few PGs per OSD (14 < min 30)
monmap e3: 3 mons at {ceph-node1=192.168.20.180:6789/0,ceph-node2=10.100.3.226:6789/0,ceph-node3=10.100.3.192:6789/0}
election epoch 8, quorum 0,1,2 ceph-node3,ceph-node2,ceph-node1
osdmap e47: 9 osds: 9 up, 9 in
flags sortbitwise,require_jewel_osds
pgmap v117: 64 pgs, 1 pools, 0 bytes data, 0 objects
971 MB used, 45009 MB / 45980 MB avail
64 active+clean
At this point the Ceph cluster is up, with 3 monitors and 9 OSDs.
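The remaining HEALTH_WARN (too few PGs per OSD) can usually be cleared by raising the placement-group count of the default rbd pool; 128 below is only an illustrative value for 9 OSDs with 2 replicas:
# ceph osd pool set rbd pg_num 128
# ceph osd pool set rbd pgp_num 128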