| Hostname | IP          | Disks                                      | Role     |
| -------- | ----------- | ------------------------------------------ | -------- |
| ceph01   | 10.10.20.55 | /dev/sdb (journal SSD), /dev/sdc, /dev/sdd | mon, osd |
| ceph02   | 10.10.20.66 | /dev/sdb (journal SSD), /dev/sdc, /dev/sdd | mon, osd |
| ceph03   | 10.10.20.77 | /dev/sdb (journal SSD), /dev/sdc, /dev/sdd | mon, osd |
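The hostnames above are assumed to resolve from every node. If DNS does not already cover them, a minimal /etc/hosts sketch for all three nodes (an assumption, not shown in the original) would be:

# /etc/hosts (append on every node; skip if name resolution is already in place)
10.10.20.55 ceph01
10.10.20.66 ceph02
10.10.20.77 ceph03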
systemctl stop ceph-mon@ceph01
systemctl stop ceph-mon@ceph02
systemctl stop ceph-mon@ceph03
[root@ceph02 ~]# parted /dev/sdb mklabel gpt
Information: You may need to update /etc/fstab.
[root@ceph02 ~]# parted /dev/sdb mkpart primary 1M 50%
Information: You may need to update /etc/fstab.
[root@ceph02 ~]# parted /dev/sdb mkpart primary 50% 100%
Information: You may need to update /etc/fstab.
[root@ceph02 ~]# chown ceph.ceph /dev/sdb1
[root@ceph02 ~]# chown ceph.ceph /dev/sdb2
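The partitioning above is shown on ceph02 only, but each node needs the same two journal partitions on its SSD. A minimal sketch that repeats the steps on all three hosts, assuming passwordless SSH from ceph01 and that /dev/sdb is the empty SSD on every node:

# Hypothetical helper: partition the journal SSD and hand both partitions to the ceph user on every node.
# Assumes /dev/sdb is an empty SSD on each host; adjust the device name if it differs.
for host in ceph01 ceph02 ceph03; do
    ssh root@$host "parted -s /dev/sdb mklabel gpt"
    ssh root@$host "parted -s /dev/sdb mkpart primary 1M 50%"
    ssh root@$host "parted -s /dev/sdb mkpart primary 50% 100%"
    ssh root@$host "chown ceph.ceph /dev/sdb1 /dev/sdb2"
done

Note that ownership set with chown on the /dev/sdb1 and /dev/sdb2 device nodes does not survive a reboot; a udev rule is the usual way to make it persistent.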
Initialize and wipe the disk data (this only needs to be run from ceph01)
[root@ceph01 ceph-cluster]# ceph-deploy disk zap ceph01 /dev/sd{c,d}
[root@ceph01 ceph-cluster]# ceph-deploy disk zap ceph02 /dev/sd{c,d}
[root@ceph01 ceph-cluster]# ceph-deploy disk zap ceph03 /dev/sd{c,d}
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap ceph03 /dev/sdc /dev/sdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ]  username          : None
[ceph_deploy.cli][INFO ]  verbose           : False
[ceph_deploy.cli][INFO ]  debug             : False
[ceph_deploy.cli][INFO ]  overwrite_conf    : False
[ceph_deploy.cli][INFO ]  subcommand        : zap
[ceph_deploy.cli][INFO ]  quiet             : False
[ceph_deploy.cli][INFO ]  cd_conf           : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f1635a2fbd8>
[ceph_deploy.cli][INFO ]  cluster           : ceph
[ceph_deploy.cli][INFO ]  host              : ceph03
[ceph_deploy.cli][INFO ]  func              : <function disk at 0x7f1635a7a578>
[ceph_deploy.cli][INFO ]  ceph_conf         : None
[ceph_deploy.cli][INFO ]  default_release   : False
[ceph_deploy.cli][INFO ]  disk              : ['/dev/sdc', '/dev/sdd']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdc on ceph03
[ceph03][DEBUG ] connected to host: ceph03
[ceph03][DEBUG ] detect platform information from remote host
[ceph03][DEBUG ] detect machine type
[ceph03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.7.1908 Core
[ceph03][DEBUG ] zeroing last few blocks of device
[ceph03][DEBUG ] find the location of an executable
[ceph03][INFO ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdc
[ceph03][WARNIN] --> Zapping: /dev/sdc
[ceph03][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph03][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdc bs=1M count=10 conv=fsync
[ceph03][WARNIN]  stderr: 10+0 records in
[ceph03][WARNIN] 10+0 records out
[ceph03][WARNIN]  stderr: 10485760 bytes (10 MB) copied, 0.0125001 s, 839 MB/s
[ceph03][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdc>
[ceph_deploy.osd][DEBUG ] zapping /dev/sdd on ceph03
[ceph03][DEBUG ] connected to host: ceph03
[ceph03][DEBUG ] detect platform information from remote host
[ceph03][DEBUG ] detect machine type
[ceph03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.7.1908 Core
[ceph03][DEBUG ] zeroing last few blocks of device
[ceph03][DEBUG ] find the location of an executable
[ceph03][INFO ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdd
[ceph03][WARNIN] --> Zapping: /dev/sdd
[ceph03][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph03][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdd bs=1M count=10 conv=fsync
[ceph03][WARNIN]  stderr: 10+0 records in
[ceph03][WARNIN] 10+0 records out
[ceph03][WARNIN] 10485760 bytes (10 MB) copied
[ceph03][WARNIN]  stderr: , 0.00957528 s, 1.1 GB/s
[ceph03][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdd>
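Before creating the OSDs it can help to confirm the data disks are now empty. A small optional check (a sketch, assuming SSH access from ceph01 to every node):

# Optional sanity check: list the block devices on every node and confirm sdc/sdd carry no partitions or LVM metadata.
for host in ceph01 ceph02 ceph03; do
    echo "== $host =="
    ssh root@$host "lsblk /dev/sdb /dev/sdc /dev/sdd"
done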
Create the OSD storage space (this only needs to be run from ceph01)
# Create the OSD storage devices: /dev/sdc and /dev/sdd provide storage space to the cluster, /dev/sdb1 and /dev/sdb2 provide the JOURNAL cache
# Each storage device is paired with one journal device; the journal should sit on an SSD and does not need to be large
[root@ceph01 ceph-cluster]# ceph-deploy osd create ceph01 --data /dev/sdc --journal /dev/sdb1
[root@ceph01 ceph-cluster]# ceph-deploy osd create ceph01 --data /dev/sdd --journal /dev/sdb2
[root@ceph01 ceph-cluster]# ceph-deploy osd create ceph02 --data /dev/sdc --journal /dev/sdb1
[root@ceph01 ceph-cluster]# ceph-deploy osd create ceph02 --data /dev/sdd --journal /dev/sdb2
[root@ceph01 ceph-cluster]# ceph-deploy osd create ceph03 --data /dev/sdc --journal /dev/sdb1
[root@ceph01 ceph-cluster]# ceph-deploy osd create ceph03 --data /dev/sdd --journal /dev/sdb2
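The six commands above follow one fixed pattern: one data disk paired with one journal partition per OSD. An equivalent loop sketch, assuming the same device layout (/dev/sdc + /dev/sdb1, /dev/sdd + /dev/sdb2) on every host and run from the ceph-cluster directory on ceph01:

# Equivalent loop over all three nodes; adjust the device names if a host's layout differs.
for host in ceph01 ceph02 ceph03; do
    ceph-deploy osd create $host --data /dev/sdc --journal /dev/sdb1
    ceph-deploy osd create $host --data /dev/sdd --journal /dev/sdb2
done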
Verify: the OSD count can be seen to have gone from 0 to 6.
[root@ceph01 ceph-cluster]# ceph -s
  cluster:
    id:     fbc66f50-ced8-4ad1-93f7-2453cdbf59ba
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 10m)
    mgr: no daemons active
    osd: 6 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
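The cluster still reports HEALTH_WARN because no mgr daemon has been deployed yet, which also keeps the usage statistics at zero. A possible follow-up step (not part of the original transcript) is to create manager daemons with ceph-deploy:

# Hypothetical follow-up: deploy mgr daemons so the cluster can report health and usage statistics.
[root@ceph01 ceph-cluster]# ceph-deploy mgr create ceph01 ceph02 ceph03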