Project Overview
Node Plan
Node name | IP | Ceph roles |
owncloud | 192.168.64.128 | xxxx |
master1 | 192.168.64.150 | mon,mgr,mds |
master2 | 192.168.64.151 | osd |
master3 | 192.168.64.152 | osd |
Environment Preparation
On every node: update the hosts file, disable the firewall, disable SELinux, set up time synchronization, and add the Ceph and Docker repos. These steps are not shown here; they are the same prerequisites as building a Kubernetes cluster.
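As a minimal sketch of the hosts-file step, the entries below come straight from the node plan; the path /tmp/hosts.ceph is only for a dry run, on a real node you would append these lines to /etc/hosts instead:

```shell
# Hosts entries for the node plan above; written to /tmp for a dry run.
# On a real node, append these lines to /etc/hosts on every host.
cat > /tmp/hosts.ceph <<'EOF'
192.168.64.128 owncloud
192.168.64.150 master1
192.168.64.151 master2
192.168.64.152 master3
EOF
cat /tmp/hosts.ceph
```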
About the Ceph repo
CentOS 7
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=0
priority=1
CentOS 8
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el8/x86_64/
enabled=1
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el8/noarch/
enabled=1
gpgcheck=0
priority=1
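The two repo variants differ only in the release name and el version (mimic/el7 vs nautilus/el8). A small helper function (hypothetical, not part of any tool) makes the mapping explicit:

```shell
# Map a CentOS major version to the matching Aliyun Ceph baseurl,
# using the same release/el pairs as the repo files above.
ceph_baseurl() {
  case "$1" in
    7) echo "http://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/" ;;
    8) echo "http://mirrors.aliyun.com/ceph/rpm-nautilus/el8/x86_64/" ;;
    *) echo "unsupported CentOS version: $1" >&2; return 1 ;;
  esac
}
ceph_baseurl 7
```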
Deploying the Ceph Cluster
Per the node plan, first set up passwordless SSH from master1 to all the other nodes.
[root@master1 ceph-install]# ssh-keygen
[root@master1 ceph-install]# ssh-copy-id master2
[root@master1 ceph-install]# ssh-copy-id master3
[root@master1 ceph-install]# ssh-copy-id owncloud
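The three ssh-copy-id calls can also be written as one loop. A dry-run sketch (echo prints each command instead of executing it, since real key distribution needs the live nodes):

```shell
# Target nodes from the plan; master1 pushes its key to each of them.
targets="master2 master3 owncloud"
for host in $targets; do
  # Drop the echo to actually copy the key.
  echo ssh-copy-id "root@$host"
done
```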
Install the ceph-deploy tool on master1
Note: install it only on master1, since it is the deploy node; the other nodes do not need it.
[root@master1 ceph-install]# yum install ceph-deploy -y
Create the cluster on master1
A cluster configuration file must be created first.
[root@master1 ceph-install]# pwd
/root/ceph-install
[root@master1 ceph-install]# ls -al
total 8
drwxr-xr-x 2 root root 6 Oct 17 14:37 .
dr-xr-x---. 31 root root 4096 Oct 17 14:21 ..
[root@master1 ceph-install]#
Create the cluster on the admin node, master1:
[root@master1 ceph-install]# ceph-deploy new master1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy new master1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x7fef452f9de8>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fef44a717a0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['master1']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[master1][DEBUG ] connected to host: master1
[master1][DEBUG ] detect platform information from remote host
[master1][DEBUG ] detect machine type
[master1][DEBUG ] find the location of an executable
[master1][INFO ] Running command: /usr/sbin/ip link show
[master1][INFO ] Running command: /usr/sbin/ip addr show
[master1][DEBUG ] IP addresses found: [u'172.21.0.1', u'172.20.0.1', u'172.18.0.1', u'172.19.0.1', u'192.168.64.150', u'172.17.0.1']
[ceph_deploy.new][DEBUG ] Resolving host master1
[ceph_deploy.new][DEBUG ] Monitor master1 at 192.168.64.150
[ceph_deploy.new][DEBUG ] Monitor initial members are ['master1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.64.150']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[root@master1 ceph-install]# ls -al
total 20
drwxr-xr-x 2 root root 75 Oct 17 14:39 .
dr-xr-x---. 31 root root 4096 Oct 17 14:39 ..
-rw-r--r-- 1 root root 199 Oct 17 14:39 ceph.conf
-rw-r--r-- 1 root root 3021 Oct 17 14:39 ceph-deploy-ceph.log
-rw------- 1 root root 73 Oct 17 14:39 ceph.mon.keyring
[root@master1 ceph-install]#
Notes:
ceph.conf: the cluster configuration file
ceph-deploy-ceph.log: the log of this ceph-deploy run
ceph.mon.keyring: the monitor authentication keyring
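To sanity-check the generated ceph.conf, single keys can be pulled out of it with awk. A sketch against a locally recreated sample (the real file is ./ceph.conf in /root/ceph-install; underscore-style key names as ceph-deploy 2.x writes them):

```shell
# Recreate a sample of the ceph.conf written by `ceph-deploy new`.
cat > /tmp/ceph.conf.sample <<'EOF'
[global]
fsid = 1c983db6-36a6-48a8-82b3-e9c32d10c27b
mon_initial_members = master1
mon_host = 192.168.64.150
EOF
# Extract the monitor address (split each line on " = ").
awk -F' = ' '$1 == "mon_host" {print $2}' /tmp/ceph.conf.sample
```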
Install the Ceph packages on all Ceph nodes
That is, install the ceph and ceph-radosgw packages on master1, master2, and master3.
[root@master1 ceph-install]# yum install ceph ceph-radosgw -y
[root@master1 ceph-install]# ceph -v
ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
Note: the Ceph version must be the same on every node.
Of course, you can also install via ceph-deploy directly: with a fast network connection, ceph-deploy install master1 master2 master3 works too. ceph-deploy installs automatically from the official upstream repos over the public network (skip this method if your bandwidth is poor).
Install ceph-common on the Ceph client
The client is whichever node will consume the Ceph cluster, i.e. our owncloud node; install ceph-common there (yum install ceph-common -y).
Create the mon (monitor) component
Create the configuration directory and a configuration file, generate a cluster UUID by hand, and add the monitor network; the segment is the physical network of the test cluster.
[root@master1 ceph-install]# touch /etc/ceph/ceph.conf
[root@master1 ceph-install]# uuidgen
1c983db6-36a6-48a8-82b3-e9c32d10c27b
[root@master1 ceph-install]# cat /etc/ceph/ceph.conf
[global]
fsid = 1c983db6-36a6-48a8-82b3-e9c32d10c27b # the generated UUID
mon initial members = master1 # hostname
mon host = 192.168.64.150 # its IP
public network = 192.168.64.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
[root@master1 ceph-install]#
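The manual steps above (uuidgen, then editing the file) can be combined into one script. A sketch that writes the same [global] section with a freshly generated fsid; /tmp/ceph.conf.new is for illustration only, the real file lives at /etc/ceph/ceph.conf:

```shell
# Generate a fresh cluster fsid; fall back to the kernel's UUID source
# if uuidgen is not installed.
fsid=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
cat > /tmp/ceph.conf.new <<EOF
[global]
fsid = $fsid
mon initial members = master1
mon host = 192.168.64.150
public network = 192.168.64.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
EOF
cat /tmp/ceph.conf.new
```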
Initialize the monitor on node master1
[root@master1 ceph-install]# ceph-deploy mon create-initial
Because Ceph had been deployed on this machine before, the command failed here; in that case, run:
[root@master1 ceph-install]# ceph-deploy --overwrite-conf config push master1
Then test with the ceph -s command; it fails with an error:
That is because the configuration has not yet been synced to the other nodes. Push the configuration file to every node in the Ceph cluster