Original post: https://www.fullstackmemo.com/2018/10/11/cephfs-ha-mount-storage/
Server hardware and environment

| Item | Value |
| --- | --- |
| CPU | 1 core |
| Memory | 1 GB |
| Disk | 40 GB |
| OS | CentOS 7.5 |
| Time sync service | chrony |
| ceph | 13.2.2-0 |
Node deployment diagram
Node roles

| Item | Description |
| --- | --- |
| yum repo | If every node in the deployment environment can reach the internet, nothing needs to be done; the deployment scripts add public yum repos automatically. If there is no internet access, set up your own centos, epel, and ceph yum repos. Every node must be able to reach all of the repos mentioned. |
| Time sync server | Must be reachable from every node; if the environment has no internet access, run your own time sync server. |
| client-x | Machines that mount the storage; they must be able to reach every storage-ha-x node as well as the yum repos and the time server. |
| storage-deploy-1 | The workstation used to drive the ceph cluster deployment; runs CentOS 7.5. |
| storage-ha-x | The server nodes that host the ceph services; CentOS 7.5. |
| mon | Monitors: cluster map and authentication management; at least 3 nodes are needed for redundancy and high availability. |
| osd | Object storage daemon; at least 3 nodes are needed for redundancy and high availability. |
| mgr | Manager: tracks runtime metrics, cluster state, and performance. |
| mds | Metadata Server: stores the cephfs metadata. |
Reference:
Default ports

| Service | Ports |
| --- | --- |
| ssh | tcp: 22 |
| mon | tcp: 6789 |
| mds/mgr/osd | tcp: 6800~7300 |
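If you keep a host firewall enabled (the appendix scripts simply disable firewalld), the table above is what you would open. A small lookup helper as a sketch; the function name and the firewalld usage are my additions, not part of the original post:

```shell
#!/bin/sh
# Look up the default ceph-related ports from the table above.
ceph_default_ports() {
  case "$1" in
    ssh) echo "22" ;;
    mon) echo "6789" ;;
    mds|mgr|osd) echo "6800-7300" ;;
    *) echo "unknown role: $1" >&2; return 1 ;;
  esac
}

# Hypothetical firewalld usage on a storage node:
# firewall-cmd --permanent --add-port="$(ceph_default_ports mon)/tcp"
# firewall-cmd --permanent --add-port="$(ceph_default_ports osd)/tcp"
# firewall-cmd --reload
```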
Reference:
Default paths

| Item | Path |
| --- | --- |
| Main config file | /etc/ceph/ceph.conf |
| Config directory | /etc/ceph |
| Log directory | /var/log/ceph |
| Per-service keyring | /var/lib/ceph/{service name}/{hostname}/keyring |
| admin keyring | ceph.client.admin.keyring |
Deployment scripts

- node-init.sh: initialization script run first on every storage-ha-x node.
- admin-init.sh: initialization script run on storage-deploy-1; it must only be run after node-init.sh has finished on every storage-ha-x node.
- ceph-deploy.sh: the ceph deployment script; run it only on storage-deploy-1, and only after both node-init.sh and admin-init.sh have completed successfully.

Note: adjust the IPs and any other environment-specific values in the scripts before running them.
Running the scripts

Place the scripts from the "Appendix: script contents" section (or from the script Git repository) anywhere on the corresponding servers and run them in order with the commands below.

Note: execute the scripts strictly in the order given in the "Deployment scripts" section.
Note: adjust anything that differs from your environment (IPs, yum repos, passwords, hostnames, and so on) before running.
```shell
# On every storage-ha-x node:
/bin/bash node-init.sh
```

Execution result: (screenshot omitted)

```shell
# On storage-deploy-1:
/bin/bash admin-init.sh
```

Execution result: (screenshot omitted)

```shell
# On storage-deploy-1:
/bin/bash ceph-deploy.sh
```

Execution result: (screenshot omitted)
In the `pgs` section of the output above you can see a `creating+peering` entry, which means the OSDs are still being created and peering; wait for this to finish. You can re-run `ceph -s` on any storage-ha-x node that has the admin role to check whether peering has completed. When `pgs` shows `active+clean` (as in the figure below), the nodes have finished syncing.

If the cluster never reaches the `active+clean` state, see the following article:
TROUBLESHOOTING PGS
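Waiting for `active+clean` can be scripted. A minimal sketch (my addition; `pgs_all_clean` only inspects the status text, so it can be exercised without a live cluster, and the polling loop assumes `ceph -s` works with the admin keyring):

```shell
#!/bin/sh
# Succeed when the status text reports active+clean PGs and no
# transient states (creating/peering/degraded/undersized) remain.
pgs_all_clean() {
  echo "$1" | grep -q 'active+clean' &&
  ! echo "$1" | grep -Eq 'creating|peering|degraded|undersized'
}

# Hypothetical polling loop on a real cluster:
# until pgs_all_clean "$(ceph -s)"; do sleep 5; done
```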
Mounting the storage

Create a test user

Run the following on any storage-ha-x server:

```shell
# Create a user named client.fs-test-1 with read-only access to the
# mount root '/' and read-write access to the mounted '/test_1' directory.
ceph fs authorize cephfs client.fs-test-1 / r /test_1 rw
# The command prints something like:
# [client.fs-test-1]
#     key = AQA0Cr9b9afRDBAACJ0M8HxsP41XmLhbSxWkqA==
```
Get user credentials

Run the following on any storage-ha-x server that has the admin role:

```shell
# Show the credentials of client.admin
ceph auth get client.admin
# Prints something like:
# [client.admin]
#     key = AQAm4L5b60alLhAARxAgr9jQDLopr9fbXfm87w==
#     caps mds = "allow *"
#     caps mgr = "allow *"
#     caps mon = "allow *"
#     caps osd = "allow *"

# Show the credentials of client.fs-test-1
ceph auth get client.fs-test-1
# Prints something like:
# [client.fs-test-1]
#     key = AQA0Cr9b9afRDBAACJ0M8HxsP41XmLhbSxWkqA==
#     caps mds = "allow r, allow rw path=/test_1"
#     caps mon = "allow r"
#     caps osd = "allow rw tag cephfs data=cephfs"
```
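When scripting the client setup, the key can be pulled out of the `ceph auth get` output instead of copied by hand. A sketch (the helper name is my own; the sample usage assumes a live cluster):

```shell
#!/bin/sh
# Print the value of the 'key = ...' line from keyring-style input.
# sed is used rather than splitting on '=', because ceph keys end in '=='.
extract_key() {
  sed -n 's/^[[:space:]]*key[[:space:]]*=[[:space:]]*//p' | head -n 1
}

# Hypothetical usage on a live cluster:
# ceph auth get client.fs-test-1 | extract_key > /etc/ceph/test_cephfs_1_secret.key
```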
Mount methods

There are two ways to mount: cephfs (the kernel client) and FUSE. Pick one of the two.
For the differences and trade-offs between them, see the following article:
WHICH CLIENT?
cephfs (kernel) method

Run the following on any client that needs to mount the storage.
Note: this method depends on the ceph package; add the ceph and epel yum repos first.

```shell
# Mount with the cephfs kernel client
yum install ceph -y
mkdir -p /etc/ceph
mkdir -p /mnt/mycephfs

# Replace the secret below with the 'key' obtained in the
# "Get user credentials" section.
cat > /etc/ceph/admin_secret.key << EOF
AQAm4L5b60alLhAARxAgr9jQDLopr9fbXfm87w==
EOF

# Replace the secret below with the 'key' obtained in the
# "Get user credentials" section.
cat > /etc/ceph/test_cephfs_1_secret.key << EOF
AQA0Cr9b9afRDBAACJ0M8HxsP41XmLhbSxWkqA==
EOF

# Mount the cephfs root as the 'admin' user.
# Adjust the IPs/hostnames to your environment.
# 'name=admin' is the part of 'client.admin' after the dot.
mount.ceph 192.168.60.111:6789,192.168.60.112:6789,192.168.60.113:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin_secret.key

# Mount with the restricted user
mkdir -p /mnt/mycephfs/test_1
mkdir -p /mnt/test_cephfs_1
# Mount the cephfs root as the 'fs-test-1' user.
# Adjust the IPs/hostnames to your environment.
# 'name=fs-test-1' is the part of 'client.fs-test-1' after the dot.
mount.ceph 192.168.60.111:6789,192.168.60.112:6789,192.168.60.113:6789:/ /mnt/test_cephfs_1 -o name=fs-test-1,secretfile=/etc/ceph/test_cephfs_1_secret.key

# Mount automatically at boot
# (the secretfile must match the file created above)
cat >> /etc/fstab << EOF
192.168.60.111:6789,192.168.60.112:6789,192.168.60.113:6789:/ /mnt/mycephfs ceph name=admin,secretfile=/etc/ceph/admin_secret.key,noatime,_netdev 0 2
EOF
```
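The mon address list repeated in the commands above can be generated from a host list rather than typed three times. A minimal sketch (the function name and the fixed 6789 mon port are my assumptions; the IPs are the ones used throughout this post):

```shell
#!/bin/sh
# Join "ip:6789" entries with commas to build the mon host string.
mon_string() {
  port=6789
  out=''
  for ip in "$@"; do
    out="${out:+$out,}$ip:$port"
  done
  echo "$out"
}

mon_string 192.168.60.111 192.168.60.112 192.168.60.113
# → 192.168.60.111:6789,192.168.60.112:6789,192.168.60.113:6789
```

Hypothetical usage: `mount.ceph "$(mon_string 192.168.60.111 192.168.60.112 192.168.60.113):/" /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin_secret.key`.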
FUSE method

Run the following on any client that needs to mount the storage.
Note: this method depends on ceph-fuse; add the ceph and epel yum repos first.

```shell
yum install ceph-fuse -y
mkdir -p /etc/ceph
mkdir -p /mnt/mycephfs

# Fetch the ceph config file from any storage-ha-x node
scp storage@storage-ha-1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf

# Replace the secrets below with the 'key' values obtained in the
# "Get user credentials" section.
cat > /etc/ceph/ceph.keyring << EOF
[client.admin]
    key = AQAm4L5b60alLhAARxAgr9jQDLopr9fbXfm87w==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
[client.fs-test-1]
    key = AQA0Cr9b9afRDBAACJ0M8HxsP41XmLhbSxWkqA==
    caps mds = "allow r, allow rw path=/test_1"
    caps mon = "allow r"
    caps osd = "allow rw tag cephfs data=cephfs"
EOF

# Mount the cephfs root as the 'admin' user.
# Adjust the IPs/hostnames to your environment.
ceph-fuse -m 192.168.60.111:6789,192.168.60.112:6789,192.168.60.113:6789 /mnt/mycephfs
# Mount automatically at boot
cat >> /etc/fstab << EOF
none /mnt/mycephfs fuse.ceph ceph.id=admin,ceph.conf=/etc/ceph/ceph.conf,_netdev,defaults 0 0
EOF

# Mount with the restricted user
mkdir -p /mnt/mycephfs/test_1
mkdir -p /mnt/test_cephfs_1
# Mount the cephfs root as the 'fs-test-1' user.
# Adjust the IPs/hostnames to your environment.
# '-n client.fs-test-1' takes the full name 'client.fs-test-1'.
ceph-fuse -m 192.168.60.111:6789,192.168.60.112:6789,192.168.60.113:6789 -n client.fs-test-1 /mnt/test_cephfs_1
```
Mount result

The mounts can be verified on the client with standard tools such as `df -h` or `mount`.
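For scripted checks, a mountpoint can be tested against `/proc/mounts`. A sketch (my addition, not from the original post; the paths are the ones used above and it works on any Linux host):

```shell
#!/bin/sh
# Succeed when the given path appears as a mountpoint in /proc/mounts.
is_mounted() {
  grep -qs " $1 " /proc/mounts
}

# Hypothetical usage for the mounts created above:
# is_mounted /mnt/mycephfs      && echo "cephfs root mounted"
# is_mounted /mnt/test_cephfs_1 || echo "restricted mount missing"
```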
Operations commands

```shell
# Overall cluster status
ceph -s

# Cluster health
ceph health

# Cluster health details
ceph health detail

# List cephfs filesystems
ceph fs ls

# mds status
ceph mds stat

# osd node status
ceph osd tree

# Monitor quorum status
ceph quorum_status --format json-pretty
```
```shell
# Simple write-performance test on a client that has the storage mounted
time dd if=/dev/zero of=/mnt/mycephfs/test.dbf bs=8k count=3000 oflag=direct
```
```shell
# Steps required before deleting a cephfs
# Stop the mds service on every mds node
# (run on every mds node)
systemctl stop ceph-mds.target

# Run the following on any one 'storage-ha-x' node
# Delete the cephfs
ceph fs rm cephfs --yes-i-really-mean-it

# Delete the pools
# Deleting a pool requires the pool name twice plus the
# '--yes-i-really-really-mean-it' flag
ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it

# Start the mds service again on every mds node
# (run on every mds node)
systemctl start ceph-mds.target
```
```shell
# Sync the config file
# If a node already has a config file that differs from the one being
# pushed, add the '--overwrite-conf' flag.
# This pushes the 'ceph.conf' in the current directory to the listed nodes.
ceph-deploy --overwrite-conf config push storage-ha-1 storage-ha-2 storage-ha-3

# Restart the ceph services on each node
# (run the matching command on every node that hosts that role)
systemctl restart ceph-osd.target
systemctl restart ceph-mds.target
systemctl restart ceph-mon.target
systemctl restart ceph-mgr.target
```
FAQ

- Q: health_warn: clock skew detected on mon
  A: Use chrony to synchronize the time on every server node.
- Q: Error ERANGE: pg_num "*" size "*" would mean "*" total pgs, which exceeds max "*" (mon_max_pg_per_osd 250 num_in_osds "*")
  A: Add `mon_max_pg_per_osd = 1000` to ceph.conf (adjust the value to your situation), push it to every node as described in the config-sync commands above, and restart ceph-mon.target.
- Q: too many PGs per OSD
  A: Same fix as above: raise `mon_max_pg_per_osd` in ceph.conf, push the config to every node, and restart ceph-mon.target.
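Both answers above boil down to "set an option in ceph.conf and push it". A sketch of an idempotent option setter (my addition; the function name is made up, and note it appends at end of file, so make sure that lands in the intended section of ceph.conf):

```shell
#!/bin/sh
# Set 'key = value' in an ini-style config file: replace the line if the
# key already exists, append it otherwise.
set_conf_option() {
  file=$1; key=$2; value=$3
  if grep -q "^${key}[[:space:]]*=" "$file"; then
    sed -i "s|^${key}[[:space:]]*=.*|${key} = ${value}|" "$file"
  else
    echo "${key} = ${value}" >> "$file"
  fi
}

# Hypothetical usage before pushing the config:
# set_conf_option ceph.conf mon_max_pg_per_osd 1000
# ceph-deploy --overwrite-conf config push storage-ha-1 storage-ha-2 storage-ha-3
```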
Reference

- ceph cephx authentication reference
- Setting cephfs access permissions
- ceph user management
- Mounting via ceph-fuse
- Ceph 运维手册 (Ceph operations handbook)
- Red Hat Ceph Storage: 深入理解Ceph架构 (understanding the Ceph architecture)
- Ceph常规操作及常见问题梳理 (Ceph routine operations and common issues)
Script Git repository
https://github.com/x22x22/cephfs-verify-script
Appendix: script contents
node-init.sh:

```shell
#!/bin/bash

# Disable IPv6 and raise the pid limit
cat >>/etc/sysctl.conf <<EOF
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
kernel.pid_max = 4194303
EOF

sysctl -p
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1

# Write the hostname/IP mappings of this environment into /etc/hosts
# as a simple substitute for a DNS server
cat >>/etc/hosts <<EOF
192.168.60.110 storage-deploy-1
192.168.60.111 storage-ha-1
192.168.60.112 storage-ha-2
192.168.60.113 storage-ha-3
EOF

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config

# Add a 'storage' user for the ceph-deploy tool to install and operate the nodes
useradd -d /home/storage -m storage
echo 'fullstackmemo***' | passwd --stdin storage
echo "storage ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/storage
chmod 0440 /etc/sudoers.d/storage

# Add the ceph yum repo; if you have no internet access, host your own and adjust
cat >/etc/yum.repos.d/ceph.repo <<'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
EOF

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

# Replace the CentOS base yum repo; if you have no internet access, host your own and adjust
cat >/etc/yum.repos.d/CentOS-Base.repo <<'EOF'
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#

[base]
name=CentOS-$releasever - Base
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
EOF

yum makecache fast
# Install the CentOS epel yum repo
yum install -y epel-release

# Replace the epel yum repo; if you have no internet access, host your own and adjust
cat >/etc/yum.repos.d/epel.repo <<'EOF'
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
EOF

yum makecache
yum install yum-plugin-priorities chrony parted xfsprogs -y
mv /etc/chrony.conf /etc/chrony.conf.bk

# Point chrony at a time sync server; if you have no internet access,
# host your own and adjust
cat > /etc/chrony.conf << EOF
server ntp.api.bz iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
EOF

systemctl enable chronyd
systemctl restart chronyd
sleep 10
chronyc activity
chronyc sources -v
hwclock -w

# /dev/sdb is used as the ceph storage device here, so partition and
# format it first; adjust to your environment
parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
partprobe /dev/sdb
mkfs.xfs /dev/sdb -f
```
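One caveat with the script above: re-running it appends duplicate lines to /etc/sysctl.conf and /etc/hosts, because it uses plain `cat >>`. A hedged sketch of an append-once helper (my addition, not part of the original scripts):

```shell
#!/bin/sh
# Append a line to a file only if that exact line is not already present.
append_once() {
  file=$1; line=$2
  grep -qxF "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}

# Hypothetical usage in node-init.sh:
# append_once /etc/hosts '192.168.60.111 storage-ha-1'
# append_once /etc/sysctl.conf 'kernel.pid_max = 4194303'
```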
admin-init.sh:

```shell
#!/bin/bash

# Disable IPv6
cat >>/etc/sysctl.conf <<EOF
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF

sysctl -p
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1

# Write the hostname/IP mappings of this environment into /etc/hosts
# as a simple substitute for a DNS server
cat >>/etc/hosts <<EOF
192.168.60.110 storage-deploy-1
192.168.60.111 storage-ha-1
192.168.60.112 storage-ha-2
192.168.60.113 storage-ha-3
EOF

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config

# Add the ceph yum repo; if you have no internet access, host your own and adjust
cat >/etc/yum.repos.d/ceph.repo <<'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
EOF

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

# Replace the CentOS base yum repo; if you have no internet access, host your own and adjust
cat >/etc/yum.repos.d/CentOS-Base.repo <<'EOF'
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#

[base]
name=CentOS-$releasever - Base
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
EOF

yum makecache fast
# Install the CentOS epel yum repo
yum install -y epel-release

# Replace the epel yum repo; if you have no internet access, host your own and adjust
cat >/etc/yum.repos.d/epel.repo <<'EOF'
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
EOF

yum makecache
yum install yum-plugin-priorities chrony sshpass ceph-deploy ceph -y
mv /etc/chrony.conf /etc/chrony.conf.bk

# Point chrony at a time sync server; if you have no internet access,
# host your own and adjust
cat > /etc/chrony.conf << EOF
server ntp.api.bz iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
EOF

systemctl enable chronyd
systemctl restart chronyd
sleep 10
chronyc activity
chronyc sources -v
hwclock -w

rm -f "${HOME}"/.ssh/ceph_id_rsa
ssh-keygen -t rsa -b 4096 -f "${HOME}"/.ssh/ceph_id_rsa -N ''
cat >"${HOME}"/.ssh/config <<EOF
Host storage-ha-1
    Hostname storage-ha-1
    User storage
    IdentityFile ${HOME}/.ssh/ceph_id_rsa
    IdentitiesOnly yes
    StrictHostKeyChecking no
Host storage-ha-2
    Hostname storage-ha-2
    User storage
    IdentityFile ${HOME}/.ssh/ceph_id_rsa
    IdentitiesOnly yes
    StrictHostKeyChecking no
Host storage-ha-3
    Hostname storage-ha-3
    User storage
    IdentityFile ${HOME}/.ssh/ceph_id_rsa
    IdentitiesOnly yes
    StrictHostKeyChecking no
EOF
chmod 0400 "${HOME}"/.ssh/config
sshpass -p "fullstackmemo***" ssh-copy-id -i ~/.ssh/ceph_id_rsa storage@storage-ha-1
sshpass -p "fullstackmemo***" ssh-copy-id -i ~/.ssh/ceph_id_rsa storage@storage-ha-2
sshpass -p "fullstackmemo***" ssh-copy-id -i ~/.ssh/ceph_id_rsa storage@storage-ha-3

mkdir -p "${HOME}"/ceph-cluster
cd "${HOME}"/ceph-cluster || exit
```
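The three identical Host blocks in the ssh config (and the three ssh-copy-id calls) could be generated in a loop instead of copied by hand. A sketch (my addition; hostnames are the ones used above, and it assumes the same 'storage' user on every node):

```shell
#!/bin/sh
# Emit one ssh config Host block per node name given as an argument.
gen_ssh_config() {
  for host in "$@"; do
    printf 'Host %s\n  Hostname %s\n  User storage\n  IdentityFile %s/.ssh/ceph_id_rsa\n  IdentitiesOnly yes\n  StrictHostKeyChecking no\n' \
      "$host" "$host" "$HOME"
  done
}

# Hypothetical usage in admin-init.sh:
# gen_ssh_config storage-ha-1 storage-ha-2 storage-ha-3 > "${HOME}/.ssh/config"
```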
ceph-deploy.sh:

```shell
#!/bin/bash

mkdir -p "${HOME}"/ceph-cluster
cd "${HOME}"/ceph-cluster || exit
ceph-deploy new storage-ha-1 storage-ha-2 storage-ha-3

cat >>ceph.conf <<EOF
# 'public network':
# the subnet the whole cluster lives on;
# adjust to your environment
public network = 192.168.60.0/24
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 100
osd pool default pgp num = 100
# 'mon allow pool delete':
# allows pools to be deleted; enabled here for convenience in a PoC
# environment, comment it out in production
mon allow pool delete = true

[osd]
osd_max_backfills = 1
osd_recovery_max_active = 1
osd_recovery_op_priority = 1
EOF

# Install ceph on each node, pointing at a public ceph yum mirror;
# if you have no internet access, host your own and adjust
ceph-deploy install storage-ha-1 storage-ha-2 storage-ha-3 --repo-url http://mirrors.ustc.edu.cn/ceph/rpm-mimic/el7 --gpg-url 'http://mirrors.ustc.edu.cn/ceph/keys/release.asc'
# Initialize the mon service and key material
ceph-deploy mon create-initial
ceph-deploy mon add storage-ha-2
ceph-deploy mon add storage-ha-3
ceph-deploy admin storage-ha-1 storage-ha-2 storage-ha-3
ceph-deploy mgr create storage-ha-1 storage-ha-2 storage-ha-3

# Add the raw disks on the storage nodes to the storage pool
ceph-deploy osd create --data /dev/sdb storage-ha-1
ceph-deploy osd create --data /dev/sdb storage-ha-2
ceph-deploy osd create --data /dev/sdb storage-ha-3

ceph-deploy mds create storage-ha-1 storage-ha-2 storage-ha-3

ssh storage@storage-ha-1 << EOF
# Create the two pools cephfs needs; cephfs requires at least two pools,
# one for metadata and one for data
sudo ceph osd pool create cephfs_data 100
# To store data RAID-5 style, use the erasure pool type instead; erasure
# has an advantage over replicated when the average file size exceeds 8k:
# sudo ceph osd pool create cephfs_data 100 100 erasure
# sudo ceph osd pool set cephfs_data allow_ec_overwrites true
# The metadata pool must use the replicated type.
sudo ceph osd pool create cephfs_metadata 100
# Skip this step if you used the erasure type
sudo ceph osd pool set cephfs_data size 3

sudo ceph osd pool set cephfs_metadata size 3
sudo ceph fs new cephfs cephfs_metadata cephfs_data

# Show the various cluster states
sudo ceph quorum_status --format json-pretty
sudo ceph fs ls
sudo ceph mds stat
sudo ceph health
sudo ceph -s
EOF
```
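The pg num of 100 used above fits the common community rule of thumb (total PGs ≈ OSDs × 100 / replica count, typically rounded to a nearby power of two). That guideline is my addition, not something stated in the original post; a sketch of the calculation:

```shell
#!/bin/sh
# Rule-of-thumb PG count: osds * 100 / replicas, rounded up to a power of 2.
suggest_pg_num() {
  osds=$1; replicas=$2
  target=$(( osds * 100 / replicas ))
  pg=1
  while [ "$pg" -lt "$target" ]; do
    pg=$(( pg * 2 ))
  done
  echo "$pg"
}

suggest_pg_num 3 3   # 3 OSDs, size 3: target 100, rounds up to 128
```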