6 Using CephFS
- https://docs.ceph.com/en/pacific/cephfs/
6.1 Deploy the MDS service
6.1.1 Install ceph-mds
root@ceph-mgr-01:~# apt -y install ceph-mds
6.1.2 Create the MDS service
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mgr-01
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create ceph-mgr-01
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f82357dae60>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mds at 0x7f82357b9350>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] mds : [('ceph-mgr-01', 'ceph-mgr-01')]
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts ceph-mgr-01:ceph-mgr-01
ceph@ceph-mgr-01's password:
[ceph-mgr-01][DEBUG ] connection detected need for sudo
ceph@ceph-mgr-01's password:
sudo: unable to resolve host ceph-mgr-01
[ceph-mgr-01][DEBUG ] connected to host: ceph-mgr-01
[ceph-mgr-01][DEBUG ] detect platform information from remote host
[ceph-mgr-01][DEBUG ] detect machine type
[ceph_deploy.mds][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph-mgr-01
[ceph-mgr-01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mgr-01][WARNIN] mds keyring does not exist yet, creating one
[ceph-mgr-01][DEBUG ] create a keyring file
[ceph-mgr-01][DEBUG ] create path if it doesn't exist
[ceph-mgr-01][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph-mgr-01 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph-mgr-01/keyring
[ceph-mgr-01][INFO ] Running command: sudo systemctl enable ceph-mds@ceph-mgr-01
[ceph-mgr-01][WARNIN] Created symlink /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph-mgr-01.service → /lib/systemd/system/ceph-mds@.service.
[ceph-mgr-01][INFO ] Running command: sudo systemctl start ceph-mds@ceph-mgr-01
[ceph-mgr-01][INFO ] Running command: sudo systemctl enable ceph.target
6.2 Create the CephFS metadata and data pools
6.2.1 Create the metadata pool
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-metadata 32 32
pool 'cephfs-metadata' created
6.2.2 Create the data pool
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-data 64 64
pool 'cephfs-data' created
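The numbers passed to `ceph osd pool create` (32 and 64) are placement-group counts. A common rule of thumb targets roughly 100 PGs per OSD, divided by the replica count and scaled by the share of data the pool will hold, rounded up to a power of two; the PG autoscaler can adjust this later. A rough sketch of that heuristic, as a hypothetical helper rather than anything Ceph itself provides:

```python
def suggested_pg_count(osd_count: int, replica_size: int, pool_share: float = 1.0) -> int:
    """Rule-of-thumb PG count: ~100 PGs per OSD, divided by the replica
    count, scaled by the fraction of cluster data this pool will hold,
    then rounded up to the next power of two (floor of 32)."""
    target = osd_count * 100 * pool_share / replica_size
    pg = 32
    while pg < target:
        pg *= 2
    return pg

# 9 OSDs, 3-way replication, as in this cluster
print(suggested_pg_count(9, 3))                   # a pool holding most data
print(suggested_pg_count(9, 3, pool_share=0.1))   # a small metadata pool
```

The modest 32/64 values used above are fine for a lab cluster; the autoscaler raises them as data grows.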
6.2.3 Check the Ceph cluster status
ceph@ceph-deploy:~/ceph-cluster$ ceph -s
cluster:
id: 6e521054-1532-4bc8-9971-7f8ae93e8430
health: HEALTH_WARN
3 daemons have recently crashed
services:
mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 7m)
mgr: ceph-mgr-01(active, since 16m), standbys: ceph-mgr-02
mds: 1/1 daemons up
osd: 9 osds: 9 up (since 44h), 9 in (since 44h)
data:
volumes: 1/1 healthy
pools: 4 pools, 161 pgs
objects: 43 objects, 24 MiB
usage: 1.4 GiB used, 179 GiB / 180 GiB avail
pgs: 161 active+clean
6.3 Create a CephFS filesystem
6.3.1 Command format for creating a CephFS filesystem
ceph@ceph-deploy:~/ceph-cluster$ ceph fs new -h
General usage:
usage: ceph [-h] [-c CEPHCONF] [-i INPUT_FILE] [-o OUTPUT_FILE]
[--setuser SETUSER] [--setgroup SETGROUP] [--id CLIENT_ID]
[--name CLIENT_NAME] [--cluster CLUSTER]
[--admin-daemon ADMIN_SOCKET] [-s] [-w] [--watch-debug]
[--watch-info] [--watch-sec] [--watch-warn] [--watch-error]
[-W WATCH_CHANNEL] [--version] [--verbose] [--concise]
[-f {json,json-pretty,xml,xml-pretty,plain,yaml}]
[--connect-timeout CLUSTER_TIMEOUT] [--block] [--period PERIOD]
Ceph administration tool
optional arguments:
-h, --help request mon help
-c CEPHCONF, --conf CEPHCONF
ceph configuration file
-i INPUT_FILE, --in-file INPUT_FILE
input file, or "-" for stdin
-o OUTPUT_FILE, --out-file OUTPUT_FILE
output file, or "-" for stdout
--setuser SETUSER set user file permission
--setgroup SETGROUP set group file permission
--id CLIENT_ID, --user CLIENT_ID
client id for authentication
--name CLIENT_NAME, -n CLIENT_NAME
client name for authentication
--cluster CLUSTER cluster name
--admin-daemon ADMIN_SOCKET
submit admin-socket commands ("help" for help)
-s, --status show cluster status
-w, --watch watch live cluster changes
--watch-debug watch debug events
--watch-info watch info events
--watch-sec watch security events
--watch-warn watch warn events
--watch-error watch error events
-W WATCH_CHANNEL, --watch-channel WATCH_CHANNEL
watch live cluster changes on a specific channel
(e.g., cluster, audit, cephadm, or '*' for all)
--version, -v display version
--verbose make verbose
--concise make less verbose
-f {json,json-pretty,xml,xml-pretty,plain,yaml}, --format {json,json-pretty,xml,xml-pretty,plain,yaml}
--connect-timeout CLUSTER_TIMEOUT
set a timeout for connecting to the cluster
--block block until completion (scrub and deep-scrub only)
--period PERIOD, -p PERIOD
polling period, default 1.0 second (for polling
commands only)
Local commands:
ping <mon.id> Send simple presence/life test to a mon
<mon.id> may be 'mon.*' for all mons
daemon {type.id|path} <cmd>
Same as --admin-daemon, but auto-find admin socket
daemonperf {type.id | path} [stat-pats] [priority] [<interval>] [<count>]
daemonperf {type.id | path} list|ls [stat-pats] [priority]
Get selected perf stats from daemon/admin socket
Optional shell-glob comma-delim match string stat-pats
Optional selection priority (can abbreviate name):
critical, interesting, useful, noninteresting, debug
List shows a table of all available stats
Run <count> times (default forever),
once per <interval> seconds (default 1)
Monitor commands:
fs new <fs_name> <metadata> <data> [--force] [--allow-dangerous-metadata-overlay] make new filesystem using named pools <metadata> and <data>
6.3.2 Create the CephFS filesystem
- A data pool can back only one CephFS filesystem.
ceph@ceph-deploy:~/ceph-cluster$ ceph fs new wgscephfs cephfs-metadata cephfs-data
new fs with metadata pool 7 and data pool 8
6.3.3 List the created CephFS filesystem
ceph@ceph-deploy:~/ceph-cluster$ ceph fs ls
name: wgscephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
6.3.4 Check the status of a specific CephFS filesystem
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status wgscephfs
wgscephfs - 0 clients
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active ceph-mgr-01 Reqs: 0 /s 10 13 12 0
POOL TYPE USED AVAIL
cephfs-metadata metadata 96.0k 56.2G
cephfs-data data 0 56.2G
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
6.3.5 Enable multiple filesystems
- Each new CephFS filesystem requires a new data pool.
ceph@ceph-deploy:~/ceph-cluster$ ceph fs flag set enable_multiple true
6.4 Verify the CephFS service status
ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
wgscephfs:1 {0=ceph-mgr-01=up:active}
6.5 Create a client account
6.5.1 Create the account
ceph@ceph-deploy:~/ceph-cluster$ ceph auth add client.wgs mon 'allow rw' mds 'allow rw' osd 'allow rwx pool=cephfs-data'
added key for client.wgs
6.5.2 Verify the account information
ceph@ceph-deploy:~/ceph-cluster$ ceph auth get client.wgs
[client.wgs]
key = AQCrhk5htve9AxAAED3UAwf2P/5YFjBPVoNayw==
caps mds = "allow rw"
caps mon = "allow rw"
caps osd = "allow rwx pool=cephfs-data"
exported keyring for client.wgs
6.5.3 Create the user keyring file
ceph@ceph-deploy:~/ceph-cluster$ ceph auth get client.wgs -o ceph.client.wgs.keyring
exported keyring for client.wgs
6.5.4 Create the key file
ceph@ceph-deploy:~/ceph-cluster$ ceph auth print-key client.wgs > wgs.key
6.5.5 Verify the user keyring file
ceph@ceph-deploy:~/ceph-cluster$ cat ceph.client.wgs.keyring
[client.wgs]
key = AQCrhk5htve9AxAAED3UAwf2P/5YFjBPVoNayw==
caps mds = "allow rw"
caps mon = "allow rw"
caps osd = "allow rwx pool=cephfs-data"
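A keyring file is plain INI text, so the bare secret (what `ceph auth print-key` emits in the step above) can be recovered with any INI parser. A minimal sketch in Python using the keyring contents shown here:

```python
import configparser

# Same contents as /etc/ceph/ceph.client.wgs.keyring above
KEYRING = """\
[client.wgs]
    key = AQCrhk5htve9AxAAED3UAwf2P/5YFjBPVoNayw==
    caps mds = "allow rw"
    caps mon = "allow rw"
    caps osd = "allow rwx pool=cephfs-data"
"""

def keyring_secret(text: str, entity: str = "client.wgs") -> str:
    """Extract the bare base64 secret from Ceph keyring text."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    # cap values are quoted in the keyring; the key itself is not,
    # but strip quotes defensively
    return cp[entity]["key"].strip('"')

print(keyring_secret(KEYRING))
```

This is exactly the string that ends up in `wgs.key` and is later passed to `mount -o secret=...`.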
6.6 Install the Ceph client
6.6.1 Client ceph-client-centos7-01
6.6.1.1 Configure the repository
[root@ceph-client-centos7-01 ~]# yum -y install epel-release
[root@ceph-client-centos7-01 ~]# yum -y install https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
6.6.1.2 Install ceph-common
[root@ceph-client-centos7-01 ~]# yum -y install ceph-common
6.6.2 Client ceph-client-ubuntu20.04-01
6.6.2.1 Configure the repository
root@ceph-client-ubuntu20.04-01:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
OK
root@ceph-client-ubuntu20.04-01:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
root@ceph-client-ubuntu20.04-01:~# apt -y update && apt -y upgrade
6.6.2.2 Install ceph-common
root@ceph-client-ubuntu20.04-01:~# apt -y install ceph-common
6.7 Sync the client authentication files
ceph@ceph-deploy:~/ceph-cluster$ scp ceph.conf ceph.client.wgs.keyring wgs.key root@ceph-client-ubuntu20.04-01:/etc/ceph
ceph@ceph-deploy:~/ceph-cluster$ scp ceph.conf ceph.client.wgs.keyring wgs.key root@ceph-client-centos7-01:/etc/ceph
6.8 Verify client permissions
6.8.1 Client ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# ceph --id wgs -s
cluster:
id: 6e521054-1532-4bc8-9971-7f8ae93e8430
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 6h)
mgr: ceph-mgr-01(active, since 17h), standbys: ceph-mgr-02
mds: 1/1 daemons up
osd: 9 osds: 9 up (since 2d), 9 in (since 2d)
data:
volumes: 1/1 healthy
pools: 4 pools, 161 pgs
objects: 43 objects, 24 MiB
usage: 1.4 GiB used, 179 GiB / 180 GiB avail
pgs: 161 active+clean
6.8.2 Client ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# ceph --id wgs -s
cluster:
id: 6e521054-1532-4bc8-9971-7f8ae93e8430
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 6h)
mgr: ceph-mgr-01(active, since 17h), standbys: ceph-mgr-02
mds: 1/1 daemons up
osd: 9 osds: 9 up (since 2d), 9 in (since 2d)
data:
volumes: 1/1 healthy
pools: 4 pools, 161 pgs
objects: 43 objects, 24 MiB
usage: 1.4 GiB used, 179 GiB / 180 GiB avail
pgs: 161 active+clean
6.9 Mount CephFS in kernel space (recommended)
6.9.1 Verify that the clients can mount CephFS
6.9.1.1 Verify client ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# stat /sbin/mount.ceph
File: ‘/sbin/mount.ceph’
Size: 195512 Blocks: 384 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 51110858 Links: 1
Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2021-09-25 11:52:07.544069156 +0800
Modify: 2021-08-06 01:48:44.000000000 +0800
Change: 2021-09-23 13:57:21.674953501 +0800
Birth: -
6.9.1.2 Verify client ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# stat /sbin/mount.ceph
File: /sbin/mount.ceph
Size: 260520 Blocks: 512 IO Block: 4096 regular file
Device: fc02h/64514d Inode: 402320 Links: 1
Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2021-09-25 11:54:38.642951083 +0800
Modify: 2021-09-16 22:38:17.000000000 +0800
Change: 2021-09-22 18:01:23.708934550 +0800
Birth: -
6.9.2 Mount CephFS on clients with a key file
6.9.2.1 Command format for mounting CephFS with a key file (recommended)
mount -t ceph {mon01:socket},{mon02:socket},{mon03:socket}:/ {mount-point} -o name={name},secretfile={key_path}
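The placeholders expand mechanically. As an illustration, a throwaway helper (hypothetical, not part of any Ceph tooling) that assembles the exact command run in the next subsection:

```python
def ceph_mount_cmd(mons, mount_point, name, secretfile=None, secret=None, subdir="/"):
    """Assemble a kernel-client CephFS mount command from its parts.

    mons: list of "host:port" monitor sockets.
    Exactly one of secretfile / secret must be given.
    """
    if (secretfile is None) == (secret is None):
        raise ValueError("pass exactly one of secretfile or secret")
    # source is mon1,mon2,mon3:<path inside the filesystem>
    src = ",".join(mons) + ":" + subdir
    opt = f"secretfile={secretfile}" if secretfile else f"secret={secret}"
    return f"mount -t ceph {src} {mount_point} -o name={name},{opt}"

print(ceph_mount_cmd(
    ["172.16.10.148:6789", "172.16.10.110:6789", "172.16.10.182:6789"],
    "/data/cephfs-data", "wgs", secretfile="/etc/ceph/wgs.key"))
```

Passing `subdir="/subvolume/dir1"` mounts a subdirectory instead of the filesystem root, matching the second format shown in 6.9.3.1.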
6.9.2.2 Mount CephFS on client ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# mkdir /data/cephfs-data
[root@ceph-client-centos7-01 ~]# mount -t ceph 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data -o name=wgs,secretfile=/etc/ceph/wgs.key
[root@ceph-client-centos7-01 ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 2.0G 194M 1.9G 10% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda1 xfs 22G 5.9G 16G 28% /
/dev/vdb xfs 215G 9.1G 206G 5% /data
tmpfs tmpfs 399M 0 399M 0% /run/user/1000
tmpfs tmpfs 399M 0 399M 0% /run/user/1003
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ ceph 61G 0 61G 0% /data/cephfs-data
[root@ceph-client-centos7-01 ~]# stat -f /data/cephfs-data/
File: "/data/cephfs-data/"
ID: de6f23f7f8cf0cfc Namelen: 255 Type: ceph
Block size: 4194304 Fundamental block size: 4194304
Blocks: Total: 14397 Free: 14397 Available: 14397
Inodes: Total: 1 Free: -1
6.9.2.3 Mount CephFS on client ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# mkdir /data/cephfs-data
root@ceph-client-ubuntu20.04-01:~# mount -t ceph 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data -o name=wgs,secretfile=/etc/ceph/wgs.key
root@ceph-client-ubuntu20.04-01:~# df -TH
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 4.1G 0 4.1G 0% /dev
tmpfs tmpfs 815M 1.1M 814M 1% /run
/dev/vda2 ext4 22G 13G 7.9G 61% /
tmpfs tmpfs 4.1G 0 4.1G 0% /dev/shm
tmpfs tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs tmpfs 4.1G 0 4.1G 0% /sys/fs/cgroup
/dev/vdb ext4 528G 19G 483G 4% /data
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ ceph 61G 0 61G 0% /data/cephfs-data
root@ceph-client-ubuntu20.04-01:~# stat -f /data/cephfs-data/
File: "/data/cephfs-data/"
ID: de6f23f7f8cf0cfc Namelen: 255 Type: ceph
Block size: 4194304 Fundamental block size: 4194304
Blocks: Total: 14397 Free: 14397 Available: 14397
Inodes: Total: 1 Free: -1
6.9.3 Mount CephFS on clients with a key
6.9.3.1 Command format for mounting CephFS with a key
Mount the CephFS root directory:
mount -t ceph {mon01:socket},{mon02:socket},{mon03:socket}:/ {mount-point} -o name={name},secret={value}
Mount a CephFS subdirectory:
mount -t ceph {mon01:socket},{mon02:socket},{mon03:socket}:/{subvolume/dir1/dir2} {mount-point} -o name={name},secret={value}
6.9.3.2 View the key
ceph@ceph-deploy:~/ceph-cluster$ cat wgs.key
AQCrhk5htve9AxAAED3UAwf2P/5YFjBPVoNayw==
6.9.3.3 Mount CephFS on client ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# mkdir /data/cephfs-data
[root@ceph-client-centos7-01 ~]# mount -t ceph 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data -o name=wgs,secret=AQCrhk5htve9AxAAED3UAwf2P/5YFjBPVoNayw==
[root@ceph-client-centos7-01 ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 2.0G 194M 1.9G 10% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda1 xfs 22G 5.9G 16G 28% /
/dev/vdb xfs 215G 9.1G 206G 5% /data
tmpfs tmpfs 399M 0 399M 0% /run/user/1000
tmpfs tmpfs 399M 0 399M 0% /run/user/1003
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ ceph 61G 0 61G 0% /data/cephfs-data
[root@ceph-client-centos7-01 ~]# stat -f /data/cephfs-data/
File: "/data/cephfs-data/"
ID: de6f23f7f8cf0cfc Namelen: 255 Type: ceph
Block size: 4194304 Fundamental block size: 4194304
Blocks: Total: 14397 Free: 14397 Available: 14397
Inodes: Total: 1 Free: -1
6.9.3.4 Mount CephFS on client ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# mkdir /data/cephfs-data
root@ceph-client-ubuntu20.04-01:~# mount -t ceph 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data -o name=wgs,secret=AQCrhk5htve9AxAAED3UAwf2P/5YFjBPVoNayw==
root@ceph-client-ubuntu20.04-01:~# df -TH
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 4.1G 0 4.1G 0% /dev
tmpfs tmpfs 815M 1.1M 814M 1% /run
/dev/vda2 ext4 22G 13G 7.9G 61% /
tmpfs tmpfs 4.1G 0 4.1G 0% /dev/shm
tmpfs tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs tmpfs 4.1G 0 4.1G 0% /sys/fs/cgroup
/dev/vdb ext4 528G 19G 483G 4% /data
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ ceph 61G 0 61G 0% /data/cephfs-data
root@ceph-client-ubuntu20.04-01:~# stat -f /data/cephfs-data/
File: "/data/cephfs-data/"
ID: de6f23f7f8cf0cfc Namelen: 255 Type: ceph
Block size: 4194304 Fundamental block size: 4194304
Blocks: Total: 14397 Free: 14397 Available: 14397
Inodes: Total: 1 Free: -1
6.9.4 Write data from a client and verify
6.9.4.1 Write data on client ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# cd /data/cephfs-data/
[root@ceph-client-centos7-01 cephfs-data]# echo "ceph-client-centos7-01" > ceph-client-centos7-01
[root@ceph-client-centos7-01 cephfs-data]# ls -l
total 1
-rw-r--r-- 1 root root 23 Sep 25 12:28 ceph-client-centos7-01
6.9.4.2 Verify the shared data on client ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# cd /data/cephfs-data/
root@ceph-client-ubuntu20.04-01:/data/cephfs-data# ls -l
total 1
-rw-r--r-- 1 root root 23 Sep 25 12:28 ceph-client-centos7-01
root@ceph-client-ubuntu20.04-01:/data/cephfs-data# cat ceph-client-centos7-01
ceph-client-centos7-01
6.9.5 Unmount CephFS on the clients
6.9.5.1 Unmount CephFS on client ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# umount /data/cephfs-data/
6.9.5.2 Unmount CephFS on client ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# umount /data/cephfs-data
6.9.6 Mount CephFS at boot on the clients
6.9.6.1 Mount CephFS at boot on client ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# cat /etc/fstab
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data ceph defaults,name=wgs,secretfile=/etc/ceph/wgs.key,noatime,_netdev 0 2
6.9.6.2 Mount CephFS at boot on client ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# cat /etc/fstab
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data ceph defaults,name=wgs,secretfile=/etc/ceph/wgs.key,noatime,_netdev 0 2
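An /etc/fstab entry is six whitespace-separated fields: device, mount point, filesystem type, options, dump, and fsck pass. A quick illustrative check (hypothetical helper, just for this write-up) that an entry like the one above parses cleanly and carries `_netdev`, which delays the mount until the network is up:

```python
def parse_fstab_line(line: str) -> dict:
    """Split one /etc/fstab entry into its six standard fields."""
    dev, mnt, fstype, opts, dump, passno = line.split()
    return {
        "device": dev, "mountpoint": mnt, "fstype": fstype,
        "options": opts.split(","), "dump": int(dump), "pass": int(passno),
    }

entry = parse_fstab_line(
    "172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ "
    "/data/cephfs-data ceph "
    "defaults,name=wgs,secretfile=/etc/ceph/wgs.key,noatime,_netdev 0 2")
assert entry["fstype"] == "ceph"
assert "_netdev" in entry["options"]
```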
6.10 Mount CephFS in user space
- Since Ceph 10.x (Jewel), use at least a 4.x kernel. On older kernels, use the FUSE client instead of the kernel client.
6.10.1 Configure the repositories on the clients
6.10.1.1 Configure the yum repository on client ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# yum -y install epel-release
[root@ceph-client-centos7-01 ~]# yum -y install https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
6.10.1.2 Add the repository on client ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
OK
root@ceph-client-ubuntu20.04-01:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
root@ceph-client-ubuntu20.04-01:~# apt -y update && apt -y upgrade
6.10.2 Install ceph-fuse on the clients
6.10.2.1 Install ceph-fuse on client ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# yum -y install ceph-common ceph-fuse
6.10.2.2 Install ceph-fuse on client ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# apt -y install ceph-common fuse
6.10.3 Sync the authentication files to the clients
ceph@ceph-deploy:~/ceph-cluster$ scp ceph.conf ceph.client.wgs.keyring root@ceph-client-ubuntu20.04-01:/etc/ceph
ceph@ceph-deploy:~/ceph-cluster$ scp ceph.conf ceph.client.wgs.keyring root@ceph-client-centos7-01:/etc/ceph
6.10.4 Mount CephFS on the clients with ceph-fuse
6.10.4.1 ceph-fuse usage
root@ceph-client-centos7-01:~# ceph-fuse -h
usage: ceph-fuse [-n client.username] [-m mon-ip-addr:mon-port] <mount point> [OPTIONS]
--client_mountpoint/-r <sub_directory>
use sub_directory as the mounted root, rather than the full Ceph tree.
usage: ceph-fuse mountpoint [options]
general options:
-o opt,[opt...] mount options
-h --help print help
-V --version print version
FUSE options:
-d -o debug enable debug output (implies -f)
-f foreground operation
-s disable multi-threaded operation
--conf/-c FILE read configuration from the given configuration file
--id ID set ID portion of my name
--name/-n TYPE.ID set name
--cluster NAME set cluster name (default: ceph)
--setuser USER set uid to user or uid (and gid to user's gid)
--setgroup GROUP set gid to group or gid
--version show version and quit
-o opt,[opt...]: mount options. -d runs in the foreground, sends all log output to stderr, and enables FUSE debugging (-o debug).
-c ceph.conf, --conf=ceph.conf: use ceph.conf instead of the default /etc/ceph/ceph.conf to determine the monitor addresses during startup.
-m monaddress[:port]: connect to the specified monitor (instead of looking it up via ceph.conf).
-n client.{cephx-username}: name of the CephX user whose secret is used for the mount.
-k <path-to-keyring>: path to the keyring; useful when it is not in a standard location.
--client_mountpoint/-r root_directory: use root_directory as the mounted root instead of the full Ceph tree.
-f: run in the foreground; do not generate a pid file.
-s: disable multi-threaded operation.
Usage examples:
ceph-fuse --id {name} -m {mon01:socket},{mon02:socket},{mon03:socket} {mountpoint}
Mount a specific directory of the CephFS filesystem:
ceph-fuse --id wgs -r /path/to/dir /data/cephfs-data
Specify the path to the user keyring file:
ceph-fuse --id wgs -k /path/to/keyring /data/cephfs-data
Specify which filesystem to mount when multiple CephFS filesystems exist:
ceph-fuse --id wgs --client_fs mycephfs2 /data/cephfs-data
6.10.4.2 Mount CephFS with ceph-fuse on client ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# ceph-fuse --id wgs -m 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789 /data/cephfs-data
ceph-fuse[8979]: starting ceph client
2021-09-25T14:24:32.258+0800 7f2934e9df40 -1 init, newargv = 0x556e4ebb1300 newargc=9
ceph-fuse[8979]: starting fuse
[root@ceph-client-centos7-01 ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 2.0G 194M 1.9G 10% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda1 xfs 22G 6.0G 16G 28% /
/dev/vdb xfs 215G 9.2G 206G 5% /data
tmpfs tmpfs 399M 0 399M 0% /run/user/1000
tmpfs tmpfs 399M 0 399M 0% /run/user/1003
ceph-fuse fuse.ceph-fuse 61G 0 61G 0% /data/cephfs-data
6.10.4.3 Mount CephFS with ceph-fuse on client ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.4-01:~# ceph-fuse --id wgs -m 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789 /data/cephfs-data
2021-09-25T14:26:17.664+0800 7f2473c04080 -1 init, newargv = 0x560939e8b8c0 newargc=15
ceph-fuse[8696]: starting ceph client
ceph-fuse[8696]: starting fuse
root@ceph-client-ubuntu20.4-01:~# df -TH
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 4.1G 0 4.1G 0% /dev
tmpfs tmpfs 815M 1.1M 814M 1% /run
/dev/vda2 ext4 22G 13G 7.9G 61% /
tmpfs tmpfs 4.1G 0 4.1G 0% /dev/shm
tmpfs tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs tmpfs 4.1G 0 4.1G 0% /sys/fs/cgroup
/dev/vdb ext4 528G 21G 480G 5% /data
tmpfs tmpfs 815M 0 815M 0% /run/user/1001
ceph-fuse fuse.ceph-fuse 61G 0 61G 0% /data/cephfs-data
6.10.5 Write data from a client and verify
6.10.5.1 Write data on client ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# cd /data/cephfs-data/
[root@ceph-client-centos7-01 cephfs-data]# mkdir -pv test/test1
mkdir: created directory 'test'
mkdir: created directory 'test/test1'
6.10.5.2 Verify the data on client ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.4-01:~# cd /data/cephfs-data/
root@ceph-client-ubuntu20.4-01:/data/cephfs-data# tree .
.
└── test
    └── test1
2 directories, 0 files
6.10.6 Unmount CephFS on the clients
6.10.6.1 Unmount CephFS on client ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# umount /data/cephfs-data/
6.10.6.2 Unmount CephFS on client ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# umount /data/cephfs-data
6.10.7 Mount CephFS at boot on the clients
6.10.7.1 Mount CephFS at boot on client ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# cat /etc/fstab
none /data/cephfs-data fuse.ceph ceph.id=wgs,ceph.conf=/etc/ceph/ceph.conf,_netdev,defaults 0 0
6.10.7.2 Mount CephFS at boot on client ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# cat /etc/fstab
none /data/cephfs-data fuse.ceph ceph.id=wgs,ceph.conf=/etc/ceph/ceph.conf,_netdev,defaults 0 0
6.11 Remove a CephFS filesystem (multiple filesystems)
6.11.1 View the CephFS filesystem information
ceph@ceph-deploy:~/ceph-cluster$ ceph fs ls
name: wgscephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
name: wgscephfs01, metadata pool: cephfs-metadata01, data pools: [cephfs-data02 ]
6.11.2 Check whether the CephFS filesystem is currently mounted
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status wgscephfs
wgscephfs - 1 clients
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active ceph-mgr-01 Reqs: 0 /s 13 15 14 2
POOL TYPE USED AVAIL
cephfs-metadata metadata 216k 56.2G
cephfs-data data 0 56.2G
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
6.11.3 Find the clients that have CephFS mounted
ceph@ceph-deploy:~/ceph-cluster$ ceph tell mds.0 client ls
2021-09-25T18:30:09.856+0800 7f0fbdffb700 0 client.1094242 ms_handle_reset on v2:172.16.10.225:6802/1724959904
2021-09-25T18:30:09.884+0800 7f0fbdffb700 0 client.1084719 ms_handle_reset on v2:172.16.10.225:6802/1724959904
[
{
"id": 1094171,
"entity": {
"name": {
"type": "client",
"num": 1094171
},
"addr": {
"type": "v1",
"addr": "172.16.0.126:0",
"nonce": 1257114724
}
},
"state": "open",
"num_leases": 0,
"num_caps": 2,
"request_load_avg": 0,
"uptime": 274.89986021499999,
"requests_in_flight": 0,
"num_completed_requests": 0,
"num_completed_flushes": 0,
"reconnecting": false,
"recall_caps": {
"value": 0,
"halflife": 60
},
"release_caps": {
"value": 0,
"halflife": 60
},
"recall_caps_throttle": {
"value": 0,
"halflife": 1.5
},
"recall_caps_throttle2o": {
"value": 0,
"halflife": 0.5
},
"session_cache_liveness": {
"value": 1.6026981127772033,
"halflife": 300
},
"cap_acquisition": {
"value": 0,
"halflife": 10
},
"delegated_inos": [],
"inst": "client.1094171 v1:172.16.0.126:0/1257114724",
"completed_requests": [],
"prealloc_inos": [],
"client_metadata": {
"client_features": {
"feature_bits": "0x00000000000001ff"
},
"metric_spec": {
"metric_flags": {
"feature_bits": "0x"
}
},
"entity_id": "wgs",
"hostname": "bj2d-prod-eth-star-boot-03",
"kernel_version": "4.19.0-1.el7.ucloud.x86_64",
"root": "/"
}
}
]
6.11.4 Unmount CephFS on the clients
6.11.4.1 The client unmounts CephFS voluntarily
[root@ceph-client-ubuntu20.04-01 ~]# umount /data/cephfs-data/
6.11.4.2 Manually evict the client
6.11.4.2.1 Evict the client by its unique ID
ceph@ceph-deploy:~/ceph-cluster$ ceph tell mds.0 client evict id=1094171
2021-09-25T18:31:02.895+0800 7fc5eeffd700 0 client.1094254 ms_handle_reset on v2:172.16.10.225:6802/1724959904
2021-09-25T18:31:03.671+0800 7fc5eeffd700 0 client.1084740 ms_handle_reset on v2:172.16.10.225:6802/1724959904
6.11.4.2.2 Check the mount point status after the client is evicted
root@ceph-client-ubuntu20.04-01:~# ls -l /data/
ls: cannot access '/data/cephfs-data': Permission denied
total 32
d????????? ? ? ? ? ? cephfs-data
6.11.4.2.3 Fix on the client
root@ceph-client-ubuntu20.04-01:~# umount /data/cephfs-data
6.11.4.2.4 Verify the mount point status on the client
root@ceph-client-ubuntu20.04-01:~# ls -l
total 4
drwxr-xr-x 2 root root 6 Sep 25 11:46 cephfs-data
6.11.5 Remove the CephFS filesystem
ceph@ceph-deploy:~/ceph-cluster$ ceph fs rm wgscephfs01 --yes-i-really-mean-it
6.11.6 Verify the CephFS filesystem removal
ceph@ceph-deploy:~/ceph-cluster$ ceph fs ls
name: wgscephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
6.12 Remove a CephFS filesystem (single filesystem)
6.12.1 View the CephFS filesystem information
ceph@ceph-deploy:~/ceph-cluster$ ceph fs ls
name: wgscephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
6.12.2 Check whether the CephFS filesystem is currently mounted
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status wgscephfs
wgscephfs - 1 clients
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active ceph-mgr-01 Reqs: 0 /s 13 15 14 2
POOL TYPE USED AVAIL
cephfs-metadata metadata 216k 56.2G
cephfs-data data 0 56.2G
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
6.12.3 Find the clients that have CephFS mounted
ceph@ceph-deploy:~/ceph-cluster$ ceph tell mds.0 client ls
2021-09-25T18:30:09.856+0800 7f0fbdffb700 0 client.1094242 ms_handle_reset on v2:172.16.10.225:6802/1724959904
2021-09-25T18:30:09.884+0800 7f0fbdffb700 0 client.1084719 ms_handle_reset on v2:172.16.10.225:6802/1724959904
[
{
"id": 1094171,
"entity": {
"name": {
"type": "client",
"num": 1094171
},
"addr": {
"type": "v1",
"addr": "172.16.0.126:0",
"nonce": 1257114724
}
},
"state": "open",
"num_leases": 0,
"num_caps": 2,
"request_load_avg": 0,
"uptime": 274.89986021499999,
"requests_in_flight": 0,
"num_completed_requests": 0,
"num_completed_flushes": 0,
"reconnecting": false,
"recall_caps": {
"value": 0,
"halflife": 60
},
"release_caps": {
"value": 0,
"halflife": 60
},
"recall_caps_throttle": {
"value": 0,
"halflife": 1.5
},
"recall_caps_throttle2o": {
"value": 0,
"halflife": 0.5
},
"session_cache_liveness": {
"value": 1.6026981127772033,
"halflife": 300
},
"cap_acquisition": {
"value": 0,
"halflife": 10
},
"delegated_inos": [],
"inst": "client.1094171 v1:172.16.0.126:0/1257114724",
"completed_requests": [],
"prealloc_inos": [],
"client_metadata": {
"client_features": {
"feature_bits": "0x00000000000001ff"
},
"metric_spec": {
"metric_flags": {
"feature_bits": "0x"
}
},
"entity_id": "wgs",
"hostname": "bj2d-prod-eth-star-boot-03",
"kernel_version": "4.19.0-1.el7.ucloud.x86_64",
"root": "/"
}
}
]
6.12.4 Unmount CephFS on the clients
6.12.4.1 The client unmounts CephFS voluntarily
[root@ceph-client-ubuntu20.04-01 ~]# umount /data/cephfs-data/
6.12.4.2 Manually evict the client
6.12.4.2.1 Evict the client by its unique ID
ceph@ceph-deploy:~/ceph-cluster$ ceph tell mds.0 client evict id=1094171
2021-09-25T18:31:02.895+0800 7fc5eeffd700 0 client.1094254 ms_handle_reset on v2:172.16.10.225:6802/1724959904
2021-09-25T18:31:03.671+0800 7fc5eeffd700 0 client.1084740 ms_handle_reset on v2:172.16.10.225:6802/1724959904
6.12.4.2.2 Check the mount point status after the client is evicted
root@ceph-client-ubuntu20.04-01:~# ls -l /data/
ls: cannot access '/data/cephfs-data': Permission denied
total 32
d????????? ? ? ? ? ? cephfs-data
6.12.4.2.3 Fix on the client
root@ceph-client-ubuntu20.04-01:~# umount /data/cephfs-data
6.12.4.2.4 Verify the mount point status on the client
root@ceph-client-ubuntu20.04-01:~# ls -l
total 4
drwxr-xr-x 2 root root 6 Sep 25 11:46 cephfs-data
6.12.5 Remove the CephFS filesystem
6.12.5.1 Check the CephFS service status
ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
wgscephfs:1 {0=ceph-mgr-01=up:active}
6.12.5.2 Take the CephFS filesystem offline
ceph@ceph-deploy:~/ceph-cluster$ ceph fs fail wgscephfs
wgscephfs marked not joinable; MDS cannot join the cluster. All MDS ranks marked failed.
6.12.5.3 Check the Ceph cluster status
ceph@ceph-deploy:~/ceph-cluster$ ceph -s
cluster:
id: 6e521054-1532-4bc8-9971-7f8ae93e8430
health: HEALTH_ERR
1 filesystem is degraded
1 filesystem is offline
services:
mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 97m)
mgr: ceph-mgr-01(active, since 25h), standbys: ceph-mgr-02
mds: 0/1 daemons up (1 failed), 1 standby
osd: 9 osds: 9 up (since 2d), 9 in (since 2d)
data:
volumes: 0/1 healthy, 1 failed
pools: 6 pools, 257 pgs
objects: 44 objects, 24 MiB
usage: 1.4 GiB used, 179 GiB / 180 GiB avail
pgs: 257 active+clean
6.12.5.4 Check the CephFS service status
ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
wgscephfs:0/1 1 up:standby, 1 failed
6.12.5.5 Remove the CephFS filesystem
ceph@ceph-deploy:~/ceph-cluster$ ceph fs rm wgscephfs --yes-i-really-mean-it
6.12.5.6 List the CephFS filesystems
ceph@ceph-deploy:~/ceph-cluster$ ceph fs ls
No filesystems enabled
6.12.5.7 Check the Ceph cluster status
ceph@ceph-deploy:~/ceph-cluster$ ceph -s
cluster:
id: 6e521054-1532-4bc8-9971-7f8ae93e8430
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 9m)
mgr: ceph-mgr-01(active, since 25h), standbys: ceph-mgr-02
osd: 9 osds: 9 up (since 2d), 9 in (since 2d)
data:
pools: 6 pools, 257 pgs
objects: 43 objects, 24 MiB
usage: 1.4 GiB used, 179 GiB / 180 GiB avail
pgs: 257 active+clean
6.12.5.8 Check the CephFS service status
ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
1 up:standby
6.13 Ceph MDS high availability
6.13.1 Introduction to Ceph MDS high availability
- The Ceph MDS is the access entry point to CephFS, so it needs both high performance and redundant standbys.
6.13.2 Ceph MDS high-availability architecture
- Two active MDS daemons plus two standbys.
6.13.3 Common Ceph MDS configuration options
- mds_standby_replay: when true, enables replay mode, in which the standby MDS continuously syncs metadata from the active MDS so it can take over quickly if the active MDS fails. When false, the standby only syncs data after a failure, causing a short interruption.
- mds_standby_for_name: make this MDS daemon a standby only for the MDS with the given name.
- mds_standby_for_rank: make this MDS daemon a standby only for the given rank. With multiple CephFS filesystems, mds_standby_for_fscid can additionally identify which filesystem is meant.
- mds_standby_for_fscid: specify the CephFS filesystem ID; used together with mds_standby_for_rank. If mds_standby_for_rank is set, the standby covers that rank of the given filesystem; if not, it covers all ranks of that filesystem.
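Assuming the options listed above are honored by your release, a two-active/two-standby layout could be sketched in ceph.conf roughly like this (hostnames follow this cluster; the rank/fscid pairing is only an illustrative guess, not a prescription):

```ini
# Hypothetical fragment: ceph-mon-02 follows ceph-mgr-01 in hot-standby
# (replay) mode; ceph-mon-03 stands by for rank 1 of fscid 8 (wgscephfs)
[mds.ceph-mon-02]
mds_standby_for_name = ceph-mgr-01
mds_standby_replay = true

[mds.ceph-mon-03]
mds_standby_for_rank = 1
mds_standby_for_fscid = 8
mds_standby_replay = true
```

Note that these mds_standby_* options date from older releases; on recent releases standby-replay is enabled per filesystem with `ceph fs set wgscephfs allow_standby_replay true` instead.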
6.13.4 Current MDS service status
ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
wgscephfs:1 {0=ceph-mgr-01=up:active}
6.13.5 Add MDS servers
6.13.5.1 Install ceph-mds
root@ceph-mgr-02:~# apt -y install ceph-mds
root@ceph-mon-01:~# apt -y install ceph-mds
root@ceph-mon-02:~# apt -y install ceph-mds
root@ceph-mon-03:~# apt -y install ceph-mds
6.13.5.2 Create the MDS services
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mgr-02 ceph-mon-01 ceph-mon-02 ceph-mon-03
6.13.6 Check the current MDS service status
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
wgscephfs - 0 clients
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active ceph-mgr-01 Reqs: 0 /s 10 13 12 0
POOL TYPE USED AVAIL
cephfs-metadata metadata 96.0k 56.2G
cephfs-data data 0 56.2G
STANDBY MDS
ceph-mgr-02
ceph-mon-02
ceph-mon-03
ceph-mon-01
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
6.13.7 Check the current Ceph cluster status
ceph@ceph-deploy:~/ceph-cluster$ ceph -s
cluster:
id: 6e521054-1532-4bc8-9971-7f8ae93e8430
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 65m)
mgr: ceph-mgr-01(active, since 26h), standbys: ceph-mgr-02
mds: 1/1 daemons up, 4 standby
osd: 9 osds: 9 up (since 2d), 9 in (since 2d)
data:
volumes: 1/1 healthy
pools: 4 pools, 161 pgs
objects: 43 objects, 24 MiB
usage: 1.4 GiB used, 179 GiB / 180 GiB avail
pgs: 161 active+clean
6.13.8 Check Current CephFS Filesystem Status
ceph@ceph-deploy:~/ceph-cluster$ ceph fs get wgscephfs
Filesystem 'wgscephfs' (8)
fs_name wgscephfs
epoch 39
flags 13
created 2021-09-25T20:01:13.237645+0800
modified 2021-09-25T20:05:16.835799+0800
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
required_client_features {}
last_failure 0
last_failure_osd_epoch 0
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in 0
up {0=1104195}
failed
damaged
stopped
data_pools [14]
metadata_pool 13
inline_data disabled
balancer
standby_count_wanted 1
[mds.ceph-mgr-01{0:1104195} state up:active seq 113 addr [v2:172.16.10.225:6802/3134604779,v1:172.16.10.225:6803/3134604779]]
6.13.9 Remove an MDS Service
- An MDS automatically notifies the Ceph monitors that it is shutting down. This lets the monitors fail over instantly to an available standby, if one exists, so no administrative command such as ceph fs fail mds.id is needed to trigger the failover.
6.13.9.1 Stop the MDS service
root@ceph-mon-03:~# systemctl stop ceph-mds@ceph-mon-03
6.13.9.2 Check current MDS status
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
wgscephfs - 0 clients
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active ceph-mgr-02 Reqs: 0 /s 10 13 12 0
1 active ceph-mgr-01 Reqs: 0 /s 10 13 11 0
POOL TYPE USED AVAIL
cephfs-metadata metadata 168k 56.2G
cephfs-data data 0 56.2G
STANDBY MDS
ceph-mon-02
ceph-mon-01
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
6.13.9.3 Remove /var/lib/ceph/mds/ceph-${id}
root@ceph-mon-03:~# rm -rf /var/lib/ceph/mds/ceph-ceph-mon-03
6.13.10 Set the Number of Active MDS Daemons
# Set the maximum number of simultaneously active MDS daemons to 2
ceph@ceph-deploy:~/ceph-cluster$ ceph fs set wgscephfs max_mds 2
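To confirm the change took effect, the active ranks in `ceph mds stat` output can be counted and compared against max_mds. A sketch using a sample output line (with a live cluster, pipe `ceph mds stat` into the grep instead of the heredoc):

```shell
# Count "up:active" occurrences; "up:standby" does not match the pattern.
active=$(grep -o 'up:active' <<'EOF' | wc -l
wgscephfs:2 {0=ceph-mgr-02=up:active,1=ceph-mgr-01=up:active} 2 up:standby
EOF
)
echo "active ranks: $active"
```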
6.13.11 Check current MDS status
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
wgscephfs - 0 clients
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active ceph-mgr-02 Reqs: 0 /s 10 13 12 0
1 active ceph-mgr-01 Reqs: 0 /s 10 13 11 0
POOL TYPE USED AVAIL
cephfs-metadata metadata 168k 56.2G
cephfs-data data 0 56.2G
STANDBY MDS
ceph-mon-02
ceph-mon-01
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
6.13.12 MDS High-Availability Tuning
- Currently ceph-mgr-01 and ceph-mgr-02 are both active.
- Make ceph-mon-01 the standby for ceph-mgr-01.
- Make ceph-mon-02 the standby for ceph-mgr-02.
ceph@ceph-deploy:~/ceph-cluster$ cat ceph.conf
[global]
fsid = 6e521054-1532-4bc8-9971-7f8ae93e8430
public_network = 172.16.10.0/24
cluster_network = 172.16.10.0/24
mon_initial_members = ceph-mon-01
mon_host = 172.16.10.148
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon_allow_pool_delete = true
mon clock drift allowed = 2
mon clock drift warn backoff = 30
[mds.ceph-mon-01]
mds_standby_for_name = ceph-mgr-01
mds_standby_replay = true
[mds.ceph-mon-02]
mds_standby_for_name = ceph-mgr-02
mds_standby_replay = true
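Worth noting as an assumption to verify against your release notes: since Ceph Nautilus the per-daemon mds_standby_* options shown above have been obsoleted in favor of a per-filesystem standby-replay flag, so on Pacific the replacement would look like this:

```shell
# Hypothetical replacement command on releases where mds_standby_* options
# are obsolete (assumption; check your Ceph release notes before relying on it).
cmd="ceph fs set wgscephfs allow_standby_replay true"
echo "$cmd"
```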
6.13.13 Push the Configuration File
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mgr-01
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mgr-02
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mon-01
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mon-02
6.13.14 Restart the MDS Services
root@ceph-mon-02:~# systemctl restart ceph-mds@ceph-mon-02
root@ceph-mon-01:~# systemctl restart ceph-mds@ceph-mon-01
root@ceph-mgr-02:~# systemctl restart ceph-mds@ceph-mgr-02
root@ceph-mgr-01:~# systemctl restart ceph-mds@ceph-mgr-01
6.13.15 Ceph Cluster MDS High-Availability Status
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
wgscephfs - 0 clients
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active ceph-mgr-02 Reqs: 0 /s 10 13 12 0
1 active ceph-mon-01 Reqs: 0 /s 10 13 11 0
POOL TYPE USED AVAIL
cephfs-metadata metadata 168k 56.2G
cephfs-data data 0 56.2G
STANDBY MDS
ceph-mon-02
ceph-mgr-01
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
6.13.16 Verify the One-to-One MDS Pairing
- ceph-mgr-01 and ceph-mon-01 switch between active and standby as a pair.
- ceph-mgr-02 and ceph-mon-02 switch between active and standby as a pair.
6.14 Exporting CephFS over NFS with Ganesha
- https://docs.ceph.com/en/pacific/cephfs/nfs/
6.14.1 Requirements
- The Ceph filesystem must be Luminous or later.
- On the NFS server host: libcephfs2 (Luminous or later) plus the nfs-ganesha and nfs-ganesha-ceph packages (Ganesha v2.5 or later).
- The NFS-Ganesha server host must be connected to the Ceph public network.
- NFS-Ganesha 3.5 or a later stable release is recommended with Pacific (16.2.x) or later stable Ceph releases.
- Install nfs-ganesha and nfs-ganesha-ceph on a node where CephFS is deployed.
6.14.2 Install the ganesha Service on a ceph-mds Node
6.14.2.1 Check the available ganesha version
root@ceph-mgr-01:~# apt-cache madison nfs-ganesha-ceph
nfs-ganesha-ceph | 2.6.0-2 | http://mirrors.ucloud.cn/ubuntu bionic/universe amd64 Packages
nfs-ganesha | 2.6.0-2 | http://mirrors.ucloud.cn/ubuntu bionic/universe Sources
6.14.2.2 Install the ganesha service
root@ceph-mgr-01:~# apt -y install nfs-ganesha-ceph nfs-ganesha
6.14.2.3 ganesha configuration
- https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/config_samples/ceph.conf
root@ceph-mgr-01:~# mv /etc/ganesha/ganesha.conf /etc/ganesha/ganesha.conf.back
root@ceph-mgr-01:~# vi /etc/ganesha/ganesha.conf
NFS_CORE_PARAM {
    # Ganesha can lift the NFS grace period early if NLM is disabled.
    Enable_NLM = false;
    # rquotad doesn't add any value here. CephFS doesn't support per-uid
    # quotas anyway.
    Enable_RQUOTA = false;
    # In this configuration, we're just exporting NFSv4. In practice, it's
    # best to use NFSv4.1+ to get the benefit of sessions.
    Protocols = 4;
}
EXPORT
{
    # Export Id (mandatory, each EXPORT must have a unique Export_Id)
    Export_Id = 77;
    # Exported path (mandatory)
    Path = /;
    # Pseudo Path (required for NFSv4)
    Pseudo = /cephfs-test;
    # Time out attribute cache entries immediately
    Attr_Expiration_Time = 0;
    # We're only interested in NFSv4 in this configuration
    Protocols = 4;
    # NFSv4 does not allow UDP transport
    Transports = TCP;
    # Setting for root squash
    Squash = "No_root_squash";
    # Required for access (default is None)
    # Could use CLIENT blocks instead
    Access_Type = RW;
    # Exporting FSAL
    FSAL {
        Name = CEPH;
        hostname = "172.16.10.225";  # IP address of this node
    }
}
LOG {
    # default log level
    Default_Log_Level = WARN;
}
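Instead of granting RW to every client via Access_Type at the export level, access can be limited per client with a CLIENT sub-block inside EXPORT. A sketch; the 172.16.10.0/24 subnet is this lab's public network and the block is an illustrative assumption, not part of the original configuration:

```
CLIENT {
    # Only hosts on the Ceph public network may mount this export.
    Clients     = 172.16.10.0/24;
    Access_Type = RW;
}
```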
6.14.2.4 Manage the ganesha service
root@ceph-mgr-01:~# systemctl restart nfs-ganesha
root@ceph-mgr-01:~# systemctl status nfs-ganesha
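After a restart it is worth confirming that ganesha is actually listening on the NFS port (2049). A sketch; sample `ss -tln` output is inlined so the check runs anywhere, and on the ganesha host the heredoc would be replaced by the real `ss -tln` command:

```shell
# Look for a TCP listener whose local address ends in :2049.
listening=$(awk '$1 == "LISTEN" && $4 ~ /:2049$/ {print "yes"; exit}' <<'EOF'
LISTEN 0 128 0.0.0.0:2049 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22   0.0.0.0:*
EOF
)
echo "nfs listener present: ${listening:-no}"
```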
6.14.3 NFS Client Setup
6.14.3.1 Ubuntu
root@ceph-client-ubuntu18.04-01:~# apt -y install nfs-common
6.14.3.2 CentOS
[root@ceph-client-centos7-01 ~]# yum install -y nfs-utils
6.14.4 Client Mounts
6.14.4.1 Mount via the ceph client
root@ceph-client-ubuntu20.04-01:~# mount -t ceph 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data -o name=wgs,secretfile=/etc/ceph/wgs.key
root@ceph-client-ubuntu20.04-01:~# df -TH
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 4.1G 0 4.1G 0% /dev
tmpfs tmpfs 815M 1.1M 814M 1% /run
/dev/vda2 ext4 22G 18G 2.1G 90% /
tmpfs tmpfs 4.1G 0 4.1G 0% /dev/shm
tmpfs tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs tmpfs 4.1G 0 4.1G 0% /sys/fs/cgroup
/dev/vdb ext4 528G 32G 470G 7% /data
tmpfs tmpfs 815M 0 815M 0% /run/user/1001
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ ceph 61G 0 61G 0% /data/cephfs-data
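To remount automatically at boot, the same options can go into /etc/fstab. A sketch mirroring the mount command above; `_netdev` (which defers the mount until networking is up) and `noatime` are additions not shown in the original command:

```
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data ceph name=wgs,secretfile=/etc/ceph/wgs.key,_netdev,noatime 0 0
```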
6.14.4.2 Write test data
root@ceph-client-ubuntu20.04-01:~# cd /data/cephfs-data/
root@ceph-client-ubuntu20.04-01:/data/cephfs-data# echo "mount nfs" > nfs.txt
root@ceph-client-ubuntu20.04-01:/data/cephfs-data# ls -l
total 1
-rw-r--r-- 1 root root 10 Sep 26 22:08 nfs.txt
6.14.4.3 Mount via NFS
[root@ceph-client-centos7-01 ~]# mount -t nfs -o nfsvers=4.1,proto=tcp 172.16.10.225:/cephfs-test /data/cephfs-data/
[root@ceph-client-centos7-01 ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 2.1G 0 2.1G 0% /dev
tmpfs tmpfs 412M 51M 361M 13% /run
/dev/vda1 ext4 106G 4.1G 97G 5% /
tmpfs tmpfs 2.1G 0 2.1G 0% /dev/shm
tmpfs tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs tmpfs 2.1G 0 2.1G 0% /sys/fs/cgroup
/dev/vdb ext4 106G 16G 85G 16% /data
tmpfs tmpfs 412M 0 412M 0% /run/user/1003
172.16.10.225:/cephfs-test nfs4 61G 0 61G 0% /data/cephfs-data
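The NFS mount can likewise be made persistent in /etc/fstab. A sketch mirroring the mount options above; `_netdev` is an addition not shown in the original command:

```
172.16.10.225:/cephfs-test /data/cephfs-data nfs4 nfsvers=4.1,proto=tcp,_netdev 0 0
```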
6.14.4.4 Verify the data over NFS
[root@ceph-client-centos7-01 ~]# ls -l /data/cephfs-data/
total 1
-rw-r--r-- 1 root root 10 Sep 26 22:08 nfs.txt
[root@ceph-client-centos7-01 ~]# cat /data/cephfs-data/nfs.txt
mount nfs
6.14.5 Client Unmount
6.14.5.1 Ubuntu
root@ceph-client-ubuntu20.04-01:~# umount /data/cephfs-data
6.14.5.2 CentOS
[root@ceph-client-centos7-01 ~]# umount /data/cephfs-data
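A quick check that the unmount actually released the path (a sketch using /proc/mounts, which lists every active mount on Linux):

```shell
# grep -qs stays quiet and treats a missing file as "no match", so the
# check is safe even if /proc/mounts is absent.
status=$(grep -qs ' /data/cephfs-data ' /proc/mounts && echo "still mounted" || echo "not mounted")
echo "$status"
```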