20210821 Day 2: Ceph account management (mounting as a normal user) and MDS high availability

Topics covered:

  1. User permission management and the authorization workflow
  2. Mounting RBD and CephFS as a normal (non-admin) user
  3. MDS high availability
    • multiple active MDS daemons
    • multiple active MDS daemons plus standbys

 

1. Ceph user permission management and the authorization workflow

Identity management in most systems boils down to three things: accounts, roles, and authentication/authorization. A Ceph user can be an actual person or a system role (e.g. an application). A Ceph administrator creates users and assigns capabilities to control who may access and operate on resources such as the Ceph cluster, pools, or objects.


Ceph users fall into the following categories:

  • Client users
    • operator users (e.g. client.admin)
    • application users (e.g. client.cinder)
  • Other users
    • Ceph daemon users (e.g. mds.ceph-node1, osd.0)

User names follow the <TYPE.ID> convention, where TYPE is one of mon, osd, or client; the mgr type was added in the Luminous (L) release and later.

NOTE: users are created for Ceph daemons because MON, OSD, MDS, etc. also follow the CephX protocol, even though they are not clients in the usual sense.

A Ceph user's key is effectively the user's password; it is simply a unique string, e.g. key: AQB3175c7fuQEBAAMIyROU5o2qrwEghuPwo68g==

A Ceph user's capabilities (caps) are its permissions, usually granted at user-creation time. Only after being authorized can a user exercise MON, OSD, or MDS functionality within the granted scope; caps can also restrict a user's access to the cluster's data or namespaces. Take client.admin and client.tom as examples:

 client.admin
     key: AQBErhthY4YdIhAANKTOMAjkzpKkHSkXSoNpaQ==
     # full access to MDS
     caps: [mds] allow *
     # full access to Mgr
     caps: [mgr] allow *
     # full access to MON
     caps: [mon] allow *
     # full access to OSD
     caps: [osd] allow *

 client.tom
     key: AQAQ5SRhNPftJBAA3lKYmTsgyeA1OQOVo2AwZQ==
     # read-only access to MON
     caps: [mon] allow r
     # read/write/execute on the rbd-date pool
     caps: [osd] allow rwx pool=rbd-date

Authorization types:

  • allow: precedes the access settings for a daemon and implies the listed permissions; typical for admin and daemon users.
  • r: read access — the user can read the state of cluster components (MON/OSD/MDS/CRUSH/PG) but not modify it.
  • w: write access to objects; used together with r, it lets the user change component state and issue action commands.
  • x: lets the user call class methods and perform auth operations.
  • class-read: lets the user call class read methods; a subset of x.
  • class-write: lets the user call class write methods; a subset of x.
  • *: grants the user full rwx permissions.
  • profile osd: lets a user connect to other OSDs or MONs as an OSD, so OSDs can handle replication heartbeats and status reporting.
  • profile mds: lets a user connect to other MDSs or MONs as an MDS.
  • profile bootstrap-osd: lets a user bootstrap an OSD daemon; usually granted to deployment tools (e.g. ceph-deploy) so they have permission to add keys while bootstrapping OSDs.
  • profile bootstrap-mds: lets a user bootstrap an MDS daemon; same idea as above.
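To make the cap grammar concrete, here is a toy parser for the simple `allow <perms> [pool=<name>]` form used above. This is my own sketch, not a Ceph API — Ceph's real grammar also covers profiles, namespaces, and more:

```python
# Toy parser for Ceph cap strings such as "allow rwx pool=rbd-data".
# Illustrative only -- Ceph's real cap grammar is considerably richer.
def parse_cap(cap):
    tokens = cap.split()
    assert tokens[0] == "allow", "only 'allow' rules are handled in this sketch"
    result = {"perms": "", "pool": None}
    for tok in tokens[1:]:
        if tok.startswith("pool="):
            result["pool"] = tok.split("=", 1)[1]   # restrict cap to one pool
        else:
            result["perms"] = tok                    # r / w / x combinations or *
    return result

print(parse_cap("allow rwx pool=rbd-data"))  # {'perms': 'rwx', 'pool': 'rbd-data'}
print(parse_cap("allow r"))                  # {'perms': 'r', 'pool': None}
```

The `pool=` qualifier is what scopes a user like client.tom to a single pool, while client.admin's bare `allow *` applies cluster-wide.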

NOTE: as the above suggests, Ceph clients do not access objects directly — all access goes through the OSD daemons.

Common user-management operations

User management is done with the ceph auth subcommands:

 root@node02:~# ceph auth --help
 ...
 auth add <entity> [<caps>...]                add auth info for <entity> from input file, or random key if no input is given, and/or any caps specified in the command
 auth caps <entity> <caps>...                 update caps for <name> from caps specified in the command
 auth export [<entity>]                       write keyring for requested entity, or master keyring if none given
 auth get <entity>                            write keyring file with requested key
 auth get-key <entity>                        display requested key
 auth get-or-create <entity> [<caps>...]      add auth info for <entity> from input file, or random key if no input given, and/or any caps specified in the command
 auth get-or-create-key <entity> [<caps>...]  get, or add, key for <name> from system/caps pairs specified in the command.  If key already exists, any given caps must match the existing caps for that key.
 auth import                                  auth import: read keyring file from -i <file>
 auth ls                                      list authentication state
 auth print-key <entity>                      display requested key
 auth print_key <entity>                      display requested key
 auth rm <entity>                             remove all caps for <name>

Get a given user's permission info:

root@node02:~# ceph auth get client.admin
[client.admin]
    key = AQBErhthY4YdIhAANKTOMAjkzpKkHSkXSoNpaQ==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
exported keyring for client.admin
root@node02:~# 

Create users

 # Authorization format: `{daemon type} 'allow {permissions}'`

 # Create client.jack, granting read on MON and read/write on the rbd-data pool
 ceph auth add client.jack mon 'allow r' osd 'allow rw pool=rbd-data'

 # Get or create client.tom; on creation, grant read on MON and read/write on the rbd-data pool
 ceph auth get-or-create client.tom mon 'allow r' osd 'allow rw pool=rbd-data'

 # Get or create client.jerry / client.ringo with the same caps, writing the keyring or key to a file
 ceph auth get-or-create client.jerry mon 'allow r' osd 'allow rw pool=rbd-data' -o jerry.keyring
 ceph auth get-or-create-key client.ringo mon 'allow r' osd 'allow rw pool=rbd-data' -o ringo.key

Delete a user:

ceph auth del {TYPE}.{ID}

The CephX authentication system

Ceph offers two authentication modes: none and CephX. With the former, clients can access the Ceph storage cluster without any key at all, which is obviously not recommended. So CephX is normally enabled, by editing ceph.conf:

 [global]
 ...
 # cluster daemons (mon, osd, mds) must authenticate to each other with keyrings
 auth_cluster_required = cephx
 # clients (e.g. a gateway) must authenticate to the cluster (mon, osd, mds) with keyrings
 auth_service_required = cephx
 # the cluster (mon, osd, mds) must authenticate to clients (e.g. a gateway) with keyrings
 auth_client_required = cephx
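Since ceph.conf is plain INI, the three switches can be sanity-checked programmatically. A small sketch using Python's stdlib configparser (the snippet is inlined as a string here; reading /etc/ceph/ceph.conf directly works the same way):

```python
# Verify the three cephx auth switches in a ceph.conf-style INI file.
import configparser

conf_text = """
[global]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
"""

parser = configparser.ConfigParser()
parser.read_string(conf_text)

# All three must be "cephx", otherwise part of the cluster runs unauthenticated.
for opt in ("auth_cluster_required", "auth_service_required", "auth_client_required"):
    assert parser["global"][opt] == "cephx", f"{opt} is not cephx"
print("cephx enabled for cluster, service and client auth")
```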

  CephX is, at heart, a symmetric-key protocol (the cipher is AES) used to identify users and authenticate the operations they perform from a client, guarding against man-in-the-middle attacks, data tampering, and similar threats. Using CephX presupposes that a user exists: when we create one with the commands above, the MON returns the user's key to the client and keeps a copy itself, so the client and the MON now share the secret. CephX authenticates with this shared key — the client and the MON cluster each hold a copy of the user's key — which gives mutual authentication: the MON cluster is assured the client holds the user's key, and the client is assured the MON cluster holds a copy of it.


How authentication works


  1. A user initiates a request to the MON through a client.
  2. The client sends the user name to the MON.
  3. The MON checks the user name; if the user exists, it generates a session key encrypted with the user's key and returns it to the client.
  4. The client decrypts the session key with the shared secret; only a client holding the matching user keyring file can decrypt it.
  5. Holding the session key, the client sends another request to the MON.
  6. The MON generates a ticket, again encrypted with the user's key, and sends it to the client.
  7. The client likewise decrypts the ticket with the shared secret.
  8. From then on, the client presents the ticket with its requests to MONs and OSDs.
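The steps above can be modeled with a toy shared-secret scheme. HMAC stands in for the AES encryption Ceph actually uses, and all variable names here are mine, not Ceph's:

```python
# Toy model of the CephX idea: MON and client share the user's secret key;
# only a party holding that secret can derive the same session key.
# HMAC-SHA256 stands in for real AES encryption -- illustrative only.
import hashlib
import hmac
import os

user_secret = os.urandom(16)   # shared when the user was created (steps 1-2 establish who)

# --- MON side: derive a session key bound to the user's secret (step 3)
session_nonce = os.urandom(8)
session_key = hmac.new(user_secret, session_nonce, hashlib.sha256).digest()

# --- client side: holds the same user_secret, so it derives the same key (step 4)
client_session_key = hmac.new(user_secret, session_nonce, hashlib.sha256).digest()
assert hmac.compare_digest(session_key, client_session_key)

# --- an impostor without the secret cannot produce a matching key
impostor_key = hmac.new(os.urandom(16), session_nonce, hashlib.sha256).digest()
assert not hmac.compare_digest(session_key, impostor_key)
print("shared-secret authentication succeeded")
```

The point is that the secret itself never travels over the wire: both sides prove possession of it by producing matching derived material, which is the essence of the session-key and ticket exchange.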


 This is the benefit of shared-key authentication: the client, MONs, OSDs, and MDSs all hold the user's key, so once the client has authenticated against a MON it can interact with any service. And any client that holds a user's keyring file can perform every operation that user is entitled to. When we run ceph -s, what actually runs is ceph -s --conf /etc/ceph/ceph.conf --name client.admin --keyring /etc/ceph/ceph.client.admin.keyring. The client looks for the admin user's keyring in these default paths:

 

/etc/ceph/ceph.client.admin.keyring
/etc/ceph/ceph.keyring
/etc/ceph/keyring
/etc/ceph/keyring.bin
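That first-match search can be sketched as follows (a temp dir stands in for /etc/ceph so the example is runnable; the helper name is my own):

```python
# Sketch of the client-side keyring lookup: try the default file names in
# order and use the first one that exists. Uses a temp dir, not /etc/ceph.
import os
import tempfile

DEFAULT_KEYRINGS = [
    "ceph.client.admin.keyring",
    "ceph.keyring",
    "keyring",
    "keyring.bin",
]

def find_keyring(base_dir):
    for name in DEFAULT_KEYRINGS:
        path = os.path.join(base_dir, name)
        if os.path.exists(path):
            return path       # first match wins
    return None

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "ceph.keyring"), "w").close()
    found = find_keyring(d)
    assert found is not None and found.endswith("ceph.keyring")
    print("would authenticate with:", found)
```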

Managing keyrings with ceph-authtool

NOTE: a keyring file (.keyring) is the carrier of a Ceph user; a client that holds the keyring file can effectively act as the corresponding user.

Initial keyring files for MON, OSD, and the admin client:

  • /var/lib/ceph/mon/ceph-$hostname/keyring
  • /var/lib/ceph/osd/ceph-$id/keyring
  • /etc/ceph/ceph.client.admin.keyring

Run cluster operations with an explicitly specified keyring (admin user):

ceph -n client.admin --keyring=/etc/ceph/ceph.client.admin.keyring health
rbd create -p rbd volume01 --size 1G --image-feature layering --conf /etc/ceph/ceph.conf --name client.admin --keyring /etc/ceph/ceph.client.admin.keyring

Print a user's key:

ceph auth print-key {TYPE}.{ID}

Export a user's key to a keyring file:

ceph auth get client.admin -o /etc/ceph/ceph.client.admin.keyring

Create a user and generate a keyring file:

ceph-authtool -n client.user1 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.keyring
ceph-authtool -C /etc/ceph/ceph.keyring -n client.user1 --cap osd 'allow rwx' --cap mon 'allow rwx' --gen-key

Modify a user's caps in a keyring file:

ceph-authtool /etc/ceph/ceph.keyring -n client.user1 --cap osd 'allow rwx' --cap mon 'allow rwx'

Import a user from a keyring file:

ceph auth import -i /path/to/keyring


Here is my run for this assignment:

root@node01:~# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL    USED  RAW USED  %RAW USED
hdd    240 GiB  240 GiB  41 MiB    41 MiB       0.02
TOTAL  240 GiB  240 GiB  41 MiB    41 MiB       0.02
 
--- POOLS ---
POOL                   ID  PGS   STORED  OBJECTS    USED  %USED  MAX AVAIL
device_health_metrics   1    1      0 B        0     0 B      0     76 GiB
mypool1                 2   32  3.1 KiB        3  21 KiB      0     76 GiB
root@node01:~# 
root@node01:~# ceph osd pool create rbd-data 32 32
pool 'rbd-data' created
root@node02:~# ceph osd lspools
1 device_health_metrics
2 mypool1
3 rbd-data
root@node02:~# 
root@node01:~# ceph osd pool ls
device_health_metrics
mypool1
rbd-data
root@node01:~# ceph osd pool application enable rbd-data rbd
enabled application 'rbd' on pool 'rbd-data'
root@node01:~# rb
rbash            rbd              rbdmap           rbd-replay       rbd-replay-many  rbd-replay-prep  
root@node01:~# rbd pool init -p rbd-data
root@node01:~# rbd create rbdtest1.img --pool rbd-data --size 1G
root@node01:~# rbd ls --pool rbd-data
rbdtest1.img
root@node01:~# rbd info rbd-data/rbdtest1.img
rbd image 'rbdtest1.img':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: fb9a70e1dad6
    block_name_prefix: rbd_data.fb9a70e1dad6
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features: 
    flags: 
    create_timestamp: Tue Aug 24 20:17:40 2021
    access_timestamp: Tue Aug 24 20:17:40 2021
    modify_timestamp: Tue Aug 24 20:17:40 2021
root@node01:~# rbd feature disable rbd-data/rbdtest1.img exclusive-lock object-map fast-diff  deep-flatten
root@node01:~# rbd info rbd-data/rbdtest1.img
rbd image 'rbdtest1.img':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: fb9a70e1dad6
    block_name_prefix: rbd_data.fb9a70e1dad6
    format: 2
    features: layering
    op_features: 
    flags: 
    create_timestamp: Tue Aug 24 20:17:40 2021
    access_timestamp: Tue Aug 24 20:17:40 2021
    modify_timestamp: Tue Aug 24 20:17:40 2021
root@node01:~# 







root@node01:~# ceph auth add client.tom mon 'allow r' osd 'allow rwx pool=rbd-date'
added key for client.tom
root@node01:~# ceph auth get client.tom
[client.tom]
    key = AQAQ5SRhNPftJBAA3lKYmTsgyeA1OQOVo2AwZQ==
    caps mon = "allow r"
    caps osd = "allow rwx pool=rbd-date"
exported keyring for client.tom
root@node01:~# 
root@node01:~# 
root@node01:~# ceph auth get
get                get-key            get-or-create      get-or-create-key  
root@node01:~# ceph auth get-or-create client.jerry mon 'allow r' osd 'allow rwx pool=rbd-data'
[client.jerry]
    key = AQBb5SRhY64XORAAHS+d0M/q8UixCa013knHQw==
root@node01:~# 
root@node01:~# ceph auth get client.jerry
[client.jerry]
    key = AQBb5SRhY64XORAAHS+d0M/q8UixCa013knHQw==
    caps mon = "allow r"
    caps osd = "allow rwx pool=rbd-data"
exported keyring for client.jerry
root@node01:~# 
root@node01:~# ceph auth get-or-create-key client.jerry
AQBb5SRhY64XORAAHS+d0M/q8UixCa013knHQw==
root@node01:~# ceph auth get-or-create-key client.jerry mon 'allow r' osd 'allow rwx pool=rbd-data'
AQBb5SRhY64XORAAHS+d0M/q8UixCa013knHQw==
root@node01:~# ceph auth get-or-create-key client.jerry
AQBb5SRhY64XORAAHS+d0M/q8UixCa013knHQw==
root@node01:~# 
root@node01:~# ceph auth print-key client.jerry
AQBb5SRhY64XORAAHS+d0M/q8UixCa013knHQw==root@node01:~# 
root@node01:~# 
root@node01:~# 
root@node01:~# ceph auth  get client.jerry
[client.jerry]
    key = AQBb5SRhY64XORAAHS+d0M/q8UixCa013knHQw==
    caps mon = "allow r"
    caps osd = "allow rwx pool=rbd-data"
exported keyring for client.jerry
root@node01:~# ceph auth caps client.jerry mon 'allow rw' osd 'allow r pool=rbd-data'
updated caps for client.jerry
root@node01:~# ceph auth get client.jerry
[client.jerry]
    key = AQBb5SRhY64XORAAHS+d0M/q8UixCa013knHQw==
    caps mon = "allow rw"
    caps osd = "allow r pool=rbd-data"
exported keyring for client.jerry
root@node01:~# ceph auth  del client.jerry
updated
root@node01:~# ceph auth get client.jerry
Error ENOENT: failed to find client.jerry in keyring
root@node01:~# 



Backing up and restoring a user via a keyring file

root@node01:~# ceph auth get-or-create client.user1 mon 'allow r' osd 'allow rwx pool=rbd-data'
[client.user1]
    key = AQDR5yRhiTkqJhAAY5ZmSnVKf/1/BGr/q0OTaQ==
root@node01:~# ceph auth get client.user1
[client.user1]
    key = AQDR5yRhiTkqJhAAY5ZmSnVKf/1/BGr/q0OTaQ==
    caps mon = "allow r"
    caps osd = "allow rwx pool=rbd-data"
exported keyring for client.user1
root@node01:~# ls
ceph-deploy  cluster  snap
root@node01:~# ceph-authtool --help
usage: ceph-authtool keyringfile [OPTIONS]...
where the options are:
  -l, --list                    will list all keys and capabilities present in
                                the keyring
  -p, --print-key               will print an encoded key for the specified
                                entityname. This is suitable for the
'mount -o secret=..' argument
  -C, --create-keyring          will create a new keyring, overwriting any
                                existing keyringfile
  -g, --gen-key                 will generate a new secret key for the
                                specified entityname
  --gen-print-key               will generate a new secret key without set it
                                to the keyringfile, prints the secret to stdout
  --import-keyring FILE         will import the content of a given keyring
                                into the keyringfile
  -n NAME, --name NAME          specify entityname to operate on
  -a BASE64, --add-key BASE64   will add an encoded key to the keyring
  --cap SUBSYSTEM CAPABILITY    will set the capability for given subsystem
  --caps CAPSFILE               will set all of capabilities associated with a
                                given key, for all subsystems
  --mode MODE                   will set the desired file mode to the keyring
e.g: '0644', defaults to '0600'
root@node01:~# ceph-authtool -C ceph.client.user1.keyring
creating ceph.client.user1.keyring
root@node01:~# cat ceph.client.user1.keyring 
root@node01:~# file ceph.client.user1.keyring 
ceph.client.user1.keyring: empty
root@node01:~# 
root@node01:~# ceph auth  get client.user1
[client.user1]
    key = AQDR5yRhiTkqJhAAY5ZmSnVKf/1/BGr/q0OTaQ==
    caps mon = "allow r"
    caps osd = "allow rwx pool=rbd-data"
exported keyring for client.user1
root@node01:~# ceph auth  get client.user1 -o ceph.client.user1.keyring 
exported keyring for client.user1
root@node01:~# cat ceph.client.user1.keyring 
[client.user1]
    key = AQDR5yRhiTkqJhAAY5ZmSnVKf/1/BGr/q0OTaQ==
    caps mon = "allow r"
    caps osd = "allow rwx pool=rbd-data"
root@node01:~# 

Restore the user's auth info from the keyring file:


root@node01:~# 
root@node01:~# ceph auth del client.user1
updated
root@node01:~# ceph auth get client.user1
Error ENOENT: failed to find client.user1 in keyring
root@node01:~# 
root@node01:~# ceph auth import -i ceph.client.user1.keyring 
imported keyring
root@node01:~# 
root@node01:~# ceph auth get client.user1
[client.user1]
    key = AQDR5yRhiTkqJhAAY5ZmSnVKf/1/BGr/q0OTaQ==
    caps mon = "allow r"
    caps osd = "allow rwx pool=rbd-data"
exported keyring for client.user1
root@node01:~# 





root@node01:~# ceph-authtool -C ceph.client.tom.keyring
creating ceph.client.tom.keyring
root@node01:~# ceph auth get client.tom -o ceph.client.tom.keyring 
exported keyring for client.tom
root@node01:~# cat ceph.client.tom.keyring 
[client.tom]
    key = AQAQ5SRhNPftJBAA3lKYmTsgyeA1OQOVo2AwZQ==
    caps mon = "allow r"
    caps osd = "allow rwx pool=rbd-date"
root@node01:~# ceph-authtool -l ceph.client.tom.keyring 
[client.tom]
    key = AQAQ5SRhNPftJBAA3lKYmTsgyeA1OQOVo2AwZQ==
    caps mon = "allow r"
    caps osd = "allow rwx pool=rbd-date"
root@node01:~# 
root@node01:~# ceph-authtool ceph.client.user1.keyring --import-keyring ceph.client.tom.keyring 
importing contents of ceph.client.tom.keyring into ceph.client.user1.keyring
root@node01:~# echo $?
0
root@node01:~# cat ceph.client.tom.keyring 
[client.tom]
    key = AQAQ5SRhNPftJBAA3lKYmTsgyeA1OQOVo2AwZQ==
    caps mon = "allow r"
    caps osd = "allow rwx pool=rbd-date"
root@node01:~# 
root@node01:~# cat ceph.client.user1.keyring 
[client.tom]
    key = AQAQ5SRhNPftJBAA3lKYmTsgyeA1OQOVo2AwZQ==
    caps mon = "allow r"
    caps osd = "allow rwx pool=rbd-date"
[client.user1]
    key = AQDR5yRhiTkqJhAAY5ZmSnVKf/1/BGr/q0OTaQ==
    caps mon = "allow r"
    caps osd = "allow rwx pool=rbd-data"
root@node01:~# 
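The merged keyring above is plain INI-style text, so it is easy to inspect programmatically. A minimal parser sketch (my own helper, with made-up sample keys — `ceph-authtool -l` is the real tool for this):

```python
# Minimal parser for the INI-style Ceph keyring format shown above.
# Illustrative only; sample entity names and key values are made up.
def parse_keyring(text):
    entities = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]          # e.g. "client.tom"
            entities[current] = {}
        elif "=" in line and current is not None:
            k, v = line.split("=", 1)     # split only on the first '='
            entities[current][k.strip()] = v.strip().strip('"')
    return entities

sample = """
[client.tom]
    key = AAAAexamplekey1==
    caps mon = "allow r"
[client.user1]
    key = AAAAexamplekey2==
    caps osd = "allow rwx pool=rbd-data"
"""
parsed = parse_keyring(sample)
assert set(parsed) == {"client.tom", "client.user1"}
print(parsed["client.user1"]["caps osd"])  # allow rwx pool=rbd-data
```

Splitting only on the first '=' matters, since cap values like `pool=rbd-data` contain '=' themselves.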





Mounting as a normal user

Sync the config file and keyring to the client:
root@node01:~# ls
ceph.client.tom.keyring  ceph.client.user1.keyring  ceph-deploy  client.client.user1.keyring  client.user1.keyring  cluster  snap
root@node01:~# scp ceph.client.user1.keyring ceph-deploy/ceph.conf root@node04:/etc/ceph/
root@node04's password: 
ceph.client.user1.keyring                                                                                                                                                       100%  244   263.4KB/s   00:00    
ceph.conf                                                                                                                                                                       100%  265    50.5KB/s   00:00    
root@node01:~#

Client-side operations

root@node04:/etc/netplan# apt install ceph-common
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  ibverbs-providers libbabeltrace1 libcephfs2 libdw1 libgoogle-perftools4 libibverbs1 libjaeger libleveldb1d liblttng-ust-ctl4 liblttng-ust0 liblua5.3-0 libnl-route-3-200 liboath0 librabbitmq4 librados2
  libradosstriper1 librbd1 librdkafka1 librdmacm1 librgw2 libsnappy1v5 libtcmalloc-minimal4 python3-ceph-argparse python3-ceph-common python3-cephfs python3-prettytable python3-rados python3-rbd python3-rgw
Suggested packages:
  ceph-base ceph-mds
The following NEW packages will be installed:
  ceph-common ibverbs-providers libbabeltrace1 libcephfs2 libdw1 libgoogle-perftools4 libibverbs1 libjaeger libleveldb1d liblttng-ust-ctl4 liblttng-ust0 liblua5.3-0 libnl-route-3-200 liboath0 librabbitmq4
  librados2 libradosstriper1 librbd1 librdkafka1 librdmacm1 librgw2 libsnappy1v5 libtcmalloc-minimal4 python3-ceph-argparse python3-ceph-common python3-cephfs python3-prettytable python3-rados python3-rbd
  python3-rgw
0 upgraded, 30 newly installed, 0 to remove and 79 not upgraded.
Need to get 37.3 MB of archives.
After this operation, 152 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific focal/main amd64 libjaeger amd64 16.2.5-1focal [3,780 B]

(long apt output trimmed here)

Setting up ceph-common (16.2.5-1focal) ...
Adding group ceph....done
Adding system user ceph....done
Setting system user ceph properties....done
chown: cannot access '/var/log/ceph/*.log*': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.
Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for libc-bin (2.31-0ubuntu9.2) ...
root@node04:/etc/netplan# ls /etc/ceph/
rbdmap
root@node04:/etc/netplan# ls /etc/ceph/
ceph.client.user1.keyring  ceph.conf  rbdmap
root@node04:/etc/netplan# cd /etc/ceph/
root@node04:/etc/ceph# ll
total 20
drwxr-xr-x  2 root root 4096 Aug 24 21:10 ./
drwxr-xr-x 96 root root 4096 Aug 24 21:09 ../
-rw-------  1 root root  244 Aug 24 21:10 ceph.client.user1.keyring
-rw-r--r--  1 root root  265 Aug 24 21:10 ceph.conf
-rw-r--r--  1 root root   92 Jul  8 22:16 rbdmap
root@node04:/etc/ceph# ceph --user user1 -s
  cluster:
    id:     9138c3cf-f529-4be6-ba84-97fcab59844b
    health: HEALTH_WARN
            1 slow ops, oldest one blocked for 4225 sec, mon.node02 has slow ops
 
  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 70m)
    mgr: node01(active, since 70m), standbys: node02, node03
    osd: 6 osds: 6 up (since 70m), 6 in (since 7d)
 
  data:
    pools:   3 pools, 65 pgs
    objects: 7 objects, 54 B
    usage:   44 MiB used, 240 GiB / 240 GiB avail
    pgs:     65 active+clean
 
root@node04:/etc/ceph# 
root@node04:/etc/ceph# 
root@node04:/etc/ceph# rbd --user user1 -p rbd-data map rbdtest1.img
/dev/rbd0
rbd: --user is deprecated, use --id
root@node04:/etc/ceph# lsblk 
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0 55.4M  1 loop /snap/core18/2128
loop1                       7:1    0 55.4M  1 loop /snap/core18/1944
loop2                       7:2    0 31.1M  1 loop /snap/snapd/10707
loop3                       7:3    0 69.9M  1 loop /snap/lxd/19188
loop4                       7:4    0 32.3M  1 loop /snap/snapd/12704
loop5                       7:5    0 70.3M  1 loop /snap/lxd/21029
sda                         8:0    0   40G  0 disk 
├─sda1                      8:1    0    1M  0 part 
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0   39G  0 part 
  └─ubuntu--vg-ubuntu--lv 253:0    0   20G  0 lvm  /
sr0                        11:0    1  1.1G  0 rom  
rbd0                      252:0    0    1G  0 disk 
root@node04:/etc/ceph# 
root@node04:/etc/ceph# rbd --id user1 -p rbd-data map rbdtest1.img
rbd: warning: image already mapped as /dev/rbd0
/dev/rbd1
root@node04:/etc/ceph# rbd --id user1 unmap rbd-data/rbdtest1.img
rbd: rbd-data/rbdtest1.img: mapped more than once, unmapping /dev/rbd0 only
root@node04:/etc/ceph# lsblk 
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0 55.4M  1 loop /snap/core18/2128
loop1                       7:1    0 55.4M  1 loop /snap/core18/1944
loop2                       7:2    0 31.1M  1 loop /snap/snapd/10707
loop3                       7:3    0 69.9M  1 loop /snap/lxd/19188
loop4                       7:4    0 32.3M  1 loop /snap/snapd/12704
loop5                       7:5    0 70.3M  1 loop /snap/lxd/21029
sda                         8:0    0   40G  0 disk 
├─sda1                      8:1    0    1M  0 part 
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0   39G  0 part 
  └─ubuntu--vg-ubuntu--lv 253:0    0   20G  0 lvm  /
sr0                        11:0    1  1.1G  0 rom  
rbd1                      252:16   0    1G  0 disk 
root@node04:/etc/ceph# rbd --id user1 unmap rbd-data/rbdtest1.img
root@node04:/etc/ceph# lsblk 
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0 55.4M  1 loop /snap/core18/2128
loop1                       7:1    0 55.4M  1 loop /snap/core18/1944
loop2                       7:2    0 31.1M  1 loop /snap/snapd/10707
loop3                       7:3    0 69.9M  1 loop /snap/lxd/19188
loop4                       7:4    0 32.3M  1 loop /snap/snapd/12704
loop5                       7:5    0 70.3M  1 loop /snap/lxd/21029
sda                         8:0    0   40G  0 disk 
├─sda1                      8:1    0    1M  0 part 
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0   39G  0 part 
  └─ubuntu--vg-ubuntu--lv 253:0    0   20G  0 lvm  /
sr0                        11:0    1  1.1G  0 rom  
root@node04:/etc/ceph# 
root@node04:/etc/ceph# 
root@node04:/etc/ceph# rbd --id user1 -p rbd-data map rbdtest1.img
/dev/rbd0
root@node04:/etc/ceph# lsblk 
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0 55.4M  1 loop /snap/core18/2128
loop1                       7:1    0 55.4M  1 loop /snap/core18/1944
loop2                       7:2    0 31.1M  1 loop /snap/snapd/10707
loop3                       7:3    0 69.9M  1 loop /snap/lxd/19188
loop4                       7:4    0 32.3M  1 loop /snap/snapd/12704
loop5                       7:5    0 70.3M  1 loop /snap/lxd/21029
sda                         8:0    0   40G  0 disk 
├─sda1                      8:1    0    1M  0 part 
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0   39G  0 part 
  └─ubuntu--vg-ubuntu--lv 253:0    0   20G  0 lvm  /
sr0                        11:0    1  1.1G  0 rom  
rbd0                      252:0    0    1G  0 disk 
root@node04:/etc/ceph# 
root@node04:/etc/ceph# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=32768 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
root@node04:/etc/ceph# mount /dev/rbd0 /mnt/
root@node04:/etc/ceph# cp /etc/passwd /mnt/
root@node04:/etc/ceph# cd /mnt/
root@node04:/mnt# ls
passwd
root@node04:/mnt# tail passwd 
uuidd:x:107:112::/run/uuidd:/usr/sbin/nologin
tcpdump:x:108:113::/nonexistent:/usr/sbin/nologin
landscape:x:109:115::/var/lib/landscape:/usr/sbin/nologin
pollinate:x:110:1::/var/cache/pollinate:/bin/false
usbmux:x:111:46:usbmux daemon,,,:/var/lib/usbmux:/usr/sbin/nologin
sshd:x:112:65534::/run/sshd:/usr/sbin/nologin
systemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin
vmuser:x:1000:1000:vmuser:/home/vmuser:/bin/bash
lxd:x:998:100::/var/snap/lxd/common/lxd:/bin/false
ceph:x:64045:64045:Ceph storage service:/var/lib/ceph:/usr/sbin/nologin
root@node04:/mnt# 



Mounting CephFS:
root@node01:~/ceph-deploy# ceph -s
  cluster:
    id:     9138c3cf-f529-4be6-ba84-97fcab59844b
    health: HEALTH_WARN
            1 slow ops, oldest one blocked for 4986 sec, mon.node02 has slow ops
 
  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 83m)
    mgr: node01(active, since 83m), standbys: node02, node03
    osd: 6 osds: 6 up (since 83m), 6 in (since 7d)
 
  data:
    pools:   3 pools, 65 pgs
    objects: 18 objects, 14 MiB
    usage:   132 MiB used, 240 GiB / 240 GiB avail
    pgs:     65 active+clean
 
root@node01:~/ceph-deploy# ceph osd lspools
1 device_health_metrics
2 mypool1
3 rbd-data
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# ceph osd pool create cephfs-metadata 16 16
pool 'cephfs-metadata' created
root@node01:~/ceph-deploy# ceph osd pool create cephfs-data 32 32
pool 'cephfs-data' created
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# ceph -s
  cluster:
    id:     9138c3cf-f529-4be6-ba84-97fcab59844b
    health: HEALTH_WARN
            1 slow ops, oldest one blocked for 5126 sec, mon.node02 has slow ops
 
  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 85m)
    mgr: node01(active, since 85m), standbys: node02, node03
    osd: 6 osds: 6 up (since 85m), 6 in (since 7d)
 
  data:
    pools:   5 pools, 113 pgs
    objects: 18 objects, 14 MiB
    usage:   137 MiB used, 240 GiB / 240 GiB avail
    pgs:     113 active+clean
 
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# ceph fs new mycephfs ceph-metadata cephfs-data
Error ENOENT: pool 'ceph-metadata' does not exist
root@node01:~/ceph-deploy# ceph fs new mycephfs cephfs-metadata cephfs-data
new fs with metadata pool 4 and data pool 5
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# ceph -s
  cluster:
    id:     9138c3cf-f529-4be6-ba84-97fcab59844b
    health: HEALTH_ERR
            1 filesystem is offline
            1 filesystem is online with fewer MDS than max_mds
            1 slow ops, oldest one blocked for 5221 sec, mon.node02 has slow ops
 
  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 87m)
    mgr: node01(active, since 87m), standbys: node02, node03
    mds: 0/0 daemons up
    osd: 6 osds: 6 up (since 87m), 6 in (since 7d)
 
  data:
    volumes: 1/1 healthy
    pools:   5 pools, 113 pgs
    objects: 18 objects, 14 MiB
    usage:   137 MiB used, 240 GiB / 240 GiB avail
    pgs:     113 active+clean
 
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# ceph mds stat
mycephfs:0
root@node01:~/ceph-deploy# ceph mds status mycephfs
no valid command found; 10 closest matches:
mds metadata [<who>]
mds count-metadata <property>
mds versions
mds compat show
mds ok-to-stop <ids>...
mds fail <role_or_gid>
mds repaired <role>
mds rm <gid:int>
mds compat rm_compat <feature:int>
mds compat rm_incompat <feature:int>
Error EINVAL: invalid command
root@node01:~/ceph-deploy# ceph fs status mycephfs
mycephfs - 0 clients
========
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata     0   75.9G  
  cephfs-data      data       0   75.9G  
root@node01:~/ceph-deploy# ceph -s
  cluster:
    id:     9138c3cf-f529-4be6-ba84-97fcab59844b
    health: HEALTH_ERR
            1 filesystem is offline
            1 filesystem is online with fewer MDS than max_mds
            1 slow ops, oldest one blocked for 5291 sec, mon.node02 has slow ops
 
  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 88m)
    mgr: node01(active, since 88m), standbys: node02, node03
    mds: 0/0 daemons up
    osd: 6 osds: 6 up (since 88m), 6 in (since 7d)
 
  data:
    volumes: 1/1 healthy
    pools:   5 pools, 113 pgs
    objects: 18 objects, 14 MiB
    usage:   137 MiB used, 240 GiB / 240 GiB avail
    pgs:     113 active+clean
 
root@node01:~/ceph-deploy# ceph mds stat
mycephfs:0
root@node01:~/ceph-deploy# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-mgr.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring  ceph.client.admin.keyring  ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
root@node01:~/ceph-deploy# ceph-de
ceph-debugpack  ceph-dencoder   ceph-deploy     
root@node01:~/ceph-deploy# ceph-deploy mds create node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f9007af8a50>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x7f9007acddd0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  mds                           : [('node01', 'node01')]
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts node01:node01
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: Ubuntu 20.04 focal
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to node01
[node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node01][WARNIN] mds keyring does not exist yet, creating one
[node01][DEBUG ] create a keyring file
[node01][DEBUG ] create path if it doesn't exist
[node01][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node01 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-node01/keyring
[node01][INFO  ] Running command: systemctl enable ceph-mds@node01
[node01][WARNIN] Created symlink /etc/systemd/system/ceph-mds.target.wants/ceph-mds@node01.service → /lib/systemd/system/ceph-mds@.service.
[node01][INFO  ] Running command: systemctl start ceph-mds@node01
[node01][INFO  ] Running command: systemctl enable ceph.target
root@node01:~/ceph-deploy# ceph mds stat
mycephfs:1 {0=node01=up:active}
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# ceph auth add client.jack mon 'allow r' mds 'allow rw' osd 'allow rwx pool=cephfs-data'
added key for client.jack
root@node01:~/ceph-deploy# ceph auth get client.jack
[client.jack]
    key = AQD29CRh1IjhChAAjYT5Ydmp/cVYuVfKeAaBfw==
    caps mds = "allow rw"
    caps mon = "allow r"
    caps osd = "allow rwx pool=cephfs-data"
exported keyring for client.jack
root@node01:~/ceph-deploy# ceph auth get client.admin
[client.admin]
    key = AQBErhthY4YdIhAANKTOMAjkzpKkHSkXSoNpaQ==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
exported keyring for client.admin
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# ceph auth get client.jack -o ceph.client.jack.keyring
exported keyring for client.jack
root@node01:~/ceph-deploy# cat ceph.client.jack.keyring
[client.jack]
    key = AQD29CRh1IjhChAAjYT5Ydmp/cVYuVfKeAaBfw==
    caps mds = "allow rw"
    caps mon = "allow r"
    caps osd = "allow rwx pool=cephfs-data"
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# scp ceph.client.jack.keyring node04:/etc/ceph/
root@node04's password: 
ceph.client.jack.keyring                                                                                                                                                        100%  148   118.7KB/s   00:00    
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# ceph auth get-or-create-key cliet.jack
Error EINVAL: bad entity name
root@node01:~/ceph-deploy# ceph auth print-key cliet.jack
Error EINVAL: invalid entity_auth cliet.jack
root@node01:~/ceph-deploy# ceph auth print-key client.jack 
AQD29CRh1IjhChAAjYT5Ydmp/cVYuVfKeAaBfw==root@node01:~/ceph-deploy# ceph auth print-key client.jack > jack.key
root@node01:~/ceph-deploy# scp jack.key node04:/etc/ceph/
root@node04's password: 
jack.key                                                                                                                                                                        100%   40    26.3KB/s   00:00    
root@node01:~/ceph-deploy#
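The user-creation and key-export steps above can be condensed into a short script. This is a sketch that mirrors the transcript (same user, pool, and node names; adjust to your cluster); `ceph auth get-or-create` is used instead of `auth add` because it is idempotent and prints the keyring it creates:

```shell
# Create a user limited to: read-only MON access, rw on MDS,
# and rwx on the cephfs-data pool only
ceph auth get-or-create client.jack \
    mon 'allow r' mds 'allow rw' osd 'allow rwx pool=cephfs-data'

# Export the full keyring (key + caps) -- consumed by ceph CLI tools
ceph auth get client.jack -o /etc/ceph/ceph.client.jack.keyring

# Export just the base64 key -- consumed by mount.ceph's secretfile option
ceph auth print-key client.jack > /etc/ceph/jack.key

# Distribute both files to the client node
scp /etc/ceph/ceph.client.jack.keyring /etc/ceph/jack.key node04:/etc/ceph/
```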


root@node04:~# ceph --user jack -s
  cluster:
    id:     9138c3cf-f529-4be6-ba84-97fcab59844b
    health: HEALTH_WARN
            1 slow ops, oldest one blocked for 5671 sec, mon.node02 has slow ops
 
  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 94m)
    mgr: node01(active, since 95m), standbys: node02, node03
    mds: 1/1 daemons up
    osd: 6 osds: 6 up (since 94m), 6 in (since 7d)
 
  data:
    volumes: 1/1 healthy
    pools:   5 pools, 113 pgs
    objects: 40 objects, 14 MiB
    usage:   138 MiB used, 240 GiB / 240 GiB avail
    pgs:     113 active+clean
 
root@node04:~# ceph --id jack -s
  cluster:
    id:     9138c3cf-f529-4be6-ba84-97fcab59844b
    health: HEALTH_WARN
            1 slow ops, oldest one blocked for 5676 sec, mon.node02 has slow ops
 
  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 94m)
    mgr: node01(active, since 95m), standbys: node02, node03
    mds: 1/1 daemons up
    osd: 6 osds: 6 up (since 94m), 6 in (since 7d)
 
  data:
    volumes: 1/1 healthy
    pools:   5 pools, 113 pgs
    objects: 40 objects, 14 MiB
    usage:   138 MiB used, 240 GiB / 240 GiB avail
    pgs:     113 active+clean
 
root@node04:~# 
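Beyond `ceph -s`, the caps can be verified by attempting operations outside the grant. A sketch (the probe object name and input file are arbitrary; the second `rados put` is expected to be rejected because jack's OSD cap is scoped to the cephfs-data pool):

```shell
# Allowed: read cluster state (mon 'allow r')
ceph --id jack -s

# Allowed: write an object into the granted pool
rados --id jack -p cephfs-data put probe-obj /etc/hostname

# Expected to fail with "Operation not permitted":
# jack has no caps on the cephfs-metadata pool
rados --id jack -p cephfs-metadata put probe-obj /etc/hostname
```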
root@node04:~# mkdir /data
root@node04:~# 
root@node04:~# mount -t ceph node01:6789,node02:6789,node03:6789:/ /data/ -o name=jack,secretfile=/etc/ceph/jack.key
root@node04:~# df -hT
Filesystem                                                    Type      Size  Used Avail Use% Mounted on
udev                                                          devtmpfs  936M     0  936M   0% /dev
tmpfs                                                         tmpfs     196M  1.2M  195M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv                             ext4       20G  4.5G   15G  25% /
tmpfs                                                         tmpfs     980M     0  980M   0% /dev/shm
tmpfs                                                         tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                                                         tmpfs     980M     0  980M   0% /sys/fs/cgroup
/dev/sda2                                                     ext4      976M  106M  804M  12% /boot
/dev/loop0                                                    squashfs   56M   56M     0 100% /snap/core18/2128
/dev/loop1                                                    squashfs   56M   56M     0 100% /snap/core18/1944
/dev/loop2                                                    squashfs   32M   32M     0 100% /snap/snapd/10707
/dev/loop3                                                    squashfs   70M   70M     0 100% /snap/lxd/19188
tmpfs                                                         tmpfs     196M     0  196M   0% /run/user/0
/dev/loop4                                                    squashfs   33M   33M     0 100% /snap/snapd/12704
/dev/loop5                                                    squashfs   71M   71M     0 100% /snap/lxd/21029
/dev/rbd0                                                     xfs      1014M   40M  975M   4% /mnt
192.168.11.210:6789,192.168.11.220:6789,192.168.11.230:6789:/ ceph       76G     0   76G   0% /data
root@node04:~# cd /data/
root@node04:/data# echo 'test content' > testfile.txt
root@node04:/data# cat testfile.txt 
test content
root@node04:/data# 
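To make this mount survive reboots, the same name/secretfile options can go into /etc/fstab. A sketch (the `_netdev` option defers mounting until the network is up; paths follow the transcript):

```shell
# Append a CephFS kernel-client entry to /etc/fstab (single line)
cat >> /etc/fstab <<'EOF'
node01:6789,node02:6789,node03:6789:/  /data  ceph  name=jack,secretfile=/etc/ceph/jack.key,_netdev,noatime  0  0
EOF

# Verify the entry parses and mounts cleanly
mount -a
```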

The above covered user creation and mounting as a normal user; next is the MDS high-availability setup:

  1 root@node01:~/ceph-deploy# ceph-deploy mds create node02
  2 [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
  3 [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create node02
  4 [ceph_deploy.cli][INFO  ] ceph-deploy options:
  5 [ceph_deploy.cli][INFO  ]  username                      : None
  6 [ceph_deploy.cli][INFO  ]  verbose                       : False
  7 [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
  8 [ceph_deploy.cli][INFO  ]  subcommand                    : create
  9 [ceph_deploy.cli][INFO  ]  quiet                         : False
 10 [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7febd49fda50>
 11 [ceph_deploy.cli][INFO  ]  cluster                       : ceph
 12 [ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x7febd49d2dd0>
 13 [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
 14 [ceph_deploy.cli][INFO  ]  mds                           : [('node02', 'node02')]
 15 [ceph_deploy.cli][INFO  ]  default_release               : False
 16 [ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts node02:node02
 17 [node02][DEBUG ] connected to host: node02 
 18 [node02][DEBUG ] detect platform information from remote host
 19 [node02][DEBUG ] detect machine type
 20 [ceph_deploy.mds][INFO  ] Distro info: Ubuntu 20.04 focal
 21 [ceph_deploy.mds][DEBUG ] remote host will use systemd
 22 [ceph_deploy.mds][DEBUG ] deploying mds bootstrap to node02
 23 [node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
 24 [node02][WARNIN] mds keyring does not exist yet, creating one
 25 [node02][DEBUG ] create a keyring file
 26 [node02][DEBUG ] create path if it doesn't exist
 27 [node02][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node02 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-node02/keyring
 28 [node02][INFO  ] Running command: systemctl enable ceph-mds@node02
 29 [node02][WARNIN] Created symlink /etc/systemd/system/ceph-mds.target.wants/ceph-mds@node02.service → /lib/systemd/system/ceph-mds@.service.
 30 [node02][INFO  ] Running command: systemctl start ceph-mds@node02
 31 [node02][INFO  ] Running command: systemctl enable ceph.target
 32 root@node01:~/ceph-deploy# 
 33 root@node01:~/ceph-deploy# ceph mds stat
 34 mycephfs:1 {0=node01=up:active} 1 up:standby
 35 root@node01:~/ceph-deploy# 
 36 root@node01:~/ceph-deploy# ceph-deploy mds create node03
 37 [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
 38 [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create node03
 39 [ceph_deploy.cli][INFO  ] ceph-deploy options:
 40 [ceph_deploy.cli][INFO  ]  username                      : None
 41 [ceph_deploy.cli][INFO  ]  verbose                       : False
 42 [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
 43 [ceph_deploy.cli][INFO  ]  subcommand                    : create
 44 [ceph_deploy.cli][INFO  ]  quiet                         : False
 45 [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f1d42589a50>
 46 [ceph_deploy.cli][INFO  ]  cluster                       : ceph
 47 [ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x7f1d4255edd0>
 48 [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
 49 [ceph_deploy.cli][INFO  ]  mds                           : [('node03', 'node03')]
 50 [ceph_deploy.cli][INFO  ]  default_release               : False
 51 [ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts node03:node03
 52 [node03][DEBUG ] connected to host: node03 
 53 [node03][DEBUG ] detect platform information from remote host
 54 [node03][DEBUG ] detect machine type
 55 [ceph_deploy.mds][INFO  ] Distro info: Ubuntu 20.04 focal
 56 [ceph_deploy.mds][DEBUG ] remote host will use systemd
 57 [ceph_deploy.mds][DEBUG ] deploying mds bootstrap to node03
 58 [node03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
 59 [node03][WARNIN] mds keyring does not exist yet, creating one
 60 [node03][DEBUG ] create a keyring file
 61 [node03][DEBUG ] create path if it doesn't exist
 62 [node03][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node03 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-node03/keyring
 63 [node03][INFO  ] Running command: systemctl enable ceph-mds@node03
 64 [node03][WARNIN] Created symlink /etc/systemd/system/ceph-mds.target.wants/ceph-mds@node03.service → /lib/systemd/system/ceph-mds@.service.
 65 [node03][INFO  ] Running command: systemctl start ceph-mds@node03
 66 [node03][INFO  ] Running command: systemctl enable ceph.target
 67 root@node01:~/ceph-deploy# ceph mds stat
 68 mycephfs:1 {0=node01=up:active} 1 up:standby
 69 root@node01:~/ceph-deploy# ceph mds stat
 70 mycephfs:1 {0=node01=up:active} 2 up:standby
 71 root@node01:~/ceph-deploy# ceph-deploy mds create node04
 72 [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
 73 [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create node04
 74 [ceph_deploy.cli][INFO  ] ceph-deploy options:
 75 [ceph_deploy.cli][INFO  ]  username                      : None
 76 [ceph_deploy.cli][INFO  ]  verbose                       : False
 77 [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
 78 [ceph_deploy.cli][INFO  ]  subcommand                    : create
 79 [ceph_deploy.cli][INFO  ]  quiet                         : False
 80 [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f72374d4a50>
 81 [ceph_deploy.cli][INFO  ]  cluster                       : ceph
 82 [ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x7f72374a9dd0>
 83 [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
 84 [ceph_deploy.cli][INFO  ]  mds                           : [('node04', 'node04')]
 85 [ceph_deploy.cli][INFO  ]  default_release               : False
 86 [ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts node04:node04
 87 root@node04's password: 
 88 bash: python2: command not found
 89 [ceph_deploy.mds][ERROR ] connecting to host: node04 resulted in errors: IOError cannot send (already closed?)
 90 [ceph_deploy][ERROR ] GenericError: Failed to create 1 MDSs
 91 
 92 root@node01:~/ceph-deploy# ceph-deploy mds create node04
 93 [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
 94 [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create node04
 95 [ceph_deploy.cli][INFO  ] ceph-deploy options:
 96 [ceph_deploy.cli][INFO  ]  username                      : None
 97 [ceph_deploy.cli][INFO  ]  verbose                       : False
 98 [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
 99 [ceph_deploy.cli][INFO  ]  subcommand                    : create
100 [ceph_deploy.cli][INFO  ]  quiet                         : False
101 [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f4fa90baa50>
102 [ceph_deploy.cli][INFO  ]  cluster                       : ceph
103 [ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x7f4fa908fdd0>
104 [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
105 [ceph_deploy.cli][INFO  ]  mds                           : [('node04', 'node04')]
106 [ceph_deploy.cli][INFO  ]  default_release               : False
107 [ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts node04:node04
108 root@node04's password: 
109 root@node04's password: 
110 [node04][DEBUG ] connected to host: node04 
111 [node04][DEBUG ] detect platform information from remote host
112 [node04][DEBUG ] detect machine type
113 [ceph_deploy.mds][INFO  ] Distro info: Ubuntu 20.04 focal
114 [ceph_deploy.mds][DEBUG ] remote host will use systemd
115 [ceph_deploy.mds][DEBUG ] deploying mds bootstrap to node04
116 [node04][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
117 [node04][WARNIN] mds keyring does not exist yet, creating one
118 [node04][DEBUG ] create a keyring file
119 [node04][DEBUG ] create path if it doesn't exist
120 [node04][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node04 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-node04/keyring
121 [node04][INFO  ] Running command: systemctl enable ceph-mds@node04
122 [node04][WARNIN] Created symlink /etc/systemd/system/ceph-mds.target.wants/ceph-mds@node04.service → /lib/systemd/system/ceph-mds@.service.
123 [node04][INFO  ] Running command: systemctl start ceph-mds@node04
124 [node04][INFO  ] Running command: systemctl enable ceph.target
125 root@node01:~/ceph-deploy# ceph mds stat
126 mycephfs:1 {0=node01=up:active} 3 up:standby
127 root@node01:~/ceph-deploy# 
128 root@node01:~/ceph-deploy# 
129 root@node01:~/ceph-deploy# ceph mds stat
130 mycephfs:1 {0=node01=up:active} 3 up:standby
131 root@node01:~/ceph-deploy# 
132 root@node01:~/ceph-deploy# 
133 root@node01:~/ceph-deploy# ceph fs status
134 mycephfs - 1 clients
135 ========
136 RANK  STATE    MDS       ACTIVITY     DNS    INOS   DIRS   CAPS  
137  0    active  node01  Reqs:    0 /s    11     14     12      2   
138       POOL         TYPE     USED  AVAIL  
139 cephfs-metadata  metadata   108k  75.9G  
140   cephfs-data      data    12.0k  75.9G  
141 STANDBY MDS  
142    node03    
143    node04    
144    node02    
145 MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
146 root@node01:~/ceph-deploy# 
147 root@node01:~/ceph-deploy# 
148 root@node01:~/ceph-deploy# ceph fs get mycephfs
149 Filesystem 'mycephfs' (1)
150 fs_name    mycephfs
151 epoch    5
152 flags    12
153 created    2021-08-24T21:27:30.730136+0800
154 modified    2021-08-24T21:29:55.774998+0800
155 tableserver    0
156 root    0
157 session_timeout    60
158 session_autoclose    300
159 max_file_size    1099511627776
160 required_client_features    {}
161 last_failure    0
162 last_failure_osd_epoch    0
163 compat    compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
164 max_mds    1
165 in    0
166 up    {0=65409}
167 failed    
168 damaged    
169 stopped    
170 data_pools    [5]
171 metadata_pool    4
172 inline_data    disabled
173 balancer    
174 standby_count_wanted    1
175 [mds.node01{0:65409} state up:active seq 2 addr [v2:192.168.11.210:6810/3443284017,v1:192.168.11.210:6811/3443284017]]
176 root@node01:~/ceph-deploy# 
177 root@node01:~/ceph-deploy# 
178 root@node01:~/ceph-deploy# ceph fs set mycephfs max_mds 2
179 root@node01:~/ceph-deploy# 
180 root@node01:~/ceph-deploy# 
181 root@node01:~/ceph-deploy# ceph fs get mycephfs
182 Filesystem 'mycephfs' (1)
183 fs_name    mycephfs
184 epoch    12
185 flags    12
186 created    2021-08-24T21:27:30.730136+0800
187 modified    2021-08-24T21:44:44.248039+0800
188 tableserver    0
189 root    0
190 session_timeout    60
191 session_autoclose    300
192 max_file_size    1099511627776
193 required_client_features    {}
194 last_failure    0
195 last_failure_osd_epoch    0
196 compat    compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
197 max_mds    2
198 in    0,1
199 up    {0=65409,1=64432}
200 failed    
201 damaged    
202 stopped    
203 data_pools    [5]
204 metadata_pool    4
205 inline_data    disabled
206 balancer    
207 standby_count_wanted    1
208 [mds.node01{0:65409} state up:active seq 2 addr [v2:192.168.11.210:6810/3443284017,v1:192.168.11.210:6811/3443284017]]
209 [mds.node02{1:64432} state up:active seq 41 addr [v2:192.168.11.220:6808/4242415336,v1:192.168.11.220:6809/4242415336]]
210 root@node01:~/ceph-deploy# 
211 root@node01:~/ceph-deploy# 
212 root@node01:~/ceph-deploy# ceph fs status
213 mycephfs - 1 clients
214 ========
215 RANK  STATE    MDS       ACTIVITY     DNS    INOS   DIRS   CAPS  
216  0    active  node01  Reqs:    0 /s    11     14     12      2   
217  1    active  node02  Reqs:    0 /s    10     13     11      0   
218       POOL         TYPE     USED  AVAIL  
219 cephfs-metadata  metadata   180k  75.9G  
220   cephfs-data      data    12.0k  75.9G  
221 STANDBY MDS  
222    node03    
223    node04    
224 MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
225 root@node01:~/ceph-deploy# 
226 root@node01:~/ceph-deploy# 
227 root@node01:~/ceph-deploy# 
228 
229 
230 
231 root@node01:~/ceph-deploy# 
232 root@node01:~/ceph-deploy# systemctl restart ceph
233 ceph-crash.service       ceph-mds@node01.service  ceph-mgr@node01.service  ceph-mon@node01.service  ceph-osd@0.service       ceph-osd.target          ceph.service             
234 ceph-fuse.target         ceph-mds.target          ceph-mgr.target          ceph-mon.target          ceph-osd@3.service       ceph-radosgw.target      ceph.target              
235 root@node01:~/ceph-deploy# systemctl restart ceph-m
236 ceph-mds@node01.service  ceph-mds.target          ceph-mgr@node01.service  ceph-mgr.target          ceph-mon@node01.service  ceph-mon.target          
237 root@node01:~/ceph-deploy# systemctl restart ceph-mds
238 ceph-mds@node01.service  ceph-mds.target          
239 root@node01:~/ceph-deploy# systemctl restart ceph-mds@node01.service 
240 root@node01:~/ceph-deploy# 
241 root@node01:~/ceph-deploy# ceph fs status
242 mycephfs - 1 clients
243 ========
244 RANK  STATE    MDS       ACTIVITY     DNS    INOS   DIRS   CAPS  
245  0    active  node04  Reqs:    0 /s     1      4      2      1   
246  1    active  node02  Reqs:    0 /s    10     13     11      0   
247       POOL         TYPE     USED  AVAIL  
248 cephfs-metadata  metadata   180k  75.9G  
249   cephfs-data      data    12.0k  75.9G  
250 STANDBY MDS  
251    node03    
252 MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
253 root@node01:~/ceph-deploy# 
254 root@node01:~/ceph-deploy# 
255 root@node01:~/ceph-deploy# ceph fs status
256 mycephfs - 1 clients
257 ========
258 RANK  STATE    MDS       ACTIVITY     DNS    INOS   DIRS   CAPS  
259  0    active  node04  Reqs:    0 /s    11     14     12      1   
260  1    active  node02  Reqs:    0 /s    10     13     11      0   
261       POOL         TYPE     USED  AVAIL  
262 cephfs-metadata  metadata   180k  75.9G  
263   cephfs-data      data    12.0k  75.9G  
264 STANDBY MDS  
265    node03    
266    node01    
267 MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
268 root@node01:~/ceph-deploy# 
269 root@node01:~/ceph-deploy# 
270 root@node01:~/ceph-deploy# vim ceph.conf 
271 root@node01:~/ceph-deploy# cat ceph.conf 
272 [global]
273 fsid = 9138c3cf-f529-4be6-ba84-97fcab59844b
274 public_network = 192.168.11.0/24
275 cluster_network = 192.168.22.0/24
276 mon_initial_members = node01
277 mon_host = 192.168.11.210
278 auth_cluster_required = cephx
279 auth_service_required = cephx
280 auth_client_required = cephx
281 
282 [mds.node04]
283 mds_standby_for_name = node03
284 mds_standby_replay = true
285 [mds.node02]
286 mds_standby_for_name = node04
287 mds_standby_replay = true
288 root@node01:~/ceph-deploy# vim ceph.conf 
289 root@node01:~/ceph-deploy# cat ceph.conf 
290 [global]
291 fsid = 9138c3cf-f529-4be6-ba84-97fcab59844b
292 public_network = 192.168.11.0/24
293 cluster_network = 192.168.22.0/24
294 mon_initial_members = node01
295 mon_host = 192.168.11.210
296 auth_cluster_required = cephx
297 auth_service_required = cephx
298 auth_client_required = cephx
299 
300 [mds.node04]
301 mds_standby_for_name = node03
302 mds_standby_replay = true
303 [mds.node02]
304 mds_standby_for_name = node01
305 mds_standby_replay = true
306 root@node01:~/ceph-deploy# ceph-deploy config push node01
307 [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
308 [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy config push node01
309 [ceph_deploy.cli][INFO  ] ceph-deploy options:
310 [ceph_deploy.cli][INFO  ]  username                      : None
311 [ceph_deploy.cli][INFO  ]  verbose                       : False
312 [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
313 [ceph_deploy.cli][INFO  ]  subcommand                    : push
314 [ceph_deploy.cli][INFO  ]  quiet                         : False
315 [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f4fdbb800f0>
316 [ceph_deploy.cli][INFO  ]  cluster                       : ceph
317 [ceph_deploy.cli][INFO  ]  client                        : ['node01']
318 [ceph_deploy.cli][INFO  ]  func                          : <function config at 0x7f4fdbbd6350>
319 [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
320 [ceph_deploy.cli][INFO  ]  default_release               : False
321 [ceph_deploy.config][DEBUG ] Pushing config to node01
322 [node01][DEBUG ] connected to host: node01 
323 [node01][DEBUG ] detect platform information from remote host
324 [node01][DEBUG ] detect machine type
325 [node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
326 [ceph_deploy.config][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite
327 [ceph_deploy][ERROR ] GenericError: Failed to config 1 hosts
328 
329 root@node01:~/ceph-deploy# ceph-deploy --overwrite-conf config push node01
330 [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
331 [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy --overwrite-conf config push node01
332 [ceph_deploy.cli][INFO  ] ceph-deploy options:
333 [ceph_deploy.cli][INFO  ]  username                      : None
334 [ceph_deploy.cli][INFO  ]  verbose                       : False
335 [ceph_deploy.cli][INFO  ]  overwrite_conf                : True
336 [ceph_deploy.cli][INFO  ]  subcommand                    : push
337 [ceph_deploy.cli][INFO  ]  quiet                         : False
338 [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f201b0280f0>
339 [ceph_deploy.cli][INFO  ]  cluster                       : ceph
340 [ceph_deploy.cli][INFO  ]  client                        : ['node01']
341 [ceph_deploy.cli][INFO  ]  func                          : <function config at 0x7f201b07e350>
342 [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
343 [ceph_deploy.cli][INFO  ]  default_release               : False
344 [ceph_deploy.config][DEBUG ] Pushing config to node01
345 [node01][DEBUG ] connected to host: node01 
346 [node01][DEBUG ] detect platform information from remote host
347 [node01][DEBUG ] detect machine type
348 [node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
349 root@node01:~/ceph-deploy# ceph-deploy --overwrite-conf config push node02 node02 node04
350 [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
351 [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy --overwrite-conf config push node02 node02 node04
352 [ceph_deploy.cli][INFO  ] ceph-deploy options:
353 [ceph_deploy.cli][INFO  ]  username                      : None
354 [ceph_deploy.cli][INFO  ]  verbose                       : False
355 [ceph_deploy.cli][INFO  ]  overwrite_conf                : True
356 [ceph_deploy.cli][INFO  ]  subcommand                    : push
357 [ceph_deploy.cli][INFO  ]  quiet                         : False
358 [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f549232a0f0>
359 [ceph_deploy.cli][INFO  ]  cluster                       : ceph
360 [ceph_deploy.cli][INFO  ]  client                        : ['node02', 'node02', 'node04']
361 [ceph_deploy.cli][INFO  ]  func                          : <function config at 0x7f5492380350>
362 [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
363 [ceph_deploy.cli][INFO  ]  default_release               : False
364 [ceph_deploy.config][DEBUG ] Pushing config to node02
365 [node02][DEBUG ] connected to host: node02 
366 [node02][DEBUG ] detect platform information from remote host
367 [node02][DEBUG ] detect machine type
368 [node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
369 [ceph_deploy.config][DEBUG ] Pushing config to node02
370 [node02][DEBUG ] connected to host: node02 
371 [node02][DEBUG ] detect platform information from remote host
372 [node02][DEBUG ] detect machine type
373 [node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
374 [ceph_deploy.config][DEBUG ] Pushing config to node04
375 root@node04's password: 
376 root@node04's password: 
377 [node04][DEBUG ] connected to host: node04 
378 [node04][DEBUG ] detect platform information from remote host
379 [node04][DEBUG ] detect machine type
380 [node04][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
381 root@node01:~/ceph-deploy# 
382 root@node01:~/ceph-deploy# 
383 root@node01:~/ceph-deploy# systemctl restart ceph-mds@node01.service 
384 root@node01:~/ceph-deploy# 
385 root@node01:~/ceph-deploy# 
386 root@node01:~/ceph-deploy# ceph fs status 
387 mycephfs - 1 clients
388 ========
389 RANK  STATE    MDS       ACTIVITY     DNS    INOS   DIRS   CAPS  
390  0    active  node04  Reqs:    0 /s    11     14     12      1   
391  1    active  node02  Reqs:    0 /s    10     13     11      0   
392       POOL         TYPE     USED  AVAIL  
393 cephfs-metadata  metadata   180k  75.9G  
394   cephfs-data      data    12.0k  75.9G  
395 STANDBY MDS  
396    node03    
397    node01    
398 MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
399 root@node01:~/ceph-deploy# 
400 root@node01:~/ceph-deploy# 
401 root@node01:~/ceph-deploy# ceph-deploy admin node01 node02 node03 node04
402 [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
403 [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin node01 node02 node03 node04
404 [ceph_deploy.cli][INFO  ] ceph-deploy options:
405 [ceph_deploy.cli][INFO  ]  username                      : None
406 [ceph_deploy.cli][INFO  ]  verbose                       : False
407 [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
408 [ceph_deploy.cli][INFO  ]  quiet                         : False
409 [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f9fa57ccf50>
410 [ceph_deploy.cli][INFO  ]  cluster                       : ceph
411 [ceph_deploy.cli][INFO  ]  client                        : ['node01', 'node02', 'node03', 'node04']
412 [ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f9fa58a54d0>
413 [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
414 [ceph_deploy.cli][INFO  ]  default_release               : False
415 [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node01
416 [node01][DEBUG ] connected to host: node01 
417 [node01][DEBUG ] detect platform information from remote host
418 [node01][DEBUG ] detect machine type
419 [node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
420 [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node02
421 [node02][DEBUG ] connected to host: node02 
422 [node02][DEBUG ] detect platform information from remote host
423 [node02][DEBUG ] detect machine type
424 [node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
425 [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node03
426 [node03][DEBUG ] connected to host: node03 
427 [node03][DEBUG ] detect platform information from remote host
428 [node03][DEBUG ] detect machine type
429 [node03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
430 [ceph_deploy.admin][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite
431 [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node04
432 root@node04's password: 
433 Permission denied, please try again.
434 root@node04's password: 
435 root@node04's password: 
436 [node04][DEBUG ] connected to host: node04 
437 [node04][DEBUG ] detect platform information from remote host
438 [node04][DEBUG ] detect machine type
439 [node04][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
440 [ceph_deploy][ERROR ] GenericError: Failed to configure 1 admin hosts
441 
442 root@node01:~/ceph-deploy# ceph-deploy admin node01 node02 node03 node04 --overwrite-conf 
443 usage: ceph-deploy [-h] [-v | -q] [--version] [--username USERNAME]
444                    [--overwrite-conf] [--ceph-conf CEPH_CONF]
445                    COMMAND ...
446 ceph-deploy: error: unrecognized arguments: --overwrite-conf
447 root@node01:~/ceph-deploy# ceph-deploy --overwrite-conf admin node01 node02 node03 node04 
448 [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
449 [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy --overwrite-conf admin node01 node02 node03 node04
450 [ceph_deploy.cli][INFO  ] ceph-deploy options:
451 [ceph_deploy.cli][INFO  ]  username                      : None
452 [ceph_deploy.cli][INFO  ]  verbose                       : False
453 [ceph_deploy.cli][INFO  ]  overwrite_conf                : True
454 [ceph_deploy.cli][INFO  ]  quiet                         : False
455 [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f41f97a5f50>
456 [ceph_deploy.cli][INFO  ]  cluster                       : ceph
457 [ceph_deploy.cli][INFO  ]  client                        : ['node01', 'node02', 'node03', 'node04']
458 [ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f41f987e4d0>
459 [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
460 [ceph_deploy.cli][INFO  ]  default_release               : False
461 [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node01
462 [node01][DEBUG ] connected to host: node01 
463 [node01][DEBUG ] detect platform information from remote host
464 [node01][DEBUG ] detect machine type
465 [node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
466 [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node02
467 [node02][DEBUG ] connected to host: node02 
468 [node02][DEBUG ] detect platform information from remote host
469 [node02][DEBUG ] detect machine type
470 [node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
471 [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node03
472 [node03][DEBUG ] connected to host: node03 
473 [node03][DEBUG ] detect platform information from remote host
474 [node03][DEBUG ] detect machine type
475 [node03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
476 [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node04
477 root@node04's password: 
478 root@node04's password: 
479 [node04][DEBUG ] connected to host: node04 
480 [node04][DEBUG ] detect platform information from remote host
481 [node04][DEBUG ] detect machine type
482 [node04][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
483 root@node01:~/ceph-deploy# 
484 root@node01:~/ceph-deploy# 
485 root@node01:~/ceph-deploy# ls
486 ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph.conf             ceph.mon.keyring
487 ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.client.jack.keyring   ceph-deploy-ceph.log  jack.key
488 root@node01:~/ceph-deploy# vim ceph.conf 
489 root@node01:~/ceph-deploy# ceph fs status 
490 mycephfs - 1 clients
491 ========
492 RANK  STATE    MDS       ACTIVITY     DNS    INOS   DIRS   CAPS  
493  0    active  node04  Reqs:    0 /s    11     14     12      1   
494  1    active  node02  Reqs:    0 /s    10     13     11      0   
495       POOL         TYPE     USED  AVAIL  
496 cephfs-metadata  metadata   180k  75.9G  
497   cephfs-data      data    12.0k  75.9G  
498 STANDBY MDS  
499    node03    
500    node01    
501 MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
502 root@node01:~/ceph-deploy# 
503 root@node01:~/ceph-deploy# 
504 root@node01:~/ceph-deploy# 
505 root@node01:~/ceph-deploy# 
506 root@node01:~/ceph-deploy# 
507 root@node01:~/ceph-deploy# 
508 root@node01:~/ceph-deploy# ceph fs status 
509 mycephfs - 1 clients
510 ========
511 RANK  STATE    MDS       ACTIVITY     DNS    INOS   DIRS   CAPS  
512  0    active  node02  Reqs:    0 /s     1      4      2      0   
513  1    active  node01  Reqs:    0 /s    10     13     11      0   
514       POOL         TYPE     USED  AVAIL  
515 cephfs-metadata  metadata   180k  75.9G  
516   cephfs-data      data    12.0k  75.9G  
517 STANDBY MDS  
518    node03    
519 MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
520 root@node01:~/ceph-deploy# ceph fs status 
521 mycephfs - 1 clients
522 ========
523 RANK  STATE    MDS       ACTIVITY     DNS    INOS   DIRS   CAPS  
524  0    active  node02  Reqs:    0 /s    11     14     12      1   
525  1    active  node01  Reqs:    0 /s    10     13     11      0   
526       POOL         TYPE     USED  AVAIL  
527 cephfs-metadata  metadata   180k  75.9G  
528   cephfs-data      data    12.0k  75.9G  
529 STANDBY MDS  
530    node03    
531    node04    
532 MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
533 root@node01:~/ceph-deploy# ceph fs status 
534 mycephfs - 1 clients
535 ========
536 RANK  STATE    MDS       ACTIVITY     DNS    INOS   DIRS   CAPS  
537  0    active  node02  Reqs:    0 /s    11     14     12      1   
538  1    active  node01  Reqs:    0 /s    10     13     11      0   
539       POOL         TYPE     USED  AVAIL  
540 cephfs-metadata  metadata   180k  75.9G  
541   cephfs-data      data    12.0k  75.9G  
542 STANDBY MDS  
543    node03    
544    node04    
545 MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
546 root@node01:~/ceph-deploy# 
547 root@node01:~/ceph-deploy# 
548 root@node01:~/ceph-deploy# cat ceph.c
549 cat: ceph.c: No such file or directory
550 root@node01:~/ceph-deploy# cat ceph.conf 
551 [global]
552 fsid = 9138c3cf-f529-4be6-ba84-97fcab59844b
553 public_network = 192.168.11.0/24
554 cluster_network = 192.168.22.0/24
555 mon_initial_members = node01
556 mon_host = 192.168.11.210
557 auth_cluster_required = cephx
558 auth_service_required = cephx
559 auth_client_required = cephx
560 
561 [mds.node04]
562 mds_standby_for_name = node03
563 mds_standby_replay = true
564 [mds.node02]
565 mds_standby_for_name = node01
566 mds_standby_replay = true
567 root@node01:~/ceph-deploy# 
568 root@node01:~/ceph-deploy# 
569 root@node01:~/ceph-deploy# systemctl restart ceph-mds@node01.service 
570 root@node01:~/ceph-deploy# 
571 root@node01:~/ceph-deploy# 
572 root@node01:~/ceph-deploy# ceph fs status 
573 mycephfs - 1 clients
574 ========
575 RANK  STATE    MDS       ACTIVITY     DNS    INOS   DIRS   CAPS  
576  0    active  node02  Reqs:    0 /s    11     14     12      1   
577  1    active  node04  Reqs:    0 /s    10     13     12      0   
578       POOL         TYPE     USED  AVAIL  
579 cephfs-metadata  metadata   180k  75.9G  
580   cephfs-data      data    12.0k  75.9G  
581 STANDBY MDS  
582    node03    
583    node01    
584 MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
585 root@node01:~/ceph-deploy# 
586 root@node01:~/ceph-deploy# 
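The standby-replay setup scattered through the log above boils down to a few steps: write the `[mds.*]` sections, push the config, restart an MDS, and verify. The sketch below only generates those sections into a scratch file `mds-standby.conf` (a hypothetical filename; the hostnames node01–node04 are the lab hosts from this log), so you can review them before merging into ceph.conf:

```shell
# Generate the per-MDS standby-replay sections seen in ceph.conf above.
# Each standby follows a named active MDS and replays its journal, so a
# failover (e.g. after "systemctl restart ceph-mds@node01.service") is fast.
cat > mds-standby.conf <<'EOF'
[mds.node04]
mds_standby_for_name = node03
mds_standby_replay = true

[mds.node02]
mds_standby_for_name = node01
mds_standby_replay = true
EOF

# Review the generated sections before merging them into ceph.conf.
cat mds-standby.conf
```

After merging these sections into ceph.conf, push it out with `ceph-deploy --overwrite-conf admin node01 node02 node03 node04` (note that the flag must come before the subcommand, as the failed attempt in the log shows), restart the affected MDS daemons, and confirm the RANK/STANDBY layout with `ceph fs status`.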
(To do: tidy up the formatting of this section when time allows.)