Ceph RBD Block Storage: Snapshot Creation and Cloning

An RBD snapshot is conceptually the same as a virtual machine snapshot; the only difference is that it is created and managed with a few commands.

Creating snapshots


0. Create the pool used by RBD

[root@cephnode01 ~]# rbd create --size 10240 image02
rbd: error opening default pool 'rbd'
Ensure that the default pool has been created or specify an alternate pool name.
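
By default the rbd tool operates on a pool named rbd, which is why the command above fails before that pool exists. Any rbd command can also target an explicit pool; a small sketch, where mypool is only a placeholder for a pool that already exists:

# operate on an explicit pool instead of the default 'rbd' pool
rbd create --size 10240 mypool/image02
# equivalently, pass the pool name with -p
rbd -p mypool create --size 10240 image02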

[root@cephnode01 ~]# ceph osd pool create rbd 32 32
pool 'rbd' created

[root@cephnode01 ~]# ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'
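
On recent Ceph releases, rbd pool init can prepare the pool in one step instead of calling ceph osd pool application enable by hand; a minimal sketch, assuming admin credentials on this node:

# create the pool and initialize it for use by RBD
ceph osd pool create rbd 32 32
rbd pool init rbd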

1. Create a snapshot

[root@cephnode01 ~]# rbd create --size 10240 image02
[root@cephnode01 ~]# rbd info image02
rbd image 'image02':
	size 10 GiB in 2560 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: 1bdd49fc87cdb
	block_name_prefix: rbd_data.1bdd49fc87cdb
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features: 
	flags: 
	create_timestamp: Tue May 11 10:59:29 2021
	access_timestamp: Tue May 11 10:59:29 2021
	modify_timestamp: Tue May 11 10:59:29 2021
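
The --size argument is interpreted in MiB by default, so 10240 yields the 10 GiB image shown above; recent rbd versions also accept unit suffixes. A sketch (image03 is only an illustrative name):

# equivalent creation of a 10 GiB image using a size suffix
rbd create --size 10G image03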


Check what objects the pool now contains:
[root@cephnode01 ~]# rados -p rbd ls
rbd_object_map.1bdd49fc87cdb
rbd_header.1bdd49fc87cdb
rbd_directory
rbd_info
rbd_id.image02
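
Before taking the snapshot it can be useful to map the image and write some test data, so the snapshot has real content to capture. A rough sketch, assuming the kernel RBD client; the mount point is arbitrary, and on older kernels the newer image features may have to be disabled before mapping succeeds:

# older kernels only support the layering feature; disable the rest if 'rbd map' fails
rbd feature disable image02 object-map fast-diff deep-flatten
# map the image (prints a block device such as /dev/rbd0), format it and write a file
rbd map image02
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt
echo "data written before the snapshot" > /mnt/test.txt
umount /mnt
rbd unmap image02

Now take the snapshot: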

[root@cephnode01 ~]# rbd snap create image02@image02_snap01

2. List the snapshots

[root@cephnode01 ~]# rbd snap list image02
SNAPID NAME           SIZE   PROTECTED TIMESTAMP                
     4 image02_snap01 10 GiB           Tue May 11 11:08:53 2021 
or
[root@cephnode01 ~]# rbd ls -l
NAME                   SIZE   PARENT FMT PROT LOCK 
image02                10 GiB          2           
image02@image02_snap01 10 GiB          2 
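
For scripting, the same listing is available in machine-readable form via the --format option (a sketch):

# list the snapshots of image02 as JSON, e.g. for monitoring scripts
rbd snap ls image02 --format json --pretty-format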

3. View the snapshot details (the size shown is the full 10 GiB image size; the snapshot itself is copy-on-write and does not duplicate the data)

[root@cephnode01 ~]# rbd info image02@image02_snap01
rbd image 'image02':
	size 10 GiB in 2560 objects
	order 22 (4 MiB objects)
	snapshot_count: 1
	id: 1bdd49fc87cdb
	block_name_prefix: rbd_data.1bdd49fc87cdb
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features: 
	flags: 
	create_timestamp: Tue May 11 10:59:29 2021
	access_timestamp: Tue May 11 10:59:29 2021
	modify_timestamp: Tue May 11 10:59:29 2021
	protected: False

Here the parent image is image02.
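
Because RBD snapshots are copy-on-write, taking one does not duplicate the 10 GiB of data; only blocks changed after the snapshot consume additional space. This can be checked with rbd du (a sketch, output omitted):

# compare provisioned size with the space actually used by the image and its snapshots
rbd du image02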

4. Clone the snapshot (the snapshot must be in a protected, read-only state before it can be cloned)

This is roughly equivalent to copying the image from one pool into another.

[root@cephnode01 ~]# rbd snap protect image02@image02_snap01

[root@cephnode01 ~]# rbd clone rbd/image02@image02_snap01 kube/image02_clone01
rbd: error opening pool 'kube': (2) No such file or directory
[root@cephnode01 ~]# ceph osd pool create kube 32 32
pool 'kube' created
[root@cephnode01 ~]# ceph osd pool ls
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
rbd
kube

[root@cephnode01 ~]# rbd clone rbd/image02@image02_snap01 kube/image02_clone01
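
Until the clone is flattened, its metadata keeps a reference to the parent snapshot; rbd info on the clone reports a parent: line pointing at rbd/image02@image02_snap01. A sketch:

# the clone records its parent snapshot until it is flattened
rbd info kube/image02_clone01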


[root@cephnode01 ~]# rados -p kube ls
rbd_id.image02_clone01
rbd_directory
rbd_children
rbd_info
rbd_header.1be0dd967da90
rbd_object_map.1be0dd967da90


As you can see, the rbd pool still contains no data objects (rbd_data.*). Until data is actually written, an image does not consume cluster space; its size acts more like a quota than an upfront allocation.
[root@cephnode01 ~]# rados -p rbd ls
rbd_object_map.1bdd49fc87cdb
rbd_header.1bdd49fc87cdb
rbd_directory
rbd_info
rbd_id.image02
rbd_object_map.1bdd49fc87cdb.0000000000000004

5. List the snapshot's children; the child image shows up in the kube pool

[root@cephnode01 ~]# rbd children image02
kube/image02_clone01
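
The query can also be scoped to a single snapshot instead of the whole image (a sketch):

# list only the clones created from this particular snapshot
rbd children image02@image02_snap01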

6. Detach the clone from its parent

Once the parent-child relationship is removed, the image cloned into the kube pool becomes a fully independent image.

[root@cephnode01 ~]# rbd flatten kube/image02_clone01
Image flatten: 100% complete...done.

[root@cephnode01 ~]# rbd children image02
[root@cephnode01 ~]# 
As you can see, the parent-child relationship no longer exists.

Rolling back a snapshot (restoring the image to an earlier version)


[root@cephnode01 ~]# rbd snap rollback image02@image02_snap01
Rolling back to snapshot: 100% complete...done.
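
Rollback overwrites the current contents of the image with the snapshot data, so the image should not be mapped or mounted while it runs. A rough sketch of the usual sequence; the mount point and device name are assumptions:

# stop using the image before rolling it back
umount /mnt
rbd unmap image02
# roll back, then remap and remount to see the restored data
rbd snap rollback image02@image02_snap01
rbd map image02
mount /dev/rbd0 /mnt

For large images a rollback copies a lot of data, so cloning from the snapshot is often faster than rolling the original image back.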

 

Deleting snapshots


[root@cephnode01 ~]# rbd snap unprotect image02@image02_snap01
[root@cephnode01 ~]# rbd snap remove image02@image02_snap01
Removing snap: 100% complete...done.
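
To drop every remaining unprotected snapshot of an image at once, rbd snap purge can be used instead of removing them one by one (a sketch):

# remove all unprotected snapshots of image02
rbd snap purge image02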

 
