OpenStack HA Cluster 15: Backend Storage Technology - GlusterFS (Distributed Storage)

 
 
 
How It Works
Put simply, distributed storage means building a storage cluster from multiple nodes. From the user's point of view data is written to a single path, but the system automatically replicates it to the other nodes, so the data actually exists in N copies (depending on the number of nodes). If one node fails, the data on the remaining nodes is unaffected, which is how high availability of storage is achieved. The biggest advantage of this approach is that it requires no extra spending: inexpensive commodity hardware is enough to meet the requirement.
 
GlusterFS uses a modular, stackable architecture that can be flexibly configured for highly customized environments such as large-file storage, massive small-file storage, cloud storage, and multi-protocol access. Each feature is implemented as a module, and modules are combined like building blocks to deliver complex functionality. For example, the Replicate module provides RAID 1 and the Stripe module provides RAID 0; combining the two yields RAID 10 or RAID 01, giving both high performance and high reliability, as shown in the figure below.
 
[Figure: GlusterFS modular, stackable architecture]
Features
Scalability and high performance
GlusterFS combines two traits to deliver highly scalable storage from a few terabytes up to multiple petabytes. Its scale-out architecture lets you raise capacity and performance simply by adding resources; disk, compute, and I/O resources can be grown independently, and high-speed interconnects such as 10GbE and InfiniBand are supported. The elastic hash algorithm removes GlusterFS's need for a metadata server, eliminating that single point of failure and performance bottleneck and enabling truly parallel data access.
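As a hedged sketch of this scale-out model, capacity could later be grown by probing new peers, adding bricks, and rebalancing; the hosts glfs5/glfs6 and their brick paths are hypothetical, and gv9 is the replica-2 volume created later in this article:

gluster peer probe glfs5
gluster peer probe glfs6
# for a replica-2 volume, bricks are added one full replica set (a pair) at a time
gluster volume add-brick gv9 glfs5:/storage/brick2 glfs6:/storage/brick2
# spread existing data onto the new bricks, then watch progress
gluster volume rebalance gv9 start
gluster volume rebalance gv9 status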
 
High availability
GlusterFS can replicate files automatically, for example by mirroring or keeping multiple copies, so data remains accessible even when hardware fails. The self-heal feature restores data to the correct state, and healing runs incrementally in the background with almost no performance overhead. GlusterFS does not use a proprietary on-disk format; it stores files on standard mainstream filesystems (such as EXT3 or ZFS), so the data can also be copied and accessed with ordinary tools.
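For example, the self-heal state of a replicated volume can be inspected at any time with the gluster CLI (gv9 is the volume created later in this article):

# list entries that still need healing, per brick
gluster volume heal gv9 info
# trigger an index heal of any files that need it
gluster volume heal gv9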
 
Global unified namespace
A global unified namespace aggregates disk and memory resources into a single virtual storage pool, hiding the underlying physical hardware from upper-layer users and applications. Storage resources can be elastically expanded or shrunk within the virtual pool as needed. When storing virtual machine images there is no limit on the number of image files, and thousands of VMs can share data through a single mount point. VM I/O is automatically load-balanced across all servers in the namespace, eliminating the hot spots and bottlenecks that often occur in SAN environments.
 
Elastic hash algorithm
GlusterFS locates data in the storage pool with an elastic hash algorithm instead of a centralized or distributed metadata server index. In other scale-out storage systems, the metadata server typically becomes an I/O bottleneck and a single point of failure. In GlusterFS, every node in a scale-out configuration can intelligently locate any data shard without consulting an index or querying another server. This design fully parallelizes data access and delivers genuinely linear performance scaling.
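Because placement follows directly from the hash, a client can ask which bricks hold a particular file through a virtual extended attribute on a FUSE mount, with no metadata server involved; the mount point and file name below are hypothetical, and getfattr comes from the attr package:

getfattr -n trusted.glusterfs.pathinfo -e text /mnt/gv9/test.img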
 
Elastic volume management
Data is stored in logical volumes, which are carved independently from a virtualized pool of physical storage. Storage servers can be added or removed online without interrupting applications. Logical volumes can grow or shrink across all configured servers, be migrated between servers to balance capacity, and have systems added or removed, all while online. Filesystem configuration changes also take effect online in real time, allowing adaptation to changing workloads or live performance tuning.
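As a small sketch of such an online change, a standard volume option can be tuned and read back while the volume stays mounted (the value shown is only an example; gv9 is the volume created later in this article):

gluster volume set gv9 performance.cache-size 256MB
gluster volume get gv9 performance.cache-size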
 
Standards-based protocols
The Gluster storage service supports NFS, CIFS, HTTP, FTP, and the Gluster native protocol, and is fully POSIX-compatible. Existing applications can access data in Gluster without modification or special APIs. This is particularly useful when deploying Gluster in a public cloud: Gluster abstracts away the cloud provider's proprietary APIs and exposes a standard POSIX interface instead.
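For instance, the same volume can be consumed either through the native FUSE client or, assuming an NFS service exports it (see the nfs.disable step later in this article), over NFSv3; the mount points here are illustrative:

# native Gluster protocol via the FUSE client
mount -t glusterfs glfs1:/gv9 /mnt/gv9-native
# NFSv3, assuming the volume is exported over NFS
mount -t nfs -o vers=3 glfs1:/gv9 /mnt/gv9-nfs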
 
--------------------------------------------------------------------------------------------------------------------------------
 
Lab environment
10.30.1.231    glfs1  GlusterFS storage server 1
10.30.1.232    glfs2  GlusterFS storage server 2
10.30.1.233    glfs3  GlusterFS storage server 3
10.30.1.234    glfs4  GlusterFS storage server 4
10.30.1.201    node1  OpenStack controller node 1
10.30.1.202    node2  OpenStack controller node 2
10.30.1.203    node3  OpenStack compute node 1
10.30.1.204    node4  OpenStack compute node 2
 
Add /etc/hosts entries so the cluster hosts can resolve each other
Add the entries on the management server and on every agent node so that all cluster hosts can resolve one another:
#echo -e "10.30.1.231    glfs1\n10.30.1.232    glfs2\n10.30.1.233    glfs3\n10.30.1.234    glfs4" >> /etc/hosts
Modify /etc/hosts on every machine so it contains:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.30.1.231    glfs1
10.30.1.232    glfs2
10.30.1.233    glfs3
10.30.1.234    glfs4
 
Disable SELinux and the firewall
setenforce 0
sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
systemctl stop iptables
systemctl stop firewalld
systemctl disable iptables
systemctl disable firewalld
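If disabling the firewall is not acceptable in your environment, an alternative sketch is to open only the GlusterFS ports instead (24007-24008 for management; one port per brick starting at 49152 on current releases, so the range below assumes at most eight bricks per node):

firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49159/tcp
firewall-cmd --reload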
 
1. Install GlusterFS
 
 
If the hosts have Internet access, the most convenient way to install is from the CentOS Storage SIG repository:
yum install centos-release-gluster7 -y
yum install -y glusterfs-server glusterfs-cli glusterfs-geo-replication
 
2. Configure GlusterFS
 
 
 
2.1 Check the version
glusterfs -V
 
2.2 Start the GlusterFS services
systemctl start glusterfsd
systemctl start glusterd
2.3 Enable the services at boot
systemctl enable glusterfsd
systemctl enable glusterd
 
2.4 Check the service status
systemctl status glusterd
 
 
2.5 Add the storage hosts to the trusted storage pool
 
[root@glfs1 ~]# gluster peer probe glfs2
[root@glfs1 ~]# gluster peer probe glfs3
[root@glfs1 ~]# gluster peer probe glfs4
 
Note: probe every other server in the pool; the local server itself does not need to be probed.
 
2.6 Check the peer status
gluster peer status
Number of Peers: 3
 
Hostname: glfs2
Uuid: ec9f45ad-afd8-45a9-817f-f76da020eb8d
State: Peer in Cluster (Connected)
 
Hostname: glfs3
Uuid: efc1aaec-191b-4c2c-bcbf-58467d2611c1
State: Peer in Cluster (Connected)
 
Hostname: glfs4
Uuid: 7fc1e778-c2f7-4970-b8f6-039ab0d2792b
State: Peer in Cluster (Connected)
2.7 Create a distributed replicated volume
 
Six volume types can be created (a brief syntax sketch follows this list):

Distributed: files are spread across the bricks of the volume by a hash algorithm.
Replicated: similar to RAID 1; the replica count must equal the number of bricks in the volume; high availability.
Striped: similar to RAID 0; the stripe count must equal the number of bricks; files are split into chunks stored round-robin across the bricks, the unit of concurrency is a chunk, and large-file performance is good.
Distributed Striped: the number of bricks must be a multiple (at least 2x) of the stripe count; combines distribution with striping.
Distributed Replicated: the number of bricks must be a multiple (at least 2x) of the replica count; combines distribution with replication.
Distributed Striped Replicated: distributed striped replicated volume (the key type, recommended for production use).
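A brief syntax sketch for three of these types; the volume names and brick paths are placeholders, and the distributed replicated form is the one built in the next step:

# distributed: no redundancy, files are hashed across the bricks
gluster volume create gv-dist glfs1:/bricks/b1 glfs2:/bricks/b1
# replicated: every file is kept on both bricks
gluster volume create gv-rep replica 2 glfs1:/bricks/b2 glfs2:/bricks/b2
# distributed replicated: two replica-2 pairs, files distributed across the pairs
gluster volume create gv-distrep replica 2 \
    glfs1:/bricks/b3 glfs2:/bricks/b3 glfs3:/bricks/b3 glfs4:/bricks/b3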
 
Reference commands for creating a Distributed Replicated volume:
 
# mkfs.xfs -f /dev/sdb
# mkdir -p /storage/brick2
# mkfs.xfs -f /dev/sdc
# mkdir -p /storage/brick3
# mount /dev/sdb /storage/brick2
# mount /dev/sdc /storage/brick3
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        96G  790M   91G   1% /
tmpfs           499M     0  499M   0% /dev/shm
/dev/sda1       477M   28M  425M   7% /boot
/dev/sdb        100G   33M  100G   1% /storage/brick2
/dev/sdc        100G   33M  100G   1% /storage/brick3
# echo "/dev/sdb  /storage/brick2    xfs defaults 0 0"  >> /etc/fstab
# echo "/dev/sdc  /storage/brick3    xfs defaults 0 0"  >> /etc/fstab
# gluster volume create gv9 replica 2 glfs1:/storage/brick2 glfs2:/storage/brick2 glfs3:/storage/brick2 glfs4:/storage/brick2 glfs1:/storage/brick3 glfs2:/storage/brick3 glfs3:/storage/brick3 glfs4:/storage/brick3 force
# gluster volume start gv9
# gluster volume info gv9
Volume Name: gv9
Type: Distributed-Replicate
Volume ID: 73f05436-a34a-49fc-b354-6d0890c326db
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: glfs1:/storage/brick2
Brick2: glfs2:/storage/brick2
Brick3: glfs3:/storage/brick2
Brick4: glfs4:/storage/brick2
Brick5: glfs1:/storage/brick3
Brick6: glfs2:/storage/brick3
Brick7: glfs3:/storage/brick3
Brick8: glfs4:/storage/brick3
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
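Note that replica 2 keeps only two copies of each file and can end up in split-brain after certain failure patterns. As a hedged alternative sketch, a similar layout could instead be built as an arbiter volume, where every third brick is a small, metadata-only tie-breaker (the volume name gv9a and the /arbiter paths are hypothetical; only the first two replica sets are shown):

# gluster volume create gv9a replica 3 arbiter 1 \
    glfs1:/storage/brick2 glfs2:/storage/brick2 glfs3:/arbiter/b1 \
    glfs3:/storage/brick2 glfs4:/storage/brick2 glfs1:/arbiter/b2 force

Every third brick here stores only file names and metadata, so it needs very little space.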
 
 
 
 
Manually enable NFS on the volume (nfs.disable is on by default) and restart it
# gluster volume info gv9
Volume Name: gv9
Type: Distributed-Replicate
Volume ID: 14b16ff2-b60b-44f5-8eaa-f4348d340ac3
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: glfs1:/storage/brick2
Brick2: glfs2:/storage/brick2
Brick3: glfs3:/storage/brick2
Brick4: glfs4:/storage/brick2
Brick5: glfs1:/storage/brick3
Brick6: glfs2:/storage/brick3
Brick7: glfs3:/storage/brick3
Brick8: glfs4:/storage/brick3
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
 
# gluster volume set gv9 nfs.disable off
volume set: success
# gluster volume stop gv9
# gluster volume start gv9
 
 
3. Install the NFS service
[root@glfs1 ~]# yum install nfs-utils rpcbind -y
[root@glfs1 ~]# mkdir -pv /instances
[root@glfs1 ~]# systemctl start nfs
[root@glfs1 ~]# systemctl enable nfs
[root@glfs1 ~]# systemctl start rpcbind
[root@glfs1 ~]# systemctl enable rpcbind
[root@glfs1 ~]# echo '/instances 10.30.1.0/24(fsid=0,rw,no_root_squash)' >>/etc/exports
[root@glfs1 ~]# systemctl restart nfs
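The export can then be verified with the standard nfs-utils tools before any client mounts it:

[root@glfs1 ~]# exportfs -v
[root@glfs1 ~]# showmount -e 10.30.1.231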
 
 
 
1 Nova with GlusterFS
 
On every compute node, mount the GlusterFS volume created above onto /var/lib/nova/instances:

mount -t glusterfs <gluster-server-IP>:/<volume> /var/lib/nova/instances
chown -R nova:nova /var/lib/nova/instances
 
Example
Mount (takes effect immediately):
[root@node3 ~]# mount -t glusterfs 10.30.1.231:/gv9 /var/lib/nova/instances/
 
 
Mount (persistent across reboots):
[root@node7 ~]# echo "10.30.1.231:/gv9 /var/lib/nova/instances glusterfs defaults,_netdev 0 0" >> /etc/fstab
 
 
Note: the hostname-to-IP mappings must be added on every OpenStack compute node, otherwise the mount will fail.
[root@node3 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
#10.30.1.249 mirror.centos.org
10.30.1.201 node1
10.30.1.202 node2
10.30.1.203 node3
10.30.1.204 node4
10.30.1.205 node5
10.30.1.206 node6
10.30.1.231    glfs1
10.30.1.232    glfs2
10.30.1.233    glfs3
10.30.1.234    glfs4
 
With the volume serving as the backing store for Nova instances, the files on each storage node look like this:
10.30.1.231 | CHANGED | rc=0 >>
/storage/brick2
├── 00bb0e63-c904-43df-b553-45a15b3ecf94
├── 28afde38-8547-40f6-9145-d34718782b02
├── 5d578231-3277-4533-afe8-96cfa6704d5b
│   ├── disk
│   └── disk.info
├── 62fa8664-513c-46d4-9237-bdb5d93769c6
│   ├── disk
│   └── disk.info
├── ad3b8b4f-7fc7-49b6-a216-fdea39ff0870
│   ├── disk
│   └── disk.info
├── _base
├── d33c01d9-cb42-446e-a0e1-180ca35b9689
│   ├── console.log
│   └── disk.info
├── e7c2bd59-30ac-4c8e-a95b-607697903b14
│   ├── console.log
│   └── disk.info
├── ef692cfd-c8a9-4c77-be69-e934b9aff838
│   ├── disk
│   └── disk.info
├── f7394f15-aea2-49bd-b0b3-da4f7ef76737
└── locks
/storage/brick3
├── 00bb0e63-c904-43df-b553-45a15b3ecf94
├── 28afde38-8547-40f6-9145-d34718782b02
├── 5d578231-3277-4533-afe8-96cfa6704d5b
│   ├── console.log
│   └── disk.info
├── 62fa8664-513c-46d4-9237-bdb5d93769c6
│   ├── console.log
│   └── disk.info
├── a36c45ee0cb50b3d5f57afcff5c9a552becfe68b
├── ad3b8b4f-7fc7-49b6-a216-fdea39ff0870
│   ├── console.log
│   └── disk.info
├── _base
├── d33c01d9-cb42-446e-a0e1-180ca35b9689
│   ├── disk
│   └── disk.info
├── e7c2bd59-30ac-4c8e-a95b-607697903b14
│   ├── disk
│   └── disk.info
├── ef692cfd-c8a9-4c77-be69-e934b9aff838
│   ├── console.log
│   └── disk.info
├── f7394f15-aea2-49bd-b0b3-da4f7ef76737
└── locks
    ├── nova-08af414964fe4c5a827e9a263419d72905d0069b
    └── nova-a36c45ee0cb50b3d5f57afcff5c9a552becfe68b
 
 
22 directories, 27 files
 
 
10.30.1.232 | CHANGED | rc=0 >>
/storage/brick2
├── 00bb0e63-c904-43df-b553-45a15b3ecf94
├── 28afde38-8547-40f6-9145-d34718782b02
├── 5d578231-3277-4533-afe8-96cfa6704d5b
│   ├── disk
│   └── disk.info
├── 62fa8664-513c-46d4-9237-bdb5d93769c6
│   ├── disk
│   └── disk.info
├── ad3b8b4f-7fc7-49b6-a216-fdea39ff0870
│   ├── disk
│   └── disk.info
├── _base
├── d33c01d9-cb42-446e-a0e1-180ca35b9689
│   ├── console.log
│   └── disk.info
├── e7c2bd59-30ac-4c8e-a95b-607697903b14
│   ├── console.log
│   └── disk.info
├── ef692cfd-c8a9-4c77-be69-e934b9aff838
│   ├── disk
│   └── disk.info
├── f7394f15-aea2-49bd-b0b3-da4f7ef76737
└── locks
/storage/brick3
├── 00bb0e63-c904-43df-b553-45a15b3ecf94
├── 28afde38-8547-40f6-9145-d34718782b02
├── 5d578231-3277-4533-afe8-96cfa6704d5b
│   ├── console.log
│   └── disk.info
├── 62fa8664-513c-46d4-9237-bdb5d93769c6
│   ├── console.log
│   └── disk.info
├── a36c45ee0cb50b3d5f57afcff5c9a552becfe68b
├── ad3b8b4f-7fc7-49b6-a216-fdea39ff0870
│   ├── console.log
│   └── disk.info
├── _base
├── d33c01d9-cb42-446e-a0e1-180ca35b9689
│   ├── disk
│   └── disk.info
├── e7c2bd59-30ac-4c8e-a95b-607697903b14
│   ├── disk
│   └── disk.info
├── ef692cfd-c8a9-4c77-be69-e934b9aff838
│   ├── console.log
│   └── disk.info
├── f7394f15-aea2-49bd-b0b3-da4f7ef76737
└── locks
    ├── nova-08af414964fe4c5a827e9a263419d72905d0069b
    └── nova-a36c45ee0cb50b3d5f57afcff5c9a552becfe68b
 
 
22 directories, 27 files
 
 
10.30.1.233 | CHANGED | rc=0 >>
/storage/brick2
├── 00bb0e63-c904-43df-b553-45a15b3ecf94
│   ├── disk
│   └── disk.info
├── 28afde38-8547-40f6-9145-d34718782b02
│   ├── console.log
│   └── disk.info
├── 5d578231-3277-4533-afe8-96cfa6704d5b
├── 62fa8664-513c-46d4-9237-bdb5d93769c6
├── ad3b8b4f-7fc7-49b6-a216-fdea39ff0870
├── _base
│   └── a36c45ee0cb50b3d5f57afcff5c9a552becfe68b
├── compute_nodes
├── d33c01d9-cb42-446e-a0e1-180ca35b9689
├── e7c2bd59-30ac-4c8e-a95b-607697903b14
├── ef692cfd-c8a9-4c77-be69-e934b9aff838
├── f7394f15-aea2-49bd-b0b3-da4f7ef76737
│   ├── disk
│   └── disk.info
└── locks
    └── nova-storage-registry-lock
/storage/brick3
├── 00bb0e63-c904-43df-b553-45a15b3ecf94
│   ├── console.log
│   └── disk.info
├── 08af414964fe4c5a827e9a263419d72905d0069b
├── 28afde38-8547-40f6-9145-d34718782b02
│   ├── disk
│   └── disk.info
├── 5d578231-3277-4533-afe8-96cfa6704d5b
├── 62fa8664-513c-46d4-9237-bdb5d93769c6
├── ad3b8b4f-7fc7-49b6-a216-fdea39ff0870
├── _base
│   └── a36c45ee0cb50b3d5f57afcff5c9a552becfe68b
├── d33c01d9-cb42-446e-a0e1-180ca35b9689
├── e7c2bd59-30ac-4c8e-a95b-607697903b14
├── ef692cfd-c8a9-4c77-be69-e934b9aff838
├── f7394f15-aea2-49bd-b0b3-da4f7ef76737
│   ├── console.log
│   └── disk.info
└── locks
 
 
22 directories, 17 files
 
 
10.30.1.234 | CHANGED | rc=0 >>
/storage/brick2
├── 00bb0e63-c904-43df-b553-45a15b3ecf94
│   ├── disk
│   └── disk.info
├── 28afde38-8547-40f6-9145-d34718782b02
│   ├── console.log
│   └── disk.info
├── 5d578231-3277-4533-afe8-96cfa6704d5b
├── 62fa8664-513c-46d4-9237-bdb5d93769c6
├── ad3b8b4f-7fc7-49b6-a216-fdea39ff0870
├── _base
│   └── a36c45ee0cb50b3d5f57afcff5c9a552becfe68b
├── compute_nodes
├── d33c01d9-cb42-446e-a0e1-180ca35b9689
├── e7c2bd59-30ac-4c8e-a95b-607697903b14
├── ef692cfd-c8a9-4c77-be69-e934b9aff838
├── f7394f15-aea2-49bd-b0b3-da4f7ef76737
│   ├── disk
│   └── disk.info
└── locks
    └── nova-storage-registry-lock
/storage/brick3
├── 00bb0e63-c904-43df-b553-45a15b3ecf94
│   ├── console.log
│   └── disk.info
├── 08af414964fe4c5a827e9a263419d72905d0069b
├── 28afde38-8547-40f6-9145-d34718782b02
│   ├── disk
│   └── disk.info
├── 5d578231-3277-4533-afe8-96cfa6704d5b
├── 62fa8664-513c-46d4-9237-bdb5d93769c6
├── ad3b8b4f-7fc7-49b6-a216-fdea39ff0870
├── _base
│   └── a36c45ee0cb50b3d5f57afcff5c9a552becfe68b
├── d33c01d9-cb42-446e-a0e1-180ca35b9689
├── e7c2bd59-30ac-4c8e-a95b-607697903b14
├── ef692cfd-c8a9-4c77-be69-e934b9aff838
├── f7394f15-aea2-49bd-b0b3-da4f7ef76737
│   ├── console.log
│   └── disk.info
└── locks
 
 
22 directories, 17 files
 
 
Instances running on the compute nodes:
10.30.1.204 | CHANGED | rc=0 >>
28afde38-8547-40f6-9145-d34718782b02 instance-000000a2             
d33c01d9-cb42-446e-a0e1-180ca35b9689 instance-000000a4             
ef692cfd-c8a9-4c77-be69-e934b9aff838 instance-000000a6             
f7394f15-aea2-49bd-b0b3-da4f7ef76737 instance-000000a8             
e7c2bd59-30ac-4c8e-a95b-607697903b14 instance-000000aa             
 
 
10.30.1.203 | CHANGED | rc=0 >>
62fa8664-513c-46d4-9237-bdb5d93769c6 instance-000000a3             
00bb0e63-c904-43df-b553-45a15b3ecf94 instance-000000a5             
ad3b8b4f-7fc7-49b6-a216-fdea39ff0870 instance-000000a7             
5d578231-3277-4533-afe8-96cfa6704d5b instance-000000a9   
 
Shutting down host glfs4 manually has no effect on the running instances, which demonstrates the high availability provided by GlusterFS:
[root@glfs1 ~]# gluster volume status gv9
Status of volume: gv9
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glfs1:/storage/brick2                 49152     0          Y       3705
Brick glfs2:/storage/brick2                 49152     0          Y       3402
Brick glfs3:/storage/brick2                 49152     0          Y       3405
Brick glfs1:/storage/brick3                 49153     0          Y       3714
Brick glfs2:/storage/brick3                 49153     0          Y       3411
Brick glfs3:/storage/brick3                 49153     0          Y       3414
Self-heal Daemon on localhost               N/A       N/A        Y       3742
NFS Server on localhost                     N/A       N/A        N       N/A  
Self-heal Daemon on glfs3                   N/A       N/A        Y       3576
NFS Server on glfs3                         N/A       N/A        N       N/A  
Self-heal Daemon on glfs2                   N/A       N/A        Y       3575
NFS Server on glfs2                         N/A       N/A        N       N/A  
Task Status of Volume gv9
------------------------------------------------------------------------------
There are no active volume tasks
 
 
 
Note: because all compute nodes mount the same volume, live migration of instances is possible on the OpenStack platform; a sketch follows.
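A hedged example of how such a migration might be triggered from a controller node with the legacy nova CLI; the instance UUID and target host are placeholders:

[root@node1 ~]# nova live-migration <instance-uuid> node4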
 
2 Glance with GlusterFS
On the controller nodes, mount a GlusterFS volume onto /var/lib/glance/images:
 
mount -t glusterfs <gluster-server-IP>:/images /var/lib/glance/images
chown -R glance:glance /var/lib/glance/images
Add the images and instances mounts to /etc/fstab for automatic mounting (adjust the volume names and backup servers to match your own environment):
10.30.1.231:/images   /var/lib/glance/images       glusterfs  defaults,_netdev,backupvolfile-server=controller2,backupvolfile-server=compute01   0 0  
10.30.1.231:/instances   /var/lib/nova/instances   glusterfs  defaults,_netdev,backupvolfile-server=controller2,backupvolfile-server=compute01   0 0  
The backupvolfile-server option provides high availability for the mount and avoids a single point of failure.
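Newer versions of mount.glusterfs also accept a single colon-separated backup-volfile-servers list; a sketch for the instances mount using this article's Gluster servers (volume gv9 as created above):

10.30.1.231:/gv9   /var/lib/nova/instances   glusterfs   defaults,_netdev,backup-volfile-servers=10.30.1.232:10.30.1.233   0 0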
3 Cinder with GlusterFS (note: the GlusterFS driver was removed from Cinder in the OpenStack Ocata release, so GlusterFS is no longer supported there)
3.1 Commonly used Cinder backends
(1) Local LVM (logical volume) backend
(2) GlusterFS backend
(3) NFS backend
 
An example cinder.conf with several backends enabled:
[DEFAULT]
enabled_backends = lvm,nfs,glusterfs
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = openstack_LVM
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[glusterfs]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
volume_backend_name = GlusterFS
glusterfs_shares_config = /etc/cinder/shares.conf
glusterfs_mount_point_base = /var/lib/cinder/glusterfs
 
 
3.2 GlusterFS
The official documentation for the GlusterFS driver of the Block Storage service:
 
Install the GlusterFS client and create the backing volume
On the cinder node 10.30.1.205, install the GlusterFS client packages:
 
# yum -y install glusterfs-client glusterfs-fuse
 
 
mkfs.xfs -f /dev/vdf
mkdir -p /storage/brick6
mkfs.xfs -f /dev/vdg
mkdir -p /storage/brick7
mount /dev/vdf /storage/brick6
mount /dev/vdg /storage/brick7
echo "/dev/vdf  /storage/brick6    xfs defaults 0 0"  >> /etc/fstab
echo "/dev/vdg  /storage/brick7    xfs defaults 0 0"  >> /etc/fstab
[root@glfs1 ~]# gluster volume create gv11 replica 2  glfs1:/storage/brick6 glfs2:/storage/brick6 glfs3:/storage/brick6  glfs4:/storage/brick6 glfs1:/storage/brick7 glfs2:/storage/brick7 glfs3:/storage/brick7  glfs4:/storage/brick7 force       
[root@glfs1 ~]# gluster volume start gv11
 
Mount the volume on the cinder node
[root@node5 ~]# mkdir -pv /var/lib/cinder/glusterfs
[root@node5 ~]# mount -t glusterfs 10.30.1.231:/gv11 /var/lib/cinder/glusterfs
1. On the cinder node, configure the cinder-volume backend as follows:
 
[DEFAULT]
enabled_backends = glusterfs
[glusterfs]                                                         # appended at the end of the file
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver     # driver
glusterfs_shares_config = /etc/cinder/shares.conf                   # file listing the GlusterFS shares
glusterfs_mount_point_base = /var/lib/cinder/volumes                # mount point base
 
The relevant [DEFAULT] driver options (option = default value, followed by its description):
glusterfs_backup_mount_point = $state_path/backup_mount    (StrOpt) Base dir containing mount point for gluster share.
glusterfs_backup_share = None    (StrOpt) GlusterFS share in <hostname|ipv4addr|ipv6addr>:<gluster_vol_name> format. Eg: 1.2.3.4:backup_vol
glusterfs_mount_point_base = $state_path/mnt    (StrOpt) Base dir containing mount points for gluster shares.
glusterfs_shares_config = /etc/cinder/glusterfs_shares    (StrOpt) File with the list of available gluster shares.
nas_volume_prov_type = thin    (StrOpt) Provisioning type that will be used when creating volumes.
2. Configure the GlusterFS shares file
List the GlusterFS volume in /etc/cinder/shares.conf:
# cat /etc/cinder/shares.conf   
10.30.1.231:/gv11  
Add the GlusterFS volume entry to this file, and make sure its ownership and permissions are correct:
[root@node5 ~]# chown root:cinder /etc/cinder/shares.conf
[root@node5 ~]# chmod 640 /etc/cinder/shares.conf
[root@node5 ~]# chown -R cinder:cinder /var/lib/cinder/*  
[root@node5 ~]# ll /var/lib/cinder/volumes/  
3. Restart the cinder-volume service
 
# systemctl restart openstack-cinder-volume
Check /var/log/cinder/volume.log for errors.
After the restart, verify with mount:
 
 
10.30.1.231:/gv11 on /var/lib/cinder/volumes/16b81d8d542fdbf4d70330bb672e9714 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)  
4. Check the service status on the controller node
Add the following to /etc/cinder/cinder.conf on the controller node:
 
[glusterfs]  
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver  
  
Check the service status:
[root@node1 ~]# cinder service-list  
+------------------+----------------------+------+---------+-------+----------------------------+-----------------+  
|      Binary      |         Host         | Zone |  Status | State |         Updated_at         | Disabled Reason |  
+------------------+----------------------+------+---------+-------+----------------------------+-----------------+  
| cinder-scheduler |      controller      | nova | enabled |   up  | 2017-05-17T09:35:38.000000 |        -        |  
|  cinder-volume   | controller@glusterfs | nova | enabled |   up  | 2017-05-17T09:35:43.000000 |        -        |  
+------------------+----------------------+------+---------+-------+----------------------------+-----------------+  
5. Create a volume type on the controller
[root@node1 ~]# cinder type-create glusterfs
+--------------------------------------+-----------+-------------+-----------+
|                  ID                  |    Name   | Description | Is_Public |
+--------------------------------------+-----------+-------------+-----------+
| ffd4caf8-2b0f-48d8-aaea-488339922914 | glusterfs |      -      |    True   |
+--------------------------------------+-----------+-------------+-----------+
6. Link the volume type to volume_backend_name on the controller

[root@node1 ~]# cinder type-key glusterfs set volume_backend_name=glusterfs

# Check the type's extra specs

[root@node1 ~]# cinder extra-specs-list
+--------------------------------------+-----------+----------------------------------------+
|                  ID                  |    Name   |              extra_specs               |
+--------------------------------------+-----------+----------------------------------------+
| ffd4caf8-2b0f-48d8-aaea-488339922914 | glusterfs | {u'volume_backend_name': u'glusterfs'} |
+--------------------------------------+-----------+----------------------------------------+
7. Restart the Cinder services on the controller
 
[root@node1 ~]# systemctl restart openstack-cinder-scheduler
cinder-scheduler stop/waiting
cinder-scheduler start/running, process 27121
[root@node1 ~]# service cinder-api restart
cinder-api stop/waiting
cinder-api start/running, process 27157
8. Create a Cinder volume
 
[root@node1 ~]# cinder create --display-name "test1" --volume-type glusterfs 10        # uses the Cinder volume type created above
  
[root@node1 ~]# cinder show 59e2e560-6633-45f4-9d73-6f7ea62c06ef  
+---------------------------------------+--------------------------------------+  
|                Property               |                Value                 |  
+---------------------------------------+--------------------------------------+  
|              attachments              |                  []                  |  
|           availability_zone           |                 nova                 |  
|                bootable               |                false                 |  
|          consistencygroup_id          |                 None                 |  
|               created_at              |      2017-05-17T09:19:47.000000      |  
|              description              |                 None                 |  
|               encrypted               |                False                 |  
|                   id                  | 59e2e560-6633-45f4-9d73-6f7ea62c06ef |  
|                metadata               |                  {}                  |  
|            migration_status           |                 None                 |  
|              multiattach              |                False                 |  
|                  name                 |                test1                 |  
|         os-vol-host-attr:host         |    controller@glusterfs#GlusterFS    |  
|     os-vol-mig-status-attr:migstat    |                 None                 |  
|     os-vol-mig-status-attr:name_id    |                 None                 |  
|      os-vol-tenant-attr:tenant_id     |   27a967778eb84f5296258809de65f15e   |  
|   os-volume-replication:driver_data   |                 None                 |  
| os-volume-replication:extended_status |                 None                 |  
|           replication_status          |               disabled               |  
|                  size                 |                  10                  |  
|              snapshot_id              |                 None                 |  
|              source_volid             |                 None                 |  
|                 status                |              available               |  
|                user_id                |   73b742285a6049d5a806d34c2020a1e1   |  
|              volume_type              |               glusterfs              |  
+---------------------------------------+--------------------------------------+  
9. Check the contents stored on the cluster nodes
 
[root@glfs1 ~]# ls /storage/*  
/storage/brick6:  
volume-59e2e560-6633-45f4-9d73-6f7ea62c06ef  
  
/storage/brick7:  
 
 
 
 