GlusterFS comes up a lot in day-to-day production; it is simple to configure and stable.
Next we will deploy GlusterFS and walk through its volume modes.
1. Configure host name resolution:
[root@sxb-1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.88.101 sxb-1
192.168.88.102 sxb-2
192.168.88.103 sxb-3
192.168.88.104 sxb-4
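Before moving on, it is worth confirming that every node can resolve the others by name. A quick sanity check, using the hostnames defined above, could look like this:

# run on each node to confirm name resolution works
for h in sxb-1 sxb-2 sxb-3 sxb-4; do ping -c 1 $h; done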
2. Configure the yum repository and copy it to the other hosts:
[root@sxb-1 ~]# vim /etc/yum.repos.d/glusterfs-epel.repo
[glusterfs]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-6/
enabled=1
gpgcheck=0
[root@sxb-1 ~]# scp /etc/yum.repos.d/glusterfs-epel.repo 192.168.88.102:/etc/yum.repos.d/
glusterfs-epel.repo                               100%  188   145.7KB/s   00:00
[root@sxb-1 ~]# scp /etc/yum.repos.d/glusterfs-epel.repo 192.168.88.103:/etc/yum.repos.d/
glusterfs-epel.repo                               100%  188   177.4KB/s   00:00
[root@sxb-1 ~]# scp /etc/yum.repos.d/glusterfs-epel.repo 192.168.88.104:/etc/yum.repos.d/
glusterfs-epel.repo                               100%  188     1.8KB/s   00:00
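If you prefer, the copy to the remaining hosts can be done in a single loop instead of three separate scp commands, for example:

# copy the repo file to the other three nodes in one loop
for ip in 192.168.88.102 192.168.88.103 192.168.88.104; do
    scp /etc/yum.repos.d/glusterfs-epel.repo $ip:/etc/yum.repos.d/
done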
3. Install glusterfs-server (on all four hosts) and start the service:
[root@sxb-1 ~]# yum install yum-plugin-priorities -y
[root@sxb-1 ~]# yum install epel-release -y
[root@sxb-1 ~]# yum install glusterfs-server -y
[root@sxb-1 ~]# systemctl start glusterd
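It is also worth enabling the service at boot and verifying that it is running on every node; something along these lines:

# on every node: start glusterd at boot and check its state
systemctl enable glusterd
systemctl status glusterd
gluster --version        # confirm the installed version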
4. Add the member hosts to the GlusterFS cluster:
[root@sxb-1 ~]# gluster peer probe sxb-1
peer probe: success. Probe on localhost not needed
[root@sxb-1 ~]# gluster peer probe sxb-2
peer probe: success.
[root@sxb-1 ~]# gluster peer probe sxb-3
peer probe: success.
[root@sxb-1 ~]# gluster peer probe sxb-4
peer probe: success.
5. Check the current cluster status:
[root@sxb-1 ~]# gluster peer status
Number of Peers: 3

Hostname: sxb-2
Uuid: a1e1d7ba-2e20-4192-b7d0-ab91d135a38d
State: Peer in Cluster (Connected)

Hostname: sxb-3
Uuid: 59a7034c-1d86-48d6-b6f0-6a063c6d9a9b
State: Peer in Cluster (Connected)

Hostname: sxb-4
Uuid: 75f4e4e0-0e6e-4f34-90c7-5a2918dbb972
State: Peer in Cluster (Connected)
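A more compact view that also includes the local node is available as well:

# one-line-per-node view of all pool members, including localhost
gluster pool list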
6. Create a data directory on each of the four hosts; with that, the cluster itself is deployed:
[root@sxb-1 ~]# mkdir -p /opt/glusterfs/data1
[root@sxb-1 ~]# gluster volume info
[root@sxb-1 ~]# mkdir -p /opt/glusterfs/data1
[root@sxb-2 ~]# mkdir -p /opt/glusterfs/data2
[root@sxb-3 ~]# mkdir -p /opt/glusterfs/data3
[root@sxb-4 ~]# mkdir -p /opt/glusterfs/data4
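Note that these brick directories sit on the root filesystem, which Gluster will warn about below. In production you would normally give the brick its own disk or partition; a rough sketch, assuming an empty /dev/sdb on each node, might be:

# example only: back the brick with a dedicated disk instead of the root partition
mkfs.xfs /dev/sdb
mkdir -p /opt/glusterfs/data1
echo '/dev/sdb /opt/glusterfs/data1 xfs defaults 0 0' >> /etc/fstab
mount -a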
Now let's go over the volume modes GlusterFS supports.
Distributed volume
This is the default mode, i.e. DHT (distributed hash), also called a distributed volume: each file is placed on a single server node chosen by a hash algorithm.
Two servers are used here:
[root@sxb-1 ~]# gluster volume create vloume1 sxb-1:/opt/glusterfs/data1/ sxb-2:/opt/glusterfs/data2
volume create: vloume1: failed: The brick sxb-1:/opt/glusterfs/data1 is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
By default the cluster refuses to create bricks on the root partition; here we can simply append force to the command:
[root@sxb-1 ~]# gluster volume create vloume1 sxb-1:/opt/glusterfs/data1/ sxb-2:/opt/glusterfs/data2 force
volume create: vloume1: success: please start the volume to access data
We can now see the distributed volume vloume1 we just created:
[root@sxb-1 ~]# gluster volume list
vloume1
Check its details:
[root@sxb-1 ~]# gluster volume info

Volume Name: vloume1
Type: Distribute
Volume ID: e01c0e8a-d758-41d8-ad96-72d90af77d98
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: sxb-1:/opt/glusterfs/data1
Brick2: sxb-2:/opt/glusterfs/data2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Start the volume:
[root@sxb-1 ~]# gluster volume start vloume1
volume start: vloume1: success
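After starting it, the volume status command can be used to confirm that both bricks are online:

# both bricks should report Online: Y
gluster volume status vloume1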
Next, let's test it. We need to install the client-side packages first:
[root@localhost ~]# yum install glusterfs glusterfs-fuse -y
Mount the volume we just created (this can be done on any of the four hosts):
mount -t glusterfs sxb-3:volume1 /mnt
[root@sxb-3 mnt]# df -Th
Filesystem              Type            Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs              17G  2.1G   15G  12% /
devtmpfs                devtmpfs        475M     0  475M   0% /dev
tmpfs                   tmpfs           487M     0  487M   0% /dev/shm
tmpfs                   tmpfs           487M  7.7M  479M   2% /run
tmpfs                   tmpfs           487M     0  487M   0% /sys/fs/cgroup
/dev/sda1               xfs            1014M  163M  852M  17% /boot
tmpfs                   tmpfs            98M     0   98M   0% /run/user/0
sxb-3:volume1           fuse.glusterfs   34G   11G   24G  32% /mnt
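To make the mount survive a reboot, an /etc/fstab entry along these lines can be added (adjust the server and volume names to your environment; _netdev makes the mount wait for the network):

# example fstab entry for a persistent GlusterFS mount
sxb-3:/volume1  /mnt  glusterfs  defaults,_netdev  0 0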
Create some files to test; we can see that the 100 files are distributed across the two servers:
[root@sxb-3 mnt]# touch test{1..100}
[root@sxb-1 data1]# ls
test1   test22  test37  test47  test53  test61  test71  test81  test88  test96
test13  test26  test4   test48  test54  test62  test72  test83  test9   test97
test16  test31  test40  test5   test56  test63  test78  test84  test90  test98
test19  test32  test43  test50  test59  test65  test79  test85  test92
test2   test36  test45  test52  test60  test69  test8   test87  test95
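If you want to confirm which brick a particular file was hashed to, the FUSE mount exposes a virtual extended attribute for this; for example (run on the client mount, requires the attr package):

# show which server/brick actually stores test1
getfattr -n trusted.glusterfs.pathinfo /mnt/test1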