Add four new CentOS 7.2 machines.
1 Minimal-install system configuration
IP assignments for the servers: 192.168.0.191, 192.168.0.192, 192.168.0.193, 192.168.0.190
1.1 Set the hostnames to gluster01, gluster02, gluster03 and gluster00 (run the matching command on each machine)
hostnamectl set-hostname gluster01
hostnamectl set-hostname gluster02
hostnamectl set-hostname gluster03
hostnamectl set-hostname gluster00
1.2 Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
1.3 Configure the yum repositories
cd /etc/yum.repos.d/
wget http://mirrors.aliyun.com/repo/Centos-7.repo
wget http://mirrors.aliyun.com/repo/epel-7.repo
yum -y install epel-release
yum install -y vim lrzsz wget
1.4 Edit /etc/hosts on every machine and add the IP-to-hostname mappings
vim /etc/hosts
192.168.0.191 gluster01
192.168.0.192 gluster02
192.168.0.193 gluster03
192.168.0.190 gluster00
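After editing /etc/hosts, a quick loop can confirm that every name resolves (a sketch; the four hostnames are the ones listed above):

```shell
# Check that each cluster hostname resolves locally (via /etc/hosts or DNS).
for h in gluster01 gluster02 gluster03 gluster00; do
  getent hosts "$h" >/dev/null && echo "$h resolves" || echo "$h MISSING from /etc/hosts"
done
```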
1.5 Format the data disk on each machine with xfs
[root@localhost ~]# fdisk -l
Disk /dev/sda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0006241e

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      616447      307200   83  Linux
/dev/sda2          616448     4810751     2097152   82  Linux swap / Solaris
/dev/sda3         4810752    62914559    29051904   83  Linux

Disk /dev/sdb: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@localhost ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x1cc01f75.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-62914559, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-62914559, default 62914559):
Using default value 62914559
Partition 1 of type Linux and of size 30 GiB is set

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
After partitioning, format the new partition with xfs:
[root@localhost ~]# mkfs -t xfs /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=1966016 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=7864064, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=3839, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Mount the data partition /dev/sdb1:
[root@localhost ~]# mkdir -p /data/k8s
[root@localhost ~]# echo '/dev/sdb1 /data/k8s xfs defaults 1 2' >> /etc/fstab
[root@localhost ~]# mount -a
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        28G  993M   27G   4% /
devtmpfs        904M     0  904M   0% /dev
tmpfs           913M     0  913M   0% /dev/shm
tmpfs           913M  8.6M  904M   1% /run
tmpfs           913M     0  913M   0% /sys/fs/cgroup
/dev/sda1       297M  108M  189M  37% /boot
tmpfs           183M     0  183M   0% /run/user/0
/dev/sdb1        30G   33M   30G   1% /data/k8s
2 Install and configure the GlusterFS servers (run on every server node)
yum install centos-release-gluster -y
yum install -y glusterfs glusterfs-server glusterfs-fuse
yum install glusterfs-rdma -y
systemctl start glusterd
systemctl enable glusterd
Check the version (glusterfs 6.1 is the latest release; the stripe volume type has been deprecated):
[root@gluster01 k8s]# gluster --version
glusterfs 6.1
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
Pick any node, e.g. gluster01, and form the GlusterFS trusted pool from it by probing every other node (run on node 1 to add the rest):
gluster peer probe gluster01   # can be skipped on the local node
gluster peer probe gluster02
gluster peer probe gluster03
gluster peer probe gluster00
Check the cluster status:
[root@gluster01 ~]# gluster peer status
Number of Peers: 3

Hostname: gluster02
Uuid: 710e5f59-9f93-451a-b092-43acd480dd4b
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: f7664912-0038-4b86-8aee-88644df1221c
State: Peer in Cluster (Connected)

Hostname: gluster00
Uuid: d90cabcd-5afb-49d2-a50b-197f10cb00bf
State: Peer in Cluster (Connected)
[root@gluster01 ~]#
### To remove a node from the pool:
gluster peer detach gluster00
3 Volume type examples
https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-dispersed-volumes
--------------------------------------------------------------------------------------------------------------------
Example 1: create a replicated volume
gluster volume create k8s-data replica 3 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s force
[root@gluster01 ~]# gluster volume create k8s-data replica 3 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s force
volume create: k8s-data: success: please start the volume to access data
[root@gluster01 ~]# gluster volume info k8s-data

Volume Name: k8s-data
Type: Replicate
Volume ID: 7c70dd22-2e6c-4f7b-b2a4-d9ce579fe506
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster01:/data/k8s
Brick2: gluster02:/data/k8s
Brick3: gluster03:/data/k8s
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@gluster01 ~]# gluster volume status
Status of volume: k8s-data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01:/data/k8s                   49152     0          Y       2300
Brick gluster02:/data/k8s                   49152     0          Y       2249
Brick gluster03:/data/k8s                   49152     0          Y       2246
Self-heal Daemon on localhost               N/A       N/A        Y       2321
Self-heal Daemon on gluster03               N/A       N/A        Y       2267
Self-heal Daemon on gluster02               N/A       N/A        Y       2270

Task Status of Volume k8s-data
------------------------------------------------------------------------------
There are no active volume tasks
Delete the replicated volume:
[root@gluster01 ~]# gluster volume stop k8s-data
[root@gluster01 ~]# gluster volume delete k8s-data
[root@gluster01 ~]# gluster volume stop k8s-data
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: k8s-data: success
[root@gluster01 ~]# gluster volume delete k8s-data
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: k8s-data: success
[root@gluster01 ~]#
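One caveat: `volume delete` does not clean the brick directories themselves. To reuse the same brick path (e.g. /data/k8s) for a new volume without `force`, the leftover `.glusterfs` directory and the `trusted.glusterfs.volume-id` xattr must be removed on every node. A sketch, demonstrated on a throwaway directory so it is safe to run anywhere:

```shell
# Demo on a temp directory; on a real node, set brick=/data/k8s instead.
brick=$(mktemp -d)
mkdir -p "$brick/.glusterfs"     # simulate the leftover internal metadata dir
rm -rf "$brick/.glusterfs"       # cleanup step 1: drop Gluster's internal metadata
# Cleanup step 2 on a real brick (needs the attr package), not run in this demo:
#   setfattr -x trusted.glusterfs.volume-id "$brick"
#   setfattr -x trusted.gfid "$brick"
[ -e "$brick/.glusterfs" ] || echo "brick path is clean"
rm -rf "$brick"
```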
--------------------------------------------------------------------------------------------------------------------
Example 2: create a distributed-replicated volume (with replica 2 the brick count must be a multiple of 2: exactly 2 bricks gives a plain replicated volume, 4 or more gives a distributed-replicated volume)
The order in which the bricks (filesystems or directories) are listed has a large impact on data protection:
each group of replica_count consecutive bricks forms one replica set, and all replica sets are combined into one volume-wide distribute set.
To make sure the members of a replica set never land on the same node, list the first brick of every server, then the second brick of every server in the same order, and so on.
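The ordering rule above can be sketched with a small loop that prints a correctly interleaved `volume create` command (hostnames reuse this cluster's, but the two bricks per server are hypothetical):

```shell
# Build the brick list so each replica-2 pair spans two different servers:
# first every server's brick1, then every server's brick2, in the same order.
servers="gluster01 gluster02 gluster03 gluster00"
bricks=""
for b in brick1 brick2; do
  for s in $servers; do
    bricks="$bricks $s:/data/$b"
  done
done
# Adjacent pairs in $bricks (01+02, 03+00, ...) never share a node.
echo "gluster volume create k8s-data replica 2$bricks force"
```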
gluster volume create k8s-data replica 2 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s gluster00:/data/k8s force
[root@gluster01 k8s]# gluster volume create k8s-data replica 2 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s gluster00:/data/k8s force
volume create: k8s-data: success: please start the volume to access data
[root@gluster01 k8s]# gluster volume info k8s-data

Volume Name: k8s-data
Type: Distributed-Replicate
Volume ID: b9e31332-56b9-4fe1-988e-2d01ce236fc8
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gluster01:/data/k8s
Brick2: gluster02:/data/k8s
Brick3: gluster03:/data/k8s
Brick4: gluster00:/data/k8s
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Distributed-replicated examples from the official docs:
For example, a four-node distributed (replicated) volume with a two-way mirror:
# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
Creation of test-volume has been successful
Please start the volume to access data.
For example, to create a six-node distributed (replicated) volume with a two-way mirror:
# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6
Creation of test-volume has been successful
Please start the volume to access data.
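As a rule of thumb from these examples, the brick count must be a multiple of the replica count, and the quotient is the number of distribute subvolumes. A sketch of the arithmetic (server names follow the docs example above):

```shell
# replica-N needs a multiple of N bricks; the quotient is the number of
# distribute subvolumes (1 => plain Replicate, >1 => Distributed-Replicate).
replica=2
bricks="server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4"
n=$(echo $bricks | wc -w)
if [ $((n % replica)) -eq 0 ]; then
  echo "$n bricks = $((n / replica)) replica set(s) of $replica"
else
  echo "invalid: $n bricks is not a multiple of replica $replica"
fi
```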
-----------------------------------------------------------------------------------------------------------------
Example 3: create a dispersed volume
Command:
gluster volume create test-volume disperse 4 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s gluster00:/data/k8s force
[root@gluster01 k8s]# gluster volume create test-volume disperse 4 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s gluster00:/data/k8s force
There isn't an optimal redundancy value for this configuration. Do you want to create the volume with redundancy 1 ? (y/n) y
volume create: test-volume: success: please start the volume to access data
[root@gluster01 k8s]# gluster volume info test-volume

Volume Name: test-volume
Type: Disperse
Volume ID: 57de6cdf-36d4-4ba1-b30f-931a70999024
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x (3 + 1) = 4
Transport-type: tcp
Bricks:
Brick1: gluster01:/data/k8s
Brick2: gluster02:/data/k8s
Brick3: gluster03:/data/k8s
Brick4: gluster00:/data/k8s
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
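The "redundancy 1" prompt above means one of the four bricks' worth of space holds erasure-coded redundancy, so usable capacity and fault tolerance follow directly. A sketch of the math, assuming the 30 GiB bricks from section 1.5:

```shell
# Dispersed-volume capacity: usable = (bricks - redundancy) * brick size,
# and the volume survives up to <redundancy> simultaneous brick failures.
bricks=4; redundancy=1; brick_gib=30
data_bricks=$((bricks - redundancy))
usable_gib=$((data_bricks * brick_gib))
echo "disperse $bricks / redundancy $redundancy -> ${usable_gib} GiB usable, tolerates $redundancy brick failure(s)"
```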
---------------------------------------------------------------------------------------------
Example 4: distributed-dispersed volume. disperse must be greater than 2, and the brick count must be a multiple of it (with disperse 3, the command below needs 6 bricks).
Because we are short of machines and disks, the /k8s directory on the root filesystem serves as two extra bricks here; in production, use new machines or new data disks instead (members of one redundancy set must not share a disk).
gluster volume create test-volume disperse 3 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s gluster00:/data/k8s gluster01:/k8s gluster02:/k8s force
[root@gluster01 k8s]# gluster volume create test-volume disperse 3 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s gluster00:/data/k8s gluster01:/k8s gluster02:/k8s force
volume create: test-volume: success: please start the volume to access data
[root@gluster01 k8s]# gluster volume info test-volume

Volume Name: test-volume
Type: Distributed-Disperse
Volume ID: c6bda235-8c92-43be-b524-ff6567988008
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: gluster01:/data/k8s
Brick2: gluster02:/data/k8s
Brick3: gluster03:/data/k8s
Brick4: gluster00:/data/k8s
Brick5: gluster01:/k8s
Brick6: gluster02:/k8s
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@gluster01 k8s]# gluster volume start test-volume
volume start: test-volume: success
4 Install the client
On another machine, 192.168.0.195:
yum install centos-release-gluster -y   ## required so the client packages match the server version; a version mismatch can cause the mount to fail
yum install -y glusterfs glusterfs-fuse glusterfs-rdma
Add hosts entries: vim /etc/hosts
192.168.0.191 gluster01
192.168.0.192 gluster02
192.168.0.190 gluster00
192.168.0.193 gluster03
Create the mount point:
mkdir /data
Mount the volume:
mount -t glusterfs gluster01:/test-volume /data
[root@localhost ~]# mount -t glusterfs gluster01:test-volume /data
[root@localhost ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/sda3               28G  1.1G   27G   4% /
devtmpfs               904M     0  904M   0% /dev
tmpfs                  913M     0  913M   0% /dev/shm
tmpfs                  913M  8.6M  904M   1% /run
tmpfs                  913M     0  913M   0% /sys/fs/cgroup
/dev/sda1              297M  108M  189M  37% /boot
tmpfs                  183M     0  183M   0% /run/user/0
gluster01:test-volume  116G  4.1G  112G   4% /data
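Mounting only from gluster01 makes that node a single point of failure for fetching the volfile; the fuse client's `backup-volfile-servers` mount option names fallbacks. A sketch (option per the GlusterFS docs; not run here since it needs the live cluster):

```shell
# Fall back to gluster02/gluster03 for the volfile if gluster01 is down:
mount -t glusterfs -o backup-volfile-servers=gluster02:gluster03 gluster01:/test-volume /data

# Equivalent /etc/fstab entry for mounting at boot:
# gluster01:/test-volume  /data  glusterfs  defaults,_netdev,backup-volfile-servers=gluster02:gluster03  0 0
```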
Test
Copy /var/log/messages to /data.
Then check on the server machines:
[root@gluster01 k8s]# ll /data/k8s/
total 156
-rw------- 2 root root 153600 May 29 21:27 messages
[root@gluster02 k8s]# ll /data/k8s/
total 156
-rw------- 2 root root 153600 May 29 21:27 messages
[root@gluster03 k8s]# ll /data/k8s/
total 156
-rw------- 2 root root 153600 May 29 21:27 messages
The file has been successfully replicated to all three machines.
Note: in every distributed-replicated and distributed-dispersed volume, adjacent bricks (in the order given on the command line) form one replica/redundancy set.
For example, with disperse 3 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s gluster00:/data/k8s gluster05:/data/k8s gluster06:/data/k8s
the first three bricks form one set and the last three form another: data written to the gluster01/02/03 set is spread across those three machines, and data written to the gluster00/05/06 set across those three.