Object storage: using GlusterFS

Provision three new CentOS 7 machines with a minimal install.


IP assignment: servers 192.168.1.7 and 192.168.1.8, client 192.168.1.9

The hostnames are gluster01, gluster02 and client01; set each one on its own machine:

hostnamectl set-hostname gluster01

hostnamectl set-hostname gluster02

hostnamectl set-hostname client01


Configure networking on all machines so that they can reach the Internet (omitted).


Turn off the firewall and SELinux

systemctl stop firewalld

systemctl disable firewalld

setenforce 0

sed -i 's/enforcing/disabled/g' /etc/selinux/config
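
To verify that SELinux is now permissive and that firewalld will not start on boot, a quick check:

getenforce
systemctl is-enabled firewalld

getenforce should report Permissive for the running system (Disabled after a reboot), and firewalld should show as disabled.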


Edit the hosts file on every machine and add the IP-to-hostname mappings

vi /etc/hosts

192.168.1.7  gluster01

192.168.1.8  gluster02

192.168.1.9  client01
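
After saving, a quick name-resolution check from any node, for example:

ping -c 2 gluster01
ping -c 2 gluster02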


Configure the yum repositories

cd /etc/yum.repos.d/

wget http://mirrors.aliyun.com/repo/Centos-7.repo

wget http://mirrors.aliyun.com/repo/epel-7.repo

yum -y install epel-release


Install the server side (on gluster01 and gluster02)

yum install centos-release-gluster -y

yum install -y glusterfs glusterfs-server glusterfs-fuse


systemctl start glusterd

systemctl enable glusterd
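
To confirm the daemon is running and check the installed version:

systemctl status glusterd
gluster --version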


On gluster01, build the GlusterFS trusted storage pool by probing each node into the cluster (probing the local node itself is optional; it only reports that a probe on localhost is not needed)

gluster peer probe gluster01

gluster peer probe gluster02

Check the cluster status

gluster peer status
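
Alternatively, gluster pool list prints every node in the trusted pool, including the local one, in a compact table:

gluster pool list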


Create the data (brick) directory on both server nodes

mkdir -p  /usr/local/share/models


Create the GlusterFS volume on gluster01

With replica 2, each of the two nodes keeps a full copy of the data, i.e. every file is stored twice, one copy per node.

Without replica 2, the volume is distributed: the two nodes' disk space is pooled into one larger volume and each file lives on only one node.

gluster volume create models replica 2 gluster01:/usr/local/share/models gluster02:/usr/local/share/models force
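
For contrast, a purely distributed (non-replicated) volume would simply omit the replica option; the volume name and brick paths below are illustrative only:

gluster volume create models-dist gluster01:/usr/local/share/models-dist gluster02:/usr/local/share/models-dist force    # hypothetical example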


Start the volume

gluster volume start models
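
To confirm the volume is started and both bricks are online:

gluster volume info models
gluster volume status models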


Install the client (on client01)

yum install -y glusterfs glusterfs-fuse

mkdir -p /mnt/models

Mount the volume

mount -t glusterfs -o rw gluster01:models /mnt/models/
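
To make the mount persistent across reboots, an /etc/fstab entry along these lines can be used; the backupvolfile-server option lets the client fall back to gluster02 if gluster01 is unreachable at mount time (newer GlusterFS releases spell the option backup-volfile-servers):

gluster01:/models  /mnt/models  glusterfs  defaults,_netdev,backupvolfile-server=gluster02  0 0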


Check with df

Filesystem       1K-blocks    Used Available Use% Mounted on

/dev/sda2        18244432 1012448 16282176    6% /

devtmpfs           491416       0   491416    0% /dev

tmpfs              500680       0   500680    0% /dev/shm

tmpfs              500680    6792   493888    2% /run

tmpfs              500680       0   500680    0% /sys/fs/cgroup

/dev/sda1          194235   95079    84820   53% /boot

tmpfs              100136       0   100136    0% /run/user/0

gluster01:models 18244352 1012480 16282112    6% /mnt/models
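
A simple way to confirm that replication works is to write a file from the client and check that it appears in the brick directory on both servers:

echo hello > /mnt/models/test.txt        # on client01
ls /usr/local/share/models/              # on gluster01 and gluster02; test.txt should show up on both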


Other administrative commands

Delete a GlusterFS volume

# gluster volume stop models      (stop the volume first)

# gluster volume delete models    (then delete it)
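
Deleting a volume does not wipe the brick directories. To reuse a brick for a new volume, its GlusterFS metadata usually has to be cleared first; a common cleanup sketch (run on each server, adjust the path to your brick):

# setfattr -x trusted.glusterfs.volume-id /usr/local/share/models
# setfattr -x trusted.gfid /usr/local/share/models
# rm -rf /usr/local/share/models/.glusterfs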


Detach a GlusterFS node (remove it from the trusted storage pool)

gluster peer detach gluster02


ACL access control (restrict which client IPs may mount the volume)

gluster volume set models auth.allow 192.168.1.*,192.168.2.*
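
To check the current value, or to go back to the default (allow all clients), something like:

# gluster volume info models | grep auth.allow
# gluster volume reset models auth.allow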


Add GlusterFS nodes and expand the volume

# gluster peer probe gluster03

# gluster peer probe gluster04

# gluster volume add-brick models gluster03:/data/gluster gluster04:/data/gluster
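
After adding bricks, existing data is not spread onto them automatically; a rebalance redistributes the layout and data across all bricks:

# gluster volume rebalance models start
# gluster volume rebalance models status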


Migrate GlusterFS data off bricks (remove-brick; run commit only after the status step shows the data migration has completed)

# gluster volume remove-brick models gluster01:/usr/local/share/models gluster03:/usr/local/share/models start

# gluster volume remove-brick models gluster01:/usr/local/share/models gluster03:/usr/local/share/models status

# gluster volume remove-brick models gluster01:/usr/local/share/models gluster03:/usr/local/share/models commit


Repair GlusterFS data (when node 1 is down)

# gluster volume replace-brick models gluster01:/usr/local/share/models gluster03:/usr/local/share/models commit force

# gluster volume heal models full
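
Heal progress can then be followed per brick with:

# gluster volume heal models info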

This article is reposted from super李导's 51CTO blog. Original link: http://blog.51cto.com/superleedo/2056732. Please contact the original author before reprinting.

