Environment:
Server:  10.10.0.200  kvm200
Client1: 10.10.0.201  kvm201
Client2: 10.10.0.202  kvm202
1. Deploy the server first
Hostname, network configuration, and SELinux:
[root@kvm200 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=kvm200.test.com
[root@kvm200 ~]# vi /etc/idmapd.conf
[General]
#Verbosity = 0
# The following should be set to the local NFSv4 domain name
# The default is the host's DNS domain name.
Domain = test.com
[root@kvm200 ~]# hostname kvm200.test.com
kvm200.test.com
[root@kvm200 ~]# hostname --fqdn
kvm200.test.com
If hostname --fqdn returns the fully qualified domain name, the hostname is configured correctly.
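The test.com names also have to resolve between the hosts. Since this is a local setup and the tutorial mentions no DNS server, one simple option (an addition for illustration, not a step from the original) is to copy the environment table above into /etc/hosts on all three machines:

[root@kvm200 ~]# vi /etc/hosts
10.10.0.200  kvm200.test.com  kvm200
10.10.0.201  kvm201.test.com  kvm201
10.10.0.202  kvm202.test.com  kvm202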
2. Disable SELinux
[root@kvm200 network-scripts]# vi /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - SELinux is fully disabled.
SELINUX=disabled        # either permissive or disabled works here
# SELINUXTYPE= type of policy in use. Possible values are:
#     targeted - Only targeted network daemons are protected.
#     strict - Full SELinux protection.
SELINUXTYPE=targeted
[root@kvm200 network-scripts]# setenforce 0    # takes effect immediately
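A quick way to confirm the runtime change (a check added here, not in the original steps) is getenforce, which should now report Permissive:

[root@kvm200 network-scripts]# getenforce
Permissive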
Repeat the same hostname and SELinux configuration on Client1 and Client2.
3. NTP server
If your server has Internet access and can sync time from public sources, you can skip this step. In a purely local environment, however, you need one NTP server to keep the hosts' clocks in sync. Here kvm200 is set up as the NTP server.
[root@kvm200 network-scripts]# vi /etc/ntp.conf
# Permit all access over the loopback interface. This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict -6 ::1
restrict 0.0.0.0 mask 0.0.0.0 nomodify notrap noquery
restrict 192.168.166.0 mask 255.255.255.0 nomodify
# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
server 210.72.145.44 prefer
server 127.127.1.0
fudge 127.127.1.0 stratum 8
The newly added lines (marked in red in the original screenshot) are the two extra restrict entries, the preferred 210.72.145.44 server, and the local-clock server/fudge pair at the end. Save the file, then restart the ntp service:
[root@kvm200 ~]# service ntpd restart
Shutting down ntpd:                                        [ OK ]
Starting ntpd:                                             [ OK ]
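Before pointing the clients at kvm200, it is worth checking that ntpd has reached its upstream sources (a verification step added here, not in the original):

[root@kvm200 ~]# ntpq -p    # a '*' in the first column marks the peer ntpd has selected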
On the clients, edit the configuration file and sync the time manually:
[root@kvm201 ~]# vi /etc/ntp.conf
......
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 192.168.30.200 iburst
192.168.30.200 is the NTP server's IP address; the client must be able to reach it. If it cannot, the cause is the server's iptables rules; we will open the required ports later.
[root@kvm201 ~]# service ntpd restart
Shutting down ntpd:                                        [ OK ]
Starting ntpd:                                             [ OK ]
[root@kvm201 ~]# ntpdate -u 192.168.30.200    # sync the time manually; you can also wait for ntpd to sync on its own, but that may take a while (look up how the NTP sync mechanism works for details)
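After a manual sync, it can also help to write the corrected time to the hardware clock so it survives a reboot (an optional step, not in the original):

[root@kvm201 ~]# hwclock -w    # save the synced system time to the hardware clock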
With the NTP server in place, the basic environment configuration is complete. Next comes the storage server installation.
4. iptables firewall settings, mount configuration, and mount directory
4.1 Modify the firewall settings on every host:
[root@kvm200 ~]# iptables -I INPUT 1 -s 10.10.0.0/16 -j ACCEPT
[root@kvm200 ~]# service iptables save
[root@kvm200 ~]# service iptables restart
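The rule above simply trusts the whole 10.10.0.0/16 subnet. If you prefer to open only what this setup needs, a tighter alternative (a sketch based on GlusterFS 3.x defaults: TCP 24007 for the glusterd management daemon, one TCP port from 49152 upward per brick, and UDP 123 for NTP) might look like:

[root@kvm200 ~]# iptables -I INPUT 1 -p tcp --dport 24007 -j ACCEPT          # glusterd management
[root@kvm200 ~]# iptables -I INPUT 2 -p tcp --dport 49152:49160 -j ACCEPT    # brick ports, one per brick
[root@kvm200 ~]# iptables -I INPUT 3 -p udp --dport 123 -j ACCEPT            # NTP
[root@kvm200 ~]# service iptables save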
4.2 Edit the mount configuration file /etc/fstab and append the following entry at the end (this was planned when the cluster was designed; the directory will later be used to mount the storage volume):
[root@kvm200 ~]# vi /etc/fstab
10.12.0.200:/Mian  /primary  glusterfs  defaults,_netdev  0 0
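The fstab entry assumes the /primary mount point already exists. Creating it (implied by the section title, though the command is not shown in the original) is a one-liner on each host that will mount the volume:

[root@kvm200 ~]# mkdir -p /primary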
4.3 Storage server
Before setting up the Gluster server, configure the Gluster repository, then download and install the packages:
[root@kvm200 ~]# cd /etc/yum.repos.d/
[root@kvm200 yum.repos.d]# wget http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/CentOS/glusterfs-epel.repo
[root@kvm200 yum.repos.d]# cd ~
[root@kvm200 ~]# wget http://download.fedora.redhat.com/pub/epel/6/x86_64/epel-release-6-5.noarch.rpm
[root@kvm200 ~]# rpm -ivh epel-release-6-5.noarch.rpm
[root@kvm200 ~]# yum -y install glusterfs-server
[root@kvm200 ~]# /etc/init.d/glusterd restart
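To make sure the management daemon comes back after a reboot, you can also enable it at boot and confirm the installed version (optional checks, not in the original):

[root@kvm200 ~]# chkconfig glusterd on
[root@kvm200 ~]# glusterfs --version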
After the server installation, set up the other two clients. (Note: both clients must complete the steps below before you can continue; the key packages are glusterfs, glusterfs-server, and glusterfs-fuse.)
[root@kvm201 ~]# cd /etc/yum.repos.d/
[root@kvm201 yum.repos.d]# wget http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/CentOS/glusterfs-epel.repo
[root@kvm201 yum.repos.d]# cd ~
[root@kvm201 ~]# wget ftp://ftp.pbone.net/mirror/dl.iuscommunity.org/pub/ius/stable/CentOS/6/x86_64/epel-release-6-5.noarch.rpm
[root@kvm201 ~]# rpm -ivh epel-release-6-5.noarch.rpm
[root@kvm201 ~]# yum install glusterfs glusterfs-fuse glusterfs-server
[root@kvm201 ~]# /etc/init.d/glusterd restart
With those steps done on the clients, return to kvm200, add them (kvm201 and kvm202) as gluster peers, and create the storage volume:
[root@kvm200 ~]# gluster peer probe 10.12.0.201
[root@kvm200 ~]# gluster peer probe 10.12.0.202
[root@kvm200 ~]# gluster volume create Mian stripe 2 10.12.0.201:/storage 10.12.0.202:/storage force
[root@kvm200 ~]# gluster volume start Mian
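A stripe volume spreads each file across both bricks, which gives more capacity but no redundancy: losing either brick loses data. If redundancy matters more, the same command with replica instead of stripe mirrors the bricks (an alternative sketch, not what the original deploys):

[root@kvm200 ~]# gluster volume create Mian replica 2 10.12.0.201:/storage 10.12.0.202:/storage force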
(If adding a peer fails, the firewall is a likely cause; check the iptables settings on each host.)
Check the status:
[root@kvm200 ~]# gluster volume status
Status of volume: Mian
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.12.0.200:/storage                  49152     0          Y       3512
Brick 10.12.0.201:/storage                  49152     0          Y       3417

Task Status of Volume Mian
------------------------------------------------------------------------------
There are no active volume tasks

[root@kvm200 ~]# gluster peer status
Number of Peers: 4

Hostname: 10.12.0.204
Uuid: 23194295-4168-46d5-a0b3-8f06766c58b4
State: Peer in Cluster (Connected)

Hostname: 10.12.0.202
Uuid: de10fd85-7b85-4f28-970b-339977a0bcf6
State: Peer in Cluster (Connected)

Hostname: 10.12.0.201
Uuid: 0cd18fe2-62dd-457a-9365-a7c2c1c5c4b2
State: Peer in Cluster (Connected)

Hostname: 10.12.0.203
Uuid: d160b7c3-89de-4169-b04d-bb18712d75c5
State: Peer in Cluster (Connected)
At this point GlusterFS is fully deployed; use mount on whichever machines need access to the volume.
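A minimal example of that final mount, using the /primary mount point prepared for the fstab entry above (with the entry in /etc/fstab, mount -a works equally well; df just confirms the volume is visible):

[root@kvm200 ~]# mount -t glusterfs 10.12.0.200:/Mian /primary
[root@kvm200 ~]# df -h /primary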
This article is reproduced from yangxuncai110's 51CTO blog. Original link: http://blog.51cto.com/zlyang/1690055. Please contact the original author for reprint permission.