I. Setting Up the PXC Cluster
1. Environment: CentOS 7 + PXC 5.7.21 + Mycat 1.6.5
2. Remove MariaDB
rpm -qa | grep mariadb
yum -y remove mariadb*
3. Disable the firewall, or open the required ports
Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
Or keep the firewall enabled and open the required ports (3306, 4444, 4567, 4568):
firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --zone=public --add-port=4444/tcp --permanent
firewall-cmd --zone=public --add-port=4567/tcp --permanent
firewall-cmd --zone=public --add-port=4568/tcp --permanent
firewall-cmd --reload
List the open ports:
firewall-cmd --list-ports
Disable SELinux:
vi /etc/selinux/config
Set the value to disabled, then reboot:
reboot
4. Install the PXC cluster
yum localinstall *.rpm
If you need the complete set of installation packages, leave a comment below.
5. Set the initial MySQL password and create an account
1) Make sure the MySQL service on the current node is running, then find the initial password with: cat /var/log/mysqld.log | grep "password"
2) If mysqld.log is empty, restart the service with service mysqld restart and check again.
3) service mysqld status shows the current node's service status.
4) Once you have the initial password, run mysql_secure_installation to change it.
5) Log in with mysql -u root -p, then create the account and grant privileges:
create user 'admin'@'%' identified by '123456';
grant all privileges on *.* to 'admin'@'%';
flush privileges;
6. Disable MySQL autostart on boot (otherwise the node automatically starts syncing cluster data on startup, and the whole cluster cannot serve requests while the sync is in progress)
chkconfig mysqld off
7. Stop the MySQL service on each node, then configure my.cnf on every cluster node
A sample configuration:
[client]
socket=/var/lib/mysql/mysql.sock

[mysqld]
server-id=1
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log-bin
log_slave_updates
expire_logs_days=7
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
character_set_server=utf8
bind-address=0.0.0.0
skip-name-resolve
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name=pxc-weitian
wsrep_cluster_address=gcomm://161.8.0.120,161.8.0.121,161.8.0.122
wsrep_node_name=dn1
wsrep_node_address=161.8.0.120
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=admin:123456
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
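Only a few values differ from node to node. A minimal sketch of what to change on the second node (the name dn2 and the address 161.8.0.121 follow the naming and addressing used above and are assumptions; adjust them to your own hosts):
server-id=2
wsrep_node_name=dn2
wsrep_node_address=161.8.0.121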
8. Start the PXC cluster
On the first (bootstrap) node:
systemctl start mysql@bootstrap.service
On the other nodes:
service mysql start
Use the matching command when shutting down.
Bootstrap node:
systemctl stop mysql@bootstrap.service
Other nodes:
service mysql stop
Also pay attention to /var/lib/mysql/grastate.dat: it determines which node should be bootstrapped first the next time the cluster is started.
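A minimal verification sketch (the admin account comes from the earlier step; safe_to_bootstrap is a field in the grastate.dat file written by PXC 5.7):
mysql -u admin -p -e "show status like 'wsrep_cluster_size';"    # should report 3 on a three-node cluster
mysql -u admin -p -e "show status like 'wsrep_cluster_status';"  # should report Primary
cat /var/lib/mysql/grastate.dat   # before a full cluster restart, bootstrap the node that shows safe_to_bootstrap: 1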
II. Setting Up the Mycat Cluster
1. Install JDK 1.8 and configure the environment variables
2. Download Mycat 1.6.5 and unpack it with tar
3. Open the ports and disable SELinux
8066: data service port; 9066: Mycat management port
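A minimal sketch of opening the two ports, assuming firewalld stays enabled as in the PXC section:
firewall-cmd --zone=public --add-port=8066/tcp --permanent
firewall-cmd --zone=public --add-port=9066/tcp --permanent
firewall-cmd --reload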
4. Edit the Mycat configuration
server.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://io.mycat/">
    <!-- Mycat account and the schemas it may use -->
    <user name="admin" defaultAccount="true">
        <property name="password">Abc_123456</property>
        <property name="schemas">test</property>
    </user>
    <!-- optional read-only account -->
    <user name="user">
        <property name="password">user</property>
        <property name="schemas">test</property>
        <property name="readOnly">true</property>
    </user>
</mycat:server>
schema.xml:
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <!-- logical schema and table definitions -->
    <schema name="test" checkSQLschema="false" sqlMaxLimit="100">
        <!-- auto sharding by id (long) -->
        <table name="t_user" dataNode="dn1,dn2" rule="mod-long" />
    </schema>
    <!-- shard definitions -->
    <dataNode name="dn1" dataHost="cluster1" database="test" />
    <dataNode name="dn2" dataHost="cluster2" database="test" />
    <!-- balance:
         0: read/write splitting disabled
         1: only one write node is kept; all other nodes serve reads
         2: all write nodes additionally handle read requests
         3: write nodes only handle writes, read nodes only handle reads
         writeType:
         0: all writes go to the first write node; the other write nodes are used only after it goes down
         1: writes are shared across all write nodes
         switchType: how node failover is decided
         1: use Mycat's own heartbeat check
         2: use the cluster's status information -->
    <!-- connection definitions -->
    <dataHost name="cluster1" maxCon="1000" minCon="10" balance="2"
              writeType="1" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- can have multi write hosts -->
        <writeHost host="w1" url="161.8.0.120:3306" user="admin"
                   password="Abc_123456">
            <!-- can have multi read hosts -->
            <readHost host="w1r1" url="161.8.0.121:3306" user="admin" password="Abc_123456" />
            <readHost host="w1r2" url="161.8.0.122:3306" user="admin" password="Abc_123456" />
        </writeHost>
        <writeHost host="w2" url="161.8.0.121:3306" user="admin"
                   password="Abc_123456">
            <!-- can have multi read hosts -->
            <readHost host="w2r1" url="161.8.0.120:3306" user="admin" password="Abc_123456" />
            <readHost host="w2r2" url="161.8.0.122:3306" user="admin" password="Abc_123456" />
        </writeHost>
    </dataHost>
    <dataHost name="cluster2" maxCon="1000" minCon="10" balance="2"
              writeType="1" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- can have multi write hosts -->
        <writeHost host="w1" url="161.9.0.200:3306" user="admin"
                   password="Abc_123456">
            <!-- can have multi read hosts -->
            <readHost host="w1r1" url="161.9.0.201:3306" user="admin" password="Abc_123456" />
            <readHost host="w1r2" url="161.9.0.202:3306" user="admin" password="Abc_123456" />
        </writeHost>
        <writeHost host="w2" url="161.9.0.201:3306" user="admin"
                   password="Abc_123456">
            <!-- can have multi read hosts -->
            <readHost host="w2r1" url="161.9.0.200:3306" user="admin" password="Abc_123456" />
            <readHost host="w2r2" url="161.9.0.202:3306" user="admin" password="Abc_123456" />
        </writeHost>
    </dataHost>
</mycat:schema>
rule.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!-- Licensed under the Apache License, Version 2.0 (the "License");
     you may not use this file except in compliance with the License.
     You may obtain a copy of the License at

         http://www.apache.org/licenses/LICENSE-2.0

     Unless required by applicable law or agreed to in writing, software
     distributed under the License is distributed on an "AS IS" BASIS,
     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     See the License for the specific language governing permissions and
     limitations under the License. -->
<!DOCTYPE mycat:rule SYSTEM "rule.dtd">
<mycat:rule xmlns:mycat="http://io.mycat/">
<tableRule name="rule1">
<rule>
<columns>id</columns>
<algorithm>func1</algorithm>
</rule>
</tableRule>
<tableRule name="rule2">
<rule>
<columns>user_id</columns>
<algorithm>func1</algorithm>
</rule>
</tableRule>
<tableRule name="sharding-by-intfile">
<rule>
<columns>sharding_id</columns>
<algorithm>hash-int</algorithm>
</rule>
</tableRule>
<tableRule name="sharding-customer">
<rule>
<columns>sharding_id</columns>
<algorithm>hash-int-customer</algorithm>
</rule>
</tableRule>
<tableRule name="auto-sharding-long">
<rule>
<columns>id</columns>
<algorithm>rang-long</algorithm>
</rule>
</tableRule>
<tableRule name="mod-long">
<rule>
<columns>id</columns>
<algorithm>mod-long</algorithm>
</rule>
</tableRule>
<tableRule name="sharding-by-murmur">
<rule>
<columns>id</columns>
<algorithm>murmur</algorithm>
</rule>
</tableRule>
<tableRule name="crc32slot">
<rule>
<columns>id</columns>
<algorithm>crc32slot</algorithm>
</rule>
</tableRule>
<tableRule name="sharding-by-month">
<rule>
<columns>create_time</columns>
<algorithm>partbymonth</algorithm>
</rule>
</tableRule>
<tableRule name="latest-month-calldate">
<rule>
<columns>calldate</columns>
<algorithm>latestMonth</algorithm>
</rule>
</tableRule>
<tableRule name="auto-sharding-rang-mod">
<rule>
<columns>id</columns>
<algorithm>rang-mod</algorithm>
</rule>
</tableRule>
<tableRule name="jch">
<rule>
<columns>id</columns>
<algorithm>jump-consistent-hash</algorithm>
</rule>
</tableRule>
<function name="murmur"
class="io.mycat.route.function.PartitionByMurmurHash">
<property name="seed">0</property><!-- default is 0 -->
<property name="count">2</property><!-- number of database nodes to shard across; must be set, otherwise sharding cannot work -->
<property name="virtualBucketTimes">160</property><!-- each physical node is mapped to this many virtual nodes; the default is 160, i.e. the number of virtual nodes is 160 times the number of physical nodes -->
<!-- <property name="weightMapFile">weightMapFile</property> node weights; nodes without an explicit weight default to 1. Written in properties-file format, with the node index (an integer from 0 to count-1) as the key and the node weight as the value. All weights must be positive integers, otherwise 1 is used instead -->
<!-- <property name="bucketMapPath">/etc/mycat/bucketMapPath</property>
used during testing to inspect how virtual nodes are distributed over physical nodes; if this property is set, the murmur hash of each virtual node and its physical-node mapping are written line by line to this file. There is no default value; if it is not set, nothing is written -->
</function>
<function name="crc32slot"
class="io.mycat.route.function.PartitionByCRC32PreSlot">
<property name="count">2</property><!-- number of database nodes to shard across; must be set, otherwise sharding cannot work -->
</function>
</function>
<function name="hash-int"
class="io.mycat.route.function.PartitionByFileMap">
<property name="mapFile">partition-hash-int.txt</property>
</function>
<function name="hash-int-customer"
class="io.mycat.route.function.PartitionByFileMap">
<property name="mapFile">customer-hash-int.txt</property>
</function>
<function name="rang-long"
class="io.mycat.route.function.AutoPartitionByLong">
<property name="mapFile">autopartition-long.txt</property>
</function>
<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
<!-- how many data nodes -->
<property name="count">2</property><!--根据集群切片数量设置 -->
</function> <function name="func1" class="io.mycat.route.function.PartitionByLong">
<property name="partitionCount">8</property>
<property name="partitionLength">128</property>
</function>
<function name="latestMonth"
class="io.mycat.route.function.LatestMonthPartion">
<property name="splitOneDay">24</property>
</function>
<function name="partbymonth"
class="io.mycat.route.function.PartitionByMonth">
<property name="dateFormat">yyyy-MM-dd</property>
<property name="sBeginDate">2015-01-01</property>
</function>
<function name="rang-mod" class="io.mycat.route.function.PartitionByRangeMod">
<property name="mapFile">partition-range-mod.txt</property>
</function>
<function name="jump-consistent-hash" class="io.mycat.route.function.PartitionByJumpConsistentHash">
<property name="totalBuckets">3</property>
</function>
</mycat:rule>
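With the mod-long rule used by t_user (count = 2), the target shard is simply id % 2: for example, a row with id = 7 maps to index 1 and lands on dn2 (cluster2), while id = 8 maps to index 0 and lands on dn1 (cluster1). The concrete ids here are just illustrative values.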
5. Grant execute permission on the Mycat scripts
chmod -R 777 ./*.sh
6. Start Mycat
./startup_nowrap.sh
To stop Mycat, find the PID with ps -aux and then run kill -9 <pid>.
7. Common commands
Check the Mycat status:
./mycat status    # this usually reports "not running" even when Mycat is up and can be ignored
./mycat console   # run Mycat in the foreground and show console output
Once Mycat has started successfully, connect with a MySQL client to verify that read/write splitting and sharding work as expected.
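A minimal verification sketch (the host 161.8.0.123 is the first Mycat server referenced in the HAProxy section below, the admin/Abc_123456 account comes from server.xml, and the t_user columns are assumptions for illustration; the test database must already exist on both PXC clusters):
mysql -h 161.8.0.123 -P 8066 -u admin -pAbc_123456
create table t_user (id bigint primary key, name varchar(50));
insert into t_user (id, name) values (1, 'a'), (2, 'b'), (3, 'c'), (4, 'd');
select * from t_user;   -- rows are spread across dn1/dn2 by the mod-long rule on id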
III. Setting Up the HAProxy Cluster
HAProxy runs as one active and one standby instance to provide high availability.
1. Open the ports and disable SELinux
3306: TCP forwarding port; 4001: statistics page port
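A minimal sketch, assuming firewalld stays enabled on the HAProxy hosts:
firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --zone=public --add-port=4001/tcp --permanent
firewall-cmd --reload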
2. Install HAProxy
yum -y install haproxy
The configuration is installed under /etc/haproxy/ by default.
3、修改配置文件
vi /etc/haproxy/haproxy.cfg #---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log 127.0.0.1 local2 ##记日志的功能
maxconn 4000 ##最大连接数,默认4000
#chroot /usr/local/haproxy
user root #所属用户
group root #所属组
daemon ###创建1个进程进入deamon模式运行。此参数要求将运行模式设置为"daemon" defaults
log global ###采用全局定义的日志
option dontlognull ###不记录健康检查的日志信息
retries 3 ###3次连接失败就认为服务不可用,也可以通过后面设置
option redispatch ###serverId对应的服务器挂掉后,强制定向到其他健康的服务器
maxconn 2000 ###最大连接数
timeout connect 5000ms ##contimeout 5000 连接超时
timeout client 50000ms ##clitimeout 50000 客户端连接超时
timeout server 50000ms ##srvtimeout 50000 服务器连接超时
##listen admin_status 192.168.9.165:48800 ##VIP
##stats uri/admin-status #统计页面
##stats auth admin:admin
mode http ###默认的模式,tcp是4层,http是7层,health只会返回OK 若是混合模式则 mode 不需要设置
option httplog ###日志类别http日志格式 混合模式 此处还需要加上 tcplog ## HAProxy的状态信息统计页面
listen admin_stats
bind 0.0.0.0:4001 ## 绑定端口
stats uri /dbs ##统计页面
stats auth admin:admin ## 设置统计页面认证的用户和密码,如果要设置多个,另起一行写入即可
mode http
option httplog ## 启用日志记录HTTP请求 listen proxy_mysql
bind 0.0.0.0:3306
mode tcp
option tcplog
balance roundrobin
server mycat_1 161.8.0.123:8066 check port 8066 weight 1 maxconn 2000
server mycat_2 161.9.0.203:8066 check port 8066 weight 1 maxconn 2000
option tcpka
4. Common commands
Start, stop, restart, and check the status:
service haproxy start|stop|restart|status
ps -ef | grep haproxy
Once HAProxy has started successfully, connect with a MySQL client to verify the setup.
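A minimal verification sketch (replace <haproxy_host> with the address of the HAProxy machine; the admin account comes from server.xml):
mysql -h <haproxy_host> -P 3306 -u admin -pAbc_123456 -e "select * from test.t_user;"
The statistics page is then reachable at http://<haproxy_host>:4001/dbs (user admin, password admin, as configured above).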
IV. Installing and Configuring Keepalived (one master, one backup)
1. Allow the VRRP protocol and disable SELinux
firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --protocol vrrp -j ACCEPT
firewall-cmd --reload
2. Install Keepalived
Install:
yum install -y keepalived
Configure:
vi /etc/keepalived/keepalived.conf
Start:
service keepalived start|stop|restart|status
3. Configuration file
keepalived.conf:
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
161.9.0.205 # virtual IP; must be in the local subnet
}
}
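The configuration on the standby HAProxy host is almost identical. A minimal sketch, assuming that only the state and a lower priority change (virtual_router_id, the authentication block, and the VIP must match the master):
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        161.9.0.205
    }
}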
After Keepalived has started successfully, test and verify the failover.
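A minimal verification sketch (using the VIP 161.9.0.205 and the ens33 interface from the configuration above):
ip addr show ens33   # the VIP 161.9.0.205 should be bound on the master node
mysql -h 161.9.0.205 -P 3306 -u admin -pAbc_123456 -e "select 1;"   # connect through the VIP
service keepalived stop   # stop Keepalived on the master, then confirm the VIP moves to the backup node and the connection still works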