Environment:
192.168.8.30 mycat
192.168.8.31 node1
192.168.8.32 node2
192.168.8.33 node3
MySQL runs as a single standalone instance on each of the three data nodes.
1. Current shard configuration
schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="mycatdb" checkSQLschema="false" sqlMaxLimit="100">
        <table name="user04" dataNode="dn$1-3,dn$13-15,dn$25-27" rule="sharding-by-murmur-user04-id"></table>
    </schema>
    <!-- <dataNode name="dn1$0-743" dataHost="localhost1" database="db$0-743" /> -->
    <dataNode name="dn1" dataHost="node1" database="testdb01" />
    <dataNode name="dn2" dataHost="node1" database="testdb02" />
    <dataNode name="dn3" dataHost="node1" database="testdb03" />
    <dataNode name="dn13" dataHost="node2" database="testdb13" />
    <dataNode name="dn14" dataHost="node2" database="testdb14" />
    <dataNode name="dn15" dataHost="node2" database="testdb15" />
    <dataNode name="dn25" dataHost="node3" database="testdb25" />
    <dataNode name="dn26" dataHost="node3" database="testdb26" />
    <dataNode name="dn27" dataHost="node3" database="testdb27" />
    <!--<dataNode name="dn4" dataHost="sequoiadb1" database="SAMPLE" />
    <dataNode name="jdbc_dn1" dataHost="jdbchost" database="db1" />
    <dataNode name="jdbc_dn2" dataHost="jdbchost" database="db2" />
    <dataNode name="jdbc_dn3" dataHost="jdbchost" database="db3" /> -->
    <dataHost name="node1" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- can have multi write hosts -->
        <writeHost host="192.168.8.31" url="192.168.8.31:3306" user="root" password="mysql"></writeHost>
    </dataHost>
    <dataHost name="node2" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- can have multi write hosts -->
        <writeHost host="192.168.8.32" url="192.168.8.32:3306" user="root" password="mysql"></writeHost>
    </dataHost>
    <dataHost name="node3" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- can have multi write hosts -->
        <writeHost host="192.168.8.33" url="192.168.8.33:3306" user="root" password="mysql"></writeHost>
    </dataHost>
</mycat:schema>
rule.xml
<tableRule name="sharding-by-murmur-user04-id">
    <rule>
        <columns>id</columns>
        <algorithm>murmur-id</algorithm>
    </rule>
</tableRule>
<function name="murmur-id" class="io.mycat.route.function.PartitionByMurmurHash">
    <property name="seed">0</property><!-- default 0 -->
    <property name="type">0</property><!-- default 0 = the sharding column is an integer; non-zero = string -->
    <property name="count">9</property><!-- number of shard nodes; required, sharding cannot work without it -->
    <property name="virtualBucketTimes">160</property><!-- each physical node is mapped to this many virtual buckets; default 160, i.e. the virtual node count is 160x the physical node count -->
</function>
user04 is currently hash-partitioned across 9 shards (count=9, so 9 × 160 = 1440 virtual buckets on the consistent-hash ring), stored in 9 physical databases.
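To confirm where a given id routes, you can ask Mycat to explain the query through its service port (8066 by default; root/mysql are this environment's test credentials). For id = 8 the DATA_NODE column should report dn1, matching the testdb01 listing in the next section; the exact output shape varies by Mycat version.

[root@mycat ~]# mysql -h192.168.8.30 -P8066 -uroot -pmysql mycatdb
mysql> explain select * from user04 where id = 8;
-- returns one row per routed dataNode; a single-id lookup should hit exactly one shard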
2. Inspecting the data
node1
mysql> select * from testdb01.user04;
+----+--------+
| id | name   |
+----+--------+
|  8 | steven |
| 14 | steven |
| 16 | steven |
| 17 | steven |
| 34 | steven |
| 49 | steven |
+----+--------+
6 rows in set (0.00 sec)

mysql> select * from testdb02.user04;
+----+--------+
| id | name   |
+----+--------+
|  9 | steven |
| 10 | steven |
| 44 | steven |
| 45 | steven |
| 46 | steven |
| 48 | steven |
+----+--------+
6 rows in set (0.06 sec)

mysql> select * from testdb03.user04;
+----+--------+
| id | name   |
+----+--------+
| 11 | steven |
| 24 | steven |
| 33 | steven |
| 35 | steven |
| 40 | steven |
+----+--------+
5 rows in set (0.07 sec)
node2
mysql> select * from testdb13.user04;
+----+--------+
| id | name   |
+----+--------+
| 20 | steven |
| 25 | steven |
| 38 | steven |
| 39 | steven |
+----+--------+
4 rows in set (0.07 sec)

mysql> select * from testdb14.user04;
+----+--------+
| id | name   |
+----+--------+
|  1 | steven |
| 41 | steven |
| 50 | steven |
+----+--------+
3 rows in set (0.03 sec)

mysql> select * from testdb15.user04;
+----+--------+
| id | name   |
+----+--------+
| 12 | steven |
| 18 | steven |
| 32 | steven |
| 36 | steven |
+----+--------+
4 rows in set (0.12 sec)
node3
mysql> select * from testdb25.user04;
+----+--------+
| id | name   |
+----+--------+
|  6 | steven |
| 13 | steven |
| 19 | steven |
| 23 | steven |
| 27 | steven |
| 28 | steven |
| 29 | steven |
| 31 | steven |
| 37 | steven |
+----+--------+
9 rows in set (0.05 sec)

mysql> select * from testdb26.user04;
+----+--------+
| id | name   |
+----+--------+
|  4 | steven |
|  5 | steven |
| 15 | steven |
| 22 | steven |
| 42 | steven |
+----+--------+
5 rows in set (0.01 sec)

mysql> select * from testdb27.user04;
+----+--------+
| id | name   |
+----+--------+
|  2 | steven |
|  3 | steven |
|  7 | steven |
| 21 | steven |
| 26 | steven |
| 30 | steven |
| 43 | steven |
| 47 | steven |
+----+--------+
8 rows in set (0.06 sec)
The nine shards hold 50 rows in total. Next, we add 9 more shards and reconfigure.
3. Configure schema.xml and rule.xml
Copy schema.xml and rule.xml to newSchema.xml and newRule.xml in the conf directory, then edit newSchema.xml and newRule.xml with the post-expansion Mycat parameters (the table's node list, data sources, and routing rule), as shown in the sketch and files below.
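A minimal sketch of the copy step, assuming the /usr/local/mycat install path that appears in the migration output later on:

[root@mycat ~]# cd /usr/local/mycat/conf
[root@mycat conf]# cp schema.xml newSchema.xml
[root@mycat conf]# cp rule.xml newRule.xml
# then edit newSchema.xml / newRule.xml to describe the 18-shard layout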
newSchema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="mycatdb" checkSQLschema="false" sqlMaxLimit="100">
        <table name="user04" dataNode="dn$1-6,dn$13-18,dn$25-30" rule="sharding-by-murmur-user04-id"></table>
    </schema>
    <!-- <dataNode name="dn1$0-743" dataHost="localhost1" database="db$0-743" /> -->
    <dataNode name="dn1" dataHost="node1" database="testdb01" />
    <dataNode name="dn2" dataHost="node1" database="testdb02" />
    <dataNode name="dn3" dataHost="node1" database="testdb03" />
    <dataNode name="dn4" dataHost="node1" database="testdb04" />
    <dataNode name="dn5" dataHost="node1" database="testdb05" />
    <dataNode name="dn6" dataHost="node1" database="testdb06" />
    <dataNode name="dn13" dataHost="node2" database="testdb13" />
    <dataNode name="dn14" dataHost="node2" database="testdb14" />
    <dataNode name="dn15" dataHost="node2" database="testdb15" />
    <dataNode name="dn16" dataHost="node2" database="testdb16" />
    <dataNode name="dn17" dataHost="node2" database="testdb17" />
    <dataNode name="dn18" dataHost="node2" database="testdb18" />
    <dataNode name="dn25" dataHost="node3" database="testdb25" />
    <dataNode name="dn26" dataHost="node3" database="testdb26" />
    <dataNode name="dn27" dataHost="node3" database="testdb27" />
    <dataNode name="dn28" dataHost="node3" database="testdb28" />
    <dataNode name="dn29" dataHost="node3" database="testdb29" />
    <dataNode name="dn30" dataHost="node3" database="testdb30" />
    <!--<dataNode name="dn4" dataHost="sequoiadb1" database="SAMPLE" />
    <dataNode name="jdbc_dn1" dataHost="jdbchost" database="db1" />
    <dataNode name="jdbc_dn2" dataHost="jdbchost" database="db2" />
    <dataNode name="jdbc_dn3" dataHost="jdbchost" database="db3" /> -->
    <dataHost name="node1" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- can have multi write hosts -->
        <writeHost host="192.168.8.31" url="192.168.8.31:3306" user="root" password="mysql"></writeHost>
    </dataHost>
    <dataHost name="node2" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- can have multi write hosts -->
        <writeHost host="192.168.8.32" url="192.168.8.32:3306" user="root" password="mysql"></writeHost>
    </dataHost>
    <dataHost name="node3" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- can have multi write hosts -->
        <writeHost host="192.168.8.33" url="192.168.8.33:3306" user="root" password="mysql"></writeHost>
    </dataHost>
</mycat:schema>
newRule.xml
<tableRule name="sharding-by-murmur-user04-id">
    <rule>
        <columns>id</columns>
        <algorithm>murmur-id</algorithm>
    </rule>
</tableRule>
<function name="murmur-id" class="io.mycat.route.function.PartitionByMurmurHash">
    <property name="seed">0</property><!-- default 0 -->
    <property name="type">0</property><!-- default 0 = the sharding column is an integer; non-zero = string -->
    <property name="count">18</property><!-- number of shard nodes; required, sharding cannot work without it -->
    <property name="virtualBucketTimes">160</property><!-- each physical node is mapped to this many virtual buckets; default 160, i.e. the virtual node count is 160x the physical node count -->
</function>
The only change here is the shard count, from 9 to 18. Because PartitionByMurmurHash is a consistent-hashing rule, only rows whose virtual buckets now belong to the new nodes have to move; the rest stay where they are.
4. Edit migrateTables.properties
#schema1=tb1,tb2,...
#schema2=all (write "all", or omit the entry, to re-route every sharded table in this schema whose node list changed)
#...
#sample
#TESTDB=travelrecord,company,goods
mycatdb=user04
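Before running the migration, make sure the new physical databases exist on the backend instances; the migrator imports dump files into them but, as far as I can tell, does not create the databases themselves. A sketch for node1 (repeat with testdb16-18 on node2 and testdb28-30 on node3):

[root@mycat ~]# mysql -h192.168.8.31 -P3306 -uroot -pmysql \
    -e "create database if not exists testdb04; create database if not exists testdb05; create database if not exists testdb06;"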
5. Re-shard
Adjust the dataMigrate.sh script in the bin directory as needed, then run it from the conf directory (the full run is shown below):
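The DataMigrator flags visible in the invocation below are the main tunables. The annotations here are my own reading, based on the flag names; where a value is left empty the tool falls back to its default:

# -tempFileDir=            directory for intermediate dump files
# -isAwaysUseMaster=true   always read source data from the write host
# -mysqlBin=               path to the mysql/mysqldump client binaries (empty = found via PATH)
# -cmdLength=110*1024      maximum length of a generated import command
# -charset=utf8            character set used for export/import
# -deleteTempFileDir=true  remove the temp files after a successful run
# -threadCount=            migration worker threads
# -delThreadCount=         cleanup worker threads
# -queryPageSize=          rows fetched per page when reading source data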
../bin/dataMigrate.sh
[root@mycat conf]# ../bin/dataMigrate.sh
"/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64/jre/bin/java" -DMYCAT_HOME="/usr/local/mycat" -classpath "/usr/local/mycat/conf:/usr/local/mycat/lib/classes:/usr/local/mycat/lib/asm-4.0.jar:/usr/local/mycat/lib/commons-collections-3.2.1.jar:/usr/local/mycat/lib/commons-lang-2.6.jar:/usr/local/mycat/lib/curator-client-2.11.0.jar:/usr/local/mycat/lib/curator-framework-2.11.0.jar:/usr/local/mycat/lib/curator-recipes-2.11.0.jar:/usr/local/mycat/lib/disruptor-3.3.4.jar:/usr/local/mycat/lib/dom4j-1.6.1.jar:/usr/local/mycat/lib/druid-1.0.26.jar:/usr/local/mycat/lib/ehcache-core-2.6.11.jar:/usr/local/mycat/lib/fastjson-1.2.12.jar:/usr/local/mycat/lib/guava-19.0.jar:/usr/local/mycat/lib/hamcrest-core-1.3.jar:/usr/local/mycat/lib/hamcrest-library-1.3.jar:/usr/local/mycat/lib/jline-0.9.94.jar:/usr/local/mycat/lib/joda-time-2.9.3.jar:/usr/local/mycat/lib/jsr305-2.0.3.jar:/usr/local/mycat/lib/kryo-2.10.jar:/usr/local/mycat/lib/leveldb-0.7.jar:/usr/local/mycat/lib/leveldb-api-0.7.jar:/usr/local/mycat/lib/log4j-1.2.17.jar:/usr/local/mycat/lib/log4j-1.2-api-2.5.jar:/usr/local/mycat/lib/log4j-api-2.5.jar:/usr/local/mycat/lib/log4j-core-2.5.jar:/usr/local/mycat/lib/log4j-slf4j-impl-2.5.jar:/usr/local/mycat/lib/mapdb-1.0.7.jar:/usr/local/mycat/lib/minlog-1.2.jar:/usr/local/mycat/lib/mongo-java-driver-2.11.4.jar:/usr/local/mycat/lib/Mycat-server-1.6.6.1-release.jar:/usr/local/mycat/lib/mysql-binlog-connector-java-0.16.1.jar:/usr/local/mycat/lib/mysql-connector-java-5.1.35.jar:/usr/local/mycat/lib/netty-3.7.0.Final.jar:/usr/local/mycat/lib/netty-buffer-4.1.9.Final.jar:/usr/local/mycat/lib/netty-common-4.1.9.Final.jar:/usr/local/mycat/lib/objenesis-1.2.jar:/usr/local/mycat/lib/reflectasm-1.03.jar:/usr/local/mycat/lib/sequoiadb-driver-1.12.jar:/usr/local/mycat/lib/slf4j-api-1.6.1.jar:/usr/local/mycat/lib/univocity-parsers-2.2.1.jar:/usr/local/mycat/lib/velocity-1.7.jar:/usr/local/mycat/lib/wrapper.jar:/usr/local/mycat/lib/zookeeper-3.4.6.jar" -server -Xms2G -Xmx2G -XX:MaxPermSize=64M -XX:+AggressiveOpts -XX:MaxDirectMemorySize=2G io.mycat.util.dataMigrator.DataMigrator -tempFileDir= -isAwaysUseMaster=true -mysqlBin= -cmdLength=110*1024 -charset=utf8 -deleteTempFileDir=true -threadCount= -delThreadCount= -queryPageSize=
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=64M; support was removed in 8.0
2018-11-23 17:25:57:664 [1]-> creating migrator schedule and temp files for migrate...
+---------------------------------------------[mycatdb:user04] migrate info---------------------------------------------+
|tableSize = 50                                                                                                          |
|migrate before = [dn1, dn2, dn3, dn13, dn14, dn15, dn25, dn26, dn27]                                                    |
|migrate after = [dn1, dn2, dn3, dn4, dn5, dn6, dn13, dn14, dn15, dn16, dn17, dn18, dn25, dn26, dn27, dn28, dn29, dn30]  |
|rule function = PartitionByMurmurHash                                                                                   |
+-----------------------------------------------------------------------------------------------------------------------+
+----------------[mycatdb:user04] migrate schedule----------------+
|dn13[4] -> [0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]|
|dn14[3] -> [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 2, 0]|
|dn15[4] -> [0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0]|
|dn1[6]  -> [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 1, 0, 0, 0, 1]|
|dn25[9] -> [0, 0, 0, 0, 0, 0, 5, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1]|
|dn26[5] -> [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0]|
|dn27[8] -> [0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 1, 1, 0, 0, 0, 0]|
|dn2[6]  -> [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0]|
|dn3[5]  -> [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0]|
+-----------------------------------------------------------------+
2018-11-23 17:26:10:896 [2]-> start migrate data...
[mycatdb:user04] dn1->dn26 completed in 740ms
[mycatdb:user04] dn1->dn17 completed in 792ms
[mycatdb:user04] dn1->dn30 completed in 472ms
[mycatdb:user04] dn2->dn17 completed in 474ms
[mycatdb:user04] dn2->dn18 completed in 400ms
[mycatdb:user04] dn2->dn25 completed in 458ms
[mycatdb:user04] dn2->dn28 completed in 388ms
[mycatdb:user04] dn3->dn16 completed in 477ms
[mycatdb:user04] dn3->dn25 completed in 410ms
[mycatdb:user04] dn3->dn29 completed in 423ms
[mycatdb:user04] dn13->dn4 completed in 455ms
[mycatdb:user04] dn13->dn17 completed in 483ms
[mycatdb:user04] dn14->dn27 completed in 496ms
[mycatdb:user04] dn14->dn29 completed in 449ms
[mycatdb:user04] dn15->dn6 completed in 399ms
[mycatdb:user04] dn15->dn17 completed in 395ms
[mycatdb:user04] dn15->dn25 completed in 512ms
[mycatdb:user04] dn25->dn13 completed in 486ms
[mycatdb:user04] dn25->dn16 completed in 473ms
[mycatdb:user04] dn25->dn18 completed in 375ms
[mycatdb:user04] dn25->dn30 completed in 395ms
[mycatdb:user04] dn25->dn27 completed in 482ms
[mycatdb:user04] dn26->dn17 completed in 422ms
[mycatdb:user04] dn26->dn14 completed in 495ms
[mycatdb:user04] dn26->dn25 completed in 440ms
[mycatdb:user04] dn26->dn28 completed in 438ms
[mycatdb:user04] dn26->dn29 completed in 424ms
[mycatdb:user04] dn27->dn15 completed in 438ms
[mycatdb:user04] dn27->dn25 completed in 404ms
[mycatdb:user04] dn27->dn26 completed in 396ms
2018-11-23 17:26:18:106 [3]-> cleaning redundant data...
[mycatdb:user04] clean dataNode dn2 completed in 97ms
[mycatdb:user04] clean dataNode dn3 completed in 130ms
[mycatdb:user04] clean dataNode dn2 completed in 201ms
[mycatdb:user04] clean dataNode dn3 completed in 279ms
[mycatdb:user04] clean dataNode dn1 completed in 243ms
[mycatdb:user04] clean dataNode dn1 completed in 274ms
[mycatdb:user04] clean dataNode dn3 completed in 200ms
[mycatdb:user04] clean dataNode dn1 completed in 187ms
[mycatdb:user04] clean dataNode dn2 completed in 199ms
[mycatdb:user04] clean dataNode dn2 completed in 183ms
[mycatdb:user04] clean dataNode dn15 completed in 120ms
[mycatdb:user04] clean dataNode dn15 completed in 146ms
[mycatdb:user04] clean dataNode dn13 completed in 155ms
[mycatdb:user04] clean dataNode dn13 completed in 223ms
[mycatdb:user04] clean dataNode dn14 completed in 166ms
[mycatdb:user04] clean dataNode dn14 completed in 234ms
[mycatdb:user04] clean dataNode dn15 completed in 221ms
[mycatdb:user04] clean dataNode dn25 completed in 111ms
[mycatdb:user04] clean dataNode dn27 completed in 118ms
[mycatdb:user04] clean dataNode dn25 completed in 152ms
[mycatdb:user04] clean dataNode dn26 completed in 186ms
[mycatdb:user04] clean dataNode dn27 completed in 149ms
[mycatdb:user04] clean dataNode dn26 completed in 182ms
[mycatdb:user04] clean dataNode dn25 completed in 183ms
[mycatdb:user04] clean dataNode dn26 completed in 208ms
[mycatdb:user04] clean dataNode dn25 completed in 164ms
[mycatdb:user04] clean dataNode dn27 completed in 207ms
[mycatdb:user04] clean dataNode dn25 completed in 242ms
[mycatdb:user04] clean dataNode dn26 completed in 179ms
[mycatdb:user04] clean dataNode dn26 completed in 129ms
2018-11-23 17:26:21:423 [4]-> validating tables migrate result...
+------migrate result-------+
|[mycatdb:user04] -> success|
+---------------------------+
2018-11-23 17:26:22:385 migrate data complete in 24736ms
6. Rename newSchema.xml and newRule.xml
Once the expansion succeeds, rename newSchema.xml and newRule.xml to schema.xml and rule.xml (replacing the originals) and restart the Mycat service. That completes the expansion; a sketch of the swap follows.
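Assuming the standard mycat control script in bin; backing up the old files first is my own precaution, not a required step:

[root@mycat conf]# cp schema.xml schema.xml.bak && cp rule.xml rule.xml.bak
[root@mycat conf]# mv newSchema.xml schema.xml
[root@mycat conf]# mv newRule.xml rule.xml
[root@mycat conf]# ../bin/mycat restart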
7. Verify the data
node1
mysql> select * from testdb01.user04;
+----+--------+
| id | name   |
+----+--------+
| 16 | steven |
| 49 | steven |
+----+--------+
2 rows in set (0.01 sec)

mysql> select * from testdb02.user04;
+----+--------+
| id | name   |
+----+--------+
| 46 | steven |
| 48 | steven |
+----+--------+
2 rows in set (0.00 sec)

mysql> select * from testdb03.user04;
+----+--------+
| id | name   |
+----+--------+
| 24 | steven |
| 40 | steven |
+----+--------+
2 rows in set (0.00 sec)

mysql> select * from testdb04.user04;
+----+--------+
| id | name   |
+----+--------+
| 20 | steven |
| 25 | steven |
| 39 | steven |
+----+--------+
3 rows in set (0.00 sec)

mysql> select * from testdb05.user04;
Empty set (0.01 sec)

mysql> select * from testdb06.user04;
+----+--------+
| id | name   |
+----+--------+
| 32 | steven |
| 36 | steven |
+----+--------+
2 rows in set (0.00 sec)
node2
mysql> select * from testdb13.user04;
+----+--------+
| id | name   |
+----+--------+
|  6 | steven |
| 19 | steven |
| 23 | steven |
| 28 | steven |
| 29 | steven |
+----+--------+
5 rows in set (0.01 sec)

mysql> select * from testdb14.user04;
+----+--------+
| id | name   |
+----+--------+
|  5 | steven |
+----+--------+
1 row in set (0.00 sec)

mysql> select * from testdb15.user04;
+----+--------+
| id | name   |
+----+--------+
|  7 | steven |
| 26 | steven |
| 30 | steven |
| 43 | steven |
| 47 | steven |
+----+--------+
5 rows in set (0.01 sec)

mysql> select * from testdb16.user04;
+----+--------+
| id | name   |
+----+--------+
| 33 | steven |
| 37 | steven |
+----+--------+
2 rows in set (0.00 sec)

mysql> select * from testdb17.user04;
+----+--------+
| id | name   |
+----+--------+
| 10 | steven |
| 12 | steven |
| 14 | steven |
| 15 | steven |
| 34 | steven |
| 38 | steven |
+----+--------+
6 rows in set (0.01 sec)

mysql> select * from testdb18.user04;
+----+--------+
| id | name   |
+----+--------+
| 31 | steven |
| 45 | steven |
+----+--------+
2 rows in set (0.00 sec)
node3
mysql> select * from testdb25.user04;
+----+--------+
| id | name   |
+----+--------+
|  2 | steven |
| 18 | steven |
| 22 | steven |
| 35 | steven |
| 44 | steven |
+----+--------+
5 rows in set (0.01 sec)

mysql> select * from testdb26.user04;
+----+--------+
| id | name   |
+----+--------+
|  3 | steven |
| 17 | steven |
+----+--------+
2 rows in set (0.00 sec)

mysql> select * from testdb27.user04;
+----+--------+
| id | name   |
+----+--------+
| 21 | steven |
| 27 | steven |
| 41 | steven |
+----+--------+
3 rows in set (0.00 sec)

mysql> select * from testdb28.user04;
+----+--------+
| id | name   |
+----+--------+
|  9 | steven |
| 42 | steven |
+----+--------+
2 rows in set (0.00 sec)

mysql> select * from testdb29.user04;
+----+--------+
| id | name   |
+----+--------+
|  1 | steven |
|  4 | steven |
| 11 | steven |
| 50 | steven |
+----+--------+
4 rows in set (0.00 sec)

mysql> select * from testdb30.user04;
+----+--------+
| id | name   |
+----+--------+
|  8 | steven |
| 13 | steven |
+----+--------+
2 rows in set (0.00 sec)
As you can see, user04 has gone from 9 shards to 18. Verification complete.
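As a cross-check through Mycat itself, the logical table should still report all 50 rows (matching tableSize in the migration report) regardless of the physical layout; 8066 is Mycat's default service port:

[root@mycat ~]# mysql -h192.168.8.30 -P8066 -uroot -pmysql mycatdb -e "select count(*) from user04;"
# expected: 50 -- Mycat aggregates the count across all 18 shards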
Next, we shrink back down to 9 shards.
8. Configure schema.xml and rule.xml
As before, copy schema.xml and rule.xml to newSchema.xml and newRule.xml in the conf directory, then edit newSchema.xml and newRule.xml with the post-shrink Mycat parameters (the table's node list, data sources, and routing rule).
newSchema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="mycatdb" checkSQLschema="false" sqlMaxLimit="100">
        <table name="user04" dataNode="dn$1-3,dn$13-15,dn$25-27" rule="sharding-by-murmur-user04-id"></table>
        <table name="user05" dataNode="dn$1-36" rule="sharding-by-date-adddate"></table>
        <table name="user06" dataNode="dn$1-36" rule="sharding-by-month-adddate"></table>
        <table name="user07" dataNode="dn$1-36" rule="sharding-by-hour-adddate"></table>
    </schema>
    <!-- <dataNode name="dn1$0-743" dataHost="localhost1" database="db$0-743" /> -->
    <dataNode name="dn1" dataHost="node1" database="testdb01" />
    <dataNode name="dn2" dataHost="node1" database="testdb02" />
    <dataNode name="dn3" dataHost="node1" database="testdb03" />
    <dataNode name="dn13" dataHost="node2" database="testdb13" />
    <dataNode name="dn14" dataHost="node2" database="testdb14" />
    <dataNode name="dn15" dataHost="node2" database="testdb15" />
    <dataNode name="dn25" dataHost="node3" database="testdb25" />
    <dataNode name="dn26" dataHost="node3" database="testdb26" />
    <dataNode name="dn27" dataHost="node3" database="testdb27" />
    <!--<dataNode name="dn4" dataHost="sequoiadb1" database="SAMPLE" />
    <dataNode name="jdbc_dn1" dataHost="jdbchost" database="db1" />
    <dataNode name="jdbc_dn2" dataHost="jdbchost" database="db2" />
    <dataNode name="jdbc_dn3" dataHost="jdbchost" database="db3" /> -->
    <dataHost name="node1" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- can have multi write hosts -->
        <writeHost host="192.168.8.31" url="192.168.8.31:3306" user="root" password="mysql"></writeHost>
    </dataHost>
    <dataHost name="node2" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- can have multi write hosts -->
        <writeHost host="192.168.8.32" url="192.168.8.32:3306" user="root" password="mysql"></writeHost>
    </dataHost>
    <dataHost name="node3" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- can have multi write hosts -->
        <writeHost host="192.168.8.33" url="192.168.8.33:3306" user="root" password="mysql"></writeHost>
    </dataHost>
</mycat:schema>
newRule.xml
<tableRule name="sharding-by-murmur-user04-id">
    <rule>
        <columns>id</columns>
        <algorithm>murmur-id</algorithm>
    </rule>
</tableRule>
<function name="murmur-id" class="io.mycat.route.function.PartitionByMurmurHash">
    <property name="seed">0</property><!-- default 0 -->
    <property name="type">0</property><!-- default 0 = the sharding column is an integer; non-zero = string -->
    <property name="count">9</property><!-- number of shard nodes; required, sharding cannot work without it -->
    <property name="virtualBucketTimes">160</property><!-- each physical node is mapped to this many virtual buckets; default 160, i.e. the virtual node count is 160x the physical node count -->
</function>
9. Re-shard
Adjust dataMigrate.sh as in step 5 if needed, then run it again:
../bin/dataMigrate.sh
[root@mycat conf]# ../bin/dataMigrate.sh
"/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64/jre/bin/java" -DMYCAT_HOME="/usr/local/mycat" -classpath "/usr/local/mycat/conf:/usr/local/mycat/lib/classes:/usr/local/mycat/lib/asm-4.0.jar:/usr/local/mycat/lib/commons-collections-3.2.1.jar:/usr/local/mycat/lib/commons-lang-2.6.jar:/usr/local/mycat/lib/curator-client-2.11.0.jar:/usr/local/mycat/lib/curator-framework-2.11.0.jar:/usr/local/mycat/lib/curator-recipes-2.11.0.jar:/usr/local/mycat/lib/disruptor-3.3.4.jar:/usr/local/mycat/lib/dom4j-1.6.1.jar:/usr/local/mycat/lib/druid-1.0.26.jar:/usr/local/mycat/lib/ehcache-core-2.6.11.jar:/usr/local/mycat/lib/fastjson-1.2.12.jar:/usr/local/mycat/lib/guava-19.0.jar:/usr/local/mycat/lib/hamcrest-core-1.3.jar:/usr/local/mycat/lib/hamcrest-library-1.3.jar:/usr/local/mycat/lib/jline-0.9.94.jar:/usr/local/mycat/lib/joda-time-2.9.3.jar:/usr/local/mycat/lib/jsr305-2.0.3.jar:/usr/local/mycat/lib/kryo-2.10.jar:/usr/local/mycat/lib/leveldb-0.7.jar:/usr/local/mycat/lib/leveldb-api-0.7.jar:/usr/local/mycat/lib/log4j-1.2.17.jar:/usr/local/mycat/lib/log4j-1.2-api-2.5.jar:/usr/local/mycat/lib/log4j-api-2.5.jar:/usr/local/mycat/lib/log4j-core-2.5.jar:/usr/local/mycat/lib/log4j-slf4j-impl-2.5.jar:/usr/local/mycat/lib/mapdb-1.0.7.jar:/usr/local/mycat/lib/minlog-1.2.jar:/usr/local/mycat/lib/mongo-java-driver-2.11.4.jar:/usr/local/mycat/lib/Mycat-server-1.6.6.1-release.jar:/usr/local/mycat/lib/mysql-binlog-connector-java-0.16.1.jar:/usr/local/mycat/lib/mysql-connector-java-5.1.35.jar:/usr/local/mycat/lib/netty-3.7.0.Final.jar:/usr/local/mycat/lib/netty-buffer-4.1.9.Final.jar:/usr/local/mycat/lib/netty-common-4.1.9.Final.jar:/usr/local/mycat/lib/objenesis-1.2.jar:/usr/local/mycat/lib/reflectasm-1.03.jar:/usr/local/mycat/lib/sequoiadb-driver-1.12.jar:/usr/local/mycat/lib/slf4j-api-1.6.1.jar:/usr/local/mycat/lib/univocity-parsers-2.2.1.jar:/usr/local/mycat/lib/velocity-1.7.jar:/usr/local/mycat/lib/wrapper.jar:/usr/local/mycat/lib/zookeeper-3.4.6.jar" -server -Xms2G -Xmx2G -XX:MaxPermSize=64M -XX:+AggressiveOpts -XX:MaxDirectMemorySize=2G io.mycat.util.dataMigrator.DataMigrator -tempFileDir= -isAwaysUseMaster=true -mysqlBin= -cmdLength=110*1024 -charset=utf8 -deleteTempFileDir=true -threadCount= -delThreadCount= -queryPageSize=
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=64M; support was removed in 8.0
2018-11-23 17:47:02:047 [1]-> creating migrator schedule and temp files for migrate...
+---------------------------------------------[mycatdb:user04] migrate info---------------------------------------------+
|tableSize = 50                                                                                                          |
|migrate before = [dn1, dn2, dn3, dn4, dn5, dn6, dn13, dn14, dn15, dn16, dn17, dn18, dn25, dn26, dn27, dn28, dn29, dn30] |
|migrate after = [dn1, dn2, dn3, dn13, dn14, dn15, dn25, dn26, dn27]                                                     |
|rule function = PartitionByMurmurHash                                                                                   |
+-----------------------------------------------------------------------------------------------------------------------+
+--[mycatdb:user04] migrate schedule---+
|dn13[5] -> [0, 0, 0, 0, 0, 0, 5, 0, 0]|
|dn14[1] -> [0, 0, 0, 0, 0, 0, 0, 1, 0]|
|dn15[5] -> [0, 0, 0, 0, 0, 0, 0, 0, 5]|
|dn16[2] -> [0, 0, 1, 0, 0, 0, 1, 0, 0]|
|dn17[6] -> [2, 1, 0, 1, 0, 1, 0, 1, 0]|
|dn18[2] -> [0, 1, 0, 0, 0, 0, 1, 0, 0]|
|dn1[2]  -> [0, 0, 0, 0, 0, 0, 0, 0, 0]|
|dn25[5] -> [0, 1, 1, 0, 0, 1, 0, 1, 1]|
|dn26[2] -> [1, 0, 0, 0, 0, 0, 0, 0, 1]|
|dn27[3] -> [0, 0, 0, 0, 1, 0, 1, 0, 0]|
|dn28[2] -> [0, 1, 0, 0, 0, 0, 0, 1, 0]|
|dn29[4] -> [0, 0, 1, 0, 2, 0, 0, 1, 0]|
|dn2[2]  -> [0, 0, 0, 0, 0, 0, 0, 0, 0]|
|dn30[2] -> [1, 0, 0, 0, 0, 0, 1, 0, 0]|
|dn3[2]  -> [0, 0, 0, 0, 0, 0, 0, 0, 0]|
|dn4[3]  -> [0, 0, 0, 3, 0, 0, 0, 0, 0]|
|dn5[0]  -> [0, 0, 0, 0, 0, 0, 0, 0, 0]|
|dn6[2]  -> [0, 0, 0, 0, 0, 2, 0, 0, 0]|
+--------------------------------------+
2018-11-23 17:47:06:683 [2]-> start migrate data...
[mycatdb:user04] dn6->dn15 completed in 763ms
[mycatdb:user04] dn4->dn13 completed in 835ms
[mycatdb:user04] dn13->dn25 completed in 473ms
[mycatdb:user04] dn14->dn26 completed in 468ms
[mycatdb:user04] dn15->dn27 completed in 414ms
[mycatdb:user04] dn16->dn3 completed in 406ms
[mycatdb:user04] dn16->dn25 completed in 439ms
[mycatdb:user04] dn17->dn1 completed in 474ms
[mycatdb:user04] dn17->dn2 completed in 426ms
[mycatdb:user04] dn17->dn13 completed in 396ms
[mycatdb:user04] dn17->dn15 completed in 408ms
[mycatdb:user04] dn17->dn26 completed in 444ms
[mycatdb:user04] dn18->dn2 completed in 420ms
[mycatdb:user04] dn18->dn25 completed in 508ms
[mycatdb:user04] dn25->dn2 completed in 439ms
[mycatdb:user04] dn25->dn3 completed in 459ms
[mycatdb:user04] dn25->dn15 completed in 422ms
[mycatdb:user04] dn25->dn26 completed in 442ms
[mycatdb:user04] dn25->dn27 completed in 448ms
[mycatdb:user04] dn26->dn1 completed in 412ms
[mycatdb:user04] dn26->dn27 completed in 434ms
[mycatdb:user04] dn27->dn14 completed in 436ms
[mycatdb:user04] dn27->dn25 completed in 442ms
[mycatdb:user04] dn28->dn2 completed in 453ms
[mycatdb:user04] dn28->dn26 completed in 397ms
[mycatdb:user04] dn29->dn3 completed in 381ms
[mycatdb:user04] dn29->dn14 completed in 405ms
[mycatdb:user04] dn29->dn26 completed in 440ms
[mycatdb:user04] dn30->dn1 completed in 437ms
[mycatdb:user04] dn30->dn25 completed in 358ms
2018-11-23 17:47:13:659 [3]-> cleaning redundant data...
[mycatdb:user04] clean dataNode dn6 completed in 267ms
[mycatdb:user04] clean dataNode dn4 completed in 285ms
[mycatdb:user04] clean dataNode dn17 completed in 154ms
[mycatdb:user04] clean dataNode dn17 completed in 228ms
[mycatdb:user04] clean dataNode dn15 completed in 141ms
[mycatdb:user04] clean dataNode dn16 completed in 248ms
[mycatdb:user04] clean dataNode dn18 completed in 241ms
[mycatdb:user04] clean dataNode dn14 completed in 192ms
[mycatdb:user04] clean dataNode dn16 completed in 316ms
[mycatdb:user04] clean dataNode dn17 completed in 254ms
[mycatdb:user04] clean dataNode dn17 completed in 325ms
[mycatdb:user04] clean dataNode dn17 completed in 222ms
[mycatdb:user04] clean dataNode dn13 completed in 170ms
[mycatdb:user04] clean dataNode dn18 completed in 198ms
[mycatdb:user04] clean dataNode dn25 completed in 101ms
[mycatdb:user04] clean dataNode dn29 completed in 195ms
[mycatdb:user04] clean dataNode dn30 completed in 240ms
[mycatdb:user04] clean dataNode dn29 completed in 279ms
[mycatdb:user04] clean dataNode dn27 completed in 172ms
[mycatdb:user04] clean dataNode dn25 completed in 176ms
[mycatdb:user04] clean dataNode dn28 completed in 258ms
[mycatdb:user04] clean dataNode dn25 completed in 162ms
[mycatdb:user04] clean dataNode dn25 completed in 202ms
[mycatdb:user04] clean dataNode dn28 completed in 313ms
[mycatdb:user04] clean dataNode dn26 completed in 185ms
[mycatdb:user04] clean dataNode dn29 completed in 243ms
[mycatdb:user04] clean dataNode dn30 completed in 258ms
[mycatdb:user04] clean dataNode dn27 completed in 161ms
[mycatdb:user04] clean dataNode dn25 completed in 168ms
[mycatdb:user04] clean dataNode dn26 completed in 160ms
2018-11-23 17:47:17:586 [4]-> validating tables migrate result...
+------migrate result-------+
|[mycatdb:user04] -> success|
+---------------------------+
2018-11-23 17:47:18:102 migrate data complete in 16057ms
10. Rename newSchema.xml and newRule.xml
Once the shrink succeeds, rename newSchema.xml and newRule.xml to schema.xml and rule.xml (replacing the originals, as in step 6) and restart the Mycat service. That completes the shrink.
11. Verify the data
node1
mysql> select * from testdb01.user04;
+----+--------+
| id | name   |
+----+--------+
|  8 | steven |
| 14 | steven |
| 16 | steven |
| 17 | steven |
| 34 | steven |
| 49 | steven |
+----+--------+
6 rows in set (0.00 sec)

mysql> select * from testdb02.user04;
+----+--------+
| id | name   |
+----+--------+
|  9 | steven |
| 10 | steven |
| 44 | steven |
| 45 | steven |
| 46 | steven |
| 48 | steven |
+----+--------+
6 rows in set (0.00 sec)

mysql> select * from testdb03.user04;
+----+--------+
| id | name   |
+----+--------+
| 11 | steven |
| 24 | steven |
| 33 | steven |
| 35 | steven |
| 40 | steven |
+----+--------+
5 rows in set (0.01 sec)

mysql> select * from testdb04.user04;
Empty set (0.00 sec)

mysql> select * from testdb05.user04;
Empty set (0.01 sec)

mysql> select * from testdb06.user04;
Empty set (0.00 sec)
node2
mysql> select * from testdb13.user04;
+----+--------+
| id | name   |
+----+--------+
| 20 | steven |
| 25 | steven |
| 38 | steven |
| 39 | steven |
+----+--------+
4 rows in set (0.00 sec)

mysql> select * from testdb14.user04;
+----+--------+
| id | name   |
+----+--------+
|  1 | steven |
| 41 | steven |
| 50 | steven |
+----+--------+
3 rows in set (0.00 sec)

mysql> select * from testdb15.user04;
+----+--------+
| id | name   |
+----+--------+
| 12 | steven |
| 18 | steven |
| 32 | steven |
| 36 | steven |
+----+--------+
4 rows in set (0.00 sec)

mysql> select * from testdb16.user04;
Empty set (0.00 sec)

mysql> select * from testdb17.user04;
Empty set (0.00 sec)

mysql> select * from testdb18.user04;
Empty set (0.00 sec)
node3
mysql> select * from testdb25.user04;
+----+--------+
| id | name   |
+----+--------+
|  6 | steven |
| 13 | steven |
| 19 | steven |
| 23 | steven |
| 27 | steven |
| 28 | steven |
| 29 | steven |
| 31 | steven |
| 37 | steven |
+----+--------+
9 rows in set (0.01 sec)

mysql> select * from testdb26.user04;
+----+--------+
| id | name   |
+----+--------+
|  4 | steven |
|  5 | steven |
| 15 | steven |
| 22 | steven |
| 42 | steven |
+----+--------+
5 rows in set (0.01 sec)

mysql> select * from testdb27.user04;
+----+--------+
| id | name   |
+----+--------+
|  2 | steven |
|  3 | steven |
|  7 | steven |
| 21 | steven |
| 26 | steven |
| 30 | steven |
| 43 | steven |
| 47 | steven |
+----+--------+
8 rows in set (0.01 sec)

mysql> select * from testdb28.user04;
Empty set (0.00 sec)

mysql> select * from testdb29.user04;
Empty set (0.00 sec)

mysql> select * from testdb30.user04;
Empty set (0.00 sec)
After the shrink, all of user04's data sits on 9 shards again and the other shards are empty. Note that the row distribution is identical to the original 9-shard layout in section 2, exactly what a deterministic hash rule should produce. Verification complete.
Problems encountered:
Because this environment had previously been used for one-master-two-replica testing with gtid_mode enabled, the migration failed with:
ERROR 1840 (HY000) at line 3 in file: '/usr/local/mycat/temp/mycatdb-user04/dn1(old)-dn17(new)/user04.sql': @@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_EXECUTED is empty.
 -> mysql -h192.168.8.32 -P3306 -uroot -pmysql -Dtestdb17 -f --default-character-set=utf8 -e "source /usr/local/mycat/temp/mycatdb-user04/dn1(old)-dn17(new)/user04.sql"
Fix: turn off gtid_mode. (The root cause is that the generated dump files contain a SET @@GLOBAL.GTID_PURGED statement, which a GTID-enabled target with a non-empty GTID_EXECUTED refuses to execute.)
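A sketch of taking gtid_mode off online on MySQL 5.7+, run on each backend node; with downtime you can instead just set gtid_mode=OFF and enforce_gtid_consistency=OFF in my.cnf and restart:

mysql> set global gtid_mode = on_permissive;
mysql> set global gtid_mode = off_permissive;
mysql> -- wait for in-flight GTID transactions to drain, e.g. until @@global.gtid_owned is empty
mysql> set global gtid_mode = off;
mysql> set global enforce_gtid_consistency = off;
mysql> -- persist both settings in my.cnf so they survive a restart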
Alternatively, expansion can also be done with downtime: back up all the data in the logical table, re-shard, and then import the backup again, achieving the same result.
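A rough outline of that offline approach, with both the dump and the reload going through Mycat's service port. Whether mysqldump runs unmodified against your Mycat version is worth testing first (export from the backend shards if not); the host and credentials are this environment's test values:

# 1. back up the logical table's rows through Mycat
mysqldump -h192.168.8.30 -P8066 -uroot -pmysql --no-create-info --skip-lock-tables mycatdb user04 > /tmp/user04.sql
# 2. truncate user04 on every backend shard, switch schema.xml/rule.xml to the new layout, restart Mycat
# 3. reload through Mycat; each row is re-routed by the new sharding rule on insert
mysql -h192.168.8.30 -P8066 -uroot -pmysql mycatdb < /tmp/user04.sql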