Read/Write Splitting for MySQL 5.6.13 Based on MySQL-Proxy

1. Introduction to MySQL-Proxy

MySQL-Proxy is a piece of middleware that sits between MySQL clients and the MySQL server. It embeds the Lua scripting language and can be used to analyze, monitor, and rewrite the traffic passing through it. Typical use cases include load balancing and failover, query analysis and logging, and query rewriting.

One of MySQL Proxy's more powerful features is read/write splitting. The basic idea is to have the master database handle transactional queries while the slave databases handle SELECT queries; replication is used to propagate the changes made by transactional queries to the slaves in the cluster.
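As a rough illustration of the routing (reusing the proxy address and table name that appear later in this post), the client always talks to the proxy, and the proxy decides which backend each statement goes to:

# writes are sent to the master, non-transactional SELECTs to a slave
mysql -u root -p -h 192.168.30.115 -P4040 -e 'insert into testdb.testA values(1);'
mysql -u root -p -h 192.168.30.115 -P4040 -e 'select * from testdb.testA;'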


2. Test Environment

192.168.30.115 OS:CentOS 6.4 x86_64  mysql-proxy.luojianlong.com

192.168.30.116 OS:CentOS 6.4 x86_64  master.luojianlong.com

192.168.30.117 OS:CentOS 6.4 x86_64  slave.luojianlong.com

MySQL version:mysql-5.6.13-linux-glibc2.5-x86_64

MySQL-Proxy version:mysql-proxy-0.8.2-1.el6.x86_64

Topology: client → mysql-proxy.luojianlong.com (192.168.30.115) → master.luojianlong.com (192.168.30.116, read/write) and slave.luojianlong.com (192.168.30.117, read-only). (The original topology diagram is omitted here.)



Since read/write splitting is built on top of master-slave replication, and the previous post already covered MySQL 5.6 replication, that setup is not repeated here; the existing master-slave architecture is used directly.
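Before continuing, it is worth a quick check on the slave that replication is actually running (standard MySQL commands, nothing specific to this setup):

[root@slave ~]# mysql -u root -p -e 'show slave status\G' | egrep 'Slave_IO_Running|Slave_SQL_Running'

Both Slave_IO_Running and Slave_SQL_Running should report Yes.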

First, install mysql-proxy on mysql-proxy.luojianlong.com, using the EPEL yum repository:

[root@mysql-proxy ~]# cat /etc/yum.repos.d/epel.repo
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
baseurl=http://mirrors.sohu.com/fedora-epel/6/x86_64/
enabled=1
gpgcheck=0
[root@mysql-proxy ~]# yum -y install mysql-proxy


Edit the service configuration file /etc/sysconfig/mysql-proxy so that it reads as follows:

# Options for mysql-proxy
ADMIN_USER="admin"
ADMIN_PASSWORD="admin"
ADMIN_LUA_SCRIPT="/usr/lib64/mysql-proxy/lua/admin.lua"
PROXY_USER="mysql-proxy"
PROXY_OPTIONS="--daemon --log-level=info --log-use-syslog --plugins=proxy --plugins=admin --proxy-backend-addresses=192.168.30.116:3306 --proxy-read-only-backend-addresses=192.168.30.117:3306 --proxy-lua-script=/usr/lib64/mysql-proxy/lua/rw-splitting.lua"

The proxy-backend-addresses and proxy-read-only-backend-addresses options can each be given multiple times, to specify several read/write or read-only servers.
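For example, if a second read-only slave were added later (the 192.168.30.118 address below is made up for illustration), PROXY_OPTIONS could simply repeat the option:

PROXY_OPTIONS="--daemon --log-level=info --log-use-syslog --plugins=proxy --plugins=admin --proxy-backend-addresses=192.168.30.116:3306 --proxy-read-only-backend-addresses=192.168.30.117:3306 --proxy-read-only-backend-addresses=192.168.30.118:3306 --proxy-lua-script=/usr/lib64/mysql-proxy/lua/rw-splitting.lua"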

The main mysql-proxy configuration options (a command-line example follows the list):

--proxy-address=host:port: address and port the proxy service listens on;

--admin-address=host:port: address and port the admin module listens on;

--proxy-backend-addresses=host:port: address and port of a backend (read/write) MySQL server;

--proxy-read-only-backend-addresses=host:port: address and port of a read-only backend MySQL server;

--proxy-lua-script=file_name: Lua script that implements the proxy logic;

--daemon: start mysql-proxy as a daemon;

--keepalive: try to restart mysql-proxy if it crashes;

--log-file=/path/to/log_file_name: log file name;

--log-level=level: log level;

--log-use-syslog: log via syslog;

--plugins=plugin: plugin to load when mysql-proxy starts;

--user=user_name: user to run the mysql-proxy process as;

--defaults-file=/path/to/conf_file_name: path of the configuration file to use; its section is labeled [mysql-proxy];

--proxy-skip-profiling: disable query profiling;

--pid-file=/path/to/pid_file_name: PID file name;
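The sysconfig file above is just a convenience wrapper; the same setup can be started by hand. A sketch of the equivalent command line (same addresses and script paths as in the configuration above):

[root@mysql-proxy ~]# mysql-proxy --daemon --log-level=info --log-use-syslog \
    --plugins=proxy --plugins=admin \
    --admin-username=admin --admin-password=admin \
    --admin-lua-script=/usr/lib64/mysql-proxy/lua/admin.lua \
    --proxy-backend-addresses=192.168.30.116:3306 \
    --proxy-read-only-backend-addresses=192.168.30.117:3306 \
    --proxy-lua-script=/usr/lib64/mysql-proxy/lua/rw-splitting.lua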


Here we use the read/write splitting Lua script shipped with mysql-proxy 0.8.3; copy it to /usr/lib64/mysql-proxy/lua/rw-splitting.lua (the path referenced in PROXY_OPTIONS above), and the service is ready to start.
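The copy itself is a one-liner. Assuming the mysql-proxy 0.8.3 source tarball has been unpacked under /root (the source path is an assumption; adjust it to wherever the tarball actually lives):

[root@mysql-proxy ~]# cp /root/mysql-proxy-0.8.3/lib/rw-splitting.lua /usr/lib64/mysql-proxy/lua/rw-splitting.lua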

[root@mysql-proxy ~]# vi /usr/lib64/mysql-proxy/lua/rw-splitting.lua
--[[ $%BEGINLICENSE%$
 Copyright (c) 2007, 2012, Oracle and/or its affiliates. All rights reserved.
 This program is free software; you can redistribute it and/or
 modify it under the terms of the GNU General Public License as
 published by the Free Software Foundation; version 2 of the
 License.
 This program is distributed in the hope that it will be useful,
 but WITHOUT ANY WARRANTY; without even the implied warranty of
 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 GNU General Public License for more details.
 You should have received a copy of the GNU General Public License
 along with this program; if not, write to the Free Software
 Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
 02110-1301  USA
 $%ENDLICENSE%$ --]]
---
-- a flexible statement based load balancer with connection pooling
--
-- * build a connection pool of min_idle_connections for each backend and maintain
--   its size
-- *
--
--
local commands    = require("proxy.commands")
local tokenizer   = require("proxy.tokenizer")
local lb          = require("proxy.balance")
local auto_config = require("proxy.auto-config")
--- config
--
-- connection pool
if not proxy.global.config.rwsplit then
    proxy.global.config.rwsplit = {
        min_idle_connections = 4,
        max_idle_connections = 8,
        is_debug = false
    }
end
---
-- read/write splitting sends all non-transactional SELECTs to the slaves
--
-- is_in_transaction tracks the state of the transactions
local is_in_transaction       = false
-- if this was a SELECT SQL_CALC_FOUND_ROWS ... stay on the same connections
local is_in_select_calc_found_rows = false
---
-- get a connection to a backend
--
-- as long as we don't have enough connections in the pool, create new connections
--
function connect_server()
    local is_debug = proxy.global.config.rwsplit.is_debug
    -- make sure that we connect to each backend at least once to
    -- keep the connections to the servers alive
    --
    -- on read_query we can switch the backends again to another backend
    if is_debug then
        print()
        print("[connect_server] " .. proxy.connection.client.src.name)
    end
    local rw_ndx = 0
    -- init all backends
    for i = 1, #proxy.global.backends do
        local s        = proxy.global.backends[i]
        local pool     = s.pool -- we don't have a username yet, try to find a connection which is idling
        local cur_idle = pool.users[""].cur_idle_connections
        pool.min_idle_connections = proxy.global.config.rwsplit.min_idle_connections
        pool.max_idle_connections = proxy.global.config.rwsplit.max_idle_connections
                                                                                                                                
        if is_debug then
            print("  [".. i .."].connected_clients = " .. s.connected_clients)
            print("  [".. i .."].pool.cur_idle     = " .. cur_idle)
            print("  [".. i .."].pool.max_idle     = " .. pool.max_idle_connections)
            print("  [".. i .."].pool.min_idle     = " .. pool.min_idle_connections)
            print("  [".. i .."].type = " .. s.type)
            print("  [".. i .."].state = " .. s.state)
        end
        -- prefer connections to the master
        if s.type == proxy.BACKEND_TYPE_RW and
           s.state ~= proxy.BACKEND_STATE_DOWN and
           cur_idle < pool.min_idle_connections then
            proxy.connection.backend_ndx = i
            break
        elseif s.type == proxy.BACKEND_TYPE_RO and
               s.state ~= proxy.BACKEND_STATE_DOWN and
               cur_idle < pool.min_idle_connections then
            proxy.connection.backend_ndx = i
            break
        elseif s.type == proxy.BACKEND_TYPE_RW and
               s.state ~= proxy.BACKEND_STATE_DOWN and
               rw_ndx == 0 then
            rw_ndx = i
        end
    end
    if proxy.connection.backend_ndx == 0 then
        if is_debug then
            print("  [" .. rw_ndx .. "] taking master as default")
        end
        proxy.connection.backend_ndx = rw_ndx
    end
    -- pick a random backend
    --
    -- we somehow have to skip DOWN backends
    -- ok, did we get a backend?
    if proxy.connection.server then
        if is_debug then
            print("  using pooled connection from: " .. proxy.connection.backend_ndx)
        end
        -- stay with it
        return proxy.PROXY_IGNORE_RESULT
    end
    if is_debug then
        print("  [" .. proxy.connection.backend_ndx .. "] idle-conns below min-idle")
    end
    -- open a new connection
end
---
-- put the successfully authed connection into the connection pool
--
-- @param auth the context information for the auth
--
-- auth.packet is the packet
function read_auth_result( auth )
    if is_debug then
        print("[read_auth_result] " .. proxy.connection.client.src.name)
    end
    if auth.packet:byte() == proxy.MYSQLD_PACKET_OK then
        -- auth was fine, disconnect from the server
        proxy.connection.backend_ndx = 0
    elseif auth.packet:byte() == proxy.MYSQLD_PACKET_EOF then
        -- we received either a
        --
        -- * MYSQLD_PACKET_ERR and the auth failed or
        -- * MYSQLD_PACKET_EOF which means a OLD PASSWORD (4.0) was sent
        print("(read_auth_result) ... not ok yet");
    elseif auth.packet:byte() == proxy.MYSQLD_PACKET_ERR then
        -- auth failed
    end
end
---
-- read/write splitting
function read_query( packet )
    local is_debug = proxy.global.config.rwsplit.is_debug
    local cmd      = commands.parse(packet)
    local c        = proxy.connection.client
    local r = auto_config.handle(cmd)
    if r then return r end
    local tokens
    local norm_query
    -- looks like we have to forward this statement to a backend
    if is_debug then
        print("[read_query] " .. proxy.connection.client.src.name)
        print("  current backend   = " .. proxy.connection.backend_ndx)
        print("  client default db = " .. c.default_db)
        print("  client username   = " .. c.username)
        if cmd.type == proxy.COM_QUERY then
            print("  query             = "        .. cmd.query)
        end
    end
    if cmd.type == proxy.COM_QUIT then
        -- don't send COM_QUIT to the backend. We manage the connection
        -- in all aspects.
        proxy.response = {
            type = proxy.MYSQLD_PACKET_OK,
        }
                                                                                                                            
        if is_debug then
            print("  (QUIT) current backend   = " .. proxy.connection.backend_ndx)
        end
        return proxy.PROXY_SEND_RESULT
    end
                                                                                                                            
    -- COM_BINLOG_DUMP packet can't be balanced
    --
    -- so we must send it always to the master
    if cmd.type == proxy.COM_BINLOG_DUMP then
        -- if we don't have a backend selected, let's pick the master
        --
        if proxy.connection.backend_ndx == 0 then
            proxy.connection.backend_ndx = lb.idle_failsafe_rw()
        end
        return
    end
    proxy.queries:append(1, packet, { resultset_is_needed = true })
    -- read/write splitting
    --
    -- send all non-transactional SELECTs to a slave
    if not is_in_transaction and
       cmd.type == proxy.COM_QUERY then
        tokens     = tokens or assert(tokenizer.tokenize(cmd.query))
        local stmt = tokenizer.first_stmt_token(tokens)
        if stmt.token_name == "TK_SQL_SELECT" then
            is_in_select_calc_found_rows = false
            local is_insert_id = false
            for i = 1, #tokens do
                local token = tokens[i]
                -- SQL_CALC_FOUND_ROWS + FOUND_ROWS() have to be executed
                -- on the same connection
                -- print("token: " .. token.token_name)
                -- print("  val: " .. token.text)
                                                                                                                                        
                if not is_in_select_calc_found_rows and token.token_name == "TK_SQL_SQL_CALC_FOUND_ROWS" then
                    is_in_select_calc_found_rows = true
                elseif not is_insert_id and token.token_name == "TK_LITERAL" then
                    local utext = token.text:upper()
                    if utext == "LAST_INSERT_ID" or
                       utext == "@@INSERT_ID" then
                        is_insert_id = true
                    end
                end
                -- we found the two special tokens, we can't find more
                if is_insert_id and is_in_select_calc_found_rows then
                    break
                end
            end
            -- if we ask for the last-insert-id we have to ask it on the original
            -- connection
            if not is_insert_id then
                local backend_ndx = lb.idle_ro()
                if backend_ndx > 0 then
                    proxy.connection.backend_ndx = backend_ndx
                end
            else
                print("   found a SELECT LAST_INSERT_ID(), staying on the same backend")
            end
        end
    end
    -- no backend selected yet, pick a master
    if proxy.connection.backend_ndx == 0 then
        -- we don't have a backend right now
        --
        -- let's pick a master as a good default
        --
        proxy.connection.backend_ndx = lb.idle_failsafe_rw()
    end
    -- by now we should have a backend
    --
    -- in case the master is down, we have to close the client connections
    -- otherwise we can go on
    if proxy.connection.backend_ndx == 0 then
        return proxy.PROXY_SEND_QUERY
    end
    local s = proxy.connection.server
    -- if client and server db don't match, adjust the server-side
    --
    -- skip it if we send a INIT_DB anyway
    if cmd.type ~= proxy.COM_INIT_DB and
       c.default_db and c.default_db ~= s.default_db then
        print("    server default db: " .. s.default_db)
        print("    client default db: " .. c.default_db)
        print("    syncronizing")
        proxy.queries:prepend(2, string.char(proxy.COM_INIT_DB) .. c.default_db, { resultset_is_needed = true })
    end
    -- send to master
    if is_debug then
        if proxy.connection.backend_ndx > 0 then
            local b = proxy.global.backends[proxy.connection.backend_ndx]
            print("  sending to backend : " .. b.dst.name);
            print("    is_slave         : " .. tostring(b.type == proxy.BACKEND_TYPE_RO));
            print("    server default db: " .. s.default_db)
            print("    server username  : " .. s.username)
        end
        print("    in_trans        : " .. tostring(is_in_transaction))
        print("    in_calc_found   : " .. tostring(is_in_select_calc_found_rows))
        print("    COM_QUERY       : " .. tostring(cmd.type == proxy.COM_QUERY))
    end
    return proxy.PROXY_SEND_QUERY
end
---
-- as long as we are in a transaction keep the connection
-- otherwise release it so another client can use it
function read_query_result( inj )
    local is_debug = proxy.global.config.rwsplit.is_debug
    local res      = assert(inj.resultset)
    local flags    = res.flags
    if inj.id ~= 1 then
        -- ignore the result of the USE <default_db>
        -- the DB might not exist on the backend, what to do?
        --
        if inj.id == 2 then
            -- the injected INIT_DB failed as the slave doesn't have this DB
            -- or doesn't have permissions to read from it
            if res.query_status == proxy.MYSQLD_PACKET_ERR then
                proxy.queries:reset()
                proxy.response = {
                    type = proxy.MYSQLD_PACKET_ERR,
                    errmsg = "can't change DB ".. proxy.connection.client.default_db ..
                        " to on slave " .. proxy.global.backends[proxy.connection.backend_ndx].dst.name
                }
                return proxy.PROXY_SEND_RESULT
            end
        end
        return proxy.PROXY_IGNORE_RESULT
    end
    is_in_transaction = flags.in_trans
    local have_last_insert_id = (res.insert_id and (res.insert_id > 0))
    if not is_in_transaction and
       not is_in_select_calc_found_rows and
       not have_last_insert_id then
        -- release the backend
        proxy.connection.backend_ndx = 0
    elseif is_debug then
        print("(read_query_result) staying on the same backend")
        print("    in_trans        : " .. tostring(is_in_transaction))
        print("    in_calc_found   : " .. tostring(is_in_select_calc_found_rows))
        print("    have_insert_id  : " .. tostring(have_last_insert_id))
    end
end
---
-- close the connections if we have enough connections in the pool
--
-- @return nil - close connection
--         IGNORE_RESULT - store connection in the pool
function disconnect_client()
    local is_debug = proxy.global.config.rwsplit.is_debug
    if is_debug then
        print("[disconnect_client] " .. proxy.connection.client.src.name)
    end
    -- make sure we are disconnection from the connection
    -- to move the connection into the pool
    proxy.connection.backend_ndx = 0
end


Start mysql-proxy:

[root@mysql-proxy ~]# service mysql-proxy start
Starting mysql-proxy:                                      [  OK  ]
[root@mysql-proxy ~]# ss -anptl
State       Recv-Q Send-Q                                                               Local Address:Port                                                                 Peer Address:Port
LISTEN      0      128                                                                             :::111                                                                            :::*      users:(("rpcbind",1627,11))
LISTEN      0      128                                                                              *:111                                                                             *:*      users:(("rpcbind",1627,8))
LISTEN      0      128                                                                             :::22                                                                             :::*      users:(("sshd",2079,4))
LISTEN      0      128                                                                              *:22                                                                              *:*      users:(("sshd",2079,3))
LISTEN      0      128                                                                      127.0.0.1:631                                                                             *:*      users:(("cupsd",1938,7))
LISTEN      0      128                                                                            ::1:631                                                                            :::*      users:(("cupsd",1938,6))
LISTEN      0      128                                                                              *:47000                                                                           *:*      users:(("rpc.statd",1837,9))
LISTEN      0      100                                                                            ::1:25                                                                             :::*      users:(("master",2174,13))
LISTEN      0      100                                                                      127.0.0.1:25                                                                              *:*      users:(("master",2174,12))
LISTEN      0      128                                                                             :::53369                                                                          :::*      users:(("rpc.statd",1837,11))
LISTEN      0      128                                                                              *:4040                                                                            *:*      users:(("mysql-proxy",19973,10))
LISTEN      0      128                                                                              *:4041                                                                            *:*      users:(("mysql-proxy",19973,11))
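mysql-proxy is now listening on its default ports: 4040 for the proxy itself and 4041 for the admin interface. The admin.lua script configured earlier provides a small SQL-like interface; a quick way to verify that both backends are registered (using the admin credentials from /etc/sysconfig/mysql-proxy) is something like:

[root@mysql-proxy ~]# mysql -u admin -padmin -h 192.168.30.115 -P4041 -e 'SELECT * FROM backends;'

The master should be listed with type rw and the slave with type ro (the state may show as unknown until the first client connection passes through).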


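One prerequisite worth mentioning: mysql-proxy passes the client's credentials straight through to the backends, so the account used below must exist on the MySQL servers and be allowed to connect from the proxy host. If it is not there yet, something along these lines on the master (it replicates to the slave) would cover the root/mypass account used in these tests; adjust to taste:

[root@master ~]# mysql -u root -p -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.30.%' IDENTIFIED BY 'mypass'; FLUSH PRIVILEGES;"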
Log in through mysql-proxy on port 4040:

[root@mysql-proxy ~]# mysql -u root -pmypass -h 192.168.30.115 -P4040
mysql> create database testdb;
Query OK, 1 row affected (0.01 sec)
mysql> use testdb
    server default db:
    client default db: testdb
    syncronizing
Database changed
mysql> create table testA(id int);
Query OK, 0 rows affected (0.43 sec)
# capture packets with tcpdump on the master and the slave, respectively
[root@master ~]# tcpdump -i eth0 -s0 -nn -A tcp dst port 3306 and dst host 192.168.30.116
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
[root@slave ~]# tcpdump -i eth0 -s0 -nn -A tcp dst port 3306 and dst host 192.168.30.117
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes

Batch-insert data and then run queries:

[root@mysql-proxy ~]# for i in {1..50}; do mysql -u root -pmypass -h 192.168.30.115 -P4040 -e 'use testdb; insert into testA values(5);'; done
[root@master ~]# tcpdump -i eth0 -s0 -nn -A tcp dst port 3306 and dst host 192.168.30.116
18:47:09.733244 IP 192.168.30.115.33567 > 192.168.30.116.3306: Flags [P.], seq 301:333, ack 155, win 131, options [nop,nop,TS val 21000991 ecr 23848994], length 32
E..Tqe@.@.
....s...t....[...Jb".....mh.....
.@s..k.".....insert into testA values(5)
18:47:09.749683 IP 192.168.30.115.33570 > 192.168.30.116.3306: Flags [.], ack 144, win 131, options [nop,nop,TS val 21001008 ecr 23848972], length 0
E..4..@.@..|...s...t."..r....u.h....>Z.....
.@s0.k..
18:47:09.775507 IP 192.168.30.117.38626 > 192.168.30.116.3306: Flags [.], ack 5185, win 499, options [nop,nop,TS val 22264873 ecr 23849027], length 0
E..4..@.@......u...t.........mm.....$......
.S.).k.C
18:47:09.778730 IP 192.168.30.115.33572 > 192.168.30.116.3306: Flags [P.], seq 269:301, ack 144, win 115, options [nop,nop,TS val 21001037 ecr 23848927], length 32
E..TA.@.@.:....s...t.$..+L.D\..n...s+0.....
.@sM.k.......root....f
.....J.....:c.P..
18:47:09.779117 IP 192.168.30.115.33572 > 192.168.30.116.3306: Flags [.], ack 155, win 115, options [nop,nop,TS val 21001037 ecr 23849040], length 0
E..4A.@.@.:....s...t.$..+L.d\..y...ss......
.@sM.k.P
18:47:09.783718 IP 192.168.30.115.33570 > 192.168.30.116.3306: Flags [P.], seq 290:301, ack 144, win 131, options [nop,nop,TS val 21001040 ecr 23848972], length 11
E..?..@.@..p...s...t."..r....u.h...........
[root@slave ~]# tcpdump -i eth0 -s0 -nn -A tcp dst port 3306 and dst host 192.168.30.117
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes

After the 50 inserts, only the master captured packets carrying the data changes; the slave captured none.
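As an extra cross-check (not part of the original captures), the 50 rows should still end up on the slave through replication; a query like this on the slave would confirm it:

[root@slave ~]# mysql -u root -p -e 'select count(*) from testdb.testA;'

The count should be 50 if replication is keeping up.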

Next, simulate 50 read requests and observe the result:
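The original command for the read test is not shown; a loop of the following shape, mirroring the insert loop above, matches the 'select * from testA' statements seen in the captures:

[root@mysql-proxy ~]# for i in {1..50}; do mysql -u root -pmypass -h 192.168.30.115 -P4040 -e 'use testdb; select * from testA;'; done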

[root@master ~]# tcpdump -i eth0 -s0 -nn -A tcp dst port 3306 and dst host 192.168.30.116
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
18:55:32.032655 IP 192.168.30.115.33567 > 192.168.30.116.3306: Flags [P.], seq 204:215, ack 100, win 131, options [nop,nop,TS val 21503291 ecr 24351256], length 11
E..?sf@.@.  ....s...t....[..{Jb-............
.H.;.s.......testdb
18:55:32.048136 IP 192.168.30.115.33572 > 192.168.30.116.3306: Flags [P.], seq 183:215, ack 100, win 115, options [nop,nop,TS val 21503307 ecr 24351259], length 32
E..TC.@.@.8....s...t.$..+M..\..n...s.f.....
.H.K.s.......root....f
.....J.....:c.P..
18:55:32.048802 IP 192.168.30.115.33572 > 192.168.30.116.3306: Flags [.], ack 111, win 115, options [nop,nop,TS val 21503308 ecr 24351310], length 0
E..4C.@.@.8....s...t.$..+M..\..y...s.T.....
.H.L.s.N
18:55:32.051857 IP 192.168.30.115.33570 > 192.168.30.116.3306: Flags [P.], seq 204:215, ack 100, win 131, options [nop,nop,TS val 21503310 ecr 24351274], length 11
E..?..@.@..l...s...t."..r..9.u.].....E.....
.H.N.s.*.....testdb
18:55:32.056674 IP 192.168.30.115.33574 > 192.168.30.116.3306: Flags [.], ack 100, win 115, options [nop,nop,TS val 21503314 ecr 24351277], length 0
E..4.n@.@......s...t.&....M2..x....s.......
.H.R.s.-
18:55:32.071103 IP 192.168.30.115.33574 > 192.168.30.116.3306: Flags [P.], seq 183:215, ack 100, win 115, options [nop,nop,TS val 21503329 ecr 24351277], length 32
E..T.o@.@......s...t.&....M2..x....s.......
.H.a.s.-.....root...Z.5o....|........#".
18:55:32.073535 IP 192.168.30.115.33567 > 192.168.30.116.3306: Flags [.], ack 111, win 131, options [nop,nop,TS val 21503332 ecr 24351294], length 0
E..4sg@.@.  ....s...t....[...Jb-............
.H.d.s.>
18:55:32.074669 IP 192.168.30.115.33576 > 192.168.30.116.3306: Flags [P.], seq 204:215, ack 100, win 115, options [nop,nop,TS val 21503333 ecr 24351291], length 11
E..?..@.@.u....s...t.(...‘..L=`M...s.......
.H.e.s.;.....testdb
18:55:32.075503 IP 192.168.30.115.33576 > 192.168.30.116.3306: Flags [.], ack 111, win 115, options [nop,nop,TS val 21503334 ecr 24351336], length 0
E..4..@.@.u....s...t.(...‘..L=`X...sB......
.H.f.s.h
18:55:32.090373 IP 192.168.30.115.33567 > 192.168.30.116.3306: Flags [P.], seq 215:247, ack 111, win 131, options [nop,nop,TS val 21503349 ecr 24351294], length 32
E..Tsh@.@......s...t....[...Jb-.....L......
.H.u.s.>.....root..S....9r.=*........ct.
18:55:32.091299 IP 192.168.30.115.33567 > 192.168.30.116.3306: Flags [.], ack 122, win 131, options [nop,nop,TS val 21503349 ecr 24351352], length 0
E..4si@.@.  ....s...t....[...Jb.......9.....
[root@slave ~]# tcpdump -i eth0 -s0 -nn -A tcp dst port 3306 and dst host 192.168.30.117
18:55:33.701885 IP 192.168.30.115.60055 > 192.168.30.117.3306: Flags [P.], seq 3683:3720, ack 36062, win 501, options [nop,nop,TS val 21504886 ecr 22768728], length 37
E..Y3E@.@.I....s...u....{.....UJ.....8.....
.H#v.[lX!....select @@version_comment limit 1
18:55:33.702932 IP 192.168.30.115.60053 > 192.168.30.117.3306: Flags [P.], seq 3693:3698, ack 36058, win 501, options [nop,nop,TS val 21504887 ecr 22768696], length 5
E..9..@.@._....s...u.....A...........{.....
.H#w.[l8.....
18:55:33.703341 IP 192.168.30.115.60053 > 192.168.30.117.3306: Flags [P.], seq 3698:3720, ack 36091, win 501, options [nop,nop,TS val 21504888 ecr 22768729], length 22
E..J..@.@._....s...u.....A.....7....\......
.H#x.[lY.....SELECT DATABASE()
18:55:33.704708 IP 192.168.30.115.60049 > 192.168.30.117.3306: Flags [P.], seq 3696:3720, ack 35492, win 501, options [nop,nop,TS val 21504889 ecr 22768711], length 24
E..L..@.@......s...u..........A......a.....
.H#y.[lG.....select * from testA
18:55:33.725988 IP 192.168.30.115.60057 > 192.168.30.117.3306: Flags [.], ack 36161, win 501, options [nop,nop,TS val 21504911 ecr 22768712], length 0
E..4i.@.@......s...u.....j.=........o......
.H#..[lH
18:55:33.726996 IP 192.168.30.115.60051 > 192.168.30.117.3306: Flags [.], ack 36161, win 501, options [nop,nop,TS val 21504912 ecr 22768714], length 0
E..4..@.@......s...u.....m...f.............
.H#..[lJ
18:55:33.740992 IP 192.168.30.115.60055 > 192.168.30.117.3306: Flags [.], ack 36161, win 501, options [nop,nop,TS val 21504926 ecr 22768728], length 0
E..43F@.@.I=...s...u....{.....U......_.....
.H#..[lX
18:55:33.742949 IP 192.168.30.115.60053 > 192.168.30.117.3306: Flags [.], ack 36161, win 501, options [nop,nop,TS val 21504928 ecr 22768730], length 0
E..4..@.@._....s...u.....A.....}...........
.H#..[lZ
18:55:33.744968 IP 192.168.30.115.60049 > 192.168.30.117.3306: Flags [.], ack 36161, win 501, options [nop,nop,TS val 21504930 ecr 22768731], length 0
E..4..@.@......s...u..........DE...........
.H#..[l[


The test shows that the read requests are spread across the two servers, while, as seen above, the data-changing statements only ever reach the master.


This completes the read/write splitting configuration for MySQL 5.6.13 based on MySQL-Proxy.

This post is from the "龍行天下" blog; please retain this attribution when reposting: http://luojianlong.blog.51cto.com/4412415/1393030
