Replication between a Redis master and its replicas takes two forms:
- full resynchronization (full resync)
- partial resynchronization (incremental sync)
Full resynchronization
Redis performs a full resync the first time a master-replica relationship is established. Observing the master and replica log files when the relationship is first created shows the following process.
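For reference, such a relationship can be created on the replica with REPLICAOF (a minimal sketch; on versions before 5.0 the command is SLAVEOF), using the addresses that appear in the logs below:

[root@82 etc]# redis-cli replicaof 10.0.0.81 6379
OK

Putting replicaof 10.0.0.81 6379 in redis.conf instead makes the setting survive a restart.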
master 81
1401:M 19 Sep 2021 23:49:47.566 * Replica 10.0.0.82:6379 asks for synchronization
1401:M 19 Sep 2021 23:49:47.566 * Full resync requested by replica 10.0.0.82:6379    full resync request received
1401:M 19 Sep 2021 23:49:47.566 * Starting BGSAVE for SYNC with target: disk
1401:M 19 Sep 2021 23:49:47.567 * Background saving started by pid 1466    a child process is forked to run bgsave in the background
1466:C 19 Sep 2021 23:49:47.568 * DB saved on disk
1466:C 19 Sep 2021 23:49:47.568 * RDB: 0 MB of memory used by copy-on-write
1401:M 19 Sep 2021 23:49:47.598 * Background saving terminated with success
1401:M 19 Sep 2021 23:49:47.598 * Synchronization with replica 10.0.0.82:6379 succeeded
slave 82
1269:S 19 Sep 2021 23:49:47.469 * Ready to accept connections
1269:S 19 Sep 2021 23:49:47.469 * Connecting to MASTER 10.0.0.81:6379
1269:S 19 Sep 2021 23:49:47.469 * MASTER <-> REPLICA sync started
1269:S 19 Sep 2021 23:49:47.470 * Non blocking connect for SYNC fired the event.
1269:S 19 Sep 2021 23:49:47.470 * Master replied to PING, replication can continue...    connectivity check with the master
1269:S 19 Sep 2021 23:49:47.472 * Partial resynchronization not possible (no cached master)    no partial resync possible
1269:S 19 Sep 2021 23:49:47.473 * Full resync from master: e0e82f16a47b8955d76937b317bf2f7925c9b856:0    full resync from the master: master_replid:offset
1269:S 19 Sep 2021 23:49:47.504 * MASTER <-> REPLICA sync: receiving 187 bytes from master
1269:S 19 Sep 2021 23:49:47.504 * MASTER <-> REPLICA sync: Flushing old data
1269:S 19 Sep 2021 23:49:47.504 * MASTER <-> REPLICA sync: Loading DB in memory
1269:S 19 Sep 2021 23:49:47.513 * MASTER <-> REPLICA sync: Finished with success
The full resync process is as follows:
- The slave sends a PSYNC command to the master
MASTER <-> REPLICA sync started
- The master replies to the slave with the sync metadata (runid and offset)
Full resync requested by replica 10.0.0.82:6379
- The slave saves the sync metadata
- The master's main Redis process forks a child to run bgsave (non-blocking), snapshotting the in-memory data to an RDB file; writes that arrive during the bgsave are kept in a buffer
- The master sends the RDB file
- The master sends the buffered writes
- The slave flushes its old data
Flushing old data
- The slave loads the RDB into memory
Loading DB in memory
- The slave applies the buffered writes
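To confirm the outcome of a full resync, the replication IDs and offsets on both sides can be compared (a quick check using the hosts from the logs above; master_replid and master_repl_offset are standard INFO replication fields):

[root@81 etc]# redis-cli info replication | grep -E 'master_replid:|master_repl_offset'
[root@82 etc]# redis-cli info replication | grep -E 'master_replid:|master_repl_offset'

Both sides should report the same master_replid, and the offsets should converge once the buffered writes are applied.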
Partial resynchronization
If the master-replica link is broken for network reasons and data is written to the master in the meantime, a partial resync takes place when the relationship is re-established.
[root@81 etc]# iptables -A INPUT -s 10.0.0.82 -j REJECT    break the master-replica link
127.0.0.1:6379> set haha hehe    write data to the master while the link is down
OK
[root@81 etc]# iptables -F    restore the link
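While the firewall rule is in place, the replica should flag the broken link (master_link_status is a standard INFO replication field; its value switches from up to down):

[root@82 etc]# redis-cli info replication | grep master_link_status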
The master and slave logs then look like this:
master 81
1401:M 20 Sep 2021 00:42:23.762 # Connection with replica 10.0.0.82:6379 lost.    link lost
1401:M 20 Sep 2021 00:44:46.245 * Replica 10.0.0.82:6379 asks for synchronization
1401:M 20 Sep 2021 00:44:46.246 * Partial resynchronization request from 10.0.0.82:6379 accepted. Sending 127 bytes of backlog starting from offset 4408.    the master accepts the replica's partial resync
slave 82
1269:S 20 Sep 2021 00:44:45.139 * MASTER <-> REPLICA sync started
1269:S 20 Sep 2021 00:44:45.139 # Error condition on socket for SYNC: Connection refused
1269:S 20 Sep 2021 00:44:46.154 * Connecting to MASTER 10.0.0.81:6379
1269:S 20 Sep 2021 00:44:46.154 * MASTER <-> REPLICA sync started
1269:S 20 Sep 2021 00:44:46.155 * Non blocking connect for SYNC fired the event.
1269:S 20 Sep 2021 00:44:46.156 * Master replied to PING, replication can continue...
1269:S 20 Sep 2021 00:44:46.158 * Trying a partial resynchronization (request e0e82f16a47b8955d76937b317bf2f7925c9b856:4408).    attempting a partial resync (requesting master_replid:offset)
1269:S 20 Sep 2021 00:44:46.159 * Successful partial resynchronization with master.
1269:S 20 Sep 2021 00:44:46.159 * MASTER <-> REPLICA sync: Master accepted a Partial Resynchronization.
The partial resync process is as follows:
- The connection is lost
- The slave reconnects to the master
- The slave sends a partial resync request carrying its saved replid and offset
- The master accepts and performs the partial sync, transmitting the backlog of writes accumulated while the link was down
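The backlog state and the full/partial sync counters can be inspected on the master; all of these are standard INFO fields:

[root@81 etc]# redis-cli info replication | grep repl_backlog
[root@81 etc]# redis-cli info stats | grep sync_

repl_backlog_histlen shows how many bytes of history are currently held, and comparing sync_partial_ok with sync_full tells whether reconnects are being served incrementally or falling back to full resyncs.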
Summary
repl-backlog-size configures the master's replication buffer, which records the write commands issued between one sync and the next. When the link is down (or during a bgsave), a buffer that is too small fills up and new writes flush out the older entries; master and replica then fall out of sync, which makes the resync fail or degenerate into a loop of full resyncs. The buffer size therefore has to be tuned to each environment.
repl-timeout x          replication timeout, in seconds
repl-backlog-size y     buffer size
repl-backlog-ttl z      release the buffer if no slave connects for z seconds
As a rule of thumb, the size should be greater than the master's write volume per second multiplied by the maximum time a replica is allowed to stay disconnected.
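As a worked example: assuming the master writes about 1 MB/s and replicas may stay disconnected for up to 60 seconds, the backlog should hold at least 60 MB; rounding up in redis.conf:

repl-backlog-size 64mb
repl-backlog-ttl 3600

The same value can be applied at runtime with CONFIG SET repl-backlog-size 67108864 (the runtime form takes a byte count).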
Because a full resync can generate heavy network and disk I/O, it should be avoided as much as possible:
- To keep a single master from being overloaded by many direct replicas, use cascading replication (master -> replica -> replica); see the sketch after this list
- When running multiple instances, avoid concentrating several masters on the same server
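A minimal cascade sketch, assuming a hypothetical third host 10.0.0.83 (not part of the logs above): point it at the existing replica rather than at the master:

[root@83 etc]# redis-cli replicaof 10.0.0.82 6379
OK

10.0.0.82 then serves the RDB transfer and backlog for 10.0.0.83, so 10.0.0.81 only ever feeds one replica directly.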