Redis Pitfalls, Part 2: Bulk-loading Large Amounts of Data into Redis with Spark, and a Master-Slave Sync Problem

Requirements

  1. A scheduled job refreshes a large amount of data into Redis once a day.
  2. The Redis deployment runs in sentinel mode (a minimal connection sketch follows this list).
  3. There is no hard requirement on master-slave sync latency.
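Because the deployment uses Redis Sentinel rather than Redis Cluster, a writer has to ask the sentinels for the current master before pushing data. Below is a minimal connection sketch with redis-py; the sentinel addresses and the service name mymaster are illustrative assumptions, not taken from the original setup.

```python
from redis.sentinel import Sentinel

# Hypothetical sentinel addresses and service name -- replace with your own.
sentinel = Sentinel([("sentinel-1", 26379), ("sentinel-2", 26379)], socket_timeout=0.5)

# Resolve and connect to the current master; all bulk writes go through it.
master = sentinel.master_for("mymaster", socket_timeout=0.5)
master.set("demo:key", "value")
```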

Symptoms

  1. Spark bulk-loads the data into Redis; the master node handles it fine, and the write completes in about 10 minutes (a sketch of this kind of write path follows the list).
  2. Network I/O load is heavy; monitoring flags the slave node as stopped, and the master reports a master-slave sync error.
  3. After the slave restarts, it loads the data from disk back into memory, and the Redis cluster recovers after ten-odd minutes.
  4. Any sufficiently large burst of writes stops the slave; the master itself is never affected.
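The post does not show the Spark job itself, so the following is only a sketch of the assumed write path: each partition opens its own connection and pushes all of its rows through one pipeline as fast as the network allows. Host, port, input path, and column names are placeholders.

```python
from pyspark.sql import SparkSession
import redis  # redis-py must be available on the executors

def write_partition(rows):
    # One connection per partition; host/port are placeholders.
    r = redis.Redis(host="redis-master", port=6379)
    pipe = r.pipeline(transaction=False)
    for row in rows:
        pipe.set(row["key"], row["value"])
    # A single flush per partition pushes writes as fast as the link allows,
    # which is exactly the burst that later strains replication.
    pipe.execute()

spark = SparkSession.builder.appName("redis-bulk-load").getOrCreate()
df = spark.read.parquet("/path/to/daily_snapshot")  # hypothetical input
df.select("key", "value").rdd.foreachPartition(write_partition)
```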

Diagnosis

  1. Logs
316495:C 19 Mar 16:18:38.002 * RDB: 9198 MB of memory used by copy-on-write
107122:S 19 Mar 16:18:42.962 * Background saving terminated with success
107122:S 19 Mar 16:19:43.098 * 100000 changes in 60 seconds. Saving...
107122:S 19 Mar 16:19:45.920 * Background saving started by pid 328571
107122:S 19 Mar 16:29:00.487 * MASTER <-> SLAVE sync: receiving 50693570874 bytes from master
107122:S 19 Mar 16:30:13.042 * MASTER <-> SLAVE sync: Flushing old data
328571:C 19 Mar 16:30:49.893 * DB saved on disk
328571:C 19 Mar 16:30:52.653 * RDB: 32184 MB of memory used by copy-on-write
107122:S 19 Mar 16:33:06.687 * MASTER <-> SLAVE sync: Loading DB in memory
107122:S 19 Mar 16:41:11.525 * MASTER <-> SLAVE sync: Finished with success
107122:S 19 Mar 16:41:11.525 * Background saving terminated with success
107122:S 19 Mar 16:55:36.373 * 10000 changes in 300 seconds. Saving...
107122:S 19 Mar 16:55:39.103 * Background saving started by pid 361997
361997:C 19 Mar 17:07:55.325 * DB saved on disk
361997:C 19 Mar 17:07:57.084 * RDB: 15119 MB of memory used by copy-on-write
107122:S 19 Mar 17:08:01.229 * Background saving terminated with success
107122:S 19 Mar 17:09:02.050 * 100000 changes in 60 seconds. Saving...
107122:S 19 Mar 17:09:05.179 * Background saving started by pid 376019
  2. Configuration
repl-timeout 600
  3. Analysis
    Putting together the Redis master-slave replication mechanism, the configuration, and the logs: a huge number of keys are updated in a short window while network I/O is already under pressure. The replication timeout is set to 10 minutes (repl-timeout 600), but the logs show the full sync actually took more than 12 minutes. The master therefore treats the replication as failed and drops the slave, which then restarts and reloads the data from local disk into memory.

  4. Solution

Reduce the write rate: split the load into several smaller batches instead of flushing everything into Redis at once (first sketch below).
Based on the actual business tolerance, raise the repl-timeout setting (currently 600) so the full sync has more time to complete (second sketch below).
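A minimal sketch of the first fix, again only an assumption of how the job might be restructured: the pipeline is flushed in small chunks with a short pause between them, so replication traffic keeps some network headroom. The chunk_size and pause_s values are illustrative and need tuning against your own network and sync budget.

```python
import time
import redis

def write_partition_throttled(rows, chunk_size=5000, pause_s=0.5):
    # Placeholders; in sentinel mode resolve the current master first.
    r = redis.Redis(host="redis-master", port=6379)
    pipe = r.pipeline(transaction=False)
    buffered = 0
    for row in rows:
        pipe.set(row["key"], row["value"])
        buffered += 1
        if buffered >= chunk_size:
            pipe.execute()        # flush a small batch
            buffered = 0
            time.sleep(pause_s)   # leave I/O headroom for master-slave sync
    if buffered:
        pipe.execute()            # flush the remaining tail
```

The foreachPartition call from the earlier sketch stays the same, just pointed at write_partition_throttled.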
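For the second fix, repl-timeout can be raised in redis.conf or adjusted at runtime. The sketch below uses redis-py's CONFIG commands; the value of 1200 s is only an example, sized from the roughly 12-minute full sync seen in the logs.

```python
import redis

r = redis.Redis(host="redis-master", port=6379)   # placeholder address
print(r.config_get("repl-timeout"))                # current value, e.g. {'repl-timeout': '600'}
r.config_set("repl-timeout", 1200)                 # example: 20 minutes; apply on master and slaves
```

Changes made with CONFIG SET are not persisted across a restart, so the same value should also be written back to redis.conf (or saved with CONFIG REWRITE).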
