For business reasons, we use Redis at work to store tokens for user login authentication and permission checks. The original setup was a single standalone Redis instance, so if that one node went down, the entire set of distributed microservices became unavailable. We therefore evaluated high-availability options for Redis and, after weighing the availability, concurrency, and complexity of each approach, settled on Redis Sentinel.
1. How Redis Sentinel works
I'll only touch on this briefly here; this post mainly documents the application process and does not go deep into the theory. In short, Sentinel processes monitor the master and its replicas; once enough Sentinels agree that the master is down, they elect a leader among themselves, promote one of the replicas to master, and reconfigure the remaining nodes (and connected clients) to follow it.
2. Setting up Redis Sentinel with one master and two replicas
2.1 Environment preparation
On a Linux server, download the Redis package:
a. Download:
wget http://download.redis.io/releases/redis-5.0.5.tar.gz
b. Extract, build, and install:
tar -xvf redis-5.0.5.tar.gz
cd redis-5.0.5
make
make install
2.2 Setting up the Redis servers
Master Redis configuration file:
# redis_master.conf
protected-mode no
port 6380
requirepass "${redis_password}"
daemonize yes
# Create this directory first
dir "/data/redis/redis_master"
logfile "/data/redis/redis_master.log"
masterauth "${redis_password}"
client-output-buffer-limit normal 0 0 0
# Generated by CONFIG REWRITE
client-output-buffer-limit replica 512mb 128mb 120
replica-read-only no
Configuration files for the two replicas:
# redis_slave_one.conf
protected-mode no
port 6381
requirepass "${redis_password}"
daemonize yes
# Create this directory first
dir "/data/redis/redis_slave_one"
logfile "/data/redis/redis_slave_one.log"
replicaof ${ip} 6380
masterauth "${redis_password}"
client-output-buffer-limit replica 512mb 128mb 120
replica-read-only no

--------------------------------------------

# redis_slave_two.conf
protected-mode no
port 6382
requirepass "${redis_password}"
daemonize yes
# Create this directory first
dir "/data/redis/redis_slave_two"
logfile "/data/redis/redis_slave_two.log"
replicaof ${ip} 6380
masterauth "${redis_password}"
client-output-buffer-limit normal 0 0 0
# Generated by CONFIG REWRITE
client-output-buffer-limit replica 512mb 128mb 120
replica-read-only no
Start the master Redis:
/usr/local/redis-5.0.5/src/redis-server /etc/redis/redis_master.conf
Start the two replicas:
/usr/local/redis-5.0.5/src/redis-server /etc/redis/redis_slave_one.conf
/usr/local/redis-5.0.5/src/redis-server /etc/redis/redis_slave_two.conf
2.3 Setting up Redis Sentinel
Sentinel configuration file (for the Sentinel running alongside the master):
# redis_master_sentinel.conf - Sentinel configuration for the master node
port 26380
# Create this directory first
dir "/data/redis/master_sentinel"
# Sentinel log file
logfile "/data/redis/sentinel/master_sentinel.log"
# Run Sentinel as a daemon (default is no; set to yes to run in the background)
daemonize yes
# Format: sentinel monitor <master_name> <ip> <port> <quorum>
# Monitor the master named mymaster (the name is arbitrary) at ${ip}:6380.
# The trailing 2 is the quorum: how many Sentinels must consider the master down before it
# is marked objectively down and a failover can start. It must not exceed the number of
# Sentinels in the group (an odd number of Sentinels is recommended).
sentinel monitor mymaster ${ip} 6380 2
# Sentinel PINGs the master to check that it is alive. If the master does not reply (or
# replies with an error) within this window, this Sentinel subjectively considers it down
# (SDOWN). The value is in milliseconds; the default is 30 seconds.
sentinel down-after-milliseconds mymaster 15000
# Failover timeout: if a failover that has started makes no progress within this time, the
# current Sentinel considers it failed. Default is 180 seconds (3 minutes).
sentinel failover-timeout mymaster 120000
# During a failover, this limits how many replicas resynchronize with the new master at the
# same time. A smaller number makes the failover take longer, but a larger number means more
# replicas are briefly unable to serve requests while they resync. Setting it to 1 ensures
# only one replica is out of service at a time.
sentinel parallel-syncs mymaster 1
# Disallow changing scripts at runtime via SENTINEL SET (safety measure)
sentinel deny-scripts-reconfig yes
# Password Sentinel uses to authenticate with the master and its replicas
sentinel auth-pass mymaster ${redis_password}
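The configuration files for the other two Sentinels are not shown above; they follow the same pattern. Here is a minimal sketch, assuming the remaining Sentinel ports are 26381 and 26382 (the nodes list in the Spring configuration later uses 26380-26382) and a directory layout like the one above; adjust paths to your environment:

# redis_slave_one_sentinel.conf (sketch; the port and paths are assumptions)
port 26381
dir "/data/redis/slave_one_sentinel"
logfile "/data/redis/sentinel/slave_one_sentinel.log"
daemonize yes
# All three Sentinels monitor the same master and discover each other automatically
sentinel monitor mymaster ${ip} 6380 2
sentinel down-after-milliseconds mymaster 15000
sentinel failover-timeout mymaster 120000
sentinel parallel-syncs mymaster 1
sentinel deny-scripts-reconfig yes
sentinel auth-pass mymaster ${redis_password}
# redis_slave_two_sentinel.conf is identical except for port 26382 and its own dir/logfile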
Start the Sentinel for the master Redis:
/usr/local/redis-5.0.5/src/redis-sentinel /etc/redis/redis_master_sentinel.conf
Alternatively:
/usr/local/redis-5.0.5/src/redis-server /etc/redis/redis_master_sentinel.conf --sentinel
Start the Sentinels for the two replicas:
/usr/local/redis-5.0.5/src/redis-sentinel /etc/redis/redis_slave_one_sentinel.conf
/usr/local/redis-5.0.5/src/redis-sentinel /etc/redis/redis_slave_two_sentinel.conf
Now check the results: first, whether all the processes started correctly; then, whether replication shows one master with two replicas; and finally, whether the Sentinels are monitoring the Redis instances.
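One way to run these three checks from the command line, assuming the ports and password configured above:

# 1. Are all the Redis and Sentinel processes up?
ps -ef | grep redis

# 2. Does the master report two connected replicas?
redis-cli -p 6380 -a ${redis_password} info replication

# 3. Does the Sentinel know about the master, its replicas, and the other Sentinels?
redis-cli -p 26380 sentinel get-master-addr-by-name mymaster
redis-cli -p 26380 sentinel slaves mymaster
redis-cli -p 26380 sentinel sentinels mymaster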
OK, everything looks good; on to the next step.
2.4 Testing master-replica replication
Add a key on the master.
Then check whether the replicas have the record.
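A command-line version of this test, using a hypothetical key test:repl as an example:

# Write a key on the master (6380)
redis-cli -p 6380 -a ${redis_password} set test:repl "hello"

# Read it back from each replica (6381 and 6382)
redis-cli -p 6381 -a ${redis_password} get test:repl
redis-cli -p 6382 -a ${redis_password} get test:repl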
OK, no problems.
The Redis client I use is RedisInsight; if you need a GUI client, you can download it from the official Redis website.
2.5 Testing automatic master/replica failover
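One way to exercise the failover, sketched below: stop the current master, wait longer than down-after-milliseconds (15 seconds here), and then ask any Sentinel which node it now considers the master.

# Stop the current master (6380)
redis-cli -p 6380 -a ${redis_password} shutdown nosave

# After the down-after-milliseconds window, ask a Sentinel for the current master;
# it should now report one of the former replicas (6381 or 6382)
redis-cli -p 26380 sentinel get-master-addr-by-name mymaster

# Confirm the promoted node's role on the reported port, e.g. 6381
redis-cli -p 6381 -a ${redis_password} info replication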
Once the verification passes and everything looks fine, continue to the next step (remember to restart the 6380 instance afterwards; Sentinel will reconfigure it as a replica of the new master).
3. Spring Boot integration
<!-- The versions here match my Spring Boot version; if you run into problems, check whether they match yours -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
    <version>2.2.8.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
    <version>2.6.1</version>
</dependency>
spring:
  data:
    redis:
      repositories:
        enabled: false
  redis:
    # Sentinel mode configuration - start
    password: ${password}
    sentinel:
      master: mymaster
      nodes: ${ip}:26382,${ip}:26380,${ip}:26381   # list of Sentinel ip:port pairs
    # Sentinel mode configuration - end
    # Standalone mode configuration - start
    # host: ${ip}
    # password: ${password}
    # port: 6379
    # Standalone mode configuration - end
    timeout: 5000
    database: 0
    lettuce:
      pool:
        max-active: 8
        max-wait: -1
        max-idle: 8
        min-idle: 0

myconfig:
  time-to-live: 86400
package xx;

import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.interceptor.KeyGenerator;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.*;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;
import org.springframework.data.redis.serializer.RedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

import javax.annotation.Resource;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.time.Duration;

/**
 * Cache configuration class
 * @author xxx
 * @Date 20xx-xx-xx xx:xx
 **/
@Configuration
@EnableCaching
public class RedisConfig extends CachingConfigurerSupport {

    private static final Logger logger = LoggerFactory.getLogger(RedisConfig.class);

    @Resource
    private RedisConnectionFactory factory;

    /**
     * Default TTL: two hours
     */
    private static final long DURATION_SECOND_7200 = 7200L;
    private static final long DURATION_SECOND_300 = 300L;

    @Override
    @Bean
    public KeyGenerator keyGenerator() {
        return new KeyGenerator() {
            @SuppressWarnings("rawtypes")
            @Override
            public Object generate(Object target, Method method, Object... params) {
                // RedisAutoCacheValue.AUTO_KEY_PREFIX is a project-specific key prefix
                StringBuilder sb = new StringBuilder(RedisAutoCacheValue.AUTO_KEY_PREFIX);
                if (target instanceof Proxy) {
                    // JDK dynamic proxy: use the first implemented interface name
                    Class[] i = target.getClass().getInterfaces();
                    if (i != null && i.length > 0) {
                        sb.append(i[0].getName());
                    } else {
                        sb.append(target.getClass().getName());
                    }
                } else if (target instanceof org.springframework.cglib.proxy.Factory) {
                    // CGLIB proxy: strip everything after "$$"
                    String className = target.getClass().getName();
                    sb.append(className, 0, className.indexOf("$$"));
                } else {
                    sb.append(target.getClass().getName());
                }
                sb.append(".");
                sb.append(method.getName());
                sb.append("_");
                for (Object obj : params) {
                    if (obj != null) {
                        Class cls = obj.getClass();
                        if (cls.isArray()) {
                            // Primitive and String arrays: append each element
                            logger.info("keyGenerator : {}", cls.getComponentType());
                            if (cls == long[].class) {
                                for (long o : (long[]) obj) {
                                    sb.append(o);
                                }
                            } else if (cls == int[].class) {
                                for (int o : (int[]) obj) {
                                    sb.append(o);
                                }
                            } else if (cls == float[].class) {
                                for (float o : (float[]) obj) {
                                    sb.append(o);
                                }
                            } else if (cls == double[].class) {
                                for (double o : (double[]) obj) {
                                    sb.append(o);
                                }
                            } else if (cls == String[].class) {
                                for (String o : (String[]) obj) {
                                    sb.append(o);
                                }
                            } else {
                                sb.append(obj.toString());
                            }
                            // TODO handle other array types
                        } else {
                            sb.append(obj.toString());
                        }
                    } else {
                        sb.append("null");
                    }
                    sb.append("_");
                }
                sb.delete(sb.length() - 1, sb.length());
                return sb.toString();
            }
        };
    }

    /**
     * Default cache manager, for longer-lived entries (2 hours)
     * @param redisTemplate
     * @return
     */
    @SuppressWarnings({"rawtypes", "Duplicates"})
    @Primary
    @Bean
    public CacheManager cacheManager(RedisTemplate redisTemplate) {
        RedisCacheConfiguration config = RedisCacheConfiguration
                .defaultCacheConfig()
                // entry TTL
                .entryTtl(Duration.ofSeconds(DURATION_SECOND_7200))
                // do not cache null values (disabled here)
                //.disableCachingNullValues()
                // use the same serializers as the template to avoid hard-to-diagnose issues
                .serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(this.keySerializer()))
                .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(this.valueSerializer()));
        return RedisCacheManager.builder(redisTemplate.getConnectionFactory())
                .cacheDefaults(config)
                .transactionAware()
                .build();
    }

    /**
     * Cache manager for short-lived entries (5 minutes)
     * @param redisTemplate
     * @return
     */
    @SuppressWarnings({"rawtypes", "Duplicates"})
    @Bean
    public CacheManager cacheManagerIn5Minutes(RedisTemplate redisTemplate) {
        RedisCacheConfiguration config = RedisCacheConfiguration
                .defaultCacheConfig()
                // entry TTL
                .entryTtl(Duration.ofSeconds(DURATION_SECOND_300))
                // do not cache null values (disabled here)
                //.disableCachingNullValues()
                // use the same serializers as the template to avoid hard-to-diagnose issues
                .serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(this.keySerializer()))
                .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(this.valueSerializer()));
        return RedisCacheManager.builder(redisTemplate.getConnectionFactory())
                .cacheDefaults(config)
                .transactionAware()
                .build();
    }

    /*
    @SuppressWarnings({"rawtypes", "unchecked"})
    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        //factory = connectionFactory(3, "172.20.11.134", 6379, "123456", 2000, 100, 1, 1000, 2000);
        StringRedisTemplate template = new StringRedisTemplate(factory);
        template.setKeySerializer(keySerializer());
        template.setHashKeySerializer(keySerializer());
        template.setValueSerializer(valueSerializer());
        template.setHashValueSerializer(valueSerializer());
        template.afterPropertiesSet();
        return template;
    }
    */

    private RedisSerializer<String> keySerializer() {
        return new StringRedisSerializer();
    }

    private RedisSerializer<Object> valueSerializer() {
        // Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
        // ObjectMapper om = new ObjectMapper();
        // om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        // om.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
        // // skip properties that do not match on deserialization
        // om.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
        // jackson2JsonRedisSerializer.setObjectMapper(om);
        // return jackson2JsonRedisSerializer;
        ObjectMapper om = new ObjectMapper();
        om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        om.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
        // skip properties that do not match on deserialization
        om.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
        return new GenericJackson2JsonRedisSerializer(om);
    }

    // ========== Annotation-based: the configuration above adapts the @Cacheable annotations, with custom cache managers and TTLs ==========
    // ========== Manual: the configuration below adapts the original hand-written Redis usage, for manually adding cache entries ==========

    @Bean
    public RedisTemplate<String, Object> redisTemplate() {
        RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
        redisTemplate.setKeySerializer(new StringRedisSerializer());
        redisTemplate.setHashKeySerializer(new StringRedisSerializer());
        redisTemplate.setHashValueSerializer(new StringRedisSerializer());
        redisTemplate.setValueSerializer(new StringRedisSerializer());
        redisTemplate.setConnectionFactory(factory);
        return redisTemplate;
    }

    @Bean
    public HashOperations<String, String, Object> hashOperations(RedisTemplate<String, Object> redisTemplate) {
        return redisTemplate.opsForHash();
    }

    @Bean
    public ValueOperations<String, Object> valueOperations(RedisTemplate<String, Object> redisTemplate) {
        return redisTemplate.opsForValue();
    }

    @Bean
    public ListOperations<String, Object> listOperations(RedisTemplate<String, Object> redisTemplate) {
        return redisTemplate.opsForList();
    }

    @Bean
    public SetOperations<String, Object> setOperations(RedisTemplate<String, Object> redisTemplate) {
        return redisTemplate.opsForSet();
    }

    @Bean
    public ZSetOperations<String, Object> zSetOperations(RedisTemplate<String, Object> redisTemplate) {
        return redisTemplate.opsForZSet();
    }
}
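The test endpoint referenced below is not included in this post; here is a minimal sketch of what it could look like, exercising the custom KeyGenerator (picked up as the default via CachingConfigurerSupport) and the five-minute cache manager defined above. The class names, request path, and cache name are all made up for illustration.

package xx;

import java.util.concurrent.ThreadLocalRandom;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

/**
 * Hypothetical service used to verify the cache configuration above.
 * The first call generates a random number and stores it in Redis;
 * later calls with the same parameter should return the cached value.
 */
@Service
class CacheDemoService {

    // cacheManagerIn5Minutes gives the entry a 5-minute TTL;
    // the custom KeyGenerator from RedisConfig builds the cache key
    @Cacheable(cacheNames = "cacheDemo", cacheManager = "cacheManagerIn5Minutes")
    public int randomFor(String name) {
        // Only executed on a cache miss; otherwise the value comes from Redis
        return ThreadLocalRandom.current().nextInt(10_000);
    }
}

/**
 * Hypothetical endpoint that exposes the cached method for testing.
 */
@RestController
class CacheDemoController {

    private final CacheDemoService cacheDemoService;

    CacheDemoController(CacheDemoService cacheDemoService) {
        this.cacheDemoService = cacheDemoService;
    }

    @GetMapping("/cache/demo")
    public int demo(@RequestParam String name) {
        return cacheDemoService.randomFor(name);
    }
}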
Then call the endpoint and check whether the value shows up in Redis; call it again and compare the two responses. If they are identical, the second call was served from the cache rather than generating a new random number.
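Assuming the hypothetical /cache/demo endpoint sketched above and the default server port 8080, the check could look like this:

# Call the endpoint twice; both calls should return the same number
curl "http://localhost:8080/cache/demo?name=jack"
curl "http://localhost:8080/cache/demo?name=jack"

# The auto-generated cache key should now exist in Redis (query the current master)
redis-cli -p 6380 -a ${redis_password} --scan --pattern "cacheDemo*"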
Over!!!