21.3 Introduction to Redis
What is Redis
Like Memcached, Redis is a kind of NoSQL store: a high-performance, memory-based key-value (k-v) database.

Redis supports the string, list, set, zset (sorted set) and hash data types. These types support push/pop, add/remove, intersection, union and difference as well as many richer operations, and all of these operations are atomic.

Like Memcached, Redis keeps all data cached in memory for performance. The difference is that Redis periodically writes updated data to disk, or appends each modification to a log file, and on top of this it implements master-slave replication. Redis largely makes up for the shortcomings of key/value stores such as Memcached, and in some scenarios it is a good complement to relational databases. Client libraries are available for Java, C/C++, C#, PHP, JavaScript, Perl, Objective-C, Python, Ruby, Erlang and more, so it is very convenient to use.

Data can be replicated from a master to any number of slaves, and a slave can itself act as the master of further slaves, so Redis can form a single-rooted replication tree. Slaves can be configured to accept writes, which permits intentional (or unintentional) divergence between instances. Because publish/subscribe is fully implemented, a client of any slave in the replication tree can subscribe to a channel and receive the complete feed of messages published to the master. Replication is useful for read scalability and data redundancy.

In addition, Redis uses two file formats: a full snapshot (RDB) and an append-only log of write requests (AOF). The full-snapshot format writes the in-memory data to disk so it can be loaded the next time the file is read; the append-only format turns the changes to the in-memory data into a sequence of write operations that are replayed when the file is read back, much like MySQL's binlog. Redis storage therefore consists of three parts: the in-memory data, the on-disk snapshot and the log file.
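
The redis.conf directives that control these two files are shown below as a minimal sketch; the file names are the defaults, and the directory is the one used later in this article.

dbfilename dump.rdb #name of the RDB snapshot file
dir /data/redis #directory where the RDB and AOF files are kept
appendonly yes #enable the AOF log
appendfilename "appendonly.aof" #name of the AOF file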

Installing Redis
Redis website: https://redis.io/ ; the latest version at the time of writing is 4.0.11.

Download Redis:
# cd /usr/local/src/

# wget http://download.redis.io/releases/redis-4.0.11.tar.gz

# tar zxf redis-4.0.11.tar.gz

# cd redis-4.0.11

# make && make install

# echo $?
0

# redis- #press Tab twice; if the commands below appear, the installation succeeded
redis-benchmark redis-check-rdb redis-sentinel
redis-check-aof redis-cli redis-server

Edit the configuration:
# cp redis.conf /etc/redis.conf

# vim /etc/redis.conf #make the changes below
daemonize yes #run as a daemon in the background
logfile "/var/log/redis.log" #path of the log file
dir /data/redis #directory where the RDB file is stored
appendonly yes #change appendonly no to yes to enable AOF

# mkdir /data/redis

Start Redis:
# redis-server /etc/redis.conf

# ps aux |grep redis
root 11920 0.2 0.2 145308 2188 ? Ssl 22:51 0:00 redis-server 127.0.0.1:6379

# less /var/log/redis.log #the log suggests changing two kernel parameters; leaving them alone has no major impact
11920:M 20 Jul 22:51:40.532 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
11920:M 20 Jul 22:51:40.533 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.

# vim /etc/rc.local #adjust the kernel parameters and apply them at every boot
sysctl vm.overcommit_memory=1
echo never > /sys/kernel/mm/transparent_hugepage/enabled
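
The overcommit warning also suggests making that setting persistent through /etc/sysctl.conf; a minimal sketch of that route:

# echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
# sysctl -p #reload the sysctl settings so the change takes effect immediately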

Redis persistence
Redis provides two persistence mechanisms: RDB (Redis DataBase) and AOF (Append Only File).

RDB, in short, takes snapshots of the data held in Redis at different points in time and stores them on disk or other media.

AOF instead records every write command Redis has executed; the next time Redis starts, replaying these commands from first to last restores the data.

RDB and AOF can also be used at the same time. In that case, when Redis restarts it prefers the AOF file for recovery, because AOF recovery is more complete. If there is no need for persistence at all, both RDB and AOF can be switched off entirely, and Redis becomes a purely in-memory database, just like Memcached.
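
The individual knobs are shown below; as a quick preview, the pure in-memory setup amounts to this minimal sketch in /etc/redis.conf:

save ""
appendonly no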

# cat /etc/redis.conf

save 900 1 #take a snapshot if at least 1 change happened within 900s
save 300 10 #take a snapshot if at least 10 changes happened within 300s
save 60 10000 #take a snapshot if at least 10000 changes happened within 60s

These parameters decide when RDB writes the data to disk: a snapshot is taken as soon as any one of the three conditions is met. Keeping the defaults is fine.

Disabling RDB:
# cat /etc/redis.conf #make the changes below

save ""
#save 900 1
#save 300 10
#save 60 10000

AOF offers three fsync policies:
# cat /etc/redis.conf

# appendfsync always #fsync after every write; safest but slowest
appendfsync everysec #fsync once per second (the default)
# appendfsync no #let the operating system decide when to flush; fastest but not safe
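
The active policy can also be inspected and changed at runtime without restarting; a minimal sketch, assuming the default everysec is in effect:

127.0.0.1:6379> CONFIG GET appendfsync
1) "appendfsync"
2) "everysec"

127.0.0.1:6379> CONFIG SET appendfsync always
OK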

Redis data types
As mentioned earlier, Redis has five data types: string, list, set, zset and hash.

string
string is the simplest type, the same kind of key-value pair as in Memcached: one key maps to one value. The supported operations are similar to Memcached's, but richer, and the value can hold binary objects.

# redis-cli #enter the redis command line

127.0.0.1:6379> set mykey "123"
OK

127.0.0.1:6379> get mykey
"123"
127.0.0.1:6379> MSET k1 1 k2 2 k3 3 #commands can be auto-completed with the Tab key
OK

127.0.0.1:6379> mget k1 k2 k3 mykey
1) "1"
2) "2"
3) "3"
4) "123"

list
list is a linked-list structure whose main operations are push, pop and fetching a range of values. In these operations the key can be thought of as the name of the list.

Using the list structure it is easy to implement features such as a latest-news timeline (for example Sina Weibo's TimeLine). Another application of list is a message queue: producers push tasks onto the list, and worker threads then pop tasks off the list and execute them.
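
A minimal sketch of the queue idea in redis-cli, assuming a hypothetical queue key named tasks (BRPOP blocks until an element is available; 0 means wait forever):

127.0.0.1:6379> LPUSH tasks "job1" #producer enqueues a task
(integer) 1

127.0.0.1:6379> BRPOP tasks 0 #worker blocks until a task arrives, then pops it
1) "tasks"
2) "job1"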

127.0.0.1:6379> LPUSH list1 "lzx" #LPUSH pushes a value onto the head (left end) of the list
(integer) 1

127.0.0.1:6379> LPUSH list1 "123"
(integer) 2

127.0.0.1:6379> LPUSH list1 "aaa"
(integer) 3

127.0.0.1:6379> LRANGE list1 0 -1 #0 is the first position, -1 the last
1) "aaa"
2) "123"
3) "lzx" #the value pushed first ends up at the back

127.0.0.1:6379> LPOP list1 #LPOP pops the value at the front
"aaa"

127.0.0.1:6379> LRANGE list1 0 -1
1) "123"
2) "lzx" #the popped value is no longer in list1

set
set is a collection, much like a set in mathematics. Set operations include adding and removing elements and taking the intersection, union and difference of several sets. The key can be thought of as the name of the set.

For example, in a microblogging application you can keep everyone a user follows in one set and all of their followers in another. Because Redis conveniently provides intersection, union and difference operations on sets, features such as mutual follows, shared interests or second-degree friends are easy to implement; for all of these set operations you can also choose, through different commands, whether to return the result to the client or store it in a new set.
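
A minimal sketch of the mutual-follows idea, using the hypothetical keys following:userA and following:userB:

127.0.0.1:6379> SADD following:userA tom jerry lucy
(integer) 3

127.0.0.1:6379> SADD following:userB jerry lucy mike
(integer) 3

127.0.0.1:6379> SINTERSTORE common:AB following:userA following:userB #store the people both users follow into a new set
(integer) 2

127.0.0.1:6379> SMEMBERS common:AB
1) "jerry"
2) "lucy"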

127.0.0.1:6379> SADD set1 a
(integer) 1

127.0.0.1:6379> SADD set1 b
(integer) 1

127.0.0.1:6379> SADD set1 c
(integer) 1

127.0.0.1:6379> SMEMBERS set1
1) "a"
2) "b"
3) "c"

127.0.0.1:6379> SADD set2 1
(integer) 1

127.0.0.1:6379> SADD set2 2
(integer) 1

127.0.0.1:6379> SADD set2 c
(integer) 1

127.0.0.1:6379> SMEMBERS set2
1) "2"
2) "c"
3) "1"

127.0.0.1:6379> SUNION set1 set2 #SUNION returns the union
1) "2"
2) "b"
3) "c"
4) "a"
5) "1"

127.0.0.1:6379> SINTER set1 set2 #SINTER returns the intersection
1) "c"

127.0.0.1:6379> SDIFF set1 set2 #SDIFF returns the difference relative to set1, i.e. set1 minus the common elements
1) "a"
2) "b"

127.0.0.1:6379> SREM set1 b #SREM removes an element
(integer) 1

127.0.0.1:6379> SMEMBERS set1
1) "a"
2) "c"

zset
zset is a sorted set. Compared with set it adds a weight parameter, the score, so that the elements of the set are kept ordered by score.

For example, a sorted set holding the grades of a whole class could use student IDs as its values and exam scores as the scores; the data is then naturally kept sorted as it is inserted.
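
A minimal leaderboard-style sketch of that idea, using a hypothetical key named scores and made-up student IDs:

127.0.0.1:6379> ZADD scores 78 stu001 92 stu002 85 stu003
(integer) 3

127.0.0.1:6379> ZREVRANGE scores 0 -1 WITHSCORES #highest score first, scores included
1) "stu002"
2) "92"
3) "stu003"
4) "85"
5) "stu001"
6) "78"

127.0.0.1:6379> ZSCORE scores stu003 #look up a single student's score
"85"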

127.0.0.1:6379> ZADD set3 12 abc
(integer) 1

127.0.0.1:6379> ZADD set3 2 "cde 123"
(integer) 1

127.0.0.1:6379> ZADD set3 24 "lzx"
(integer) 1

127.0.0.1:6379> ZADD set3 4 "linux"
(integer) 1

127.0.0.1:6379> ZRANGE set3 0 -1 #list set3 in ascending score order; 0 is the first position, -1 the last
1) "cde 123"
2) "linux"
3) "abc"
4) "lzx"

127.0.0.1:6379> ZREVRANGE set3 0 -1 #list set3 in descending score order
1) "lzx"
2) "abc"
3) "linux"
4) "cde 123"

hash
hash mirrors a pattern commonly used with Memcached, where structured information such as a user's nickname, age, gender and points is packed into a hashmap, serialized on the client (usually as JSON) and stored as a single string value; a Redis hash stores these field-value pairs natively.

127.0.0.1:6379> HSET hash1 name lzx
(integer) 1

127.0.0.1:6379> HSET hash1 age 20
(integer) 1

127.0.0.1:6379> HSET hash1 job it #three field-value pairs have been added; each pair can be seen as a key-value of its own
(integer) 1

127.0.0.1:6379> HGET hash1 name #look up the name field of hash1
"lzx"

127.0.0.1:6379> HGET hash1 age
"20"

127.0.0.1:6379> HGET hash1 job
"it"

127.0.0.1:6379> HGETALL hash1 #list all fields and values of hash1
1) "name"
2) "lzx"
3) "age"
4) "20"
5) "job"
6) "it"

Common Redis operations
string, list
127.0.0.1:6379> set key1 lzx
OK

127.0.0.1:6379> set key2 123
OK

127.0.0.1:6379> set key1 aaa
OK

127.0.0.1:6379> get key1
"aaa" #第二次赋值将之前赋值覆盖

127.0.0.1:6379> SETNX key1 linux #key1 already exists, so 0 is returned
(integer) 0

127.0.0.1:6379> get key1
"aaa" #并且不改变key1的value

127.0.0.1:6379> SETNX key3 111 #key3 does not exist, so it is created and 1 is returned
(integer) 1

127.0.0.1:6379> get key3
"111"

127.0.0.1:6379> set key3 111 ex 10 #give key3 a 10s expiry; the ex keyword cannot be omitted when setting an expiry with set
OK

127.0.0.1:6379> get key3 #checked again after 10s; the value is gone
(nil)

127.0.0.1:6379> SETEX key3 1000 222 #SETEX sets a 1000s expiry, with 222 as the value; if key3 already exists it is overwritten
OK

127.0.0.1:6379> get key3
"222"

127.0.0.1:6379> LPUSH list2 aaa
(integer) 1

127.0.0.1:6379> LPUSH list2 bbb
(integer) 2

127.0.0.1:6379> LPUSH list2 ccc
(integer) 3

127.0.0.1:6379> LRANGE list2 0 -1
1) "ccc"
2) "bbb"
3) "aaa"

127.0.0.1:6379> LPOP list2 #pops the most recently pushed value
"ccc"

127.0.0.1:6379> LRANGE list2 0 -1 #the popped value is no longer in list2
1) "bbb"
2) "aaa"

127.0.0.1:6379> RPOP list2 #pops the value that was pushed first, the opposite end from LPOP
"aaa"

127.0.0.1:6379> LRANGE list2 0 -1
1) "bbb"

127.0.0.1:6379> LINSERT list2 before bbb 111 #LINSERT inserts a value; here 111 is inserted before bbb
(integer) 2

127.0.0.1:6379> LINSERT list2 after bbb ddd #insert ddd after bbb
(integer) 3

127.0.0.1:6379> LINSERT list2 after ddd fff
(integer) 4

127.0.0.1:6379> LRANGE list2 0 -1
1) "111"
2) "bbb"
3) "ddd"
4) "fff"

127.0.0.1:6379> LSET list2 1 123 #LSET modifies an element; change the 2nd value to 123
OK

127.0.0.1:6379> LSET list2 3 abc #change the 4th value to abc
OK

127.0.0.1:6379> LRANGE list2 0 -1
1) "111"
2) "123"
3) "ddd"
4) "abc"

127.0.0.1:6379> LINDEX list2 1 #LINDEX reads an element; view the 2nd value
"123"

127.0.0.1:6379> LINDEX list2 3 #view the 4th value
"abc"

127.0.0.1:6379> LLEN list2 #LLEN returns the number of elements in the list
(integer) 4 #list2 has 4 elements

set
127.0.0.1:6379> SADD set1 aaa #SADD adds an element; here aaa is added
(integer) 1

127.0.0.1:6379> SADD set1 bbb
(integer) 1

127.0.0.1:6379> SADD set1 ccc
(integer) 1

127.0.0.1:6379> SMEMBERS set1 #SMEMBERS lists all elements in the set
1) "bbb"
2) "aaa"
3) "ccc"

127.0.0.1:6379> SREM set1 aaa #SREM removes an element; here aaa is removed
(integer) 1

127.0.0.1:6379> SMEMBERS set1
1) "bbb"
2) "ccc"

127.0.0.1:6379> SPOP set1 1 #SPOP pops random elements; the 1 is how many elements to pop
1) "ccc"

127.0.0.1:6379> SMEMBERS set1
1) "bbb" #a popped element is no longer in set1

127.0.0.1:6379> SADD set1 aaa
(integer) 1

127.0.0.1:6379> SADD set1 ccc
(integer) 1

127.0.0.1:6379> SADD set1 222
(integer) 1

127.0.0.1:6379> SMEMBERS set1
1) "bbb"
2) "aaa"
3) "222"
4) "ccc"

127.0.0.1:6379> SADD set2 aaa
(integer) 1

127.0.0.1:6379> SADD set2 ccc
(integer) 1

127.0.0.1:6379> SADD set2 111
(integer) 1

127.0.0.1:6379> SMEMBERS set2
1) "aaa"
2) "111"
3) "ccc"

127.0.0.1:6379> SDIFF set1 set2 #SDIFF returns the difference; with set1 first, the difference is taken relative to set1
1) "bbb"
2) "222"

127.0.0.1:6379> SDIFF set2 set1
1) "111"

127.0.0.1:6379> SDIFFSTORE set3 set1 set2 #SDIFFSTORE stores the difference into a new set; set3 is the new set
(integer) 2

127.0.0.1:6379> SMEMBERS set3
1) "bbb"
2) "222"

127.0.0.1:6379> SINTER set1 set2 #SINTER returns the intersection
1) "aaa"
2) "ccc"

127.0.0.1:6379> SINTERSTORE set4 set1 set2 #SINTERSTORE stores the intersection into a new set; set4 is the new set
(integer) 2

127.0.0.1:6379> SMEMBERS set4
1) "aaa"
2) "ccc"

127.0.0.1:6379> SUNION set1 set2 #SUNION returns the union
1) "ccc"
2) "222"
3) "bbb"
4) "aaa"
5) "111"

127.0.0.1:6379> SUNIONSTORE set5 set1 set2 #SUNIONSTORE stores the union into a new set; set5 is the new set
(integer) 5

127.0.0.1:6379> SMEMBERS set5
1) "ccc"
2) "222"
3) "bbb"
4) "aaa"
5) "111"

127.0.0.1:6379> SISMEMBER set1 1 #SISMEMBER checks whether an element is in the set
(integer) 0 #0 means it is not

127.0.0.1:6379> SISMEMBER set1 aaa
(integer) 1 #1 means it is

127.0.0.1:6379> SRANDMEMBER set1 #SRANDMEMBER returns a random element without removing it
"ccc"

127.0.0.1:6379> SRANDMEMBER set1 2 #the 2 means return 2 random elements
1) "ccc"
2) "222"

127.0.0.1:6379> SMEMBERS set1
1) "bbb"
2) "aaa"
3) "222"
4) "ccc"

zset
127.0.0.1:6379> ZADD zset1 11 123 #ZADD adds an element with a score
(integer) 1

127.0.0.1:6379> ZADD zset1 2 lab
(integer) 1

127.0.0.1:6379> ZADD zset1 25 k
(integer) 1

127.0.0.1:6379> ZRANGE zset1 0 -1 #ZRANGE lists elements in ascending score order; 0 is the first position, -1 the last
1) "lab"
2) "123"
3) "k"

127.0.0.1:6379> ZREM zset1 123 #ZREM removes an element by its value (123); passing the score (11) would not work
(integer) 1

127.0.0.1:6379> ZRANGE zset1 0 -1
1) "lab"
2) "k"

127.0.0.1:6379> ZADD zset1 9 la
(integer) 1

127.0.0.1:6379> ZADD zset1 100 sss
(integer) 1

127.0.0.1:6379> ZRANGE zset1 0 -1
1) "lab"
2) "la"
3) "k"
4) "sss"

127.0.0.1:6379> ZRANK zset1 k #ZRANK returns the element's index, counted from 0 in ascending score order; the score is the value in front of the member, e.g. la has score 9 and sss has score 100
(integer) 2

127.0.0.1:6379> ZREVRANK zset1 k #ZREVRANK returns the element's index, counted from 0 in descending score order
(integer) 1

127.0.0.1:6379> ZREVRANGE zset1 0 -1 #ZREVRANGE lists the elements in descending order
1) "sss"
2) "k"
3) "la"
4) "lab"

127.0.0.1:6379> ZCARD zset1 #ZCARD returns the number of elements
(integer) 4

127.0.0.1:6379> ZCOUNT zset1 1 10 #ZCOUNT returns how many elements have scores in the given range, here 1 to 10
(integer) 2 #there are 2 elements with scores between 1 and 10

127.0.0.1:6379> ZRANGEBYSCORE zset1 1 10 #ZRANGEBYSCORE returns the elements with scores in the given range
1) "lab"
2) "la"

127.0.0.1:6379> ZREMRANGEBYSCORE zset1 1 10 #remove the elements with scores between 1 and 10
(integer) 2

127.0.0.1:6379> ZRANGE zset1 0 -1
1) "k"
2) "sss"

127.0.0.1:6379> ZADD zset1 6 111
(integer) 1

127.0.0.1:6379> ZADD zset1 31 all
(integer) 1

127.0.0.1:6379> ZRANGE zset1 0 -1
1) "111"
2) "k"
3) "all"
4) "sss"

127.0.0.1:6379> ZREMRANGEBYSCORE zset1 1 10 #ZREMRANGEBYSCORE removes the elements with scores in the given range, here 1 to 10
(integer) 1

127.0.0.1:6379> ZRANGE zset1 0 -1
1) "k"
2) "all"
3) "sss"

hash
127.0.0.1:6379> HSET usera name lzx #HSET sets one field of a hash
(integer) 1

127.0.0.1:6379> HGET usera name #HGET reads one field
"lzx"

127.0.0.1:6379> HMSET userb name lzx age 20 job it #HMSET sets several field-value pairs at once
OK

127.0.0.1:6379> HMGET userb name age job #HMGET reads several fields at once
1) "lzx"
2) "20"
3) "it"

127.0.0.1:6379> HGETALL userb #HGETALL lists all field-value pairs of the hash; this works too
1) "name"
2) "lzx"
3) "age"
4) "20"
5) "job"
6) "it"

127.0.0.1:6379> HDEL userb age #HDEL deletes the given field and its value
(integer) 1

127.0.0.1:6379> HGETALL userb
1) "name"
2) "lzx"
3) "job"
4) "it" #age对应的键值对消失

127.0.0.1:6379> HKEYS userb #HKEYS lists all fields of the hash
1) "name"
2) "job"

127.0.0.1:6379> HVALS userb #HVALS lists all values of the hash
1) "lzx"
2) "it"

127.0.0.1:6379> HLEN userb #HLEN returns the number of field-value pairs in the hash
(integer) 2

Working with keys
127.0.0.1:6379> KEYS * #KEYS lists all keys in Redis
1) "key1"
2) "k2"
3) "set3"
4) "k3"
5) "seta"
6) "mykey"
7) "key2"
8) "set5"
9) "zset1"
10) "k1"
11) "set2"
12) "set4"
13) "list1"
14) "userb"
15) "usera"
16) "hash1"
17) "set1"
18) "list2"


127.0.0.1:6379> KEYS my* #pattern matching
1) "mykey"

127.0.0.1:6379> EXISTS key1 #EXISTS checks whether a key exists: 1 if it does, 0 if it does not
(integer) 1

127.0.0.1:6379> DEL key1 #DEL deletes a key
(integer) 1

127.0.0.1:6379> EXISTS key1
(integer) 0

127.0.0.1:6379> EXPIRE k2 10 #EXPIRE sets an expiry on a key; 10 means 10s
(integer) 1

127.0.0.1:6379> GET k2
"2"

127.0.0.1:6379> GET k2 #checked again after 10s; k2 is gone
(nil)

127.0.0.1:6379> EXPIRE key2 100
(integer) 1

127.0.0.1:6379> TTL key2 #TTL shows how long a key has left before it expires
(integer) 94

127.0.0.1:6379> TTL key2
(integer) 90

127.0.0.1:6379> SELECT 1 #SELECT switches databases; there are 16 in total and database 0 is the default
OK

127.0.0.1:6379[1]> KEYS * #[1] shows we are now in database 1
(empty list or set) #database 1 contains no data

127.0.0.1:6379[1]> SELECT 0
OK

127.0.0.1:6379> KEYS *
1) "set3"
2) "k3"
3) "seta"
4) "mykey"
5) "set5"
6) "zset1"
7) "k1"
8) "set2"
9) "set4"
10) "list1"
11) "userb"
12) "usera"
13) "hash1"
14) "set1"
15) "list2"

127.0.0.1:6379> MOVE set2 1 #MOVE moves a key to another database
(integer) 1

127.0.0.1:6379> SELECT 1
OK

127.0.0.1:6379[1]> KEYS *
1) "set2"

127.0.0.1:6379> EXPIRE set1 100
(integer) 1

127.0.0.1:6379> TTL set1
(integer) 94

127.0.0.1:6379> PERSIST set1 #PERSIST removes the expiry
(integer) 1

127.0.0.1:6379> TTL set1
(integer) -1 #-1 means no expiry, i.e. the key never expires

127.0.0.1:6379> RANDOMKEY #return a random key
"userb"

127.0.0.1:6379> RANDOMKEY
"k3"

127.0.0.1:6379> RANDOMKEY
"k1"

127.0.0.1:6379> RENAME set1 setc #RENAME renames a key
OK

127.0.0.1:6379> KEYS set*
1) "set3"
2) "seta"
3) "set5"
4) "set4"
5) "setc"

127.0.0.1:6379> TYPE zset1 #TYPE shows the type of a key
zset

127.0.0.1:6379> TYPE list1
list

127.0.0.1:6379> TYPE setc
set

127.0.0.1:6379> DBSIZE #DBSIZE returns the number of keys in the current database
(integer) 14
------------------------------------------------------------------------
127.0.0.1:6379> INFO #return status information about the Redis server
# Server
redis_version:4.0.11
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:f1b08454f8b4e56c
redis_mode:standalone
os:Linux 3.10.0-693.el7.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:4.8.5
process_id:36457
run_id:d580da83e18929d6b37d826991dc705172de928c
tcp_port:6379
uptime_in_seconds:24729
uptime_in_days:0
hz:10
lru_clock:3752638
executable:/usr/local/src/redis-4.0.11/redis-server
config_file:/etc/redis.conf

# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:851096
used_memory_human:831.15K
used_memory_rss:7970816
used_memory_rss_human:7.60M
used_memory_peak:851096
used_memory_peak_human:831.15K
used_memory_peak_perc:100.08%
used_memory_overhead:837134
used_memory_startup:786584
used_memory_dataset:13962
used_memory_dataset_perc:21.64%
total_system_memory:3958075392
total_system_memory_human:3.69G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:9.36
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0

# Persistence
loading:0
rdb_changes_since_last_save:3
rdb_bgsave_in_progress:0
rdb_last_save_time:1530478637
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:6545408
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
aof_current_size:3560
aof_base_size:0
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:0

# Stats
total_connections_received:2
total_commands_processed:211
instantaneous_ops_per_sec:0
total_net_input_bytes:7390
total_net_output_bytes:24177
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:4
expired_stale_perc:0.00
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:97
keyspace_misses:10
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:927
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0

# Replication
role:master
connected_slaves:0
master_replid:91c7968e04d1852516098681ea3bbd3e052b4252
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# CPU
used_cpu_sys:17.50
used_cpu_user:8.47
used_cpu_sys_children:0.32
used_cpu_user_children:0.00

# Cluster
cluster_enabled:0

# Keyspace
db0:keys=14,expires=0,avg_ttl=0
db1:keys=1,expires=0,avg_ttl=0
--------------------------------------------------------------------------

127.0.0.1:6379> FLUSHDB #FLUSHDB removes all keys from the current database
OK

127.0.0.1:6379> KEYS *
(empty list or set)

127.0.0.1:6379> FLUSHALL #FLUSHALL removes all keys from every database
OK

127.0.0.1:6379> KEYS *
(empty list or set)

127.0.0.1:6379> SELECT 1
OK

127.0.0.1:6379[1]> KEYS *
(empty list or set)

127.0.0.1:6379> BGSAVE #save the data to disk in the background
Background saving started

127.0.0.1:6379> save #save the data to disk in the foreground (blocking)
OK

127.0.0.1:6379> CONFIG GET * #list every configuration parameter (long output omitted)
127.0.0.1:6379> CONFIG GET dir #get one configuration parameter
1) "dir"
2) "/data/redis"

127.0.0.1:6379> CONFIG GET dbfilename
1) "dbfilename"
2) "dump.rdb"

127.0.0.1:6379> CONFIG SET timeout 100 #change a configuration parameter at runtime
OK

127.0.0.1:6379> CONFIG GET timeout
1) "timeout"
2) "100"

Restoring Redis data: first define or confirm the dir directory and the dbfilename, then put the backed-up RDB file into the dir directory and restart the Redis service to recover the data.
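
A minimal sketch of that recovery flow, assuming the backup sits at the hypothetical path /backup/dump.rdb and the paths configured earlier in this article (note that while AOF is enabled, Redis loads the AOF file rather than the RDB file at startup):

# redis-cli shutdown nosave #stop redis without overwriting the existing dump file
# cp /backup/dump.rdb /data/redis/dump.rdb
# redis-server /etc/redis.conf #restart; the snapshot is loaded at startup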

Redis security settings
Set the listening IP:
# vim /etc/redis.conf
bind 127.0.0.1 192.168.1.1 #bind to internal IPs; multiple addresses are separated by spaces

Set the listening port:
# vim /etc/redis.conf
port 16000 #do not keep the default port 6379

Set a password:
# vim /etc/redis.conf
requirepass 123123 #the password is 123123

# redis-cli -a '123123' #log in again after restarting the service
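
To keep the password out of the shell history, you can instead connect first and authenticate with AUTH; a minimal sketch, assuming the port was changed to 16000 as above:

# redis-cli -p 16000
127.0.0.1:16000> AUTH 123123
OK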

Rename the CONFIG command inside the redis command line:
# vim /etc/redis.conf
rename-command CONFIG lzx #rename the CONFIG command to lzx; save, exit and restart the service

Disable the CONFIG command entirely:
# vim /etc/redis.conf
rename-command CONFIG "" #save, exit and restart the service

Redis slow query log
Two parameters control the slow query log: the execution-time threshold and the length of the log. When a new command is written to a full log, the oldest entry is removed from the queue.
Edit the configuration file:
# vim /etc/redis.conf #default settings
slowlog-log-slower-than 10000 #in microseconds, i.e. 10ms
slowlog-max-len 128

In the redis command line:
slowlog get #list all slow query log entries
slowlog get 2 #list only 2 entries
slowlog len #show the number of slow query log entries
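
Both parameters can also be changed at runtime, and the log can be cleared; a minimal sketch:

127.0.0.1:6379> CONFIG SET slowlog-log-slower-than 5000 #lower the threshold to 5ms
OK

127.0.0.1:6379> SLOWLOG RESET #clear the slow query log
OK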

Installing the Redis extension module for PHP
Download phpredis:
# cd /usr/local/src/
# wget https://codeload.github.com/phpredis/phpredis/zip/develop
# mv develop phpredis.zip
# unzip phpredis.zip
# cd phpredis-develop/
# /usr/local/php-fpm/bin/phpize
Compile and install:
# ./configure --with-php-config=/usr/local/php-fpm/bin/php-config
# echo $?
0
# make
# echo $?
0
# make install
# echo $?
0

# vim /usr/local/php-fpm/etc/php.ini #add one line
extension=redis.so

# /usr/local/php-fpm/bin/php -m
[PHP Modules]
Core
ctype
curl
date
dom
ereg
exif
fileinfo
filter
ftp
gd
hash
iconv
json
libxml
mbstring
mcrypt
mysql
openssl
pcre
PDO
pdo_sqlite
Phar
posix
redis #seeing redis here means the extension loaded correctly
Reflection
session
SimpleXML
soap
SPL
sqlite3
standard
tokenizer
xml
xmlreader
xmlwriter
zlib

[Zend Modules]

# /etc/init.d/php-fpm restart #restart the php-fpm service
Gracefully shutting down php-fpm . done
Starting php-fpm done
-----------------------------------------------------------
Storing sessions in Redis
Edit the configuration file:
# vim /usr/local/php-fpm/etc/php-fpm.conf
php_value[session.save_handler] = redis
php_value[session.save_path] = "tcp://127.0.0.1:6379"
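
If requirepass is enabled as in the security section above, the password can be carried in the save_path; a sketch of the phpredis auth parameter:

php_value[session.save_path] = "tcp://127.0.0.1:6379?auth=123123"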

Create a test script that writes session data:
# vim session.php #with the following content

<?php
session_start();
if (!isset($_SESSION['TEST'])) {
$_SESSION['TEST'] = time();
}
$_SESSION['TEST3'] = time();
print $_SESSION['TEST'];
print "<br><br>";
print $_SESSION['TEST3'];
print "<br><br>";
print session_id();
?>

# mv session.php /usr/local/nginx/html/

Test it:
# curl localhost/session.php
1530489156<br><br>1530489156<br><br>st5k899dgb370g9ul54kalhf14

# curl localhost/session.php
1530489158<br><br>1530489158<br><br>nt89nio8q5s81imr51op8as6i4

# curl localhost/session.php
1530489159<br><br>1530489159<br><br>fjn7oi5tn0dmus2fds18earvr6

# curl localhost/session.php
1530489160<br><br>1530489160<br><br>gpc9vtajh3sdsv7h0hufmembb7

# curl localhost/session.php
1530489163<br><br>1530489163<br><br>q03c4qq5j6ts781d5su3lmqr73

# redis-cli

127.0.0.1:6379> KEYS *
1) "PHPREDIS_SESSION:gpc9vtajh3sdsv7h0hufmembb7"
2) "PHPREDIS_SESSION:q03c4qq5j6ts781d5su3lmqr73"
3) "PHPREDIS_SESSION:nt89nio8q5s81imr51op8as6i4"
4) "PHPREDIS_SESSION:st5k899dgb370g9ul54kalhf14"
5) "PHPREDIS_SESSION:fjn7oi5tn0dmus2fds18earvr6"

127.0.0.1:6379> GET PHPREDIS_SESSION:fjn7oi5tn0dmus2fds18earvr6 #query the value of the corresponding key
"TEST|i:1530489159;TEST3|i:1530489159;" #this value matches one of the curl responses above

This shows that the configuration above works: the sessions are stored in Redis.

Redis master-slave configuration
Redis master-slave setup is simpler than MySQL's. For easier testing, both instances here are configured on a single machine.

Configure the Redis slave:
# cp /etc/redis.conf /etc/redis2.conf

# vim !$
port 6380 #must not clash with the master's port
pidfile /var/run/redis_6380.pid #the pid file must be different
logfile "/var/log/redis2.log" #the log file must be different
dir /data/redis2 #dir must be different
slaveof 127.0.0.1 6379 #declares whose slave this is: slaveof <master IP> <master port>

If a password is set on the master, the slave also needs one extra line:

masterauth 123123 #the master's password, assuming it is 123123

Start the Redis slave:
# mkdir /data/redis2

# redis-server /etc/redis2.conf

# ps aux |grep redis
root 36457 0.1 0.2 147356 9900 ? Ssl 7月01 0:39 redis-server 127.0.0.1:6379
root 81816 1.0 0.2 147356 9720 ? Ssl 08:16 0:00 redis-server 127.0.0.1:6380

The slave does not need to be synced manually; it replicates the master's data automatically:

# redis-cli

127.0.0.1:6379> KEYS *
(empty list or set)

127.0.0.1:6379> set key1 10
OK

127.0.0.1:6379> KEYS *
1) "key1"

127.0.0.1:6379> set key2 100
OK

# redis-cli -h 127.0.0.1 -p 6380

127.0.0.1:6380> KEYS *
1) "key1"
2) "key2" #可以看到刚刚在主上创建的key,在从上可以看到

127.0.0.1:6380> CONFIG GET dir
1) "dir"
2) "/data/redis2"

127.0.0.1:6380> CONFIG GET dbfilename
1) "dbfilename"
2) "dump.rdb"

127.0.0.1:6380> set key3 aaa
(error) READONLY You can't write against a read only slave. #data cannot be written on the slave; this is defined in the configuration file
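
To confirm the replication state, INFO replication can be run on either instance; a minimal sketch on the slave, showing only the most relevant fields:

# redis-cli -p 6380 info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up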

The whole process is very simple; the Redis master-slave setup is complete.

Building a Redis cluster
When the amount of data becomes very large, a single Redis server can no longer keep up. At that point Redis needs to be clustered: multiple machines form a distributed Redis cluster, and new nodes can be added very easily.

A Redis cluster has the following characteristics:

1. Multiple Redis nodes are interconnected over the network and share the data
2. Every node is one master with one slave (or one master with several slaves); the slaves do not serve requests and act only as standbys
3. Handling several keys in one command (such as MSET/MGET) is not supported, because Redis spreads the keys evenly across the nodes,
and creating key-value pairs this way under high concurrency would hurt performance and lead to unpredictable behaviour
4. Nodes can be added and removed online
5. A client can connect to any master node to read and write

Scenario:

Two machines with the firewall and SELinux disabled, each running three Redis services (ports):
Machine A: 192.168.100.150, ports 7000 7002 7004
Machine B: 192.168.100.160, ports 7001 7003 7005

Configuration on machine A
Edit the configuration files:
# cd /etc/

# vim redis_7000.conf

port 7000
bind 192.168.100.150
daemonize yes
pidfile /var/run/redis_7000.pid
dir /data/redis_data/7000
cluster-enabled yes
cluster-config-file nodes_7000.conf
cluster-node-timeout 10100
appendonly yes

# vim redis_7002.conf

port 7002
bind 192.168.100.150
daemonize yes
pidfile /var/run/redis_7002.pid
dir /data/redis_data/7002
cluster-enabled yes
cluster-config-file nodes_7002.conf
cluster-node-timeout 10100
appendonly yes

# vim redis_7004.conf

port 7004
bind 192.168.100.150
daemonize yes
pidfile /var/run/redis_7004.pid
dir /data/redis_data/7004
cluster-enabled yes
cluster-config-file nodes_7004.conf
cluster-node-timeout 10100
appendonly yes

Start the Redis services:
# mkdir -p /data/redis_data/{7000,7002,7004}

# redis-server /etc/redis_7000.conf #start the redis service on port 7000
13375:C 23 Aug 21:37:11.272 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
13375:C 23 Aug 21:37:11.272 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=13375, just started
13375:C 23 Aug 21:37:11.272 # Configuration loaded

# redis-server /etc/redis_7002.conf #start the redis service on port 7002
13380:C 23 Aug 21:38:18.483 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
13380:C 23 Aug 21:38:18.483 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=13380, just started
13380:C 23 Aug 21:38:18.483 # Configuration loaded

# redis-server /etc/redis_7004.conf #start the redis service on port 7004
13385:C 23 Aug 21:38:23.564 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
13385:C 23 Aug 21:38:23.564 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=13385, just started
13385:C 23 Aug 21:38:23.564 # Configuration loaded

# ps aux |grep redis
root 13376 0.1 0.1 145312 7572 ? Ssl 21:37 0:00 redis-server 192.168.100.150:7000 [cluster]
root 13381 0.1 0.1 145312 7572 ? Ssl 21:38 0:00 redis-server 192.168.100.150:7002 [cluster]
root 13386 0.1 0.1 145312 7576 ? Ssl 21:38 0:00 redis-server 192.168.100.150:7004 [cluster]

Operations on machine B
Edit the configuration files:
# cd /etc/

# vim redis_7001.conf

port 7001
bind 192.168.100.160
daemonize yes
pidfile /var/run/redis_7001.pid
dir /data/redis_data/7001
cluster-enabled yes
cluster-config-file nodes_7001.conf
cluster-node-timeout 10100
appendonly yes

# vim redis_7003.conf

port 7003
bind 192.168.100.160
daemonize yes
pidfile /var/run/redis_7003.pid
dir /data/redis_data/7003
cluster-enabled yes
cluster-config-file nodes_7003.conf
cluster-node-timeout 10100
appendonly yes

# vim redis_7005.conf

port 7005
bind 192.168.100.160
daemonize yes
pidfile /var/run/redis_7005.pid
dir /data/redis_data/7005
cluster-enabled yes
cluster-config-file nodes_7005.conf
cluster-node-timeout 10100
appendonly yes

Start the Redis services:
# mkdir -p /data/redis_data/{7001,7003,7005}

# redis-server /etc/redis_7001.conf #start the redis service on port 7001
1855:C 23 Aug 21:39:43.723 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1855:C 23 Aug 21:39:43.723 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=1855, just started
1855:C 23 Aug 21:39:43.723 # Configuration loaded

# redis-server /etc/redis_7003.conf #start the redis service on port 7003
1860:C 23 Aug 21:39:48.375 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1860:C 23 Aug 21:39:48.376 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=1860, just started
1860:C 23 Aug 21:39:48.376 # Configuration loaded

# redis-server /etc/redis_7005.conf #start the redis service on port 7005
1865:C 23 Aug 21:39:52.655 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1865:C 23 Aug 21:39:52.656 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=1865, just started
1865:C 23 Aug 21:39:52.656 # Configuration loaded

# ps aux |grep redis
root 1856 0.1 0.1 145312 7576 ? Ssl 21:39 0:00 redis-server 192.168.100.160:7001 [cluster]
root 1861 0.1 0.1 145312 7576 ? Ssl 21:39 0:00 redis-server 192.168.100.160:7003 [cluster]
root 1866 0.1 0.1 145312 7576 ? Ssl 21:39 0:00 redis-server 192.168.100.160:7005 [cluster]

Continue on machine A
Install Ruby 2.2; it cannot be installed directly with yum (installing it on only one machine is enough):
# yum -y groupinstall "Development Tools"

# yum install -y gdbm-devel libdb4-devel libffi-devel libyaml libyaml-devel ncurses-devel openssl-devel readline-devel tcl-devel

# mkdir -p rpmbuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS}

# wget http://cache.ruby-lang.org/pub/ruby/2.2/ruby-2.2.3.tar.gz -P rpmbuild/SOURCES

# wget http://raw.githubusercontent.com/tjinjin/automate-ruby-rpm/master/ruby22x.spec -P rpmbuild/SPECS

# rpmbuild -bb rpmbuild/SPECS/ruby22x.spec

# ls rpmbuild/RPMS/x86_64/ruby-2.2.3-1.el7.centos.x86_64.rpm
rpmbuild/RPMS/x86_64/ruby-2.2.3-1.el7.centos.x86_64.rpm

# yum localinstall -y rpmbuild/RPMS/x86_64/ruby-2.2.3-1.el7.centos.x86_64.rpm #it can also be installed with rpm -ivh

# gem install redis #this gem is what we really need; it is used to configure the cluster
Fetching: redis-4.0.2.gem (100%)
Successfully installed redis-4.0.2
Parsing documentation for redis-4.0.2
Installing ri documentation for redis-4.0.2
Done installing documentation for redis after 1 seconds
1 gem installed

Continue the configuration:
# cp /usr/local/src/redis-4.0.11/src/redis-trib.rb /usr/bin/

# redis-trib.rb create --replicas 1 192.168.100.150:7000 192.168.100.150:7002 192.168.100.150:7004 192.168.100.160:7001 192.168.100.160:7003 192.168.100.160:7005 #everything above was preparation so that this command can run
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.100.150:7000
192.168.100.160:7001
192.168.100.150:7002 #the three masters are 7000, 7001 and 7002
Adding replica 192.168.100.160:7005 to 192.168.100.150:7000 #7005 is the slave of 7000
Adding replica 192.168.100.150:7004 to 192.168.100.160:7001 #7004 is the slave of 7001
Adding replica 192.168.100.160:7003 to 192.168.100.150:7002 #7003 is the slave of 7002
M: c007437678a379cd57e811843811e5b047337457 192.168.100.150:7000
slots:0-5460 (5461 slots) master
M: 6d53d46c401de9804c1fd48c07685f22d693f39c 192.168.100.150:7002
slots:10923-16383 (5461 slots) master
S: 9866e7651c9e26a533112e96f578477293de9670 192.168.100.150:7004
replicates 0550a444f45604dfc1f01191df67f513d5c5fc5b
M: 0550a444f45604dfc1f01191df67f513d5c5fc5b 192.168.100.160:7001
slots:5461-10922 (5462 slots) master
S: 3cb24dc6d295fa9c261e36df67fda11151de7ef7 192.168.100.160:7003
replicates 6d53d46c401de9804c1fd48c07685f22d693f39c
S: bfc152ca2c9d140d980e3b8c3c1868239e32efe8 192.168.100.160:7005
replicates c007437678a379cd57e811843811e5b047337457
Can I set the above configuration? (type 'yes' to accept): yes #type yes to accept the configuration above
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.....
>>> Performing Cluster Check (using node 192.168.100.150:7000)
M: c007437678a379cd57e811843811e5b047337457 192.168.100.150:7000
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: bfc152ca2c9d140d980e3b8c3c1868239e32efe8 192.168.100.160:7005
slots: (0 slots) slave
replicates c007437678a379cd57e811843811e5b047337457
S: 3cb24dc6d295fa9c261e36df67fda11151de7ef7 192.168.100.160:7003
slots: (0 slots) slave
replicates 6d53d46c401de9804c1fd48c07685f22d693f39c
M: 0550a444f45604dfc1f01191df67f513d5c5fc5b 192.168.100.160:7001
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 9866e7651c9e26a533112e96f578477293de9670 192.168.100.150:7004
slots: (0 slots) slave
replicates 0550a444f45604dfc1f01191df67f513d5c5fc5b
M: 6d53d46c401de9804c1fd48c07685f22d693f39c 192.168.100.150:7002
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered. #success; the cluster is built

# echo $?
0
Redis cluster operations
Common command-line operations:
# redis-cli -c -h 192.168.100.150 -p 7000 #-c means connect in cluster mode

192.168.100.150:7000> set key1 123
-> Redirected to slot [9189] located at 192.168.100.160:7001 #the key was routed to 7001
OK

192.168.100.160:7001> set key2 abc
-> Redirected to slot [4998] located at 192.168.100.150:7000 #this key was routed to 7000
OK

192.168.100.150:7000> set key3 aaa #no redirect message means the key lives on the local node
OK

192.168.100.150:7000> get key1
-> Redirected to slot [9189] located at 192.168.100.160:7001
"123"

192.168.100.160:7001> get key2
-> Redirected to slot [4998] located at 192.168.100.150:7000
"abc"

Cluster management operations:
192.168.100.150:7000> CLUSTER NODES #list the nodes
bfc152ca2c9d140d980e3b8c3c1868239e32efe8 192.168.100.160:7005@17005 slave c007437678a379cd57e811843811e5b047337457 0 1535036884135 6 connected
3cb24dc6d295fa9c261e36df67fda11151de7ef7 192.168.100.160:7003@17003 slave 6d53d46c401de9804c1fd48c07685f22d693f39c 0 1535036885139 5 connected
0550a444f45604dfc1f01191df67f513d5c5fc5b 192.168.100.160:7001@17001 master - 0 1535036885140 4 connected 5461-10922
c007437678a379cd57e811843811e5b047337457 192.168.100.150:7000@17000 myself,master - 0 1535036884000 1 connected 0-5460
9866e7651c9e26a533112e96f578477293de9670 192.168.100.150:7004@17004 slave 0550a444f45604dfc1f01191df67f513d5c5fc5b 0 1535036885000 4 connected
6d53d46c401de9804c1fd48c07685f22d693f39c 192.168.100.150:7002@17002 master - 0 1535036886147 2 connected 10923-16383

192.168.100.150:7000> CLUSTER info #show cluster information
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:1021
cluster_stats_messages_pong_sent:967
cluster_stats_messages_sent:1988
cluster_stats_messages_ping_received:962
cluster_stats_messages_pong_received:1021
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:1988

192.168.100.150:7000> CLUSTER MEET 192.168.100.160 7007 #add a node; before this step, set up the 7007 service on machine B
OK

192.168.100.150:7000> CLUSTER NODES
bfc152ca2c9d140d980e3b8c3c1868239e32efe8 192.168.100.160:7005@17005 slave c007437678a379cd57e811843811e5b047337457 0 1535037337457 6 connected
247db306c69342ce8642e2a276199349aa25c6e7 192.168.100.160:7007@17007 master - 0 1535037335448 0 connected #it joins as a master
3cb24dc6d295fa9c261e36df67fda11151de7ef7 192.168.100.160:7003@17003 slave 6d53d46c401de9804c1fd48c07685f22d693f39c 0 1535037335000 5 connected
0550a444f45604dfc1f01191df67f513d5c5fc5b 192.168.100.160:7001@17001 master - 0 1535037335000 4 connected 5461-10922
c007437678a379cd57e811843811e5b047337457 192.168.100.150:7000@17000 myself,master - 0 1535037333000 1 connected 0-5460
9866e7651c9e26a533112e96f578477293de9670 192.168.100.150:7004@17004 slave 0550a444f45604dfc1f01191df67f513d5c5fc5b 0 1535037336000 4 connected

192.168.100.150:7000> CLUSTER MEET 192.168.100.150 7006 #before this step, set up the 7006 service on machine A
OK

192.168.100.150:7000> CLUSTER NODES
bfc152ca2c9d140d980e3b8c3c1868239e32efe8 192.168.100.160:7005@17005 slave c007437678a379cd57e811843811e5b047337457 0 1535037595784 6 connected
247db306c69342ce8642e2a276199349aa25c6e7 192.168.100.160:7007@17007 master - 0 1535037596790 0 connected
3cb24dc6d295fa9c261e36df67fda11151de7ef7 192.168.100.160:7003@17003 slave 6d53d46c401de9804c1fd48c07685f22d693f39c 0 1535037595000 5 connected
0550a444f45604dfc1f01191df67f513d5c5fc5b 192.168.100.160:7001@17001 master - 0 1535037596087 4 connected 5461-10922
c007437678a379cd57e811843811e5b047337457 192.168.100.150:7000@17000 myself,master - 0 1535037596000 1 connected 0-5460
9866e7651c9e26a533112e96f578477293de9670 192.168.100.150:7004@17004 slave 0550a444f45604dfc1f01191df67f513d5c5fc5b 0 1535037595079 4 connected
6d53d46c401de9804c1fd48c07685f22d693f39c 192.168.100.150:7002@17002 master - 0 1535037595000 2 connected 10923-16383
a9fbe573f52ac2aa272963ff89131f0246055cfc 192.168.100.150:7006@17006 master - 0 1535037597793 0 connected #it joins as a master

# redis-cli -c -h 192.168.100.150 -p 7006

192.168.100.150:7006> CLUSTER REPLICATE 247db306c69342ce8642e2a276199349aa25c6e7 #CLUSTER REPLICATE makes the current node a slave of the given node: 7006 becomes the slave of 7007; the long string is the node_id
OK

192.168.100.150:7006> CLUSTER NODES
0550a444f45604dfc1f01191df67f513d5c5fc5b 192.168.100.160:7001@17001 master - 0 1535037791000 4 connected 5461-10922
bfc152ca2c9d140d980e3b8c3c1868239e32efe8 192.168.100.160:7005@17005 slave c007437678a379cd57e811843811e5b047337457 0 1535037790000 1 connected
6d53d46c401de9804c1fd48c07685f22d693f39c 192.168.100.150:7002@17002 master - 0 1535037790000 2 connected 10923-16383
3cb24dc6d295fa9c261e36df67fda11151de7ef7 192.168.100.160:7003@17003 slave 6d53d46c401de9804c1fd48c07685f22d693f39c 0 1535037791000 2 connected
a9fbe573f52ac2aa272963ff89131f0246055cfc 192.168.100.150:7006@17006 myself,slave 247db306c69342ce8642e2a276199349aa25c6e7 0 1535037791000 0 connected
c007437678a379cd57e811843811e5b047337457 192.168.100.150:7000@17000 master - 0 1535037790742 1 connected 0-5460
247db306c69342ce8642e2a276199349aa25c6e7 192.168.100.160:7007@17007 master - 0 1535037791749 7 connected
9866e7651c9e26a533112e96f578477293de9670 192.168.100.150:7004@17004 slave 0550a444f45604dfc1f01191df67f513d5c5fc5b 0 1535037792751 4 connected

192.168.100.150:7006> CLUSTER FORGET 247db306c69342ce8642e2a276199349aa25c6e7 #CLUSTER FORGET removes a given node, but you cannot forget your own master or the node you are logged in to
(error) ERR Can't forget my master!

192.168.100.150:7006> CLUSTER FORGET bfc152ca2c9d140d980e3b8c3c1868239e32efe8 #forget the 7005 node
OK

192.168.100.150:7006> CLUSTER NODES
0550a444f45604dfc1f01191df67f513d5c5fc5b 192.168.100.160:7001@17001 master - 0 1535038150000 4 connected 5461-10922
6d53d46c401de9804c1fd48c07685f22d693f39c 192.168.100.150:7002@17002 master - 0 1535038151864 2 connected 10923-16383
3cb24dc6d295fa9c261e36df67fda11151de7ef7 192.168.100.160:7003@17003 slave 6d53d46c401de9804c1fd48c07685f22d693f39c 0 1535038150860 2 connected
a9fbe573f52ac2aa272963ff89131f0246055cfc 192.168.100.150:7006@17006 myself,slave 247db306c69342ce8642e2a276199349aa25c6e7 0 1535038151000 0 connected
c007437678a379cd57e811843811e5b047337457 192.168.100.150:7000@17000 master - 0 1535038152867 1 connected 0-5460
247db306c69342ce8642e2a276199349aa25c6e7 192.168.100.160:7007@17007 master - 0 1535038151000 7 connected
9866e7651c9e26a533112e96f578477293de9670 192.168.100.150:7004@17004 slave 0550a444f45604dfc1f01191df67f513d5c5fc5b 0 1535038151000 4 connected

# ls /data/redis_data/7000
appendonly.aof dump.rdb nodes_7000.conf

# redis-cli -c -h 192.168.100.150 -p 7000

192.168.100.150:7000> CLUSTER SAVECONFIG #save the current cluster configuration to disk
OK

# cat /data/redis_data/7000/nodes_7000.conf
bfc152ca2c9d140d980e3b8c3c1868239e32efe8 192.168.100.160:7005@17005 slave c007437678a379cd57e811843811e5b047337457 0 1535038376000 6 connected
247db306c69342ce8642e2a276199349aa25c6e7 192.168.100.160:7007@17007 master - 0 1535038377000 7 connected
3cb24dc6d295fa9c261e36df67fda11151de7ef7 192.168.100.160:7003@17003 slave 6d53d46c401de9804c1fd48c07685f22d693f39c 0 1535038376000 5 connected
0550a444f45604dfc1f01191df67f513d5c5fc5b 192.168.100.160:7001@17001 master - 0 1535038376777 4 connected 5461-10922
c007437678a379cd57e811843811e5b047337457 192.168.100.150:7000@17000 myself,master - 0 1535038375000 1 connected 0-5460
9866e7651c9e26a533112e96f578477293de9670 192.168.100.150:7004@17004 slave 0550a444f45604dfc1f01191df67f513d5c5fc5b 0 1535038378080 4 connected
6d53d46c401de9804c1fd48c07685f22d693f39c 192.168.100.150:7002@17002 master - 0 1535038378081 2 connected 10923-16383
a9fbe573f52ac2aa272963ff89131f0246055cfc 192.168.100.150:7006@17006 slave 247db306c69342ce8642e2a276199349aa25c6e7 0 1535038377779 7 connected
vars currentEpoch 7 lastVoteEpoch 0 #the file's content has changed; the configuration files of all nodes change accordingly

Check the cluster state:
# redis-trib.rb check 192.168.100.150:7000

>>> Performing Cluster Check (using node 192.168.100.150:7000)
M: c007437678a379cd57e811843811e5b047337457 192.168.100.150:7000
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: bfc152ca2c9d140d980e3b8c3c1868239e32efe8 192.168.100.160:7005
slots: (0 slots) slave
replicates c007437678a379cd57e811843811e5b047337457
M: 247db306c69342ce8642e2a276199349aa25c6e7 192.168.100.160:7007
slots: (0 slots) master
1 additional replica(s)
S: 3cb24dc6d295fa9c261e36df67fda11151de7ef7 192.168.100.160:7003
slots: (0 slots) slave
replicates 6d53d46c401de9804c1fd48c07685f22d693f39c
M: 0550a444f45604dfc1f01191df67f513d5c5fc5b 192.168.100.160:7001
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 9866e7651c9e26a533112e96f578477293de9670 192.168.100.150:7004
slots: (0 slots) slave
replicates 0550a444f45604dfc1f01191df67f513d5c5fc5b
M: 6d53d46c401de9804c1fd48c07685f22d693f39c 192.168.100.150:7002
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: a9fbe573f52ac2aa272963ff89131f0246055cfc 192.168.100.150:7006
slots: (0 slots) slave
replicates 247db306c69342ce8642e2a276199349aa25c6e7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
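
Beyond check, redis-trib.rb has further subcommands for day-to-day cluster maintenance; a minimal sketch of a couple of them, shown only as an illustration of the Redis 4.0 tool:

# redis-trib.rb info 192.168.100.150:7000 #show key and slot distribution per master
# redis-trib.rb add-node 192.168.100.160:7007 192.168.100.150:7000 #add a new node by pointing it at any existing node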

Further reading:
深入浅出Redis-redis哨兵集群
Redis Sentinel机制与用法(一)
https://segmentfault.com/a/1190000002680804
https://www.cnblogs.com/jaycekon/p/6237562.html
