Starting with Redis 3.0, Redis ships an official clustering solution called Redis Cluster. It is fully decentralized and covers sharding (partitioning), replication, and failover. Before Redis 5.0, clusters were created and managed with the redis-trib tool, which requires Ruby; from Redis 5.0 onward, redis-cli can create and manage clusters directly. This article walks through building a Redis Cluster with Redis 5.0.10.
1. Environment Preparation

1.1 Cluster Plan

```text
Master1 - 192.168.1.161:6379
Slave1  - 192.168.1.161:6380
Master2 - 192.168.1.162:6379
Slave2  - 192.168.1.162:6380
Master3 - 192.168.1.163:6379
Slave3  - 192.168.1.163:6380
Master4 (added during scale-out) - 192.168.1.165:6379
Slave4  (added during scale-out) - 192.168.1.165:6380
```
A Redis Cluster needs at least six nodes, which can live on a single host or be spread across several. This walkthrough uses four virtual machines: first, a three-master/three-slave cluster is built on 161, 162, and 163, with each VM running two nodes (ports 6379 and 6380). Once the cluster is up, two more nodes (one master, one slave) are created on 165 and added to the cluster to demonstrate scaling out.
1.2 Download and Install

```bash
wget https://download.redis.io/releases/redis-5.0.10.tar.gz
tar -zxvf redis-5.0.10.tar.gz
cd redis-5.0.10
mkdir -p /opt/redis-cluster/6379 /opt/redis-cluster/6380
make install PREFIX=/opt/redis-cluster/6379
make install PREFIX=/opt/redis-cluster/6380
cp redis.conf /opt/redis-cluster/6379/bin
cp redis.conf /opt/redis-cluster/6380/bin
```
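As an optional sanity check (the paths follow this guide's layout), confirm both copies were installed and report the expected version:

```bash
# Both binaries should print "Redis server v=5.0.10 ..."
/opt/redis-cluster/6379/bin/redis-server --version
/opt/redis-cluster/6380/bin/redis-server --version
```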
2. Cluster Configuration

2.1 Edit the Node Configuration on Every Server

Edit /opt/redis-cluster/6379/bin/redis.conf:
```conf
# Comment out the bind directive so the node accepts remote connections
# bind 127.0.0.1
protected-mode no
cluster-enabled yes
daemonize yes
```
Edit /opt/redis-cluster/6380/bin/redis.conf the same way, plus the port and pidfile:
```conf
# Comment out the bind directive so the node accepts remote connections
# bind 127.0.0.1
protected-mode no
cluster-enabled yes
daemonize yes
port 6380
pidfile /var/run/redis_6380.pid
```
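If you prefer not to edit by hand, here is a minimal sed sketch for the 6380 copy, assuming the stock Redis 5.0 redis.conf defaults (bind 127.0.0.1, protected-mode yes, a commented-out cluster-enabled, daemonize no, port 6379); review the file afterwards before starting the node:

```bash
cd /opt/redis-cluster/6380/bin
sed -i 's/^bind 127.0.0.1/# bind 127.0.0.1/' redis.conf        # allow remote connections
sed -i 's/^protected-mode yes/protected-mode no/' redis.conf
sed -i 's/^# cluster-enabled yes/cluster-enabled yes/' redis.conf
sed -i 's/^daemonize no/daemonize yes/' redis.conf
sed -i 's/^port 6379/port 6380/' redis.conf                    # omit these last two
sed -i 's#^pidfile /var/run/redis_6379.pid#pidfile /var/run/redis_6380.pid#' redis.conf  # for the 6379 copy
```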
2.2 Start the Nodes

```bash
cd /opt/redis-cluster/6379/bin/
./redis-server redis.conf
cd /opt/redis-cluster/6380/bin/
./redis-server redis.conf
# Verify both processes are running
ps -ef | grep redis
```
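Beyond the process list, it is worth confirming each node actually answers on its port (a minimal check, using this guide's install paths):

```bash
# Each command should reply PONG
/opt/redis-cluster/6379/bin/redis-cli -p 6379 ping
/opt/redis-cluster/6380/bin/redis-cli -p 6380 ping
```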
2.3 Create the Redis Cluster

Run the cluster-creation command from any one node:
```bash
[root@master bin]# cd /opt/redis-cluster/6379/bin/
[root@master bin]# ./redis-cli --cluster create 192.168.1.161:6379 192.168.1.162:6379 192.168.1.163:6379 192.168.1.161:6380 192.168.1.162:6380 192.168.1.163:6380 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.1.162:6380 to 192.168.1.161:6379
Adding replica 192.168.1.163:6380 to 192.168.1.162:6379
Adding replica 192.168.1.161:6380 to 192.168.1.163:6379
M: 64364a4b2de9a82653b46be040e86f600fb5ac2d 192.168.1.161:6379
   slots:[0-5460] (5461 slots) master
M: 9ee1767e39b07033b480c82337620ed006162c8a 192.168.1.162:6379
   slots:[5461-10922] (5462 slots) master
M: 041fcc81090840f67efed70d3cd623076d15dbc4 192.168.1.163:6379
   slots:[10923-16383] (5461 slots) master
S: f59d4108d4fbe145c2d9c6f2d70e06991a4d63be 192.168.1.161:6380
   replicates 041fcc81090840f67efed70d3cd623076d15dbc4
S: 113134e4cedc9069db1af667b2f6cf23097d0b3e 192.168.1.162:6380
   replicates 64364a4b2de9a82653b46be040e86f600fb5ac2d
S: 65ad86c316f6452ca6a7429febe4a9d96f6c2e43 192.168.1.163:6380
   replicates 9ee1767e39b07033b480c82337620ed006162c8a
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join ...
>>> Performing Cluster Check (using node 192.168.1.161:6379)
M: 64364a4b2de9a82653b46be040e86f600fb5ac2d 192.168.1.161:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 113134e4cedc9069db1af667b2f6cf23097d0b3e 192.168.1.162:6380
   slots: (0 slots) slave
   replicates 64364a4b2de9a82653b46be040e86f600fb5ac2d
M: 041fcc81090840f67efed70d3cd623076d15dbc4 192.168.1.163:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 9ee1767e39b07033b480c82337620ed006162c8a 192.168.1.162:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 65ad86c316f6452ca6a7429febe4a9d96f6c2e43 192.168.1.163:6380
   slots: (0 slots) slave
   replicates 9ee1767e39b07033b480c82337620ed006162c8a
S: f59d4108d4fbe145c2d9c6f2d70e06991a4d63be 192.168.1.161:6380
   slots: (0 slots) slave
   replicates 041fcc81090840f67efed70d3cd623076d15dbc4
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@master bin]#
```
As shown above, the cluster was created successfully. You can now connect with redis-cli and inspect the cluster state:
```bash
./redis-cli -h 127.0.0.1 -p 6379 -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:168
cluster_stats_messages_pong_sent:164
cluster_stats_messages_sent:332
cluster_stats_messages_ping_received:159
cluster_stats_messages_pong_received:168
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:332
127.0.0.1:6379> cluster nodes
113134e4cedc9069db1af667b2f6cf23097d0b3e 192.168.1.162:6380@16380 slave 64364a4b2de9a82653b46be040e86f600fb5ac2d 0 1604321562000 5 connected
041fcc81090840f67efed70d3cd623076d15dbc4 192.168.1.163:6379@16379 master - 0 1604321563496 3 connected 10923-16383
64364a4b2de9a82653b46be040e86f600fb5ac2d 192.168.1.161:6379@16379 myself,master - 0 1604321563000 1 connected 0-5460
9ee1767e39b07033b480c82337620ed006162c8a 192.168.1.162:6379@16379 master - 0 1604321562000 2 connected 5461-10922
65ad86c316f6452ca6a7429febe4a9d96f6c2e43 192.168.1.163:6380@16380 slave 9ee1767e39b07033b480c82337620ed006162c8a 0 1604321564506 6 connected
f59d4108d4fbe145c2d9c6f2d70e06991a4d63be 192.168.1.161:6380@16380 slave 041fcc81090840f67efed70d3cd623076d15dbc4 0 1604321562484 4 connected
```
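The -c flag matters: it puts redis-cli in cluster mode, so it follows MOVED redirections when a key's slot lives on another master. A quick illustration with a hypothetical key (every key maps to a slot via CRC16(key) mod 16384; "foo" hashes to slot 12182, which at this point belongs to 192.168.1.163:6379):

```bash
127.0.0.1:6379> set foo bar
-> Redirected to slot [12182] located at 192.168.1.163:6379
OK
```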
3. Scaling Out the Cluster

3.1 Add a New Node

Prepare 192.168.1.165 the same way as the other hosts (install Redis 5.0.10 under /opt/redis-cluster/6379 and /opt/redis-cluster/6380, apply the same configuration changes, and start both nodes), then add its 6379 node to the cluster. The first address passed to add-node is the node being added; the second is any node already in the cluster:

```bash
[root@master bin]# cd /opt/redis-cluster/6379/bin/
[root@master bin]# ./redis-cli --cluster add-node 192.168.1.165:6379 192.168.1.161:6379
>>> Adding node 192.168.1.165:6379 to cluster 192.168.1.161:6379
>>> Performing Cluster Check (using node 192.168.1.161:6379)
M: 64364a4b2de9a82653b46be040e86f600fb5ac2d 192.168.1.161:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 113134e4cedc9069db1af667b2f6cf23097d0b3e 192.168.1.162:6380
   slots: (0 slots) slave
   replicates 64364a4b2de9a82653b46be040e86f600fb5ac2d
M: 041fcc81090840f67efed70d3cd623076d15dbc4 192.168.1.163:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 9ee1767e39b07033b480c82337620ed006162c8a 192.168.1.162:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 65ad86c316f6452ca6a7429febe4a9d96f6c2e43 192.168.1.163:6380
   slots: (0 slots) slave
   replicates 9ee1767e39b07033b480c82337620ed006162c8a
S: f59d4108d4fbe145c2d9c6f2d70e06991a4d63be 192.168.1.161:6380
   slots: (0 slots) slave
   replicates 041fcc81090840f67efed70d3cd623076d15dbc4
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.1.165:6379 to make it join the cluster.
[OK] New node added correctly.
```
Now check the node status:
```bash
[root@master bin]# ./redis-cli -h 127.0.0.1 -p 6379 -c
127.0.0.1:6379> cluster nodes
113134e4cedc9069db1af667b2f6cf23097d0b3e 192.168.1.162:6380@16380 slave 64364a4b2de9a82653b46be040e86f600fb5ac2d 0 1604321847825 5 connected
041fcc81090840f67efed70d3cd623076d15dbc4 192.168.1.163:6379@16379 master - 0 1604321842740 3 connected 10923-16383
64364a4b2de9a82653b46be040e86f600fb5ac2d 192.168.1.161:6379@16379 myself,master - 0 1604321844000 1 connected 0-5460
9ee1767e39b07033b480c82337620ed006162c8a 192.168.1.162:6379@16379 master - 0 1604321846808 2 connected 5461-10922
828c48dc72d52ff5be972512d3d87b70236af87c 192.168.1.165:6379@16379 master - 0 1604321845792 0 connected
65ad86c316f6452ca6a7429febe4a9d96f6c2e43 192.168.1.163:6380@16380 slave 9ee1767e39b07033b480c82337620ed006162c8a 0 1604321847000 6 connected
f59d4108d4fbe145c2d9c6f2d70e06991a4d63be 192.168.1.161:6380@16380 slave 041fcc81090840f67efed70d3cd623076d15dbc4 0 1604321844779 4 connected
127.0.0.1:6379>
```
The 6379 node on 165 now appears in the cluster as a master, but it holds no hash slots yet.
3.2 Assign Hash Slots to the New Node

The new master must be assigned hash slots before it can store any data: Redis Cluster maps each key to one of 16384 slots (CRC16(key) mod 16384), and a master only serves keys belonging to the slots assigned to it.
Run the reshard command against the cluster (any reachable node will do):
```bash
[root@master bin]# ./redis-cli --cluster reshard 192.168.1.165:6379
>>> Performing Cluster Check (using node 192.168.1.165:6379)
M: 828c48dc72d52ff5be972512d3d87b70236af87c 192.168.1.165:6379
   slots: (0 slots) master
M: 9ee1767e39b07033b480c82337620ed006162c8a 192.168.1.162:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 64364a4b2de9a82653b46be040e86f600fb5ac2d 192.168.1.161:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 041fcc81090840f67efed70d3cd623076d15dbc4 192.168.1.163:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 65ad86c316f6452ca6a7429febe4a9d96f6c2e43 192.168.1.163:6380
   slots: (0 slots) slave
   replicates 9ee1767e39b07033b480c82337620ed006162c8a
S: 113134e4cedc9069db1af667b2f6cf23097d0b3e 192.168.1.162:6380
   slots: (0 slots) slave
   replicates 64364a4b2de9a82653b46be040e86f600fb5ac2d
S: f59d4108d4fbe145c2d9c6f2d70e06991a4d63be 192.168.1.161:6380
   slots: (0 slots) slave
   replicates 041fcc81090840f67efed70d3cd623076d15dbc4
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)?
```
Enter the number of slots to move:
```text
How many slots do you want to move (from 1 to 16384)? 4000
```
Enter the ID of the node that will receive the slots.
From the earlier cluster nodes output, the ID of the new node 192.168.1.165:6379 is 828c48dc72d52ff5be972512d3d87b70236af87c.
```text
What is the receiving node ID? 828c48dc72d52ff5be972512d3d87b70236af87c
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
```
Enter all, so the 4000 slots are drawn evenly from every existing master.
Then type yes to confirm and start the migration.
Once it finishes, cluster nodes shows the new node holding slots 0-1332, 5461-6794, and 10923-12255: roughly 4000/3 ≈ 1333 slots taken from the start of each existing master's range.
```bash
127.0.0.1:6379> cluster nodes
113134e4cedc9069db1af667b2f6cf23097d0b3e 192.168.1.162:6380@16380 slave 64364a4b2de9a82653b46be040e86f600fb5ac2d 0 1604322686164 5 connected
041fcc81090840f67efed70d3cd623076d15dbc4 192.168.1.163:6379@16379 master - 0 1604322684000 3 connected 12256-16383
64364a4b2de9a82653b46be040e86f600fb5ac2d 192.168.1.161:6379@16379 myself,master - 0 1604322684000 1 connected 1333-5460
9ee1767e39b07033b480c82337620ed006162c8a 192.168.1.162:6379@16379 master - 0 1604322685155 2 connected 6795-10922
828c48dc72d52ff5be972512d3d87b70236af87c 192.168.1.165:6379@16379 master - 0 1604322686000 7 connected 0-1332 5461-6794 10923-12255
65ad86c316f6452ca6a7429febe4a9d96f6c2e43 192.168.1.163:6380@16380 slave 9ee1767e39b07033b480c82337620ed006162c8a 0 1604322683000 6 connected
f59d4108d4fbe145c2d9c6f2d70e06991a4d63be 192.168.1.161:6380@16380 slave 041fcc81090840f67efed70d3cd623076d15dbc4 0 1604322685000 4 connected
127.0.0.1:6379>
```
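As an optional verification, redis-cli's check subcommand re-validates slot assignment and coverage from any node; expect it to end with the same two [OK] lines seen earlier. If you would rather not pick a slot count by hand, redis-cli in Redis 5 also provides a rebalance subcommand (./redis-cli --cluster rebalance <host:port>) that evens slots out across masters.

```bash
# Should end with "[OK] All nodes agree about slots configuration."
# and "[OK] All 16384 slots covered."
./redis-cli --cluster check 192.168.1.161:6379
```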
3.3 Add a Slave for the New Node

Add 192.168.1.165:6380 as a slave of 192.168.1.165:6379:
```bash
[root@master bin]# ./redis-cli --cluster add-node 192.168.1.165:6380 192.168.1.165:6379 --cluster-slave --cluster-master-id 828c48dc72d52ff5be972512d3d87b70236af87c
>>> Adding node 192.168.1.165:6380 to cluster 192.168.1.165:6379
>>> Performing Cluster Check (using node 192.168.1.165:6379)
M: 828c48dc72d52ff5be972512d3d87b70236af87c 192.168.1.165:6379
   slots:[0-1332],[5461-6794],[10923-12255] (4000 slots) master
M: 9ee1767e39b07033b480c82337620ed006162c8a 192.168.1.162:6379
   slots:[6795-10922] (4128 slots) master
   1 additional replica(s)
M: 64364a4b2de9a82653b46be040e86f600fb5ac2d 192.168.1.161:6379
   slots:[1333-5460] (4128 slots) master
   1 additional replica(s)
M: 041fcc81090840f67efed70d3cd623076d15dbc4 192.168.1.163:6379
   slots:[12256-16383] (4128 slots) master
   1 additional replica(s)
S: 65ad86c316f6452ca6a7429febe4a9d96f6c2e43 192.168.1.163:6380
   slots: (0 slots) slave
   replicates 9ee1767e39b07033b480c82337620ed006162c8a
S: 113134e4cedc9069db1af667b2f6cf23097d0b3e 192.168.1.162:6380
   slots: (0 slots) slave
   replicates 64364a4b2de9a82653b46be040e86f600fb5ac2d
S: f59d4108d4fbe145c2d9c6f2d70e06991a4d63be 192.168.1.161:6380
   slots: (0 slots) slave
   replicates 041fcc81090840f67efed70d3cd623076d15dbc4
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.1.165:6380 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 192.168.1.165:6379.
[OK] New node added correctly.
[root@master bin]#
```
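As a final check, you can ask the new master for its attached replicas (the CLUSTER REPLICAS command is available from Redis 5.0; the node ID is the one used above):

```bash
# Should list the 192.168.1.165:6380 node as a connected slave
./redis-cli -h 192.168.1.165 -p 6379 cluster replicas 828c48dc72d52ff5be972512d3d87b70236af87c
```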