There are two approaches here: one is a hands-on, hash-slot-based distributed setup, the other uses the redis-cli --cluster create command to initialize the cluster.
The first approach: a cluster has exactly 16384 slots, numbered 0-16383 (0 to 2^14 - 1). These slots are divided among all of the cluster's master nodes, with no constraint on how they are distributed; you can assign any set of slot numbers to any master, and the cluster records the node-to-slot mapping. Once the node/slot relationship is settled, each key is hashed and the result taken modulo 16384; the remainder decides which slot the key falls into: slot = CRC16(key) % 16384. Data is moved in units of slots, and because the number of slots is fixed, this is easy to handle, which solves the data-migration problem. The slots solve a granularity problem: they coarsen the unit of data, which makes movement easier. The hash solves a mapping problem: the key's hash value determines its slot, which makes data placement straightforward.
Advantage: this fixes the data-skew problem of plain consistent hashing by inserting an extra layer, the hash slot, between the data and the nodes to manage the relationship between them. In effect, a node holds slots, and the slots hold the data.
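As a quick sanity check of slot = CRC16(key) % 16384, Redis will compute the slot for any key via the CLUSTER KEYSLOT command, so you never have to implement CRC16 yourself. A minimal sketch against one of the nodes started later in this post (the port 7001 and the key name are just placeholders):
# Ask a running node which slot a key would map to
# ("user:1000" is an arbitrary example key)
redis-cli -p 7001 CLUSTER KEYSLOT user:1000
# The reply is an integer in 0-16383, i.e. CRC16(key) % 16384;
# the master that owns that slot is where the key will be stored.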
The second approach is just a plain master-replica (master-slave) setup.
For the hands-on hash-slot setup, let's write a script:
#!/bin/bash
# Redis version
REDIS_VERSION="6.0.8"
# Number of nodes to create
NUM_NODES=6
# Create the Redis node containers one by one
for ((i=1; i<=$NUM_NODES; i++))
do
    # Node name
    NODE_NAME="redis-node-$i"
    # Data directory on the host
    DATA_DIR="/data/redis/share/$NODE_NAME"
    # Port number (7001, 7002, ...)
    PORT=$((7000 + i))
    # Start the Redis node container in cluster mode with AOF persistence
    docker run -d --name $NODE_NAME --net host --privileged=true -v $DATA_DIR:/data redis:$REDIS_VERSION \
        --cluster-enabled yes --appendonly yes --port $PORT
    # Print node info
    echo "Redis Node $i started on port $PORT"
done
# Remind about cluster initialization
echo "Remember to configure and initialize your Redis cluster using 'redis-cli'."
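Assuming the script is saved as test.sh (as it is in the session below), it needs execute permission before it can run. Note that in this walkthrough only three of the nodes ended up on this host, with the other three (ports 7004-7006) presumably started the same way on a second server, 182.61.40.160, which appears in the cluster initialization step later:
# Make the script executable and run it (test.sh as used below)
chmod +x test.sh
./test.sh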
Let's try it now:
[root@VM-0-16-centos ~]# ./test.sh
138bbba4e1ee6827987956425c9ef80441a734fd2782810bb56271a5d9af6d20
Redis Node 1 started on port 7001
7055b8656676f2445a197c41537857656b443ec711166e73ce94505d09c51129
Redis Node 2 started on port 7002
32096b179af52b4acc13398863e08a383556512be98b0a1fe69cd57913d25d16
Redis Node 3 started on port 7003
Remember to configure and initialize your Redis cluster using 'redis-cli'.
[root@VM-0-16-centos ~]# docker ps -a
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS         PORTS                                NAMES
32096b179af5   redis:6.0.8   "docker-entrypoint…"     10 seconds ago   Up 9 seconds                                        redis-node-3
7055b8656676   redis:6.0.8   "docker-entrypoint…"     10 seconds ago   Up 9 seconds                                        redis-node-2
138bbba4e1ee   redis:6.0.8   "docker-entrypoint…"     10 seconds ago   Up 9 seconds                                        redis-node-1
45c7388bedfd   mysql:5.7     "docker-entrypoint…"     2 days ago       Up 9 hours     33060/tcp, 0.0.0.0:10050->3306/tcp   mysql-master
[root@VM-0-16-centos /]# ls
bin boot data dev etc home lib lib64 lost+found media mnt mydata opt proc root run sbin srv sys tmp usr var
[root@VM-0-16-centos /]# cd da
-bash: cd: da: No such file or directory
[root@VM-0-16-centos /]# cd data/
[root@VM-0-16-centos data]# ls
redis
[root@VM-0-16-centos data]# cd redis/
[root@VM-0-16-centos redis]# ls
share
[root@VM-0-16-centos redis]# cd share/
[root@VM-0-16-centos share]# ls
redis-node-1 redis-node-2 redis-node-3
[root@VM-0-16-centos share]# cd redis-node-1
[root@VM-0-16-centos redis-node-1]# ls
appendonly.aof nodes.conf
[root@VM-0-16-centos redis-node-1]# cd !
-bash: cd: !: No such file or directory
[root@VM-0-16-centos redis-node-1]# cd ~
[root@VM-0-16-centos ~]# ls
test.sh
[root@VM-0-16-centos ~]# docker exec -it redis-node-1 /bin/bash
root@VM-0-16-centos:/data# ll
bash: ll: command not found
root@VM-0-16-centos:/data# ls
appendonly.aof nodes.conf
root@VM-0-16-centos:/data# docker ps -a
bash: docker: command not found
root@VM-0-16-centos:/data# redis-cli --cluster create 119.29.17.67:7001 119.29.17.67:7002 119.29.17.67:7003 182.61.40.160:7004 182.61.40.160:7005 182.61.40.160:7006 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 – 5460
Master[1] -> Slots 5461 – 10922
Master[2] -> Slots 10923 – 16383
Adding replica 182.61.40.160:7006 to 119.29.17.67:7001
Adding replica 119.29.17.67:7003 to 182.61.40.160:7004
Adding replica 182.61.40.160:7005 to 119.29.17.67:7002
M: e8b84390a14df617e7bf732f5ea13cdb621719a5 119.29.17.67:7001
slots:[0-5460] (5461 slots) master
M: f82fdcff3fe2ca9c47758ef8f4d322de5cf22656 119.29.17.67:7002
slots:[10923-16383] (5461 slots) master
S: 25edd779e16f7d94e2fce77a1d6eb91347066902 119.29.17.67:7003
replicates b13f417e5c6785af87488d52c1fb742703960366
M: b13f417e5c6785af87488d52c1fb742703960366 182.61.40.160:7004
slots:[5461-10922] (5462 slots) master
S: a13ce20e71d8813c078cf7fe5a87d799e58d583e 182.61.40.160:7005
replicates f82fdcff3fe2ca9c47758ef8f4d322de5cf22656
S: d00826dab49649331fcec2aec471cc554b7baac8 182.61.40.160:7006
replicates e8b84390a14df617e7bf732f5ea13cdb621719a5
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 119.29.17.67:7001)
M: e8b84390a14df617e7bf732f5ea13cdb621719a5 119.29.17.67:7001
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: d00826dab49649331fcec2aec471cc554b7baac8 182.61.40.160:7006
slots: (0 slots) slave
replicates e8b84390a14df617e7bf732f5ea13cdb621719a5
S: 25edd779e16f7d94e2fce77a1d6eb91347066902 119.29.17.67:7003
slots: (0 slots) slave
replicates b13f417e5c6785af87488d52c1fb742703960366
M: b13f417e5c6785af87488d52c1fb742703960366 182.61.40.160:7004
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: a13ce20e71d8813c078cf7fe5a87d799e58d583e 182.61.40.160:7005
slots: (0 slots) slave
replicates f82fdcff3fe2ca9c47758ef8f4d322de5cf22656
M: f82fdcff3fe2ca9c47758ef8f4d322de5cf22656 119.29.17.67:7002
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
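With all 16384 slots reported as covered, the cluster state can be double-checked from any member node. A minimal set of checks (node 1 is used here purely as an example entry point):
# High-level health summary: cluster_state should be "ok" and
# cluster_slots_assigned should be 16384
redis-cli -p 7001 cluster info
# Every node with its role, slot ranges and replication relationships
redis-cli -p 7001 cluster nodes
# redis-cli's own end-to-end consistency check of the whole cluster
redis-cli --cluster check 119.29.17.67:7001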