CLUSTER README
==============
Redis Cluster is currently a work in progress; however, there are a few things
that you can already do with it to see how it works.
The following guide shows you how to set up a three-node cluster and issue some
basic commands against it.
... WORK IN PROGRESS ...
1) Show MIGRATE
2) Show CLUSTER MEET
3) Show link status detection with CLUSTER NODES
4) Show how to add slots with CLUSTER ADDSLOTS
5) Show redirection
6) Show cluster down
... WORK IN PROGRESS ...
TODO
====
*** WARNING: all of the following probably has some meaning only for
*** me (antirez), most of the info is not updated, so please consider this
*** file a private TODO list / brainstorming notes.
- disconnect FAIL clients after some pong idle time.
---------------------------------
* Majority rule: the cluster can continue to operate when all the hash slots are covered AND the majority of masters are reachable.
* Shutdown on request rule: when a node sees many connections closed, or a timeout longer than usual on almost all the other nodes, it will usually wait for the normal timeout before changing its state, unless it receives a query from a client: in that case it will put itself into error status right away.
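As an illustration, here is a minimal sketch of the majority rule in C; the struct and function names are hypothetical, not actual Redis source, and the slot count is assumed since it is not specified here:

    #include <stdbool.h>
    #include <stddef.h>

    #define CLUSTER_SLOTS 16384   /* slot count assumed, not specified here */

    struct node_info {
        bool is_master;
        bool reachable;
    };

    /* The cluster can continue only when every hash slot has a reachable
     * owner AND the strict majority of master nodes is reachable. */
    bool cluster_can_continue(struct node_info *slot_owner[CLUSTER_SLOTS],
                              struct node_info **masters, size_t num_masters)
    {
        for (size_t i = 0; i < CLUSTER_SLOTS; i++)
            if (slot_owner[i] == NULL || !slot_owner[i]->reachable)
                return false;

        size_t up = 0;
        for (size_t i = 0; i < num_masters; i++)
            if (masters[i]->reachable) up++;
        return up * 2 > num_masters;
    }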
--------------------------------
* When a node is asked for a key that is not its business, it will reply:
-ASK 1.2.3.4:6379 (in case we want the client to ask just one time)
-MOVED <slotid> 1.2.3.4:6379 (in case the hash slot is permanently moved)
So with -ASK a client should just retry the query against this new node, a single time.
With -MOVED the client should update its hash slots table to reflect the fact that now the specified node is the one to contact for the specified hash slot.
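A minimal sketch of how a client might react to these replies; the parsing follows the formats shown above, and the table and function names are hypothetical:

    #include <stdio.h>

    /* Hypothetical client-side hash slot table: slot -> "ip:port". */
    static char slot_table[16384][32];   /* slot count assumed */

    /* Given a redirection error such as "-ASK 1.2.3.4:6379" or
     * "-MOVED <slotid> 1.2.3.4:6379", return the address to retry
     * against, or NULL if the error is not a redirection. */
    const char *handle_redirection(const char *err)
    {
        static char addr[32];
        int slot;

        if (sscanf(err, "-MOVED %d %31s", &slot, addr) == 2 &&
            slot >= 0 && slot < 16384) {
            /* Permanent move: remember the new owner of this hash slot. */
            snprintf(slot_table[slot], sizeof(slot_table[slot]), "%s", addr);
            return addr;
        }
        if (sscanf(err, "-ASK %31s", addr) == 1) {
            /* One-shot redirection: retry once, don't touch the table. */
            return addr;
        }
        return NULL;
    }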
* Nodes communicate using a binary protocol.
* Node failure detection.
1) Every node contains information about all the other nodes (see the sketch after this list):
- Whether the node is believed to be working ok or not
- The hash slots for which this node is responsible
- If the node is a master or a slave
- If it is a slave, the slave of which node
- if it is a master, the list of slave nodes
- The slaves are ordered by their "<ip>:<port>" string, from lower to
higher, lexicographically. When a master is down, the cluster will
try to elect the first slave in the list.
2) Every node also contains the unix time at which every other node was
last reported to work properly (that is, it replied to a ping or any other
protocol request correctly). For every node we also store the timestamp
at which we sent the latest ping, so we can easily compute the current
lag.
3) From time to time a node pings a random node, preferring the nodes
with the least recent "alive" timestamp: three random nodes are selected,
and the one with the oldest alive timestamp is pinged.
4) The ping packet also contains the alive timestamps of a few random
nodes, so that the receiver of the ping can update its alive table
whenever a received alive timestamp is more recent than the
one present in its local table.
In the ping packet, the "gossip" information for every node looks like
this:
<ip>:<port>:<status>:<pingsent_timestamp>:<pongreceived_timestamp>
where status is one of OK, POSSIBLE_FAILURE, FAILURE.
5) The node replies to a ping with a pong packet, which also contains a
random selection of node timestamps.
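A minimal sketch, in C, of the per-node table and the ping-target selection described in items 1-4; all names are illustrative guesses, not the actual Redis Cluster data structures:

    #include <stdlib.h>
    #include <time.h>

    #define STATUS_OK               0
    #define STATUS_POSSIBLE_FAILURE 1
    #define STATUS_FAILURE          2

    /* What a node remembers about every other node (items 1 and 2). */
    struct cluster_node {
        char   addr[32];              /* "<ip>:<port>" */
        int    status;                /* OK / POSSIBLE_FAILURE / FAILURE */
        int    is_master;             /* master or slave? */
        struct cluster_node *master;  /* if a slave: its master */
        unsigned char slots[16384/8]; /* bitmap of served slots (count assumed) */
        time_t ping_sent;             /* when we sent the latest ping */
        time_t pong_received;         /* "alive": last correct reply */
    };

    /* One gossip entry inside a ping/pong packet (items 4 and 5):
     * <ip>:<port>:<status>:<pingsent_timestamp>:<pongreceived_timestamp> */
    struct gossip_entry {
        char   addr[32];
        int    status;
        time_t ping_sent;
        time_t pong_received;
    };

    /* Item 3: sample three random nodes and return the one with the
     * oldest alive (pong_received) timestamp as the next ping target. */
    struct cluster_node *pick_ping_target(struct cluster_node **nodes,
                                          size_t n)
    {
        struct cluster_node *oldest = NULL;
        for (int i = 0; i < 3 && n > 0; i++) {
            struct cluster_node *cand = nodes[rand() % n];
            if (oldest == NULL || cand->pong_received < oldest->pong_received)
                oldest = cand;
        }
        return oldest;
    }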
A given node thinks another node may be in a failure state once there is a
ping timeout longer than 30 seconds (configurable).
When a possible failure is detected, the node performs the following actions (a sketch follows the list):
1) Is the average lag across all the other nodes large? For instance,
larger than 30 seconds / 2 = 15 seconds? Then probably *we* are the ones
disconnected. In that case we don't trust our lag data, and reset all
the sent-ping timestamps to zero. This way, when we reconnect, there
is no risk that we'll claim many nodes are down and take inappropriate
actions.
2) Messages from nodes marked as failed are *always* ignored by the other
nodes. A new node needs to be "introduced" by a good online node.
3) If we are well connected (that is, condition "1" is not true) and a
node's timeout is > 30 seconds, we mark the node as POSSIBLE_FAILURE
(a flag in the cluster node structure). Every time we send a ping
to another node we inform it that we detected this
condition, as already stated.
4) Once a node receives a POSSIBLE_FAILURE status for a node that is
already marked as POSSIBLE_FAILURE locally, it sends a message of type
NODE_FAILURE_DETECTED to all the other nodes, communicating the
ip/port of the specified node.
All the nodes then need to update the status of this node, setting it
to FAILURE.
5) If the node in FAILURE state is a master, a Slave Election needs to
be performed.
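A minimal sketch of steps 1 and 3, reusing struct cluster_node and the status flags from the earlier sketch; the logic and names illustrate the text above, they are not the actual implementation:

    #include <time.h>

    #define NODE_TIMEOUT 30   /* seconds, configurable per the text above */

    void check_failures(struct cluster_node **nodes, size_t n, time_t now)
    {
        double total_lag = 0;
        for (size_t i = 0; i < n; i++)
            total_lag += (double)(now - nodes[i]->pong_received);

        if (n > 0 && total_lag / n > NODE_TIMEOUT / 2.0) {
            /* Step 1: probably *we* are disconnected. Don't trust the
             * lag data: reset all sent-ping timestamps to zero. */
            for (size_t i = 0; i < n; i++)
                nodes[i]->ping_sent = 0;
            return;
        }
        /* Step 3: we are well connected, so mark individual nodes that
         * timed out as POSSIBLE_FAILURE. */
        for (size_t i = 0; i < n; i++)
            if (now - nodes[i]->pong_received > NODE_TIMEOUT)
                nodes[i]->status = STATUS_POSSIBLE_FAILURE;
    }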
SLAVE ELECTION
1) The slave election is performed by the first slave (with slaves ordered
lexicographically). More precisely, it is the first functioning slave, so
if the first slave is marked as failing, the next slave will perform the
election, and so forth. Such a slave is called the "Successor" (see the
sketch after this list).
2) The Successor starts by checking that all the nodes in the cluster have
already marked the master as being in FAILURE state. If at least one node
does not agree, no action is performed.
3) If all the nodes agree that the master is failing, the Successor does
the following:
a) It will send a SUCCESSION message to all the other nodes, which will
update their hash slot tables accordingly. It will make sure that all
the nodes are updated, and if some node did not receive the message
it will keep trying.
b) Once all nodes not marked as FAILURE have accepted the SUCCESSION
message, it will update its own table and will start acting as a master,
accepting write queries.
c) Every node receiving the succession message, if not already informed
of the change, will broadcast the same message to three other random
nodes. No action is performed if the specified host was already marked
as the master node.
d) A node that was a slave of the original master that failed will
switch master to the new one once the SUCCESSION message is received.
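A minimal sketch of successor selection as described in point 1, again reusing struct cluster_node from the earlier sketch; the names are illustrative:

    #include <string.h>

    /* Among the failed master's slaves, pick the lexicographically
     * smallest "<ip>:<port>" that is still functioning. */
    struct cluster_node *pick_successor(struct cluster_node **slaves,
                                        size_t n)
    {
        struct cluster_node *successor = NULL;
        for (size_t i = 0; i < n; i++) {
            if (slaves[i]->status == STATUS_FAILURE) continue;
            if (successor == NULL ||
                strcmp(slaves[i]->addr, successor->addr) < 0)
                successor = slaves[i];
        }
        return successor;
    }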
RANDOM
1) When selecting a slave, the system will try to pick one with an IP different from that of the master and the other slaves, if possible.
2) The PING packet also contains information about the local configuration checksum. This is the SHA1 of the current configuration, without the bits that normally change from one node to another (like the latest ping reply, the failure status of nodes, and so forth). From time to time the local config SHA1 is checked against those of the other nodes, and if there is a mismatch between our configuration and the most common one that lasts for more than N seconds, the most common configuration is requested and retrieved from another node. The event is logged (see the sketch after this list).
3) Every time a node updates its internal cluster configuration, it dumps the config to the cluster.conf file. On startup the configuration is reloaded.
Nodes can share the cluster configuration when needed (for instance if the SHA1 does not match) using this exact same format.
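A minimal sketch of the configuration-checksum comparison in point 2; FNV-1a stands in for SHA1 only to keep the example self-contained, and all names are hypothetical:

    #include <stdint.h>
    #include <stddef.h>

    /* Checksum of the stable part of the config, i.e. excluding volatile
     * fields such as the latest ping reply and failure status. FNV-1a is
     * a stand-in for the SHA1 the text describes. */
    uint64_t config_checksum(const unsigned char *stable_cfg, size_t len)
    {
        uint64_t h = 1469598103934665603ULL;
        for (size_t i = 0; i < len; i++) {
            h ^= stable_cfg[i];
            h *= 1099511628211ULL;
        }
        return h;
    }

    /* Checksum reported by the largest number of nodes: if ours differs
     * from it for more than N seconds, the node would fetch the most
     * common configuration from another node and log the event. */
    uint64_t most_common_checksum(const uint64_t *sums, size_t n)
    {
        uint64_t best = 0;
        size_t best_count = 0;
        for (size_t i = 0; i < n; i++) {
            size_t count = 0;
            for (size_t j = 0; j < n; j++)
                if (sums[j] == sums[i]) count++;
            if (count > best_count) { best_count = count; best = sums[i]; }
        }
        return best;
    }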
CLIENTS
- Clients may be configured to use slaves to perform reads, when read-after-write consistency is not required.
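A minimal sketch of client-side read routing under this rule; the connection type and all names are hypothetical:

    #include <stdbool.h>
    #include <stdlib.h>

    struct conn;   /* opaque connection handle, hypothetical */

    struct shard {
        struct conn  *master;
        struct conn **slaves;
        size_t        num_slaves;
    };

    /* Send writes, and reads that need read-after-write consistency,
     * to the master; spread all other reads across the slaves. */
    struct conn *route(struct shard *s, bool is_write, bool needs_consistency)
    {
        if (is_write || needs_consistency || s->num_slaves == 0)
            return s->master;
        return s->slaves[rand() % s->num_slaves];
    }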