Make cluster meet reliable under link failures #461
base: unstable
Conversation
When there is a link failure while a MEET request is in flight, the sending node stops sending further MEETs and starts sending PINGs. Since every node responds to a PING from an unknown node with a PONG, the receiving node never adds the sending node; the sending node, however, adds the receiving node when it sees the PONG. This can lead to asymmetry in cluster membership. This change makes the sender keep sending MEET until it sees a PONG, avoiding the asymmetry.
Posting this for initial comments. I can migrate the test to the new framework once #442 is merged.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##           unstable     #461      +/-  ##
============================================
+ Coverage     69.83%   69.94%   +0.11%
============================================
  Files           109      109
  Lines         61801    61808       +7
============================================
+ Hits          43158    43234      +76
+ Misses        18643    18574      -69
     * normal PING packets. */
    node->flags &= ~CLUSTER_NODE_MEET;

    /* NOTE: We cannot clear the MEET flag from the node until we get a response
This makes sense to me, but I thought we briefly discussed sending the MEET only once, and not sending another one on reconnect. The previous logic was "we'll only send a single MEET"; I'm wondering if there was any logic somewhere that relied on that behavior.
Are we getting into the territory that CLUSTER MEET should only be sent when it was generated by the user/admin?
Yes.
The initial MEET is still triggered by an admin/user command.
/* We can clear the flag after the first packet is sent.
* If we'll never receive a PONG, we'll never send new packets
* to this node. Instead after the PONG is received and we
* are no longer in meet/handshake status, we want to send
* normal PING packets. */
The previous code comment above doesn't restrict the MEET to being sent exactly once. I can't think of any scenario where sending multiple MEETs will break anything - it is similar to an admin sending MEET multiple times. We send MEET only when the connection is established in clusterLinkConnectHandler, so we still limit sending MEET to when a connection is newly set up, not every cluster cron run.
this is similar to when an admin sends MEET multiple times
I believe the cluster deduplicates multiple identical requests to the same IP/port.
So, is there a worst-case scenario where the node we originally intended to connect to has changed, and with this change we send the MEET command to the wrong node?
I can't think of any. I went through the code and it seems okay to get a double MEET. My ask is: can we update the test to also cover this edge case? I believe we can drop the PONG messages on the source node, kill the connection, and trigger a second MEET message.
In the test case I am using multiple MEETs to make sure we don't stop dropping the link early.
wait_for_condition 1000 50 {
    [CI $b cluster_stats_messages_meet_received] >= 3
} else {
    fail "Cluster node $a never sent multiple MEETs to $b"
}
B isn't processing these, right? It's just immediately dropping the first three and not processing them; it only ever processes the 4th one, correct?
int cluster_close_link_on_packet_drop; /* Debug config that goes along with cluster_drop_packet_filter.
                                          When set, the link is closed on packet drop. */
Is there any alternative you thought of for testing without introducing this particular flag?
In order to test this, the target node has to close the physical connection, not just drop the packet. Simply dropping the packet is not a valid scenario in production, since we use TCP and expect reliable transmission.
The only other approach I can think of is to use iptables to drop packets while the connection is being set up. I couldn't find examples in the Tcl tests that use iptables. Do you have a better approach in mind?
In the cluster tests for failover, we use SIGSTOP to stop a node (there's some Tcl helper function for it, proc pause_process, I think). Then it doesn't respond to any network traffic. SIGCONT is used to wake it up again.
SIGSTOP and SIGCONT may not reproduce the scenario that this PR is fixing. We want the connection to be established successfully so that the sender sends the MEET message, but the connection has to be torn down before the MEET is received by the receiver. This makes it tricky.
OK. I think it's worth investing in redis/redis#11095 to avoid this issue altogether.
Thanks, I wasn't aware of this linked issue. IMO these two issues can be solved independently. The linked issue tries to improve the admin experience for the MEET command, whereas this PR addresses a specific gap in the MEET implementation.
The problem addressed in this PR (asymmetric cluster membership) can happen with SYNC MEET as well, due to link failures, so it is worth solving. The handshake nodes will still be removed after the handshake timeout (same as node_timeout of 15s). Wdyt?
Yeah, I still believe this is a problem even with #11095.
Awesome material for our next release, which will be full of cluster improvements. Is it worth mentioning in the release notes? Btw @srgsanky you need to commit with -s. See the instructions on the DCO CI job's details page.
I would also be inclined to backport it.
When I tried to merge the new changes into my fork, I ended up with a merge commit.
I want to sign off just 49a884c, but the rebase is adding a signoff to all commits 315b757..d52c8f3, which were not made by me. Do you have any recommendation to fix this? As an alternate option, I can start fresh and add a new commit from the tip of unstable, but I am not sure if I will be able to reuse this PR.
Force-pushed from d8aa71c to 2ff9879.
I believe it's possible to undo a merge. If nothing works, then it's always possible to start from scratch with a new branch and cherry-pick all your commits into it. Then you can rename the branches and force-push to this PR's branch.
@srgsanky The commit missing the DCO is just the top one. You should just be able to do
Just some minor nitpicks around the tests; overall it LGTM.
tags {tls:skip external:skip cluster} {

set base_conf [list cluster-enabled yes]
start_multiple_servers 2 [list overrides $base_conf] {

test "Cluster nodes are reachable" {
Some tests have this indentation since we wanted to preserve git history, when we ported them from the old to new framework. For new tests, they should ideally be indented. Maybe we should just format them at this point, since we are already losing so much history.
set b 0
set a 1

test "Cluster nodes haven't met each other" {
    assert {[llength [get_cluster_nodes $a]] == 1}
    assert {[llength [get_cluster_nodes $b]] == 1}
}
Suggested change, from:

set b 0
set a 1

test "Cluster nodes haven't met each other" {
    assert {[llength [get_cluster_nodes $a]] == 1}
    assert {[llength [get_cluster_nodes $b]] == 1}
}

to:

test "Cluster nodes haven't met each other" {
    assert {[llength [get_cluster_nodes 1]] == 1}
    assert {[llength [get_cluster_nodes 0]] == 1}
}
If you much prefer a and b, I'm okay with it, but it seems just as useful to call them 0 and 1 to me.
set b_port [srv 0 port]

R $a CLUSTER MEET 127.0.0.1 $b_port
Suggested change, from:

R $a CLUSTER MEET 127.0.0.1 $b_port

to:

R 1 CLUSTER MEET 127.0.0.1 [srv 0 port]

Similar to the previous point about avoiding renaming the nodes to a and b.
@@ -44,6 +44,7 @@ test "Cluster nodes hard reset" {
     R $id config set repl-diskless-load disabled
     R $id config set cluster-announce-hostname ""
     R $id DEBUG DROP-CLUSTER-PACKET-FILTER -1
+    R $id DEBUG CLOSE-CLUSTER-LINK-ON-PACKET-DROP 0
Suggested removal:

R $id DEBUG CLOSE-CLUSTER-LINK-ON-PACKET-DROP 0
I assume this is no longer needed now that the tests were moved?
wait_for_condition 1000 50 {
    [CI $b cluster_stats_messages_meet_received] >= 3
} else {
    fail "Cluster node $a never sent multiple MEETs to $b"
Suggested change (the two lines differ only in whitespace):

fail "Cluster node $a never sent multiple MEETs to $b"

We should figure out a linter for Tcl.