
Update 'Stop a Node' with more draining info #2671

Merged (13 commits) on Mar 19, 2018
Remove version # from range leases bullet
rmloveland committed Mar 15, 2018
commit 76a56222df363f3bf2bf63b0ee883342dc258b11
2 changes: 1 addition & 1 deletion v1.1/stop-a-node.md
@@ -15,7 +15,7 @@ For information about permanently removing nodes to downsize a cluster or react
### How It Works

- Cancels all current sessions without waiting.
- Transfers all *range leases* and Raft leadership to other nodes. (1.1.6)
- Transfers all *range leases* and Raft leadership to other nodes.
Contributor

nit: bold instead of italics.

Contributor Author

Thanks, fixed in bb14e1b
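
As background for the draining bullets in this diff: in CockroachDB 1.x, a graceful stop that triggers this drain sequence was initiated with `cockroach quit`. A minimal sketch, assuming an insecure deployment on the default host and port (the flags are illustrative; match them to your deployment):

```shell
# Gracefully stop the local node. The node drains first:
# it transfers its range leases and Raft leadership, and
# gossips its draining state so no new leases land on it.
cockroach quit --insecure --host=localhost
```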

- Gossips its draining state to the cluster so that no leases are transferred to the draining node. Note that this is a best-effort operation that times out after the duration specified by the `???` cluster setting, so other nodes may not receive the gossip info in time. (1.1.6)
Contributor

This still happens as of 1.1.6. In 1.1.5 and earlier, this part was broken.

Contributor Author

Thanks for confirming. Do you know if there is a cluster setting for this in 1.1.6? None of the documented cluster settings for 1.1.6 looks like the one, and it wasn't clear from a quick `SHOW ALL CLUSTER SETTINGS` on a 1.1.6 binary (though I may have missed it).
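
One way to hunt for the setting in question is to list all cluster settings from the CLI and filter by name. A hedged sketch (the exact setting name was still unknown in this thread, and `drain` is only a guess at the substring to filter on):

```shell
# List every cluster setting on the running node and keep
# only rows whose name or description mentions draining.
cockroach sql --insecure -e "SHOW ALL CLUSTER SETTINGS" | grep -i drain
```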

- No new ranges are transferred to the draining node, to avoid a possible loss of quorum after the node shuts down. (1.1.5)
Contributor

This is true for all 1.1.x versions, as far as I'm aware.

Contributor Author

Thanks! Again, leaving it in but removing the version number.
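
The quorum concern behind that bullet is simple Raft arithmetic: a range makes progress only while a strict majority of its replicas are live. A hypothetical sketch (not CockroachDB code) of why placing new ranges on a draining node is risky:

```python
def quorum(replicas: int) -> int:
    """Votes needed for a Raft group to make progress: a strict majority."""
    return replicas // 2 + 1

# With the default replication factor of 3, quorum is 2.
assert quorum(3) == 2
assert quorum(5) == 3

# If a new range were placed on a draining node, that node's shutdown
# would immediately leave the range with 2 of 3 replicas; losing any
# one more replica would drop it below quorum and stall the range.
```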