client: send node secret with every client-to-server RPC #16799
Conversation
Getting a more predictable node registration flow is a nice benefit 😄
client/client.go
Outdated
// Block until we've registered at least once so that we know the server has
// our node secret and we can authenticate
select {
case <-c.registeredCh:
case <-c.shutdownCh:
	return
}
This substantially changes what happens if a disconnected client agent with workloads restarts. Currently if a workload crashes, it can be restarted locally without reconnecting to the server. With this change after a restart the client will be blocked here and not monitoring workloads for exits, so they will not be restarted.
I think this also breaks disconnected client agents with Consul based TTL health checks as Nomad won't heartbeat those checks until the runners start.
Luckily I think there's a pretty easy fix that's also a nice optimization: persist whether first registration has occurred. This means we only ever block for first registration when there are no workloads running anyway! If there are runners running, then we know we've successfully registered, so we should start running them again ASAP.
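A minimal sketch of that check, with illustrative names (`clientState` and `shouldBlockForRegistration` are stand-ins, not the actual Nomad client internals):

```go
package main

// Sketch only: illustrative types, not the real Nomad client internals.

// clientState stands in for what the client persists across agent restarts.
type clientState struct {
	registeredOnce bool // has Node.Register ever succeeded on this node?
	restoredAllocs int  // allocation runners restored from local state
}

// shouldBlockForRegistration returns true only for a truly fresh client:
// one that has never completed Node.Register and has no restored workloads.
// If runners were restored, the server must already know the node secret,
// so we start them immediately and register in the background.
func shouldBlockForRegistration(s clientState) bool {
	return !s.registeredOnce && s.restoredAllocs == 0
}

func main() {
	fresh := clientState{}
	restarted := clientState{registeredOnce: true, restoredAllocs: 3}
	println(shouldBlockForRegistration(fresh))     // true: nothing to run, safe to block
	println(shouldBlockForRegistration(restarted)) // false: start runners right away
}
```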
Refactoring
This does make me realize I don't think anyone writing a Nomad Client from scratch would structure the code this way... breaking the currently monolithic `NewClient` into its 3 actual phases:
- `NewClient` - initialize the struct, no side effects or IO
- `Register` - only blocks on first run
- `Run` - ...the runners! all the non-registration goroutines get spawned here
...would maybe make all of this easier to reason about, but that would be a pretty major refactor and touch innumerable tests. I highly doubt this is worth pursuing, but it's fun to dream.
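For what it's worth, a rough sketch of that three-phase shape, with hypothetical signatures rather than a concrete refactor proposal:

```go
package main

import "context"

// Sketch only: a hypothetical three-phase client lifecycle, not real Nomad code.

type Client struct {
	registered chan struct{}
}

// NewClient initializes the struct: no side effects, no IO, no goroutines.
func NewClient() *Client {
	return &Client{registered: make(chan struct{})}
}

// Register blocks until the first successful Node.Register (or ctx is done),
// so callers know the server holds the node secret before anything else runs.
func (c *Client) Register(ctx context.Context) error {
	// ... perform Node.Register with retries here, then:
	close(c.registered)
	return nil
}

// Run spawns the alloc runners and all other non-registration goroutines.
func (c *Client) Run(ctx context.Context) error {
	select {
	case <-c.registered:
	case <-ctx.Done():
		return ctx.Err()
	}
	// ... start runners, heartbeating, stats collection, etc.
	return nil
}

func main() {
	ctx := context.Background()
	c := NewClient()
	_ = c.Register(ctx)
	_ = c.Run(ctx)
}
```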
Gah, I totally missed the restart case 🤦 Persisting registration is simple enough, will fix. And that'll probably make for an easy way to block the RPCs specifically as well (as noted above).
I sort of wish the server was what handed out the node secret because that would make it really easy to identify a node that hadn't yet registered because it wouldn't yet have a secret. Maybe we can revisit that workflow when we look at secure introduction of nodes in the near-ish future, and that might be a good time to consider the refactoring you're talking about here as well.
Ok @schmichael I think I've got this working as discussed. I do want to do a little more end-to-end testing of this before merging but I figure in the meantime I'd try to get feedback on the approach.
Co-authored-by: Michael Schurter <[email protected]>
The RPC handlers expect to see `nil` ACL objects whenever ACLs are disabled. By using `nil` as a sentinel value, we risk nil pointer exceptions and improper handling of `nil` returned from our various auth methods, which can lead to privilege escalation bugs. This is the third in a series to eliminate the use of `nil` ACLs as a sentinel value for when ACLs are disabled.

This patch creates a new "virtual" ACL object for checking permissions on client operations and a matching `AuthenticateClientOnly` method for client-only RPCs that can produce that ACL. Unlike the server ACLs PR, this also includes a special case for "legacy" client RPCs where the client was not previously sending the secret as it should (leaning on mTLS only). Those client RPCs were fixed in Nomad 1.6.0, but it'll take a while before we can guarantee they'll be present during upgrades.

Ref: hashicorp/nomad-enterprise#1218
Ref: #18703
Ref: #18715
Ref: #16799
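To illustrate the general pattern described here, a minimal sketch of a non-nil "virtual" client ACL replacing the nil sentinel; the names (`ClientACL`, `authenticateClientOnly`, `AllowClientOp`) are illustrative, not Nomad's actual `acl` package API:

```go
package main

// Sketch only: illustrative pattern, not the real Nomad acl package API.

// ACL is a stand-in capability set.
type ACL struct {
	clientOps bool // may perform node/client-level operations
}

// ClientACL is a pre-built "virtual" ACL granting exactly the permissions a
// client node needs, so handlers never get a nil ACL meaning "caller is a client".
var ClientACL = &ACL{clientOps: true}

// AllowClientOp treats nil as deny instead of as a special sentinel.
func (a *ACL) AllowClientOp() bool {
	return a != nil && a.clientOps
}

// authenticateClientOnly returns the virtual client ACL when the request
// carries a valid node secret, and nil (deny) otherwise.
func authenticateClientOnly(nodeSecretValid bool) *ACL {
	if nodeSecretValid {
		return ClientACL
	}
	return nil
}

func main() {
	println(authenticateClientOnly(true).AllowClientOp())  // true
	println(authenticateClientOnly(false).AllowClientOp()) // false: nil denies
}
```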
The RPC handlers expect to see `nil` ACL objects whenever ACLs are disabled. By using `nil` as a sentinel value, we risk nil pointer exceptions and improper handling of `nil` returned from our various auth methods, which can lead to privilege escalation bugs. This is the third in a series to eliminate the use of `nil` ACLs as a sentinel value for when ACLs are disabled.

This patch leverages the refactored `auth` package to remove the weird "mixed auth" helper functions that only support the Variables read/list RPC handlers. Instead, pass the ACL object and claim together into the `AllowVariableOperations` method in the usual `acl` package.

Ref: hashicorp/nomad-enterprise#1218
Ref: #18703
Ref: #18715
Ref: #16799
Ref: #18730
Fixes: #15875
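As a hedged illustration of folding the "mixed auth" path into a single check that takes the ACL and workload claims together, here is a sketch; the names and the implicit-prefix rule are illustrative, not the actual `AllowVariableOperations` signature:

```go
package main

// Sketch only: illustrative shape, not Nomad's actual acl package API.

// ACL is a stand-in for a parsed ACL policy.
type ACL struct {
	variableRead map[string]bool // variable path -> allowed
}

// Claims stands in for a workload identity attached to the request.
type Claims struct {
	job string
}

// allowVariableRead takes the ACL and any workload claims together, so the
// handler calls one method instead of separate "mixed auth" helpers.
func allowVariableRead(a *ACL, c *Claims, path string) bool {
	if a != nil && a.variableRead[path] {
		return true
	}
	// A workload identity implicitly covers its own job's variable prefix.
	return c != nil && path == "nomad/jobs/"+c.job
}

func main() {
	acl := &ACL{variableRead: map[string]bool{"ops/db": true}}
	claims := &Claims{job: "web"}
	println(allowVariableRead(acl, nil, "ops/db"))            // true via ACL policy
	println(allowVariableRead(nil, claims, "nomad/jobs/web")) // true via claims
	println(allowVariableRead(nil, nil, "ops/db"))            // false: no auth at all
}
```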
The RPC handlers expect to see `nil` ACL objects whenever ACLs are disabled. By using `nil` as a sentinel value, we risk nil pointer exceptions and improper handling of `nil` returned from our various auth methods, which can lead to privilege escalation bugs. This is the final patch in a series to eliminate the use of `nil` ACLs as a sentinel value for when ACLs are disabled.

This patch adds a new virtual ACL policy field for when ACLs are disabled and updates our authentication logic to use it. Included:
* Extends auth package tests to demonstrate that nil ACLs are treated as failed auth and disabled ACLs succeed auth.
* Adds a new `AllowDebug` ACL check for the weird special casing we have for pprof debugging when ACLs are disabled.
* Removes the remaining unexported methods (and repeated tests) from the `nomad/acl.go` file.
* Updates the semgrep rules to detect improper nil ACL checking and removes the old invalid ACL checks.
* Updates the contributing guide for RPC authentication.

Ref: hashicorp/nomad-enterprise#1218
Ref: #18703
Ref: #18715
Ref: #16799
Ref: #18730
Ref: #18744
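A minimal sketch of the "ACLs disabled" virtual object idea, again with illustrative names rather than the real API:

```go
package main

// Sketch only: illustrative pattern, not the real Nomad acl package API.

type ACL struct {
	aclsDisabled bool // virtual policy: the cluster is running without ACLs
	debug        bool // explicit grant for debug endpoints
}

// ACLsDisabledACL is what authentication returns when ACLs are turned off,
// so handlers check a real object instead of a nil sentinel.
var ACLsDisabledACL = &ACL{aclsDisabled: true}

// AllowDebug keeps the special casing for pprof endpoints: allowed when ACLs
// are disabled or when explicitly granted; a nil ACL always denies.
func (a *ACL) AllowDebug() bool {
	return a != nil && (a.aclsDisabled || a.debug)
}

func main() {
	println(ACLsDisabledACL.AllowDebug()) // true
	var denied *ACL
	println(denied.AllowDebug()) // false
}
```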
In Nomad 1.5.3 we fixed a security bug that allowed bypass of ACL checks if the request came through a client node first. But this fix (knowingly) broke the identification of many client-to-server RPCs, which are now measured as if they were anonymous. The reason is that many client-to-server RPCs do not send the node secret and instead rely on the protection of mTLS.

This changeset ensures that the node secret is sent with every client-to-server RPC request. In a future version of Nomad we can add enforcement on the server side, but that was left out of this changeset to reduce risks to the safe upgrade path.

Sending the node secret as an auth token introduces a new problem during the initial introduction of a client. Clients send many RPCs concurrently with `Node.Register`, but until the node is registered the node secret is unknown to the server and will be rejected as invalid, causing permission denied errors. To fix that, this changeset introduces a gate on having successfully made a `Node.Register` RPC before any other RPCs can be sent (except for `Status.Ping`, which we need earlier but which also ignores the error because that handler doesn't do an authorization check). This ensures that we only send requests with a node secret already known to the server. It also makes client startup a little easier to reason about because we know `Node.Register` must succeed first, and it should make for a good place to hook in future plans for secure introduction of nodes. The tradeoff is that an existing client that has running allocs will take slightly longer (a second or two) to transition to ready after a restart, because the transition in `Node.UpdateStatus` is gated at the server by first submitting `Node.UpdateAlloc` with client alloc updates.

Fixes: #16798
Fixes: https://github.com/hashicorp/nomad-enterprise/issues/1069
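A minimal sketch of the client-side gate this description outlines, assuming a `registeredCh` closed after the first successful `Node.Register` (illustrative, not the exact implementation in this PR):

```go
package main

import (
	"context"
	"errors"
)

// Sketch only: an illustrative gate, not the exact Nomad client implementation.

type Client struct {
	registeredCh chan struct{} // closed after the first successful Node.Register
	shutdownCh   chan struct{}
}

// RPC blocks non-exempt requests until the server is guaranteed to know the
// node secret, so authenticated RPCs never race ahead of Node.Register.
func (c *Client) RPC(ctx context.Context, method string, args, reply any) error {
	// Status.Ping (and Node.Register itself) must be allowed through early.
	if method != "Status.Ping" && method != "Node.Register" {
		select {
		case <-c.registeredCh:
		case <-c.shutdownCh:
			return errors.New("client shutting down")
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	// ... forward the request with the node secret as the auth token.
	return nil
}

func main() {
	c := &Client{registeredCh: make(chan struct{}), shutdownCh: make(chan struct{})}
	close(c.registeredCh) // pretend registration already succeeded
	_ = c.RPC(context.Background(), "Node.UpdateStatus", nil, nil)
}
```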