
Request Limiter #25093

Merged · mpalmi merged 5 commits into main from request-limiter on Jan 26, 2024
Conversation

@mpalmi (Contributor) commented on Jan 26, 2024

This commit introduces two new adaptive concurrency limiters in Vault, which should protect the server from overload during periods of untenable request rates. Each limiter adjusts the number of allowable in-flight requests based on latency measurements taken across the request duration. This approach lets us reject requests before doing any work and prevents clients from exceeding server capacity.
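
For intuition, here is a minimal, illustrative sketch of the latency-feedback idea: grow the in-flight budget while observed latency stays near a healthy baseline, and shrink it when latency degrades. The actual limiters are built on an adaptive concurrency-limits library; none of the names or thresholds below are Vault identifiers.

```go
// Illustrative only: a hypothetical AIMD-style limiter, not Vault's implementation.
package limits

import (
	"sync"
	"time"
)

type adaptiveLimiter struct {
	mu       sync.Mutex
	limit    int           // current in-flight budget
	min, max int           // hard bounds on the budget
	inFlight int           // requests currently being served
	baseline time.Duration // latency considered healthy
}

// Acquire reports whether a new request may proceed; on false, the caller
// would reject the request up front (e.g. with a 503) instead of doing work.
func (l *adaptiveLimiter) Acquire() bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.inFlight >= l.limit {
		return false
	}
	l.inFlight++
	return true
}

// Release records the request's observed latency and adjusts the budget:
// additive increase while latency is healthy, multiplicative decrease when
// latency degrades.
func (l *adaptiveLimiter) Release(latency time.Duration) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.inFlight--
	if latency <= l.baseline {
		if l.limit < l.max {
			l.limit++
		}
		return
	}
	l.limit /= 2
	if l.limit < l.min {
		l.limit = l.min
	}
}
```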

The limiters intentionally target two separate vectors that have been shown to lead to server over-utilization:

  • Back pressure from the storage backend, resulting in bufferbloat in the WAL system. (enterprise)
  • Back pressure from CPU over-utilization via PKI issue requests (specifically for RSA keys), resulting in failed heartbeats.

Storage constraints can be accounted for by limiting logical requests according to their http.Method. We only limit requests with write-based methods, since these will result in storage Puts and exhibit the aforementioned bufferbloat.
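
As a rough sketch of that classification (the exact method set is an assumption on my part, not taken from the diff):

```go
// Sketch only: decide whether a logical request should pass through the
// write limiter based on its HTTP method.
package limits

import "net/http"

// isWriteMethod reports whether a request method implies a storage Put.
// The method list here is assumed, not copied from the PR.
func isWriteMethod(method string) bool {
	switch method {
	case http.MethodPost, http.MethodPut, http.MethodPatch, http.MethodDelete:
		return true
	default:
		return false
	}
}
```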

CPU constraints are accounted for using the same underlying library and technique; however, they require special treatment. The maximum number of concurrent pki/issue requests found in testing (again, specifically for RSA keys) is far lower than the minimum tolerable write request rate. Without separate limiting, we would artificially impose limits on tolerable request rates for non-PKI requests. To specifically target PKI issue requests, we add a new PathsSpecial field, called limited, allowing backends to specify a list of paths which should get special-case request limiting.
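
A hedged illustration of how a backend might opt paths into the special-case limiter, assuming PathsSpecial remains the usual *logical.Paths struct and the new field is a list of path patterns; the field name and pattern below follow the PR description, not the actual diff:

```go
// Illustrative backend wiring only; field name and patterns are assumptions.
package mybackend

import "github.com/hashicorp/vault/sdk/logical"

func pathsSpecial() *logical.Paths {
	return &logical.Paths{
		// Existing special-path categories (Root, Unauthenticated, ...)
		// would sit alongside the new entry and are omitted here.
		Limited: []string{
			"issue/*", // e.g. pki/issue/<role>, the CPU-bound hot spot
		},
	}
}
```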

For the sake of code cleanliness and future extensibility, we introduce the concept of a LimiterRegistry. The registry proposed in this PR has two entries, corresponding to the two vectors above. Each Limiter entry has its own maximum and minimum concurrency, allowing the limiters to react to latency deviations independently and to handle high request volumes against their targeted bottlenecks (CPU and storage).
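
The rough shape of that registry, with illustrative names and placeholder concurrency bounds (not the PR's actual identifiers or defaults):

```go
// Rough sketch of the registry idea; names and numbers are placeholders.
package limits

type RequestLimiter struct {
	MinConcurrency int
	MaxConcurrency int
	// latency-tracking state omitted
}

type LimiterRegistry struct {
	Limiters map[string]*RequestLimiter
}

func NewLimiterRegistry() *LimiterRegistry {
	return &LimiterRegistry{
		Limiters: map[string]*RequestLimiter{
			// Write (storage-bound) requests tolerate far more concurrency
			// than the special-cased, CPU-bound paths such as pki/issue.
			"write":        {MinConcurrency: 100, MaxConcurrency: 5000},
			"special-path": {MinConcurrency: 5, MaxConcurrency: 500},
		},
	}
}
```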

In both cases, utilization is effectively throttled before Vault reaches a degraded state. The resulting 503 Service Unavailable is a retryable HTTP response code, so a client that retries gracefully will eventually succeed. Clients should retry with jitter and exponential backoff; Vault's own API already does this via the go-retryablehttp library.
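
For callers that talk to Vault over plain HTTP rather than through its Go API, go-retryablehttp provides the same behavior: 5xx responses such as 503 are retried with backoff between attempts. The URL and tuning values below are placeholders, and the backoff policy shown (linear with jitter) is just one option the library offers; it is not necessarily the policy Vault's API uses.

```go
// Standalone client sketch using hashicorp/go-retryablehttp; values are placeholders.
package main

import (
	"fmt"
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

func main() {
	client := retryablehttp.NewClient()
	client.RetryMax = 5                                // give the limiter time to shed load
	client.RetryWaitMin = 500 * time.Millisecond
	client.RetryWaitMax = 10 * time.Second
	client.Backoff = retryablehttp.LinearJitterBackoff // jittered waits (the library default is pure exponential)

	// 503s from the request limiter are retried automatically by the
	// library's default retry policy before an error is returned.
	resp, err := client.Get("https://vault.example.com:8200/v1/secret/data/app")
	if err != nil {
		fmt.Println("request failed after retries:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode)
}
```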

Limiter testing was performed, with great success, via mixed-workload benchmarks and across a deployment of agent pods.

@mpalmi added this to the 1.16.0-rc1 milestone Jan 26, 2024
@mpalmi requested review from raskchanky and a team January 26, 2024 16:30
@github-actions bot added the hashicorp-contributed-pr label Jan 26, 2024
@mpalmi marked this pull request as ready for review January 26, 2024 16:55
@mpalmi requested a review from a team as a code owner January 26, 2024 16:55

Build Results:
All builds succeeded! ✅


github-actions bot commented Jan 26, 2024

CI Results:
Failures:

Test Type | Package | Test                                                | Logs
race      | vault   | TestOIDC_PeriodicFunc                               | view test results
race      | vault   | TestOIDC_PeriodicFunc/test-key-nil-next-signing-key | view test results

@raskchanky (Contributor) left a comment
Your ENT PR also has changes to vault/request_forwarding.go which don't appear to be in this PR. Is that on purpose?

@mpalmi (Contributor, Author) commented Jan 26, 2024

> Your ENT PR also has changes to vault/request_forwarding.go which don't appear to be in this PR. Is that on purpose?

Yes to this as well! That was a change I made while trying to debug some things. I decided to drop it here. We may have to bring it back if we find that we need changes in the gRPC handler, but until then, leaving it out.

@raskchanky (Contributor) left a comment

@mpalmi merged commit 43be9fc into main Jan 26, 2024
109 of 110 checks passed
@mpalmi deleted the request-limiter branch January 26, 2024 19:26
Monkeychip pushed a commit that referenced this pull request Jan 29, 2024