This commit introduces two new adaptive concurrency limiters in Vault, which should handle overloading of the server during periods of untenable request rate. The limiter adjusts the number of allowable in-flight requests based on latency measurements performed across the request duration. This approach allows us to reject entire requests prior to doing any work and prevents clients from exceeding server capacity.

The limiters intentionally target two separate vectors that have been proven to lead to server over-utilization:

- Back pressure from the storage backend, resulting in bufferbloat in the WAL system. (enterprise)
- Back pressure from CPU over-utilization via PKI issue requests (specifically for RSA keys), resulting in failed heartbeats.

Storage constraints can be accounted for by limiting logical requests according to their http.Method. We only limit requests with write-based methods, since these will result in storage Puts and exhibit the aforementioned bufferbloat.

CPU constraints are accounted for using the same underlying library and technique; however, they require special treatment. The maximum number of concurrent pki/issue requests found in testing (again, specifically for RSA keys) is far lower than the minimum tolerable write request rate. Without separate limiting, we would artificially impose limits on tolerable request rates for non-PKI requests. To specifically target PKI issue requests, we add a new PathsSpecial field, called limited, allowing backends to specify a list of paths which should get special-case request limiting.

For the sake of code cleanliness and future extensibility, we introduce the concept of a LimiterRegistry. The registry proposed in this PR has two entries, corresponding to the two vectors above. Each Limiter entry has its own corresponding maximum and minimum concurrency, allowing them to react to latency deviation independently and handle high volumes of requests to targeted bottlenecks (CPU and storage).
In both cases, utilization is effectively throttled before Vault reaches a degraded state. The resulting 503 Service Unavailable is a retryable HTTP response code, so clients can retry gracefully and eventually succeed. Clients should handle this by retrying with jitter and exponential backoff; Vault's own API does so via the go-retryablehttp library. The limiters were validated successfully with benchmarks of mixed workloads and across a deployment of agent pods.
Showing 21 changed files with 1,165 additions and 585 deletions.
```release-note:feature
**Request Limiter**: Add adaptive concurrency limits to write-based HTTP
methods and special-case `pki/issue` requests to prevent overloading the Vault
server.
```
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: BUSL-1.1

package limits

import (
	"context"
	"errors"
	"fmt"
	"math"
	"sync/atomic"

	"github.com/armon/go-metrics"
	"github.com/hashicorp/go-hclog"
	"github.com/platinummonkey/go-concurrency-limits/core"
	"github.com/platinummonkey/go-concurrency-limits/limit"
	"github.com/platinummonkey/go-concurrency-limits/limiter"
	"github.com/platinummonkey/go-concurrency-limits/strategy"
)

var (
	// ErrCapacity indicates that Vault is not accepting new requests. Callers
	// in request paths should handle it by returning
	// http.StatusServiceUnavailable to the client.
	ErrCapacity = errors.New("Vault server temporarily overloaded")

	// DefaultDebugLogger opts out of the go-concurrency-limits internal debug
	// logger, since it's rather noisy. We generate the logs of interest in
	// Vault instead.
	DefaultDebugLogger limit.Logger = nil

	// DefaultMetricsRegistry opts out of the go-concurrency-limits internal
	// metrics, because we track what we care about in Vault.
	DefaultMetricsRegistry core.MetricRegistry = core.EmptyMetricRegistryInstance
)

const (
	// DefaultSmoothing adjusts how heavily we weight newer high-latency
	// detection. Higher values (>1) place more emphasis on recent
	// measurements; we set this below 1 to better tolerate short-lived spikes
	// in request rate.
	DefaultSmoothing = .1

	// DefaultLongWindow is the sliding window size used for the Exponential
	// Moving Average, chosen as a minimum of 1000 samples.
	DefaultLongWindow = 1000
)

// RequestLimiter is a thin wrapper for limiter.DefaultLimiter.
type RequestLimiter struct {
	*limiter.DefaultLimiter
}

// Acquire consults the underlying RequestLimiter to see if a new
// RequestListener can be acquired.
//
// The return values are a *RequestListener, which the caller can use to
// perform latency measurements, and a bool indicating whether or not a
// RequestListener was acquired.
//
// The returned RequestListener is short-lived and eventually
// garbage-collected; however, the RequestLimiter keeps track of in-flight
// concurrency using a token bucket implementation. The caller must release
// the resulting Limiter token by conducting a measurement.
//
// There are three return cases:
//
// 1) If request limiting is disabled, we return an empty RequestListener, so
// all measurements are no-ops.
//
// 2) If the request limit has been exceeded, we do not acquire a
// RequestListener and instead return nil, false. No measurement is required,
// since callers immediately return ErrCapacity.
//
// 3) If we have not exceeded the request limit, the caller must call one of
// OnSuccess(), OnDropped(), or OnIgnore() to return a measurement and release
// the underlying Limiter token.
func (l *RequestLimiter) Acquire(ctx context.Context) (*RequestListener, bool) {
	// Transparently handle the case where the limiter is disabled.
	if l == nil || l.DefaultLimiter == nil {
		return &RequestListener{}, true
	}

	lsnr, ok := l.DefaultLimiter.Acquire(ctx)
	if !ok {
		metrics.IncrCounter([]string{"limits", "concurrency", "service_unavailable"}, 1)
		// If the token acquisition fails, we've reached capacity and we won't
		// get a listener, so just return nil.
		return nil, false
	}

	return &RequestListener{
		DefaultListener: lsnr.(*limiter.DefaultListener),
		released:        new(atomic.Bool),
	}, true
}

// concurrencyChanger adjusts the current allowed concurrency with an
// exponential backoff as we approach the max limit.
func concurrencyChanger(limit int) int {
	change := math.Sqrt(float64(limit))
	if change < 1.0 {
		change = 1.0
	}
	return int(change)
}

var (
	// DefaultWriteLimiterFlags have a less conservative MinLimit to prevent
	// over-optimizing the request latency, which would result in
	// under-utilization and client starvation.
	DefaultWriteLimiterFlags = LimiterFlags{
		Name:     WriteLimiter,
		MinLimit: 100,
		MaxLimit: 5000,
	}

	// DefaultSpecialPathLimiterFlags have a conservative MinLimit to allow
	// more aggressive concurrency throttling for CPU-bound workloads such as
	// `pki/issue`.
	DefaultSpecialPathLimiterFlags = LimiterFlags{
		Name:     SpecialPathLimiter,
		MinLimit: 5,
		MaxLimit: 5000,
	}
)

// LimiterFlags establish some initial configuration for a new request
// limiter.
type LimiterFlags struct {
	// Name specifies the limiter name for registry lookup and logging.
	Name string

	// MinLimit defines the minimum concurrency floor, preventing
	// over-throttling of requests during periods of high traffic.
	MinLimit int

	// MaxLimit defines the maximum concurrency ceiling to prevent skewing to
	// a point of no return.
	//
	// We set this to a high value (5000) with the expectation that systems
	// with high-performing specs will tolerate higher limits, while the
	// algorithm will find its own steady-state concurrency well below this
	// threshold in most cases.
	MaxLimit int

	// InitialLimit defines the starting concurrency limit prior to any
	// measurements.
	//
	// If we start this value off too high, Vault could become overloaded
	// before the algorithm has a chance to adapt. Setting the value to the
	// minimum is a safety measure which could result in early request
	// rejection; however, the adaptive nature of the algorithm will prevent
	// this from being a prolonged state, as the allowed concurrency will
	// increase during normal operation.
	InitialLimit int
}

// NewRequestLimiter is a basic constructor for the RequestLimiter wrapper. It
// is responsible for setting up the Gradient2 Limit and instantiating a new
// wrapped DefaultLimiter.
func NewRequestLimiter(logger hclog.Logger, flags LimiterFlags) (*RequestLimiter, error) {
	logger.Info("setting up new request limiter",
		"initialLimit", flags.InitialLimit,
		"maxLimit", flags.MaxLimit,
		"minLimit", flags.MinLimit,
	)

	// NewGradient2Limit is the algorithm which drives request-limiting
	// decisions. It gathers latency measurements and calculates an
	// Exponential Moving Average to determine whether latency deviation
	// warrants a change in the current concurrency limit.
	lim, err := limit.NewGradient2Limit(flags.Name,
		flags.InitialLimit,
		flags.MaxLimit,
		flags.MinLimit,
		concurrencyChanger,
		DefaultSmoothing,
		DefaultLongWindow,
		DefaultDebugLogger,
		DefaultMetricsRegistry,
	)
	if err != nil {
		return nil, fmt.Errorf("failed to create gradient2 limit: %w", err)
	}

	strategy := strategy.NewSimpleStrategy(flags.InitialLimit)
	defLimiter, err := limiter.NewDefaultLimiter(lim, 1e9, 1e9, 10, 100, strategy, nil, DefaultMetricsRegistry)
	if err != nil {
		return &RequestLimiter{}, err
	}

	return &RequestLimiter{defLimiter}, nil
}
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: BUSL-1.1

package limits

import (
	"sync/atomic"

	"github.com/armon/go-metrics"
	"github.com/platinummonkey/go-concurrency-limits/limiter"
)

// RequestListener is a thin wrapper for limiter.DefaultListener to handle the
// case where request limiting is turned off.
type RequestListener struct {
	*limiter.DefaultListener
	released *atomic.Bool
}

// OnSuccess is called as a notification that the operation succeeded and the
// internally measured latency should be used as an RTT sample.
func (l *RequestListener) OnSuccess() {
	if l.DefaultListener != nil {
		metrics.IncrCounter([]string{"limits", "concurrency", "success"}, 1)
		l.DefaultListener.OnSuccess()
		l.released.Store(true)
	}
}

// OnDropped is called to indicate that the request failed and was dropped due
// to an internal server error. Note that this does not include ErrCapacity.
func (l *RequestListener) OnDropped() {
	if l.DefaultListener != nil {
		metrics.IncrCounter([]string{"limits", "concurrency", "dropped"}, 1)
		l.DefaultListener.OnDropped()
		l.released.Store(true)
	}
}

// OnIgnore is called to indicate that the operation failed before any
// meaningful RTT measurement could be made, so the sample should be ignored
// rather than introduce an artificially low RTT. It also provides an extra
// layer of protection against leaks of the underlying StrategyToken during
// recoverable panics in the request handler: we treat these as ignored,
// discard the measurement, and mark the listener as released.
func (l *RequestListener) OnIgnore() {
	if l.DefaultListener != nil && !l.released.Load() {
		metrics.IncrCounter([]string{"limits", "concurrency", "ignored"}, 1)
		l.DefaultListener.OnIgnore()
		l.released.Store(true)
	}
}
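The `released` guard above enables a defer-based safety net in request handlers: a deferred OnIgnore fires during panic unwinding but becomes a no-op once any callback has already released the token. Here is a self-contained sketch of that pattern (the `listener` and `handle` names are invented for illustration; the toy records events instead of feeding a limiter, and uses CompareAndSwap where the real code checks `released` only in OnIgnore).

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// listener mimics RequestListener's released guard: each callback records a
// measurement at most once, so a deferred OnIgnore is a safe panic net that
// never double-releases the underlying token.
type listener struct {
	released atomic.Bool
	events   []string
}

func (l *listener) OnSuccess() {
	if l.released.CompareAndSwap(false, true) {
		l.events = append(l.events, "success")
	}
}

func (l *listener) OnIgnore() {
	if l.released.CompareAndSwap(false, true) {
		l.events = append(l.events, "ignored")
	}
}

// handle models a request handler: the deferred OnIgnore runs even if the
// handler panics, and is a no-op after OnSuccess on the happy path.
func handle(l *listener, fail bool) {
	defer l.OnIgnore()
	if fail {
		panic("recoverable failure before a meaningful RTT measurement")
	}
	l.OnSuccess()
}

func main() {
	good := &listener{}
	handle(good, false)
	fmt.Println(good.events) // prints "[success]"

	bad := &listener{}
	func() {
		defer func() { recover() }() // the recoverable panic mentioned above
		handle(bad, true)
	}()
	fmt.Println(bad.events) // prints "[ignored]"
}
```

Exactly one measurement is recorded per request in both paths, which is the invariant the limiter's token accounting depends on.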