LocalRateLimit(HTTP): Add dynamic token bucket support #36623
base: main
Conversation
Signed-off-by: Vikas Choudhary (vikasc) <[email protected]>
CC @envoyproxy/api-shepherds: Your approval is needed for changes made to the API.
Force-pushed from 457f0a9 to 71ecf21
I think we still need a way to limit the overhead and memory of the token buckets. It's unacceptable to let them grow without bound.
Thanks a lot for taking a look.
Signed-off-by: Vikas Choudhary (vikasc) <[email protected]>
Thanks for this contribution. Dynamic descriptor support is a very complex problem in the local rate limit, considering various limitations. I have taken a pass over the current implementation, but before leaving more comments on the implementation details, I will raise some questions first:
So, I suggest you take a step back and consider whether there is another way to resolve your requirement, like a new limit filter in your fork or using rate_limit/rate_limit_quota directly. If you insist on enhancing the local_rate_limit, then I think we should first provide an abstraction (like DynamicDescriptor) to wrap all this complexity (list/memory management, the lifetime problem, cross-worker updating, etc. (PS: I think a lock may be an option if we can limit the lock to the new feature only)), and then we could integrate that high-level abstraction into the local_rate_limit. Otherwise it's hard for reviewers to ensure the quality of the existing feature. Thanks again for all your help and contribution. This is not a simple feature. 🌷
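The reviewer's suggested abstraction could be sketched roughly as follows. This is a hypothetical illustration, not code from the PR: the class name `DynamicDescriptorMap`, its methods, and the use of a plain mutex with a size-capped LRU list are all assumptions about how "list/memory management + a lock limited to the new feature" might be wrapped.

```cpp
#include <cstdint>
#include <list>
#include <mutex>
#include <string>
#include <unordered_map>
#include <utility>

// Hypothetical sketch of a DynamicDescriptor-style wrapper: one entry per
// unique request value, capped in size, with LRU eviction. A single mutex
// guards the map so cross-worker updates stay consistent; the lock is
// confined to this class, as the reviewer suggested.
class DynamicDescriptorMap {
public:
  explicit DynamicDescriptorMap(size_t max_entries) : max_entries_(max_entries) {}

  // Returns the token count for the descriptor value, creating an entry on
  // first use and evicting the least-recently-used entry once the cap is hit.
  uint32_t getOrCreate(const std::string& value, uint32_t initial_tokens) {
    std::lock_guard<std::mutex> lock(mutex_);
    auto it = index_.find(value);
    if (it != index_.end()) {
      // Move the touched entry to the front (most recently used).
      lru_.splice(lru_.begin(), lru_, it->second);
      return it->second->second;
    }
    if (index_.size() >= max_entries_) {
      // Evict the least-recently-used entry at the back of the list.
      index_.erase(lru_.back().first);
      lru_.pop_back();
    }
    lru_.emplace_front(value, initial_tokens);
    index_[value] = lru_.begin();
    return initial_tokens;
  }

  size_t size() const {
    std::lock_guard<std::mutex> lock(mutex_);
    return index_.size();
  }

private:
  using Entry = std::pair<std::string, uint32_t>;
  const size_t max_entries_;
  mutable std::mutex mutex_;
  std::list<Entry> lru_;                                            // MRU at front
  std::unordered_map<std::string, std::list<Entry>::iterator> index_;
};
```

In a real integration the stored value would be a token bucket (refill timer, fill rate) rather than a bare counter, but the cap-plus-LRU shape is the part that addresses the unbounded-memory concern.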
Thanks a lot @wbpcode for taking a look. Really appreciate it!
From the conversations on the linked issues, it seems there have been repeated requests for this functionality for a long time, so figuring out a solution in the community might benefit a number of users.
Sounds good. I will work on adding the abstraction as per your suggestion.
Signed-off-by: Vikas Choudhary (vikasc) <[email protected]>
Thanks so much for this update. I think this makes sense. Here are some high-level suggestions (I think we are on the right track, thanks):
// Actual number of dynamic descriptors will depend on the cardinality of unique values received from the http request for the omitted
// values.
// Default is 20.
uint32 dynamic_descripters_lru_cache_limit = 18;
May name this simply `max_dynamic_descripters`. (We may change the elimination algorithm in the future, who knows?) And please use the wrapper number type `google.protobuf.UInt32Value`.
And please add an explicit bool to enable this feature, like `google.protobuf.BoolValue use_dynamic_descripters`.
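Taken together, the reviewer's suggestion amounts to something like the following proto fields. This is a sketch of the proposal only: the field numbers are illustrative and the names come from the review comment, not from merged code.

```proto
// Explicit opt-in for dynamic descriptor support (suggested by reviewer).
google.protobuf.BoolValue use_dynamic_descripters = 18;

// Cap on dynamically created descriptors; wrapper type distinguishes
// "unset" (use the default of 20) from an explicit value.
google.protobuf.UInt32Value max_dynamic_descripters = 19;
```

Using `google.protobuf.UInt32Value` instead of a bare `uint32` is the standard proto3 way to make a field's absence detectable, which matters when 0 would otherwise be ambiguous with "not configured".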
This pull request has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in 7 days if no further activity occurs. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!
Commit Message: LocalRateLimit(HTTP): Add dynamic token bucket support
Additional Description:
fixes: #23351 and #19895
The user configures descriptors in the HTTP local rate limit filter. These descriptors are the "target" to match against the source descriptors built from the traffic (HTTP requests); only matched traffic is rate limited. When a request comes in, at runtime, descriptors are generated based on the rate_limit configuration, with `values` picked from the request as directed by that configuration. These generated descriptors are matched against the "target" (user-configured) descriptors. Generated descriptors are already very flexible, in the sense that "values" can be extracted from the request in a number of ways (dynamic metadata, the matcher API, computed regular expressions, etc.), but the "target" (user-configured) descriptors are rigid: the user is expected to statically configure the "values" in each descriptor. This PR adds flexibility by allowing blank "values" in the user-configured descriptors, which are treated as wildcards. Suppose a descriptor entry key is `client-id` and the value is left blank by the user; the local rate limit filter will then create a descriptor dynamically for each unique value of the `client-id` header. That means `client1`, `client2`, and so on will each get a dedicated descriptor and token bucket. To keep resource consumption bounded, an LRU cache is maintained for dynamic descriptors, with a default size of 20 that is configurable.
Docs Changes: TODO
Release Notes: TODO