
LocalRateLimit(HTTP): Add dynamic token bucket support #36623

Draft: wants to merge 12 commits into base `main` from `lrl-dynamic-tokenbuckets`
Conversation

vikaschoudhary16
Contributor

@vikaschoudhary16 vikaschoudhary16 commented Oct 16, 2024

Commit Message: LocalRateLimit(HTTP): Add dynamic token bucket support
Additional Description:
fixes: #23351 and #19895

The user configures descriptors in the HTTP local rate limit filter. These descriptors are the "target" to be matched against the source descriptors built from the traffic (HTTP requests); only matched traffic is rate limited. When a request comes in, descriptors are generated at runtime based on the rate_limit configuration, with values picked from the request as directed by that configuration. These generated descriptors are then matched against the "target" (user-configured) descriptors. The generated descriptors are already very flexible, in the sense that "values" can be extracted from the request in a number of ways (dynamic metadata, the matcher API, computed regular expressions, etc.), but the "target" (user-configured) descriptors are very rigid: the user is expected to statically configure the "values" in each descriptor.
This PR adds flexibility by allowing blank "values" in the user-configured descriptors. A blank value is treated as a wildcard. Suppose a descriptor entry's key is client-id and its value is left blank by the user; the local rate limit filter will then create a descriptor dynamically for each unique value of the client-id header. That means client1, client2, and so on will each get a dedicated descriptor and token bucket.
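For illustration, a hedged sketch of what such a configuration might look like (field names follow the existing envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit proto; the blank value on client-id is the new wildcard behavior this PR proposes, and the numeric limits are made up):

```yaml
typed_config:
  "@type": type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
  stat_prefix: http_local_rate_limiter
  # Global bucket used when no descriptor matches.
  token_bucket:
    max_tokens: 1000
    tokens_per_fill: 1000
    fill_interval: 60s
  descriptors:
  - entries:
    - key: client-id
      value: ""          # blank: one dynamic descriptor (and token bucket) per unique client-id
    token_bucket:
      max_tokens: 10
      tokens_per_fill: 10
      fill_interval: 60s
```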

To keep resource consumption under a limit, an LRU cache is maintained for the dynamic descriptors, with a default size of 20, which is configurable.
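A minimal sketch of the kind of LRU cache described above (class and member names are hypothetical, not the PR's actual code; refill logic and thread safety are omitted):

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// Hypothetical sketch: maps a dynamically generated descriptor key (e.g. the
// concrete value seen for "client-id") to its token-bucket state, evicting the
// least recently used entry once the configured limit (default 20) is reached.
struct TokenBucket {
  double tokens = 0;  // remaining tokens; refill logic omitted in this sketch
};

class LruDescriptorCache {
public:
  explicit LruDescriptorCache(std::size_t max_size = 20) : max_size_(max_size) {}

  // Returns the bucket for `key`, creating it (and evicting the least
  // recently used entry if the cache is full) when it does not exist yet.
  TokenBucket& getOrCreate(const std::string& key) {
    auto it = index_.find(key);
    if (it != index_.end()) {
      // Move the hit entry to the front so it is marked most recently used.
      order_.splice(order_.begin(), order_, it->second);
      return it->second->second;
    }
    if (order_.size() >= max_size_) {
      // Evict the least recently used descriptor and its bucket.
      index_.erase(order_.back().first);
      order_.pop_back();
    }
    order_.emplace_front(key, TokenBucket{});
    index_[key] = order_.begin();
    return order_.front().second;
  }

  bool contains(const std::string& key) const { return index_.count(key) > 0; }
  std::size_t size() const { return order_.size(); }

private:
  std::size_t max_size_;
  std::list<std::pair<std::string, TokenBucket>> order_;  // front = most recent
  std::unordered_map<std::string,
                     std::list<std::pair<std::string, TokenBucket>>::iterator>
      index_;
};
```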

Docs Changes: TODO
Release Notes: TODO


CC @envoyproxy/api-shepherds: Your approval is needed for changes made to (api/envoy/|docs/root/api-docs/).
envoyproxy/api-shepherds assignee is @markdroth
CC @envoyproxy/api-watchers: FYI only for changes made to (api/envoy/|docs/root/api-docs/).


Signed-off-by: Vikas Choudhary (vikasc) <[email protected]>
@vikaschoudhary16 vikaschoudhary16 force-pushed the lrl-dynamic-tokenbuckets branch from 457f0a9 to 71ecf21 on October 16, 2024 08:34
Signed-off-by: Vikas Choudhary (vikasc) <[email protected]>
@wbpcode
Member

wbpcode commented Oct 16, 2024

I think we still need a way to limit the overhead and memory of the token buckets. It's unacceptable to let them grow without bound.

@vikaschoudhary16
Contributor Author

I think we still need a way to limit the overhead and memory of the token buckets. It's unacceptable to let them grow without bound.

Thanks a lot for taking a look.
I will add logic to keep it within limits.

@wbpcode wbpcode self-assigned this Oct 16, 2024
Signed-off-by: Vikas Choudhary (vikasc) <[email protected]>
Signed-off-by: Vikas Choudhary (vikasc) <[email protected]>
Signed-off-by: Vikas Choudhary (vikasc) <[email protected]>
Signed-off-by: Vikas Choudhary (vikasc) <[email protected]>
Signed-off-by: Vikas Choudhary (vikasc) <[email protected]>
Signed-off-by: Vikas Choudhary (vikasc) <[email protected]>
Signed-off-by: Vikas Choudhary (vikasc) <[email protected]>
@wbpcode
Member

wbpcode commented Oct 20, 2024

Thanks for this contribution. Dynamic descriptor support is a very complex problem in the local rate limit, considering its various limitations.

I have taken a pass through the current implementation, but before flushing out more comments on the implementation details, I will throw out some questions first:

  1. Should the LRU list be updated when one of its entries is hit on a worker? I think it should, to keep active descriptors alive. But this also introduces more complexity.
  2. Should different descriptors have independent lists? Or will a descriptor with many more distinct dynamic values occupy the whole list?
  3. Lifetime management and search logic are much more complex now.

So, I suggest you take a step back and consider whether there is another way to meet your requirement, like a new limit filter in your fork or using rate_limit/rate_limit_quota directly.

If you insist on enhancing the local_rate_limit, then I think we should first provide an abstraction (like DynamicDescriptor) to wrap all this complexity (list/memory management, the lifetime problem, cross-worker updates, etc.; a lock may be an option if we can limit it to the new feature only), and then integrate that high-level abstraction into the local_rate_limit. Otherwise it is hard for reviewers to ensure the quality of the existing features.

Thanks again for all your help and contribution. This is not a simple feature. 🌷

@vikaschoudhary16
Contributor Author

Thanks for this contribution. Dynamic descriptor support is a very complex problem in the local rate limit, considering its various limitations.

I have taken a pass through the current implementation, but before flushing out more comments on the implementation details, I will throw out some questions first:

  1. Should the LRU list be updated when one of its entries is hit on a worker? I think it should, to keep active descriptors alive. But this also introduces more complexity.
  2. Should different descriptors have independent lists? Or will a descriptor with many more distinct dynamic values occupy the whole list?
  3. Lifetime management and search logic are much more complex now.

So, I suggest you take a step back and consider whether there is another way to meet your requirement, like a new limit filter in your fork or using rate_limit/rate_limit_quota directly.

If you insist on enhancing the local_rate_limit, then I think we should first provide an abstraction (like DynamicDescriptor) to wrap all this complexity (list/memory management, the lifetime problem, cross-worker updates, etc.; a lock may be an option if we can limit it to the new feature only), and then integrate that high-level abstraction into the local_rate_limit. Otherwise it is hard for reviewers to ensure the quality of the existing features.

Thanks again for all your help and contribution. This is not a simple feature. 🌷

Thanks a lot @wbpcode for taking a look. Really appreciate it!

So, I suggest you take a step back and consider whether there is another way to meet your requirement, like a new limit filter in your fork or using rate_limit/rate_limit_quota directly.

From the conversations on the linked issues, it seems there have been repeated requests for this functionality for a long time, so figuring out a solution in the community might benefit a number of users.
Yes, rate_limit_quota can address the use case when the user is fine with an external dependency. In some cases, users are keen on avoiding an external dependency and are looking for a solution through the local rate limiter path.

If you insist on enhancing the local_rate_limit, then I think we should first provide an abstraction (like DynamicDescriptor) to wrap all this complexity (list/memory management, the lifetime problem, cross-worker updates, etc.; a lock may be an option if we can limit it to the new feature only), and then integrate that high-level abstraction into the local_rate_limit.

Sounds good. I will work on adding the abstraction as per your suggestion.
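A rough sketch of the kind of wrapper suggested above (all names hypothetical, not the PR's actual code; eviction and refill are omitted, and the mutex is confined to the new dynamic-descriptor path as proposed):

```cpp
#include <cassert>
#include <cstdint>
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>

// Hypothetical DynamicDescriptor abstraction: hides the map/lifetime
// management behind one interface and confines the lock to this new code
// path, leaving the existing static-descriptor path untouched.
class DynamicDescriptor {
public:
  // Bucket state is handed out via shared_ptr so a worker holding a bucket
  // stays safe even if the entry is evicted concurrently by another thread.
  struct Bucket {
    uint64_t tokens;
  };

  explicit DynamicDescriptor(uint64_t initial_tokens)
      : initial_tokens_(initial_tokens) {}

  // Finds or creates the bucket for a concrete request value (e.g. the
  // observed client-id header value).
  std::shared_ptr<Bucket> bucketFor(const std::string& value) {
    std::lock_guard<std::mutex> guard(mutex_);  // lock only on this new path
    auto& bucket = buckets_[value];
    if (bucket == nullptr) {
      bucket = std::make_shared<Bucket>(Bucket{initial_tokens_});
    }
    return bucket;  // shared ownership outlives any eviction (not shown)
  }

private:
  const uint64_t initial_tokens_;
  std::mutex mutex_;
  std::unordered_map<std::string, std::shared_ptr<Bucket>> buckets_;
};
```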

Signed-off-by: Vikas Choudhary (vikasc) <[email protected]>
@wbpcode
Member

wbpcode commented Oct 25, 2024

Thanks so much for this update. I think this makes sense. Here are some high-level suggestions (I think we are on the right track, thanks):

  1. Please add an explicit bool flag to enable this feature, to avoid users mis-using it when they happen to configure an empty descriptor.
  2. This is a new feature, so you can ignore the timer-based token bucket completely. That would make the implementation simpler.
  3. You may need to refactor the return type of requestAllowed, because the buckets in a dynamic descriptor may now be destroyed on another thread.

// Actual number of dynamic descriptors will depend on the cardinality of unique values received from the http request for the omitted
// values.
// Default is 20.
uint32 dynamic_descripters_lru_cache_limit = 18;
Member


Maybe name this simply max_dynamic_descripters (we may change the eviction algorithm in the future, who knows?), and please use the wrapper number type google.protobuf.UInt32Value.

And please add an explicit bool to enable this feature, like google.protobuf.BoolValue use_dynamic_descripters.
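Taken together, the reviewer's suggestion might look roughly like this in the proto (field names as written in the comment above; the tag numbers are illustrative, not confirmed by the PR):

```proto
// Explicit opt-in for dynamic (wildcard) descriptors; off by default.
google.protobuf.BoolValue use_dynamic_descripters = 18;

// Maximum number of dynamic descriptors kept. The name deliberately avoids
// mentioning LRU so the eviction algorithm can change later. Default is 20.
google.protobuf.UInt32Value max_dynamic_descripters = 19;
```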


This pull request has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in 7 days if no further activity occurs. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!

@github-actions github-actions bot added the stale stalebot believes this issue/PR has not been touched recently label Nov 24, 2024
@wbpcode wbpcode removed the stale stalebot believes this issue/PR has not been touched recently label Nov 25, 2024
@wbpcode wbpcode added the no stalebot Disables stalebot from closing an issue label Nov 25, 2024
Labels
api no stalebot Disables stalebot from closing an issue
Successfully merging this pull request may close these issues.

Limit to X requests/sec PER CUSTOMER using LocalRateLimit