rate limit: Add quota API #16942
Conversation
Signed-off-by: Yan Avlasov <[email protected]>
CC @envoyproxy/api-shepherds: Your approval is needed for changes made to
// the descriptors, then the request admission is determined by the
// :ref:`overall_code <envoy_v3_api_field_service.ratelimit.v3.RateLimitResponse.overall_code>`.
//
// When quota expires due to timeout, a new RLS request will also be made.
Proactively, as soon as the existing quota expires? Or do we wait for the next request to hit that descriptor before we make another RLS request?
I think this came up on the last review and I think this is client dependent. The client could do either. Can we clarify this?
The client can certainly try to guess whether it makes sense to proactively re-request, but I don't think it will want to do that unconditionally, because that would mean that the client would never stop requesting quota for that descriptor, even if it never again receives any requests for that descriptor. I'm wondering if there will be cases where different descriptors have different traffic properties, and instead of the client having to come up with heuristics to tell them apart, maybe it would make sense for the RLS server to provide a directive?
Yeah that makes sense to me, though IMO we could add that functionality later as it would be backwards compatible (some type of "fetch hint" message), but I don't feel super strongly about it
You're right, this could be added later. @yanavlasov, I'll leave it up to you whether you think it makes sense to add this now or defer it. But in the interim, we should certainly document the expectation here, so that it's clear to client implementors.
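To make the deferred "fetch hint" idea concrete, here is a minimal C++ sketch of how a client might act on such a directive if one were ever added. The `FetchHint` enum and the function name are hypothetical illustrations, not part of this PR or of the existing RLS API.

```cpp
#include <chrono>
#include <cstdint>

// Hypothetical directive a future RLS response could carry to tell the client
// how to refresh quota for a descriptor (illustrative only, not in this PR).
enum class FetchHint {
  OnDemand,          // Wait for the next data-plane request before re-requesting quota.
  ProactiveRefresh,  // Re-request quota shortly before the current assignment expires.
};

struct QuotaAssignment {
  uint64_t remaining_requests = 0;
  std::chrono::steady_clock::time_point valid_until;
  FetchHint hint = FetchHint::OnDemand;
};

// Returns true if the client should issue an RLS request now, before the next
// data-plane request arrives, based on the server-provided hint.
bool shouldProactivelyRefresh(const QuotaAssignment& quota,
                              std::chrono::steady_clock::time_point now,
                              std::chrono::seconds refresh_lead_time) {
  if (quota.hint != FetchHint::ProactiveRefresh) {
    return false;  // Server asked the client to wait for traffic.
  }
  return now + refresh_lead_time >= quota.valid_until;
}
```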
Not sure if I fully follow the "never stop requesting" logic; I think the idea is that as long as you have some quota, then it's safe to have a client intelligently attempt to refresh. When quota is exhausted, you probably want to wait for a request to trigger the fetch.
I've updated the doc to describe proactively querying the rate limit server.
I think it is possible to make the client refresh quota only if it was consumed during the last refresh interval, so that it does not re-request if it had no traffic for a rate limit descriptor.
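A minimal sketch of that heuristic, assuming the client keeps a per-descriptor count of requests admitted since the last RLS response; the names below are illustrative, not taken from the Envoy code base.

```cpp
#include <chrono>
#include <cstdint>

// Per-descriptor bookkeeping a client could keep to decide whether a proactive
// quota refresh is worthwhile (illustrative only).
struct DescriptorQuota {
  uint64_t used_in_current_interval = 0;  // Requests matched since the last RLS response.
  std::chrono::steady_clock::time_point valid_until;
};

// Refresh proactively only if the descriptor actually saw traffic during the
// last refresh interval; otherwise wait for the next request to trigger the
// RLS call.
bool shouldRefreshBeforeExpiry(const DescriptorQuota& quota,
                               std::chrono::steady_clock::time_point now,
                               std::chrono::seconds lead_time) {
  const bool expiring_soon = now + lead_time >= quota.valid_until;
  return expiring_soon && quota.used_in_current_interval > 0;
}
```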
I wasn't actually talking about the case where the quota is exhausted; I was talking about the case where the quota is expired (i.e., it may not be exhausted but its `valid_until` time has passed). Although now that you mention it, I think this concern also exists in the quota exhaustion case.
The "never stop requesting" situation I was thinking of was that if the client proactively refreshes the quota based solely on the fact that the `valid_until` time is approaching (or the fact that the quota is nearing exhaustion), then it will always refresh, even if it stops seeing requests that hit that descriptor. To avoid this, I think clients will also need to record the last time quota for a given descriptor was used, and if it's been too long since then, the client should stop refreshing that quota.
The problem I see here is that there are several knobs in this algorithm that need to be set somehow:
- The amount of time prior to `valid_until` at which quota should be refreshed.
- The amount of quota remaining (maybe expressed as a percentage of the original quota given) at which quota should be refreshed.
- The amount of time since quota was last used (I'll call this "idle time") at which the client should stop refreshing the quota.
No matter what we set these knobs to, there will be some situations in which the behavior will be sub-optimal. For example, if we set the idle time to 5 minutes but the client is actually seeing requests for that descriptor every 6 minutes, then we'll always wind up blocking the data plane for the RLS request. And if the RLS server is sending quota with `valid_until` times lasting 1 minute, then the client will needlessly proactively refresh the quota 4 times that it will never actually use.
Given that these knobs will really control the efficiency of the system (with the primary goal of minimizing the number of times we have to block a data plane request on an RLS request and a secondary goal of minimizing the number of unnecessary quota refreshes), I'm wondering if we need to make some of this configurable via xDS.
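For illustration, the three knobs could be grouped into a small client-side policy along the lines of the sketch below; the field and function names are assumptions made for this example and do not correspond to an existing xDS configuration.

```cpp
#include <chrono>
#include <cstdint>

// Hypothetical tuning knobs for proactive quota refresh (names are
// illustrative; nothing here is an existing xDS field).
struct QuotaRefreshPolicy {
  std::chrono::seconds refresh_lead_time{10};  // Refresh this long before valid_until.
  double min_remaining_fraction = 0.2;         // Refresh once remaining quota drops below this fraction.
  std::chrono::minutes max_idle_time{5};       // Stop refreshing after this much idle time.
};

struct DescriptorState {
  uint64_t assigned = 0;   // Quota granted by the last RLS response.
  uint64_t remaining = 0;  // Quota not yet consumed.
  std::chrono::steady_clock::time_point valid_until;
  std::chrono::steady_clock::time_point last_used;
};

// Returns true if the client should proactively send an RLS request for this
// descriptor now, per the trade-offs described above.
bool shouldRefresh(const QuotaRefreshPolicy& policy, const DescriptorState& s,
                   std::chrono::steady_clock::time_point now) {
  if (now - s.last_used > policy.max_idle_time) {
    return false;  // Descriptor has gone idle; let the next request trigger the fetch.
  }
  const bool expiring_soon = now + policy.refresh_lead_time >= s.valid_until;
  const bool running_low =
      s.assigned > 0 &&
      static_cast<double>(s.remaining) <
          policy.min_remaining_fraction * static_cast<double>(s.assigned);
  return expiring_soon || running_low;
}
```

With the example values shown, a descriptor that sees traffic every 6 minutes would always exceed the 5-minute idle window, which is exactly the sub-optimal case described above.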
Makes sense. I think as we start building this out, we can add more knobs to control refresh behavior.
LGTM with one small comment.
/wait
LGTM
Signed-off-by: Yan Avlasov <[email protected]>
LGTM, thanks. @markdroth?
As I mentioned above, I do think that we will probably wind up needing more configuration knobs here. But I agree that those can be added later.
/lgtm api
xds: sync envoy proto to commit 62ca8bd2b5960ed1c6ce2be97d3120cee719ecab (#8346)

* xds: sync envoy proto to commit 62ca8bd2b5960ed1c6ce2be97d3120cee719ecab
* Suppress warnings for newly deprecated xDS proto fields

Sync to the latest update to pick up envoyproxy/envoy#16942 for forward compatibility with upcoming xDS Rate Limiting features. Internal Envoy import CL for `62ca8bd2b5960ed1c6ce2be97d3120cee719ecab`: cl/381356375

Suppressed warnings for newly deprecated xDS proto fields:
1) `PerXdsConfig xds_config` to be replaced with `GenericXdsConfig generic_xds_configs`, but this work is yet to be planned
2) `HttpConnectionManager`'s `uint32 setXffNumTrustedHops` to be replaced with `TypedExtensionConfig OriginalIpDetectionExtensions`: envoyproxy/envoy#14855
Signed-off-by: Yan Avlasov <[email protected]>
Additional Description:
API for implementing #14299
This is a clone of PR #15483 with comments addressed.
Risk Level: Low, new hidden API only
Testing: docs
Docs Changes: Yes
Release Notes: N/A
Platform Specific Features: N/A
Signed-off-by: Yan Avlasov <[email protected]>