rate-limiting plugin works randomly (in redis/cluster mode) #4379
Comments
Hi @wbarantt, could you please share the following:
It's just a stock Kong with no custom configuration. No errors in the logs.
@wbarantt we'd still like to know a bit more about your Kong configuration :) Are you deploying a cluster of Kong nodes behind an LB? Can you please share the exact plugin config you've enabled? And how many distinct consumers are sending requests to Kong when you observe this behavior? TBH this sounds a bit like #3329, but we'd love to learn more about your use case to get a better understanding of what's going on. Thanks!
This is my whole test config:
This is how I test it:
This is what I get with no other traffic coming to this Kong node:
When I turn on the regular traffic, Kong starts to skip requests and results in something like this:
RUN: ===== #18 =====
So 7 additional requests have passed through Kong... Here's the full log:
We are facing a similar issue after an upgrade. We were previously on version 0.14.0, where this used to work fine, but after upgrading to 1.0.2 the rate-limiting numbers are not right, and we suspect the plugin is not even getting executed for many concurrent calls. We did not find any errors related to Redis or the plugin, but still only about 60% of requests were counted as part of rate limiting.
I am getting the same issue on Kong 1.0.3. In one of my tests I made a total of 40 calls (with 20 workers) to Kong, with the rate limit set to 10 per hour at the consumer level. The expected result is that only 10 calls can go through and the remaining 30 should get a 429 response. However, all calls got a 200 response, and only 10 of the responses had the X-RateLimit-* headers.
I'm facing a similar issue after upgrading from 0.14.0 to 1.0.2. The plugin seems to be skipped for a certain percentage of requests.
I did some debugging and could narrow the issue down to the piece of code below.
I configured the rate-limiting plugin to use Redis with a threshold of 1000 requests per minute. In the test run I fired 2000 requests at a concurrency of 100, and the Redis counter was incremented only 367 times instead of 1000. kong.ctx.plugin.timer is nil for the rest of them; I suspect it's getting cleared somewhere else in the flow. As per the documentation, kong.ctx.plugin is accessible within the plugin instance at the request level. The same test with 0.14.1 works fine. I had the key-auth, rate-limiting, prometheus and syslog plugins configured for my route. When I disable syslog and prometheus, rate-limiting works fine; when I enable either of them, it doesn't. @hbagdi @thibaultcha Would really appreciate it if you could give some pointers on the changes related to kong.ctx.plugin in the 1.x release. Would be more than happy to fix and submit a PR. Thanks
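To make the behaviour described above concrete, here is a simplified sketch (not the actual plugin source) of state being set in the access phase via kong.ctx.plugin and consumed in the log phase; the increment_counters helper and the identifier used are hypothetical placeholders:

```lua
-- Simplified illustration, not the actual plugin source: state is stashed
-- in kong.ctx.plugin during the access phase and read back in the log
-- phase. If kong.ctx.plugin is cleared (or never populated), the log-phase
-- increment silently never happens.
local RateLimitingHandler = {}

function RateLimitingHandler:access(conf)
  -- remember what has to be counted for this request
  kong.ctx.plugin.timer = {
    identifier = kong.client.get_forwarded_ip(),  -- hypothetical identifier
    timestamp  = ngx.now(),
  }
end

function RateLimitingHandler:log(conf)
  local state = kong.ctx.plugin.timer
  if not state then
    -- the symptom observed here: the context value is nil, so the Redis
    -- counter is never incremented for this request
    return
  end

  increment_counters(conf, state)  -- hypothetical helper wrapping the Redis calls
end

return RateLimitingHandler
```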
Btw, this change was introduced in this commit. The 0.14.1 version was incrementing the Redis counter in the access phase; in 1.x it's done in the log phase. After making the fix below, I can confirm that rate-limiting works as before.
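For reference, a rough sketch of what such a change could look like, assuming the handler shape sketched earlier; this is not the verbatim patch, and increment_counters is again a hypothetical helper:

```lua
-- Rough sketch of the workaround described above (not a verbatim patch):
-- increment the counters synchronously in the access phase instead of
-- deferring the work to the log phase via kong.ctx.plugin.
function RateLimitingHandler:access(conf)
  local identifier = kong.client.get_forwarded_ip()  -- hypothetical identifier

  -- increment_counters is a hypothetical helper wrapping the Redis calls
  local ok, err = pcall(increment_counters, conf, identifier, ngx.now())
  if not ok then
    kong.log.err("failed to increment rate-limiting counters: ", err)
  end

  -- nothing needs to survive until the log phase anymore
end
```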
@ganeshs yes, that is a good workaround. We need to check what could be overwriting the context value (or why it is cleared, or perhaps never set in the first place).
I'm facing the same issue. Do we have a plan to fix this in Kong?
We are also facing this issue. Is there any update on when this might be addressed?
#5460 should likely fix this. An unknown issue clears the plugin ctx contents, and rate-limiting increments counters based on data stored in it. #5459 may address the issue with ctx, but we do not have definitive proof of that issue's exact cause yet. As such, #5460 sidesteps any problem with ctx entirely by moving all rate-limiting functionality into the access phase.
* Store namespace keys in `ngx.ctx` to ensure all dynamically generated namespaces are isolated on a per-request basis.
* Introduce a new global API (which is a private API for use by core only) to explicitly delete a namespace.

A note on the benefits of this new implementation:

* By using a table to keep track of namespaces in `ngx.ctx`, we ensure that users of `ngx.ctx` cannot access the namespaces table, thus properly isolating it and avoiding polluting `ngx.ctx` for core and/or plugins. We can think of it as a double-pointer reference to the namespace. Each request gets a pre-allocated table for 4 namespaces (of course not limited to 4), with the assumption that most instances do not execute more than 4 plugins using `kong.ctx.plugin` at a time.
* We ensure that namespace key references cannot be `nil` to ensure safe usage from the core.
* All namespace key references are still weak when assigned to a namespace, thus tying the lifetime of the namespace to that of its key reference. Similarly to the previous implementation, this is done to ensure we avoid potential memory leaks. That said, the `kong.ctx.plugin` namespace does not use the plugin's conf for that purpose anymore (see 40dc146), which alleviates such concerns in the current usage of this API.
* All tables allocated for namespace key references and namespace keys themselves will be released when `ngx.ctx` is GC'ed.
* We also ensure that `kong.ctx.*` returns `nil` when `ngx.ctx` is not supported ("init" phase).

Co-Authored-By: tdelaune <[email protected]>

Fix #4379
Fix #5853
See #5851
See #5459
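To illustrate the mechanism the commit message describes, here is a much-simplified sketch; the real implementation in the Kong PDK also handles weak key references, explicit deletion and the "init" phase, and the names used here (`get_plugin_namespace`, `KONG_NAMESPACES`) are illustrative only:

```lua
-- Illustrative only: per-plugin namespaces keyed inside ngx.ctx, so every
-- request gets its own isolated set of namespaces and one plugin cannot
-- see or clobber another plugin's data.
local function get_plugin_namespace(key_ref)
  local ctx = ngx.ctx

  local namespaces = ctx.KONG_NAMESPACES  -- hypothetical field name
  if namespaces == nil then
    namespaces = {}
    ctx.KONG_NAMESPACES = namespaces
  end

  local ns = namespaces[key_ref]
  if ns == nil then
    ns = {}
    namespaces[key_ref] = ns
  end

  return ns
end

-- usage: each plugin passes its own key reference, so two plugins running
-- on the same request get distinct tables that die with ngx.ctx
local ns = get_plugin_namespace("rate-limiting")
ns.timer = { identifier = "consumer-123" }
```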
Summary
The rate-limiting plugin fails to throttle when configured in redis/cluster mode.
It seems that under load the Kong node does not decrement the limit and/or does not respond with the X-RateLimit-* headers.
Steps To Reproduce
Additional Details & Logs
It seems that the code defined outside of RateLimitingHandler:access is sometimes just not run.
It affects both the RateLimitingHandler:header_filter (which results in no rate-limiting headers being sent in the response) and the RateLimitingHandler:log (which results in no limit being consumed by the request).
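For context, these are the three handler phases being discussed; the signatures follow the standard Kong plugin interface, and the bodies are placeholders rather than the plugin's actual code:

```lua
local RateLimitingHandler = {}

function RateLimitingHandler:access(conf)
  -- runs reliably in the reported scenario
end

function RateLimitingHandler:header_filter(conf)
  -- when this phase is skipped, no X-RateLimit-* headers are added
  -- to the response
end

function RateLimitingHandler:log(conf)
  -- when this phase is skipped, the request never consumes any of the
  -- configured limit
end

return RateLimitingHandler
```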