
Align stores #323

Merged
merged 5 commits into from
Oct 24, 2023
Conversation

gurgunday
Member

Fixes #304 and closes #321

Motivation: the LocalStore and RedisStore implementations behave differently, which is inconsistent and confuses users.

For instance, when continueExceeding is true, RedisStore always resets the TTL of an IP, even when it hasn't yet exceeded the limit. This causes people to get rate-limited more than necessary: for every request that hasn't exceeded the max, we keep resetting the TTL instead of letting it count down, while also incrementing the counter.

This PR makes RedisStore mirror LocalStore's behavior: we increment the counter but do not reset the TTL unless the limit is exceeded, after which the TTL is reset on every request for as long as the key is alive.
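A minimal sketch of the aligned behavior, in plain JavaScript with a Map standing in for the store — this is illustrative only, not the PR's actual RedisStore command sequence, and the names (`incr`, `expiresAt`) are hypothetical:

```javascript
// Hypothetical sketch of the aligned store behavior: increment the counter,
// but only push the TTL forward once the limit has been exceeded (and
// continueExceeding is enabled). Under the limit, the window counts down.
function incr (store, key, max, timeWindow, continueExceeding) {
  const now = Date.now()
  let entry = store.get(key)
  // Start a fresh window if the key is new or its TTL has elapsed
  if (entry === undefined || entry.expiresAt <= now) {
    entry = { counter: 0, expiresAt: now + timeWindow }
    store.set(key, entry)
  }
  entry.counter += 1
  // Only once the limit is exceeded does every further request reset the TTL
  if (continueExceeding && entry.counter > max) {
    entry.expiresAt = now + timeWindow
  }
  return entry
}
```

Under the limit the window keeps shrinking; over it, every further request extends the ban — the LocalStore semantics the PR description refers to.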

@gurgunday gurgunday requested review from Uzlopak and a team October 24, 2023 12:20
@@ -42,7 +42,7 @@
"dependencies": {
"@lukeed/ms": "^2.0.1",
"fastify-plugin": "^4.0.0",
"tiny-lru": "^11.0.0"
Member
This is a breaking change.

Member Author

@gurgunday gurgunday Oct 24, 2023

Why would that be? Same explanation as below: LocalStore is our implementation and cannot be overridden – it behaves the same as before, per the unit tests and my intuition 😁

Edit: I see, yeah, RedisStore behaves differently now — please excuse my misunderstanding

@Uzlopak
Contributor

Uzlopak commented Oct 24, 2023

@gurgunday

You have some merge conflicts.

To be honest, I don't like the ['0', '1', '2'].includes assertion. Before, it was maybe flaky, and we may need to run the tests sequentially, but it gave us feedback on how reliable the rate-limiting is. Now it can be potentially unreliable and we could be blind to that.

@gurgunday
Member Author

gurgunday commented Oct 24, 2023

Here are a few things we can do in terms of tests:

The first check of Retry-After or x-ratelimit nearly always returns the timeWindow, but for the rest there just isn't a single value we can pinpoint.

If timeWindow is 3000, the first call is fine and returns 3000, but the second can return 2700, which will be floored to 2; and sometimes there are 800ms clock ticks too, after which it could take another 210ms to create a new Date, so the value would be floored to 0.

I was thinking, before we even fix these, that Math.ceil might be more appropriate than .floor for the rateLimit timers.

Because really, a client shouldn't see a TTL of 600ms floored down to 0 seconds; it's more useful if they see 1.
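The difference is easy to see with the numbers from the comment above (a quick illustration, not the plugin's actual header code):

```javascript
// Converting a remaining window in milliseconds to the whole seconds
// reported in rate-limit headers, using the 600ms example from above:
const remainingMs = 600

console.log(Math.floor(remainingMs / 1000)) // 0 — client believes it may retry immediately
console.log(Math.ceil(remainingMs / 1000))  // 1 — client waits out the rest of the window

// And the 2700ms example:
console.log(Math.floor(2700 / 1000)) // 2
console.log(Math.ceil(2700 / 1000))  // 3
```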

@gurgunday
Member Author

Using .ceil actually seems to work much more reliably as well.

@Uzlopak
Contributor

Uzlopak commented Oct 24, 2023

Floor is obviously more restrictive than Ceil.

Contributor

@Uzlopak Uzlopak left a comment
LGTM

@Uzlopak Uzlopak merged commit 18df4d3 into fastify:master Oct 24, 2023
7 checks passed
@gurgunday gurgunday deleted the align-stores branch October 25, 2023 05:43
Development

Successfully merging this pull request may close these issues.

Align the behaviors of the LocalStore and RedisStore continueExceeding option does not work as expected
3 participants