
Minor global ratelimiter fix: don't reduce values when "boosting" #6281

Merged
3 commits merged from the minor branch into cadence-workflow:master on Sep 12, 2024

Conversation

@Groxx Groxx (Contributor) commented Sep 12, 2024

Previously, if the target RPS was below the cluster's usage (say it was being reduced, or cluster usage was over target), unused could go below zero and set the RPS to below the fair-weighted value, and possibly even to a negative value.

That's unintended; the fair-weighted value is what we want even in an excess-load scenario.
Boosting was only intended to increase based on spare capacity, not decrease.
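To illustrate the intent, here is a rough Go sketch — the names and the exact boost math are assumptions for illustration, not the actual collection.go code. The point is the clamp: when cluster usage exceeds the target there is simply nothing to boost with, so the result bottoms out at the weight-based value instead of being dragged below it (or below zero).

package sketch

import "math"

// boostRPS models the boost step: give the host its fair-weighted share of
// the target, plus a weighted share of any *unused* cluster capacity.
// Clamping unused at zero is the fix described above.
func boostRPS(targetRPS, weight, usedRPS float64) float64 {
	weighted := targetRPS * weight
	unused := math.Max(0, targetRPS-usedRPS) // could go negative before this PR
	boosted := weighted + unused*weight
	return math.Min(targetRPS, boosted) // never hand out more than the overall target
}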

Comment on lines 451 to 457
"usage over target": {
targetLimit: 10,
fallbackLimit: 3,
weight: 0.1,
usedRPS: 100,
resultLimit: 1, // should be weight-based at worst, and not be reduced by excess usage
},
@Groxx Groxx (Contributor, Author) commented Sep 12, 2024

--- FAIL: TestBoostRPS/usage_over_target (0.00s)
        	Error:      	Max difference between 0.89 and -89.11 allowed is 0.01, but difference was 90
        	Test:       	TestBoostRPS/usage_over_target
        	Messages:   	computed RPS does not match closely enough, should be 0.89

yea that's not good. fixed now!

this would probably be a good target for a small fuzz test, now that I think about it. there are clear min/max values: target*weight and target.

(I was curious so I checked: negative values just act like zero, so this isn't risking crashes or anything currently)
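Rough sketch of what that fuzz test could look like — assuming a boostRPS(target, weight, used) helper like the sketch in the description; the real limiter's signature and bounds handling may differ:

package sketch

import (
	"math"
	"testing"
)

// FuzzBoostRPS checks the invariant mentioned above: the boosted limit should
// always land between the fair-weighted value (target*weight) and the target.
func FuzzBoostRPS(f *testing.F) {
	f.Add(10.0, 0.1, 100.0) // the "usage over target" case from this PR
	f.Fuzz(func(t *testing.T, target, weight, used float64) {
		// keep inputs in the domain the limiter actually sees
		if target <= 0 || weight <= 0 || weight > 1 || used < 0 ||
			math.IsInf(target, 0) || math.IsNaN(target) ||
			math.IsNaN(weight) || math.IsNaN(used) {
			t.Skip()
		}
		got := boostRPS(target, weight, used)
		lo, hi := target*weight, target
		if got < lo-1e-9 || got > hi+1e-9 {
			t.Fatalf("boosted RPS %v outside [%v, %v]", got, lo, hi)
		}
	})
}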


codecov bot commented Sep 12, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 73.09%. Comparing base (e5bd91e) to head (a9fa15d).
Report is 4 commits behind head on master.

Additional details and impacted files
Files with missing lines                         Coverage            Δ
common/quotas/global/collection/collection.go    85.02% <100.00%>    (ø)

... and 5 files with indirect coverage changes



Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@Groxx Groxx merged commit b41976a into cadence-workflow:master Sep 12, 2024
20 checks passed
@Groxx Groxx deleted the minor branch September 12, 2024 22:55