[fuzz] Scaled Load balancer fuzz to 60k hosts #13771
Conversation
Signed-off-by: Zach <[email protected]>
/assign @asraa @htuch @adisuissa |
Awesome, thanks for removing this restriction.
/wait
void LoadBalancerFuzzBase::clearStaticHostsHealthFlags() {
  // Have to clear the hosts health flags here - how do we know what hosts to clear?
  // The only outstanding health flags set are those that are set from hosts being placed in
  // degraded and excluded. Thus, use the priority set pointer to know which flags to clear.
Could you also clear unconditionally?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I think there is no unconditional flag clear, maybe a clearFlagForTest
method or an ASSERT that the state matches expected?
I think my comment of "how do we know what hosts to clear?" is outdated. The only health flags that get set are degraded and excluded, and those hosts will naturally be in the corresponding HostVectors regardless, which is what makes this work.
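A minimal sketch of the clearing strategy described above, using toy stand-ins for Envoy's Host and HostSet types (the type names, flag values, and struct layout here are hypothetical, for illustration only): since degraded and excluded hosts are already grouped into their own vectors on each host set, iterating those vectors is sufficient to clear every flag the fuzzer could have set.

```cpp
#include <cassert>
#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical stand-ins for Envoy's host types, not the real API.
enum class HealthFlag : uint32_t {
  DEGRADED_ACTIVE_HC = 0x1,
  EXCLUDED_VIA_IMMEDIATE_HC_FAIL = 0x2,
};

struct Host {
  uint32_t health_flags_ = 0;
  void healthFlagClear(HealthFlag flag) {
    health_flags_ &= ~static_cast<uint32_t>(flag);
  }
};
using HostSharedPtr = std::shared_ptr<Host>;
using HostVector = std::vector<HostSharedPtr>;

struct HostSet {
  HostVector degraded_hosts_;
  HostVector excluded_hosts_;
};

// Clear only the flags the fuzzer can set. Degraded and excluded hosts are
// already grouped into their own vectors, so no extra bookkeeping is needed
// to know which hosts to visit.
void clearStaticHostsHealthFlags(const std::vector<HostSet>& priority_set) {
  for (const auto& host_set : priority_set) {
    for (const auto& host : host_set.degraded_hosts_) {
      host->healthFlagClear(HealthFlag::DEGRADED_ACTIVE_HC);
    }
    for (const auto& host : host_set.excluded_hosts_) {
      host->healthFlagClear(HealthFlag::EXCLUDED_VIA_IMMEDIATE_HC_FAIL);
    }
  }
}
```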
Signed-off-by: Zach <[email protected]>
Thanks!
/wait
Signed-off-by: Zach <[email protected]>
Thanks! :D
* master:
  dns: preserve custom resolver after channel destruction (envoyproxy#13820)
  docs: Further updates to quick-start (envoyproxy#13793)
  wasm: update V8 to v8.7.220.10. (envoyproxy#13568)
  test: adding a test for CONNECT to an IP address (envoyproxy#13818)
  [fuzz] Scaled Load balancer fuzz to 60k hosts (envoyproxy#13771)
  Removed Circle-CI reference. (envoyproxy#13824)
  wasm: strip only Custom Sections with precompiled Wasm modules. (envoyproxy#13775)
  examples: Add dynamic configuration (filesystem) sandbox (envoyproxy#13783)
  apple dns: resolve IP addresses without calling Apple APIs (envoyproxy#13698)

Signed-off-by: Michael Puncel <[email protected]>
Commit Message: Scaled Load Balancer Fuzz to 60k hosts.
Additional Description: To truly represent the production search space for load balancing across upstream hosts, the load balancer fuzzer requires at least 10k upstream hosts. This PR scales it up to 60k hosts. It fixes the bottleneck of constructing thousands of hosts on each run by statically declaring the host vector. It also widens the random subset selector from 8 bits, which could only represent 256 hosts, to 32 bits so the full search space can be reached at random.
Risk Level: Low
Testing: Regression tests for 60k hosts.
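The bit-width change can be sketched as follows. This is a hypothetical illustration (the function names are not from the PR): mapping a fuzzer-provided random value onto a host index with an 8-bit value can only ever produce indices 0..255, so hosts beyond the first 256 are unreachable; a 32-bit value taken modulo the host count covers all 60k hosts.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// 8-bit selector: the random value is truncated to 0..255 before the modulo,
// so the highest reachable index is min(255, num_hosts - 1).
std::size_t selectIndex8(uint8_t random_value, std::size_t num_hosts) {
  return random_value % num_hosts;
}

// 32-bit selector: every index in [0, num_hosts) is reachable for host
// counts up to 2^32, which comfortably covers 60k hosts.
std::size_t selectIndex32(uint32_t random_value, std::size_t num_hosts) {
  return static_cast<std::size_t>(random_value) % num_hosts;
}
```

For example, with 60000 hosts a raw value of 59999 selects host 59999 through the 32-bit selector, but the same value truncated to 8 bits wraps to 95 (59999 mod 256), so the last host could never be chosen.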