Improve "is (not) one of (sensitive)" performance by only hashing value once #80
Describe the purpose of your pull request
Last week my company added several thousand user IDs to an "is one of (hashed)" rule (since it seemed simpler than doing a mass edit to our DB and using a custom attribute), and the CPU on our API server spiked. Looking through this SDK's source code, I discovered there are optimizations that can be made to this rule type.
Related issues (only if applicable)
N/A
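For illustration, a minimal sketch of the idea behind the change (hypothetical names and salt; the SDK's actual hashing and comparator code differ): before, the user's value was hashed once per list entry; after, it is hashed a single time before the comparisons.

```python
import hashlib

SALT = "config-salt"  # hypothetical; a real salt would come from the config


def hash_value(value: str, salt: str = SALT) -> str:
    # Hash a user attribute the way sensitive comparators store list entries
    return hashlib.sha256((value + salt).encode("utf-8")).hexdigest()


def is_one_of_before(user_value: str, hashed_list: list[str]) -> bool:
    # Old behavior: the user's value is re-hashed for every list entry
    for entry in hashed_list:
        if hash_value(user_value) == entry:
            return True
    return False


def is_one_of_after(user_value: str, hashed_list: list[str]) -> bool:
    # New behavior: hash the user's value once, then compare against each entry
    hashed_user_value = hash_value(user_value)
    return any(hashed_user_value == entry for entry in hashed_list)
```

With N entries, the old path performs N hash computations in the worst case, the new path exactly one; both return the same result.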
Requirement checklist (only if applicable)
Not sure how I'd add an automated test for timing that would be consistent across different devices, but I added a screenshot of the speed improvement on my own machine.
Only tested on macOS, but since I just moved a line up, the platform should be irrelevant.
Test performed against this rule set. There are 1000 IDs per line, for a total of 4000 IDs, under a worst-case scenario where the target user ID does not exist in the list, so every value has to be checked.
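A rough way to reproduce that worst case (a sketch with made-up IDs and an assumed SHA-256 hash; the real benchmark ran against the SDK itself):

```python
import hashlib
import time

SALT = "config-salt"  # hypothetical salt


def hash_value(value: str) -> str:
    return hashlib.sha256((value + SALT).encode("utf-8")).hexdigest()


# 4000 hashed IDs; the target is deliberately absent (worst case),
# so every entry has to be checked before giving up.
hashed_ids = [hash_value(f"user-{i}") for i in range(4000)]
target = "user-not-in-list"

start = time.perf_counter()
found_before = any(hash_value(target) == h for h in hashed_ids)  # re-hash per entry
elapsed_before = time.perf_counter() - start

start = time.perf_counter()
hashed_target = hash_value(target)  # hash once
found_after = any(hashed_target == h for h in hashed_ids)
elapsed_after = time.perf_counter() - start

print(f"before: {elapsed_before:.6f}s  after: {elapsed_after:.6f}s")
```

The "before" path computes 4000 hashes per evaluation while the "after" path computes one, so the gap grows linearly with the list size.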
Test Code:
Before (top) / After (bottom)