Cache CryptoUtil.getkey (redshiftzero's idea) #5100
Conversation
Adds caching to CryptoUtil.getkey to reduce the number of expensive GPG key lookup operations. It uses CryptoUtil.keycache, an OrderedDict, so we can push out old items once we reach the cache size limit. Using functools.lru_cache would have taken care of that, but would have meant we couldn't avoid caching sources without keys, so a delay in key generation would leave the source's key unusable until the server was restarted. The cache is primed in securedrop/journalist.py to avoid cold starts.
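The bounded-cache behavior described above can be sketched roughly as follows. This is an illustrative rewrite, not the actual SecureDrop code: the class name `KeyCache` and the `lookup` callback are stand-ins. The key property is the one the PR calls out — unlike `functools.lru_cache`, a failed lookup is never cached, so a source whose key appears later is picked up without a restart.

```python
from collections import OrderedDict

KEYCACHE_LIMIT = 1000  # cache at most this many entries


class KeyCache:
    """Bounded cache for expensive GPG key lookups (illustrative sketch).

    Misses (lookup returned None, i.e. no key generated yet) are NOT
    stored, so a key that is created later becomes visible without
    restarting the server.
    """

    def __init__(self, lookup, limit=KEYCACHE_LIMIT):
        self._lookup = lookup          # the expensive GPG lookup function
        self._cache = OrderedDict()
        self._limit = limit

    def get(self, source_id):
        if source_id in self._cache:
            return self._cache[source_id]
        result = self._lookup(source_id)
        if result is not None:         # only cache successful lookups
            self._cache[source_id] = result
            if len(self._cache) > self._limit:
                self._cache.popitem(last=False)  # evict the oldest entry
        return result
```

Using an OrderedDict directly, rather than lru_cache, is what makes the "don't cache missing keys" rule possible while still keeping a hard size bound via `popitem(last=False)`.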
this all looks great! way faster. one thought: check out d04cdd4 which just encapsulates the cache so we can add a test (mostly for documentation purposes, so it's super clear to future maintainers what's going on). if you like that you can cherry-pick onto this branch or i can push directly
Yeah, much nicer.
thanks! gonna approve from my side, i'll leave open in case anyone has any thoughts on the below and merge sometime tomorrow otherwise
LGTM based on visual review. [1]: https://github.com/freedomofpress/securedrop/blob/speedy-getkey/securedrop/source_app/main.py#L133
Status
Work in progress
Description of Changes
Credit to @redshiftzero for the solution.
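The startup priming mentioned in the description could look something like the sketch below. This is hypothetical: the function name `prime_keycache` comes from the PR, but its signature and the `lookup`/`source_ids` parameters are stand-ins, not the actual code in securedrop/journalist.py.

```python
def prime_keycache(cache, lookup, source_ids):
    """Warm the key cache at application startup (illustrative sketch).

    One expensive GPG lookup per existing source now, so the first
    journalist request after startup is served from a warm cache.
    """
    for sid in source_ids:
        if sid not in cache:
            fpr = lookup(sid)
            if fpr is not None:  # never cache "no key generated yet"
                cache[sid] = fpr
    return cache
```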
Testing
Save this script as securedrop/securedrop/getkeytest.py:

export NUM_SOURCES=50
make dev
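The test script itself isn't reproduced above. A plausible sketch of what it might do, based on the behavior described below (log in to the journalist API, then time two successive calls to get all sources), is shown here. The API address, endpoint paths, and credential placeholders are assumptions for illustration, not verified details of the SecureDrop dev environment:

```python
import json
import time
from urllib.request import Request, urlopen

API = "http://localhost:8081/api/v1"  # hypothetical dev journalist API address


def login(creds):
    req = Request(API + "/token",
                  data=json.dumps(creds).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)["token"]  # KeyError here => login throttled


def get_all_sources(token):
    req = Request(API + "/sources",
                  headers={"Authorization": "Token " + token})
    with urlopen(req) as resp:
        return json.load(resp)["sources"]


def timed(label, fn, *args):
    start = time.monotonic()
    result = fn(*args)
    print("%s took %.2fs" % (label, time.monotonic() - start))
    return result


if __name__ == "__main__":
    creds = {"username": "journalist",
             "passphrase": "...",      # dev credentials go here
             "one_time_code": "..."}   # current TOTP code
    token = login(creds)
    timed("first get_all_sources call", get_all_sources, token)   # cold cache
    timed("second get_all_sources call", get_all_sources, token)  # warm cache
```

With the cache primed, both calls should be fast; with priming disabled, the first call pays the full GPG lookup cost.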
The dev server will take considerably longer to start than usual, as it creates the extra sources.
In another shell, once the server has finished populating the database and is fully ready:
docker exec -it securedrop-dev bash
/opt/venvs/securedrop-app-code/bin/python3 getkeytest.py  # run in the container

You should see output like this:
Now edit securedrop/journalist.py to comment out the call to prime_keycache() on line 22, and back in the container shell, run getkeytest.py again. If you get a key error about token, you were too quick at editing and your login attempt has been throttled. Wait 10-15 seconds and try again.

You should see output like this:
Note that this time the first call to get_all_sources is slower, as the cache hasn't been primed.

Deployment
The cache will increase the memory consumption of both the source and journalist interfaces, but it's limited to 1000 keys, so it should not be a problem.
Checklist
If you made changes to the server application code:
Linting (make lint) and tests (make test) pass in the development container

If you made non-trivial code changes: