Obtain realistic timeout data for different API endpoints #648
Also include: …
Right now we use a default of 5 seconds.
Testing against a highly populated server is now tracked in more detail in #656, which leaves us with latency measurement for starring/replies. How to scope this remaining work depends a bit on how we want to do these benchmarks, e.g.: …
Any thoughts on this? 1) would get us results faster, but I know we're all itching to get some functional tests in, so perhaps it'd be worth using this as an opportunity to test …
Discussed this a bit w/ Jen today; she suggested that we should start by just doing command line testing of the API endpoints, to get typical Tor latency data w/ a populated server (which is also how I interpret your proposal, @creviera). This is different from the in-client benchmarking described in #656, which I think would also be useful in case there are implementation bottlenecks in the client. I think both issues deserve to exist, with this one at a higher priority since it will give us some important baseline data relatively quickly.
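If it helps, a command-line run could look something like the sketch below: a minimal timing harness that hits the journalist API over a local Tor SOCKS proxy and reports the spread across several runs. The onion address, credentials, endpoint paths, and token flow are assumptions on my part (and `requests[socks]` is needed for SOCKS support), so they should be checked against the API reference before trusting any numbers.

```python
# Rough latency harness for journalist API endpoints over Tor.
# Placeholder address/credentials; endpoint paths and token fields are from
# memory and should be verified against the journalist API reference.
import statistics
import time

import requests

JOURNALIST_URL = "http://examplejournalistonion.onion"  # placeholder onion address
PROXIES = {"http": "socks5h://127.0.0.1:9050",
           "https": "socks5h://127.0.0.1:9050"}  # local Tor SOCKS proxy


def get_token(username, passphrase, one_time_code):
    # Exchange credentials for an API token.
    resp = requests.post(
        f"{JOURNALIST_URL}/api/v1/token",
        json={"username": username,
              "passphrase": passphrase,
              "one_time_code": one_time_code},
        proxies=PROXIES,
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["token"]


def time_endpoint(path, token, runs=10):
    # Time several GETs of the same endpoint and report the spread.
    headers = {"Authorization": f"Token {token}"}
    samples = []
    for _ in range(runs):
        start = time.monotonic()
        resp = requests.get(f"{JOURNALIST_URL}{path}", headers=headers,
                            proxies=PROXIES, timeout=120)
        resp.raise_for_status()
        samples.append(time.monotonic() - start)
    return min(samples), statistics.median(samples), max(samples)


if __name__ == "__main__":
    token = get_token("journalist", "correct horse battery staple", "123456")
    for path in ("/api/v1/sources", "/api/v1/submissions", "/api/v1/replies"):
        lo, med, hi = time_endpoint(path, token)
        print(f"{path}: min={lo:.2f}s median={med:.2f}s max={hi:.2f}s")
```

Since Tor circuit latency varies a lot, taking several samples per endpoint and looking at the spread should be more useful than any single measurement.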
We discussed this at sprint planning today; tentatively, @rmol has offered to take a stab at this. To resolve this issue, a write-up and/or gist is sufficient.
I have some preliminary measurements for the endpoints requested, with the server populated as requested, going through ….
The timing is highly variable, so I'd like to take more measurements over a longer period, but right now it looks like the current five-second timeout probably works for everything but ….
The connection latencies with …
In the client's …
The SD core …
Loaded the staging environment with 250 sources, 1000 submissions, and 500 replies: …
The problem with …
Given the performance of …
This is very helpful. Do all response times include the network transmission time via Tor? Can we break the numbers down into network transmission time via Tor vs. server response time?
All the measurements above were taken while connecting to a staging environment via our proxy, so they include the overhead of both ….
I have not tried connecting directly to the staging environment without Tor, but it would be useful to see the best-case performance of these endpoints. I'll do that.
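One way to get at that breakdown is to time the same endpoint twice, once over the Tor/proxy path and once directly against the app server on the loopback interface, and treat the difference as transport overhead. A rough sketch, where the onion address, loopback address/port, and tokens are all placeholders, and the direct measurement assumes the journalist interface is reachable on loopback from inside the staging app VM:

```python
# Rough breakdown of transport (Tor + proxy) overhead vs. server processing time.
import time

import requests

TOR_URL = "http://examplejournalistonion.onion"   # placeholder onion address
DIRECT_URL = "http://127.0.0.1:8081"              # placeholder loopback address/port
PROXIES = {"http": "socks5h://127.0.0.1:9050"}    # local Tor SOCKS proxy


def timed_get(base, path, token, proxies=None):
    # Time a single authenticated GET against the given base URL.
    start = time.monotonic()
    resp = requests.get(f"{base}{path}",
                        headers={"Authorization": f"Token {token}"},
                        proxies=proxies, timeout=120)
    resp.raise_for_status()
    return time.monotonic() - start


def breakdown(path, tor_token, direct_token):
    # Loopback time approximates server processing; the difference is roughly
    # the transport overhead for this request.
    via_tor = timed_get(TOR_URL, path, tor_token, proxies=PROXIES)
    direct = timed_get(DIRECT_URL, path, direct_token)
    return {"total": via_tor, "server": direct, "overhead": via_tor - direct}


print(breakdown("/api/v1/sources", "TOR_TOKEN", "DIRECT_TOKEN"))
```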
I measured in the staging app VM, using the loopback interface, so eliminating ….
The …
So clearly Tor and …
@redshiftzero and I talked a bit yesterday and think there are several things we can do to speed up …
With …
Thanks for these results, @rmol.
This endpoint change is in: #709
Regarding the rest of these results, definitely …
Small diff for 3 - goes from ~20s compute to ~2s compute for 200 sources when the cache is warm (after the first call to …):

diff --git a/securedrop/crypto_util.py b/securedrop/crypto_util.py
index 1d730813a..7039d1e70 100644
--- a/securedrop/crypto_util.py
+++ b/securedrop/crypto_util.py
@@ -5,6 +5,7 @@ import pretty_bad_protocol as gnupg
 import os
 import io
 import scrypt
+from functools import lru_cache
 from random import SystemRandom
 from base64 import b32encode
@@ -225,6 +226,7 @@ class CryptoUtil:
         # The subkeys keyword argument deletes both secret and public keys.
         temp_gpg.delete_keys(key, secret=True, subkeys=True)
 
+    @lru_cache(maxsize=500)
     def getkey(self, name):
         for key in self.gpg.list_keys():
             for uid in key['uids']:
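For what it's worth, here is a standalone illustration (not SecureDrop code) of why the decorator helps: with `functools.lru_cache`, only the first lookup for a given name pays for the keyring scan, and repeat lookups are served from the cache. The sleep is a stand-in for the per-lookup GPG work.

```python
# Standalone illustration of the caching pattern in the diff above.
from functools import lru_cache
import time

FAKE_KEYRING = {f"source-{i}": f"FINGERPRINT{i:04d}" for i in range(200)}


@lru_cache(maxsize=500)
def getkey(name):
    time.sleep(0.01)  # stand-in for scanning gpg.list_keys()
    return FAKE_KEYRING.get(name)


start = time.monotonic()
for name in FAKE_KEYRING:
    getkey(name)
print(f"cold pass: {time.monotonic() - start:.2f}s")   # every lookup does the "scan"

start = time.monotonic()
for name in FAKE_KEYRING:
    getkey(name)
print(f"warm pass: {time.monotonic() - start:.2f}s")   # served from the cache
print(getkey.cache_info())                             # hits/misses confirm the effect
```

One general caveat with this pattern: the cache is keyed on the arguments, so entries for deleted or re-imported keys would need `cache_clear()` (or some other invalidation) to avoid serving stale results.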
We are missing a few endpoints in our reporting: …
Closing - will revisit timeout and sync behaviour in other issues. Has also been addressed in code, i.e.: …
Description
Right now we use a default of 5 seconds for the following API endpoints:
get_sources
get_submissions
get_all_replies
To come up with a more realistic timeout, we should do some latency testing, hitting each endpoint via Tor with variable sizes of data (e.g. 50 sources with 200 submissions and 100 replies).
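One possible shape for that testing is a small script built on the securedrop-sdk that times each of the calls above end to end. This is only a sketch: the constructor arguments, authentication flow, and method names are from memory, and the credentials and onion address are placeholders, so verify everything against the SDK before relying on it.

```python
# Hypothetical timing script for the three calls named in this issue, built on
# the securedrop-sdk. Signatures are from memory; verify against the SDK.
import time

from sdclientapi import API

SERVER = "http://examplejournalistonion.onion"  # placeholder onion address
USERNAME = "journalist"                         # placeholder credentials
PASSPHRASE = "correct horse battery staple"
TOTP = "123456"

# proxy=True assumes running inside a Qubes VM with securedrop-proxy available;
# outside Qubes, proxy=False plus a Tor-ified network path would be needed.
api = API(SERVER, USERNAME, PASSPHRASE, TOTP, proxy=True)
api.authenticate()


def timed(label, fn, runs=5):
    # Call fn several times and report the spread, since Tor latency varies.
    samples = []
    for _ in range(runs):
        start = time.monotonic()
        fn()
        samples.append(time.monotonic() - start)
    print(f"{label}: min={min(samples):.2f}s max={max(samples):.2f}s")


timed("get_sources", api.get_sources)
timed("get_all_replies", api.get_all_replies)

# get_submissions is per-source in the SDK as I recall, so time it for one source.
sources = api.get_sources()
if sources:
    timed("get_submissions(first source)", lambda: api.get_submissions(sources[0]))
```

If calls fail at the current 5-second default before useful numbers come back, the same measurements can be taken at the HTTP layer (as in the earlier sketch), where the timeout is fully under the script's control.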