Load test plan #252
The new deployment of OAM will move from using separate servers for the Catalog API and Uploader API to using a single server with a combined API. This means that the Catalog API is now responsible for both serving database-accessing API requests and CPU-intensive imagery processing. Note that Seth's recent work on Marblecutter and Monq worker integration should once again allow the separation of the API load from the processing load. In order to test the new setup I uploaded a queue of ~200MB raw TIFFs. Note that current settings mean that only one image is processed at a time. I then ran the following load test:
These are requests for all the currently available imagery, which create the highlighted grid squares on the frontend map. Note that this is in itself a significantly unoptimised DB query (roughly 400 kB per request) and should be thought about carefully as more imagery is added to OAM. Also note that this endpoint is only hit when a user visits the home page. From OAM's Google Analytics I can see that there are usually about 2000 visitors per month, so concurrent users are rarely, if ever, going to exceed 5. However, I will assume a maximum plausible concurrent user surge of 100 after successful marketing, and that is reflected in the load test.

My first suggestion for scaling is to separate the API service from the imagery processing service, as mentioned at the beginning. However, if you would like to support either more concurrent imagery uploads or more than 100 concurrent users, then I would simply recommend adding more cores; the current 8GB of RAM is more than enough.
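To sanity-check the assumed ceiling of 100 concurrent users, here is a rough back-of-envelope conversion from monthly traffic. The 2000 visitors/month figure is from the Analytics numbers above; the average session length is an assumption for illustration only:

```python
# Rough concurrency estimate from monthly traffic.
# 2000 visitors/month comes from Google Analytics (see above);
# the 10-minute average session length is an assumed figure.
MONTHLY_VISITORS = 2000
AVG_SESSION_MINUTES = 10
MINUTES_PER_MONTH = 30 * 24 * 60  # ~43200 minutes in a month

avg_concurrent = MONTHLY_VISITORS * AVG_SESSION_MINUTES / MINUTES_PER_MONTH
print(f"average concurrent users: {avg_concurrent:.2f}")

# Even a 100x surge over this average stays under the
# 100-concurrent-user ceiling assumed in the plan.
print(f"100x surge: {avg_concurrent * 100:.0f}")
```

Under these assumptions the steady-state average is well below one concurrent user, so the 100-user surge figure is a very generous margin.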
Since we're consolidating APIs and moving to AWS, we need to assemble a short plan to test the load of both users hitting the /meta endpoint and the workers processing imagery. We are currently running production on a t2.xlarge. Is this sufficient? Relatedly, what is the maximum number of workers we want running for processing imagery?
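A load test of the /meta endpoint could start as a small self-contained script along these lines. This is a minimal sketch, not the actual test used: it stands up a local stub server in place of the real API, then issues concurrent GETs and reports latency. The endpoint path and the concurrency figure mirror the discussion above; the stub response, request count, and everything else are illustrative assumptions. Pointing BASE_URL at a staging host would turn it into a real test.

```python
import http.server
import json
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class StubMeta(http.server.BaseHTTPRequestHandler):
    """Stand-in for the real /meta endpoint: returns an empty JSON result."""

    def do_GET(self):
        body = json.dumps({"results": []}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Start the stub on an ephemeral port; replace BASE_URL with a
# staging host to exercise the real API instead.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), StubMeta)
threading.Thread(target=server.serve_forever, daemon=True).start()
BASE_URL = f"http://127.0.0.1:{server.server_address[1]}"

def fetch(_):
    start = time.perf_counter()
    with urllib.request.urlopen(f"{BASE_URL}/meta") as resp:
        resp.read()
        status = resp.status
    return status, time.perf_counter() - start

CONCURRENCY = 100  # assumed surge ceiling from the discussion above
REQUESTS = 200     # illustrative total

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(fetch, range(REQUESTS)))

latencies = sorted(t for _, t in results)
ok = sum(1 for status, _ in results if status == 200)
print(f"{ok}/{REQUESTS} OK, p95 latency {latencies[int(0.95 * len(latencies))]:.3f}s")
server.shutdown()
```

Success rate and p95 latency at the assumed 100-user surge are the two numbers worth comparing across instance sizes when deciding whether the t2.xlarge is sufficient.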