Server overload with big request #115
Hello,

When I try to post 100 000 observations or more (through multiple HTTP requests, each containing a dataArray of 50 or 100 observations), the FROST Server (running on AWS with Docker) becomes unavailable for other requests, so the client website (using Grafana with the right plugin for STA) cannot retrieve any data. 100 000 observations is not very big for the infrastructure we are going to have at my work.

I've tried multiple combinations of parameters but nothing is really better; you'll find a benchmark table in the attachments (the "Req GET Time" columns give the time the client has to wait between sending a request and receiving the response). When I monitor the server with htop, it shows PostgreSQL taking 100% of the CPU (the server has a 4-core CPU and 8 GB of RAM).

So is there an option or configuration that I've forgotten, or do PostgreSQL inserts just take a lot of time?

Best regards!
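For reference, one of the batched inserts described above might look roughly like the following sketch. It assumes the standard SensorThings dataArray extension (POST to /CreateObservations); the base URL, Datastream id, and observation values are placeholders, not taken from the issue.

```typescript
// Sketch of one batched insert via the SensorThings dataArray extension.
// FROST_URL and the Datastream id are assumptions for illustration only.
const FROST_URL = "http://localhost:8080/FROST-Server/v1.1";

interface DataArrayBatch {
  Datastream: { "@iot.id": number };
  components: string[];
  "dataArray@iot.count": number;
  dataArray: [string, number][];
}

async function postBatch(observations: [string, number][]): Promise<void> {
  const body: DataArrayBatch[] = [{
    Datastream: { "@iot.id": 1 },               // placeholder Datastream id
    components: ["phenomenonTime", "result"],   // one entry per column in dataArray
    "dataArray@iot.count": observations.length,
    dataArray: observations,                    // e.g. [["2024-01-01T00:00:00Z", 20.5], ...]
  }];

  const res = await fetch(`${FROST_URL}/CreateObservations`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) {
    throw new Error(`CreateObservations failed: ${res.status}`);
  }
}
```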
Comments

Database inserts can be really slow, especially on cloud infrastructures. There are several points to have a look at:

Thanks for your help, but the problem comes from the Node.js client sending the requests: I just added a 20 ms delay between requests and everything is good.
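A minimal sketch of that client-side workaround, assuming the batches are sent sequentially with a short pause between them; the 20 ms value is the one mentioned above, and sendBatch is a placeholder for whatever function actually performs the HTTP POST (e.g. the postBatch sketch earlier).

```typescript
// Send batches one after another with a small delay, instead of firing
// them all at once, so the server stays responsive to other requests.
const DELAY_MS = 20; // delay mentioned in the comment above

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function sendAllBatches(
  batches: [string, number][][],
  sendBatch: (batch: [string, number][]) => Promise<void>,
): Promise<void> {
  for (const batch of batches) {
    await sendBatch(batch);  // placeholder: performs the actual POST
    await sleep(DELAY_MS);   // give FROST/PostgreSQL room for other requests
  }
}
```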