WebServer: Allow client to send many requests on the same connection #7414
Conversation
Thanks!
Just like the headers for a response are reset for each connection, I thought it would be better to decide whether to keep a connection alive for each connection. Also, it's important to set it to […]. However, it would be a good idea to set it to […].
That's true, but I don't understand what prevents it from being true by default.
You can update this PR with the new default value. 2.7.2 is released and we are now in 3.0.0-dev. (You only need to commit and push your changes, maybe after pulling this PR to your local repo since it was updated by GH.)
I've made changes to keep the connection alive by default. Does this look good to you?
The "Default to false" comment needs change. I wonder if this test
should also check
Apart from that it looks good ! |
…t the default keep alive value
I get what you mean. I changed the place where […].
I think this is awesome, @ZakCodes. Very good test procedure and writeup!
Unfortunately, we don't have documentation on the WebServer class so users need to look at the examples to figure it out.
Could you please modify one or more of the HTTPS examples to include this call with a little comment about it?
That would make it more widely used (since many people begin w/an example) and easier to grok.
As mentioned in my last comment, the keep-alive feature that I've implemented is now activated automatically if the request uses HTTP/1.1 or above, or if the Connection: keep-alive header is sent. However, we could show an example of disabling it. Disabling it is useful when a request couldn't be authenticated: by disabling it, you force the attacker to make another full TLS handshake to try another password, which makes it slower for the attacker to crack the password. Therefore, we could disable keep-alive when authentication fails in the SimpleAuthentication example.
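For illustration, here is a hedged sketch of what that could look like, assuming the keepAlive(bool) setter this PR introduces (the route, credentials, and handler name are placeholders; over HTTPS, dropping the connection forces a rejected client to redo the full TLS handshake before retrying):

```cpp
#include <ESP8266WiFi.h>
#include <ESP8266WebServer.h>

ESP8266WebServer server(80);

void handleLogin() {
  if (!server.authenticate("admin", "admin")) {   // placeholder credentials
    // Close the connection after this response so a failed attempt
    // cannot immediately retry on the same connection.
    server.keepAlive(false);                      // assumes the setter added by this PR
    server.requestAuthentication();
    return;
  }
  server.send(200, "text/plain", "Authenticated");
}

void setup() {
  WiFi.begin("my-ssid", "my-password");           // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) { delay(100); }
  server.on("/login", handleLogin);
  server.begin();
}

void loop() {
  server.handleClient();
}
```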
Thanks for the clarification, @ZakCodes. I don't think we need to show disabling because, in general, it's not a good thing to do performance-wise, and there isn't really a downside that I can see now.
@ZakCodes Passwords should not be so simple that they can be cracked by a brute-force attack. You should use better passwords instead.
"Brute force attack" on an ESP node also sounds like a good way to get its service down. |
Good point.
Closes #7412.
Solution
Instead of telling the client to close the connection and then waiting 2 s for it to do so before accepting another one, this allows the client to send another request within those 2 s, provided no other clients were waiting to connect to the server when the response was sent.
By default, this isn't activated. To activate it, the programmer must call ESP8266WebServer::keepAlive(true) in the handler during the request. As mentioned before, if there are any other clients waiting to send a request to the server when the response is sent, the server won't accept any other request from the current client. This is important because the server doesn't support multiple concurrent connections; without this safeguard, a client could stay connected to the server forever and block all the other clients.
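For illustration, a minimal sketch of a handler opting into keep-alive, assuming the keepAlive(bool) setter named above (shown with the plain-HTTP server class for brevity; the SSID, password, and handler name are placeholders):

```cpp
#include <ESP8266WiFi.h>
#include <ESP8266WebServer.h>

ESP8266WebServer server(80);

void handleRoot() {
  // Opt into keep-alive for this request: if no other client is waiting,
  // this client may reuse the connection for its next request.
  server.keepAlive(true);                         // assumes the setter added by this PR
  server.send(200, "text/plain", "hello");
}

void setup() {
  WiFi.begin("my-ssid", "my-password");           // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) { delay(100); }
  server.on("/", handleRoot);
  server.begin();
}

void loop() {
  server.handleClient();
}
```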
Testing code
Arduino HTTPS server code (before patch)
Here's the code to upload on your ESP8266 before applying the patch in this pull request.
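The exact test sketch isn't shown here; as an illustrative stand-in, a minimal BearSSL HTTPS server in the spirit of the core's HelloServerSecure example (SSID, password, certificate, and key are placeholders) would look roughly like this:

```cpp
#include <ESP8266WiFi.h>
#include <ESP8266WebServerSecure.h>

// Placeholder certificate and RSA key: substitute your own PEM data.
static const char serverCert[] PROGMEM = R"EOF(
-----BEGIN CERTIFICATE-----
... placeholder ...
-----END CERTIFICATE-----
)EOF";

static const char serverKey[] PROGMEM = R"EOF(
-----BEGIN RSA PRIVATE KEY-----
... placeholder ...
-----END RSA PRIVATE KEY-----
)EOF";

BearSSL::ESP8266WebServerSecure server(443);

void handleRoot() {
  // Before the patch there is no keep-alive control; the server answers
  // and then tells the client to close the connection.
  server.send(200, "text/plain", "Hello from the ESP8266!");
}

void setup() {
  WiFi.begin("my-ssid", "my-password");           // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) { delay(100); }

  server.getServer().setRSACert(new BearSSL::X509List(serverCert),
                                new BearSSL::PrivateKey(serverKey));
  server.on("/", handleRoot);
  server.begin();
}

void loop() {
  server.handleClient();
}
```

The post-patch variant would differ only in the handler, for example by calling server.keepAlive(true) as shown in the Solution section above.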
Arduino HTTPS server code (after the patch)
Here's the code that requires the patch in this pull request to compile.
Ruby client benchmark code
Bash curl clients
Testing procedure and result
Before the patch
These tests are to be done on the version of this repository before applying the patch in this pull request.
Single client test
Process
Update the DOMAIN variable with the IP of your ESP8266.
Result
Multiple clients test
Process
Update the DOMAIN variable with the IP of your ESP8266.
Run while true; do curl -k https://$YOUR_IP/; done in a terminal, with $YOUR_IP set to the IP of your ESP8266.
Result
After the patch
These tests are to be done on the version of this repository after applying the patch in this pull request.
Single client test with the pre-patch sketch
Process
Update the DOMAIN variable with the IP of your ESP8266.
Result
Single client test with the post-patch sketch
Process
Update the DOMAIN variable with the IP of your ESP8266.
Result
Multiple clients test with the pre-patch sketch
Process
Update the DOMAIN variable with the IP of your ESP8266.
Run while true; do curl -k https://$YOUR_IP/; done in a terminal, with $YOUR_IP set to the IP of your ESP8266.
Result
Multiple clients test with the post-patch sketch
Process
Update the DOMAIN variable with the IP of your ESP8266.
Run while true; do curl -k https://$YOUR_IP/; done in a terminal, with $YOUR_IP set to the IP of your ESP8266.
Result
Conclusion
The results do not show any negative consequences of this pull request. In the worst case, performance stays the same. In the best case, where a single client can make as many requests as it wants, we see a 5x performance improvement.
This was achieved with an ESP8266 at 160 MHz connected to a WiFi network with an excellent connection. The client is a desktop computer connected to the WiFi router via an Ethernet cable, so there is very little latency in my setup. With higher latency, the performance improvement would be even larger, and there still shouldn't be any negative impact.