Max requests per socket #40071
I will try to contribute a PR for this issue, but I have some questions.
I think there are two places where we could implement a check, and we might have to implement both. The first is when we receive a new request: if we are over the threshold, we should automatically respond with a 503. The second is to add a check while preparing the keep-alive response header and decide not to keep the connection alive if the request count for the current connection is at or over the threshold. I recommend not destroying the socket, to avoid breaking any pipelined requests. We would likely need to maintain two counters (requests received, requests completed).
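A rough user-land sketch of those two checks, just to make the idea concrete; the counter property, the limit value, and the handler are all illustrative, not Node's internals or the code from the eventual PR:

```js
const http = require('http');

// Illustrative limit; the real change made this configurable.
const MAX_REQUESTS_PER_SOCKET = 100;

const server = http.createServer((req, res) => {
  const socket = req.socket;
  // Count requests seen on this socket (a full implementation would also
  // track completed requests to handle pipelining correctly).
  socket.requestCount = (socket.requestCount || 0) + 1;

  if (socket.requestCount > MAX_REQUESTS_PER_SOCKET) {
    // First check: over the threshold, reject the extra (possibly
    // pipelined) request with a 503 instead of destroying the socket.
    res.writeHead(503, { Connection: 'close' });
    res.end();
    return;
  }

  if (socket.requestCount === MAX_REQUESTS_PER_SOCKET) {
    // Second check: this is the last allowed request, so stop advertising
    // keep-alive for this connection.
    res.setHeader('Connection', 'close');
  }

  res.end('ok');
});

server.listen(8080);
```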
I'm not sure that's needed, as it will respect the Connection: close header.
Another question: I plan to add the condition somewhere here.
Will it understand that the socket buffer is still full and should be consumed, or will it just get stuck and wait for the possible request body to be consumed by someone, and only then send the response?
That's the exact place where the "first" check I mentioned above should be put. Write tests to verify all of this.
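For reference, a sketch of the kind of test being asked for. It is written against the maxRequestsPerSocket option that the eventual PR added, so it assumes the finished feature rather than the work-in-progress branch; the limit of 2 and the assertion on the Connection header are illustrative:

```js
const http = require('http');
const assert = require('assert');

const server = http.createServer((req, res) => {
  res.end('ok');
});

// Limit each socket to two requests; the second response should carry
// Connection: close and the socket should not be reused afterwards.
server.maxRequestsPerSocket = 2;

server.listen(0, () => {
  const port = server.address().port;
  const agent = new http.Agent({ keepAlive: true, maxSockets: 1 });
  let sawClose = false;

  const request = (cb) => {
    http.get({ port, agent }, (res) => {
      if (res.headers.connection === 'close') sawClose = true;
      res.resume();
      res.on('end', cb);
    });
  };

  // Two sequential requests over the same keep-alive socket.
  request(() => request(() => {
    assert.ok(sawClose, 'expected the second response to ask for close');
    agent.destroy();
    server.close();
  }));
});
```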
As discussed, I added a check on max requests; now comes the hardest part.
Looks like this
Already handled if "shouldKeepAlive" is set to false (it sets some …). So overall it works (locally, for me). But there is something odd: when using the VSCode debugger, on reaching the res.writeHead(503); res.end(); (I don't know why I reach it only in debug mode, since as described above the connection should already be closed), the server throws and dies...
Looks like some race condition: on calling destroy we still get another request, but we cannot respond with 503 since the socket is already closed.

My test client:

```js
const net = require('net');

let counter = 0;

// Write one raw keep-alive GET request to the socket.
const write = (client) => {
  counter++;
  client.write('GET / HTTP/1.1\r\n');
  client.write('Host: localhost:8080\r\n');
  client.write('Connection: keep-alive\r\n');
  client.write('\r\n\r\n'); // end of headers
};

const client = net.createConnection({ port: 8080 }, () => {
  write(client);
});

client.on('data', (data) => {
  console.log(data.toString());
  console.log('---------------');
  if (counter === 5) {
    client.end();
  } else {
    // Keep reusing the same connection until five requests have been sent.
    write(client);
  }
});

client.on('end', () => {
  console.log('disconnected from server');
});
```
Fixes: #40071 PR-URL: #40082 Reviewed-By: Matteo Collina <[email protected]> Reviewed-By: Robert Nagy <[email protected]>
Is your feature request related to a problem? Please describe.
We are scaling up pods in K8S on traffic spikes, but since we are using keep-alive, new pods are barely used because it takes too much time to rebalance the connections. As a result we have pods with high CPU usage and pods with very low CPU usage.
Describe the solution you'd like
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Keep-Alive
The HTTP standard supports a "max" parameter on the "Keep-Alive" header, which specifies that the connection can be used for up to N requests before it is closed.
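For reference, the response headers described on that page look like this (values are just an example):

```http
HTTP/1.1 200 OK
Connection: Keep-Alive
Keep-Alive: timeout=5, max=1000
```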
The suggestion is to add this configuration on http.server so that it returns this parameter to the client and also closes the connection once it has reached the maximum.
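As it turned out, the PR referenced above added roughly this shape of configuration; shown here only as an illustration, with an arbitrary limit:

```js
const http = require('http');

const server = http.createServer((req, res) => {
  res.end('hello\n');
});

// Close each keep-alive connection after it has served 500 requests;
// further pipelined requests on that socket receive a 503.
server.maxRequestsPerSocket = 500;

server.listen(8080);
```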
Describe alternatives you've considered
This issue can be solved with side tools like a service mesh (LinkerD or Istio), but they have their own issues and overhead.
We ended up doing it manually, by creating a plugin for fastify that holds a WeakMap with a per-connection counter.
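A minimal sketch of that kind of plugin, assuming a hook-based Fastify setup; the hook and reply APIs are Fastify's, but the plugin itself, the limit value, and the route are illustrative, not the actual plugin from the linked issue:

```js
const fastify = require('fastify')();

// WeakMap keyed by the underlying socket, so entries are dropped once the
// socket itself is garbage collected.
const requestCounts = new WeakMap();
const MAX_REQUESTS = 1000; // illustrative limit

fastify.addHook('onRequest', async (request, reply) => {
  const socket = request.raw.socket;
  const count = (requestCounts.get(socket) || 0) + 1;
  requestCounts.set(socket, count);

  if (count >= MAX_REQUESTS) {
    // Ask the client to stop reusing this connection; Node closes the
    // socket after the response is sent.
    reply.header('connection', 'close');
  }
});

fastify.get('/', async () => 'ok');

fastify.listen({ port: 8080 });
```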
Some context: fastify/fastify#3260
The solution that we ended up with