Agent Keep-Alive behavior #26357

Closed
delvedor opened this issue Feb 28, 2019 · 17 comments
Labels
http Issues or PRs related to the http subsystem.

Comments

@delvedor
Member

  • Version: v10.15.1
  • Platform: Darwin Skadi.local 18.2.0 Darwin Kernel Version 18.2.0: Mon Nov 12 20:24:46 PST 2018; root:xnu-4903.231.4~2/RELEASE_X86_64 x86_64
  • Subsystem: http

I've encountered a strange behavior while using Agent Keep-Alive, and I'm not sure it is correct.
I've created a code example to show what I've discovered; pay attention to the configuration of each agent.

'use strict'

const http = require('http')

const agentFalse = new http.Agent({
  keepAlive: false,
  keepAliveMsecs: 1000,
  maxSockets: 256,
  maxFreeSockets: 256
})

const agentFalseInf = new http.Agent({
  keepAlive: false,
  keepAliveMsecs: 1000,
  maxSockets: Infinity,
  maxFreeSockets: 256
})

const agentTrue = new http.Agent({
  keepAlive: true,
  keepAliveMsecs: 1000,
  maxSockets: 256,
  maxFreeSockets: 256
})

const server = http.createServer((req, res) => {
  console.log(req.url, req.headers)
  res.end('ok')
})

server.listen(3000, () => {
  http
    .request({ port: 3000, path: '/false', agent: agentFalse })
    .on('error', console.log)
    .end()

  http
    .request({ port: 3000, path: '/falseInf', agent: agentFalseInf })
    .on('error', console.log)
    .end()

  http
    .request({ port: 3000, path: '/true', agent: agentTrue })
    .on('error', console.log)
    .end()
})

And this is the log:

/false { host: 'localhost:3000', connection: 'keep-alive' }
/falseInf { host: 'localhost:3000', connection: 'close' }
/true { host: 'localhost:3000', connection: 'keep-alive' }

If this is the expected behavior, can you explain why it works this way?

@delvedor
Member Author

/cc @mcollina

@mcollina
Member

I think this is buggy: when passing keepAlive: false, maxSockets: 256, we should not be opening a keep-alive connection.

cc @nodejs/http

@mcollina mcollina added the http Issues or PRs related to the http subsystem. label Feb 28, 2019
@delvedor
Member Author

delvedor commented Mar 1, 2019

Update: I've noticed that if keepAlive is false and maxSockets is 0, the connection header has the expected value (close).

const agentFalse = new http.Agent({
  keepAlive: false,
  keepAliveMsecs: 1000,
  maxSockets: 0,
  maxFreeSockets: 256
})

I can reproduce this behavior in all the latest releases of 6, 8, 10, and 11.

@lpinca
Member

lpinca commented Mar 1, 2019

It seems this is caused by this condition

if (!this.agent.keepAlive && !Number.isFinite(this.agent.maxSockets)) {

which dates back to 15a5a4a.

Setting maxSockets to 0 is equivalent to using Infinity (default value).
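
A quick way to see this (a sketch; the fallback to the default for a falsy maxSockets is my reading of the Agent constructor, not something stated in this thread):

'use strict'

const http = require('http')

// maxSockets: 0 is falsy, so (assuming the constructor falls back to
// Agent.defaultMaxSockets, i.e. Infinity) the quoted condition evaluates
// to true and the request is sent with "Connection: close".
const agent = new http.Agent({ keepAlive: false, maxSockets: 0 })

console.log(agent.maxSockets) // Infinity
console.log(!agent.keepAlive && !Number.isFinite(agent.maxSockets)) // true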

@mcollina
Member

mcollina commented Mar 1, 2019

I think this could be a nice good-first-issue. Maybe @delvedor would like to send a PR.

@lpinca
Member

lpinca commented Mar 2, 2019

Despite the confusing options, I'm not sure it is a bug; I think it is working as intended. Consider the following example:

const assert = require('assert');
const { Agent, get } = require('http');

const agent = new Agent({ keepAlive: false, maxSockets: 1 });

let socket;

function onResponse(response) {
  response.resume();
}

function onSocket(sock) {
  if (socket === undefined) {
    socket = sock;
  } else {
    assert(sock === socket);
  }
}

get({ host: 'example.com', agent }, onResponse).on('socket', onSocket);
get({ host: 'example.com', agent }, onResponse).on('socket', onSocket);

If Connection: Keep-Alive is not sent, the socket is closed and a new one must be created for the second request. If it is sent, the same socket is reused, and when there are no more pending requests the socket is destroyed because the keepAlive option is false. It doesn't end up in the free socket list as it would have if the keepAlive option were true.

@mcollina
Member

mcollina commented Mar 2, 2019

@lpinca it seems you are implying that setting keepAlive: false has nothing to do with sending the keep-alive header, but rather reflects the internal state of the queue.

I think this is very confusing, and at least it should be documented if not changed.

@lpinca
Member

lpinca commented Mar 2, 2019

Yes, exactly. I think the keepAlive option does not mean "send the Connection: Keep-Alive header". The documentation is accurate:

Keep sockets around even when there are no outstanding requests, so they can be used for future requests without having to reestablish a TCP connection.

Maybe we can improve it somehow.

@mcollina
Member

mcollina commented Mar 2, 2019

@lpinca we are in disagreement. If the keepAlive option controls whether we send the Connection: Keep-Alive header, then we should send it independently of the number of requests that are queued up. Our current behavior instead ties this to the internal queue state, e.g. "no outstanding requests" as the docs state.

In other words, if we pass keepAlive: false, the Connection: Keep-Alive header would be sent if there is another request available to send. However, this behavior is triggered only if maxSockets is set to something other than Infinity and the maximum number of available sockets is filled. I think this behavior is correct.


In the example @delvedor has posted, there are more complicated combinations.

For example:

{
  keepAlive: false,
  keepAliveMsecs: 1000,
  maxSockets: 256,
  maxFreeSockets: 256
}

This configuration is going to generate a Connection: Keep-Alive header even if there is no socket waiting, I assume because there is a maxFreeSockets option. This essentially overrides the keepAlive flag. From my point of view, setting both maxFreeSockets and keepAlive: false shows a conflicting intent: the way we keep sockets free is by sending Connection: Keep-Alive, which the user asks us not to do, but specifying maxFreeSockets tells us that they want to do it.

I'm a bit puzzled by this issue.

@lpinca
Member

lpinca commented Mar 2, 2019

Sorry, I forgot to add "not" in my previous comment; please re-read it, I've edited it. I don't think we are in disagreement.

The example with { keepAlive: false, maxSockets: 256 } is the same example I posted above, but allowing up to 256 sockets per host.

@mcollina
Member

mcollina commented Mar 2, 2019

Based on our discussion, should

const agentFalseInf = new http.Agent({
  keepAlive: false,
  maxSockets: Infinity
})

send the Connection: Keep-Alive header? Note that this is the default configuration, and it currently sends connection: 'close'.

Based on our interpretation, should we ever send connection: 'close'?

@lpinca
Member

lpinca commented Mar 2, 2019

Basically, Connection: Keep-Alive is always sent, with the single exception of { keepAlive: false, maxSockets: Infinity }, which makes sense: in this case the socket is never reused, so it's OK if it is closed by the server.

The keepAlive option controls whether or not to put the socket in the "freeSockets" list when there are no outstanding requests (or at least this is my understanding).
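
A small sketch of that understanding (not from the thread; it uses an OS-assigned port, inspects agent.freeSockets, and adds a setImmediate to give the agent time to do its bookkeeping):

'use strict'

const http = require('http')

// With keepAlive: true the idle socket should end up in agent.freeSockets
// once the response has been consumed; with keepAlive: false it should be
// destroyed instead.
const agent = new http.Agent({ keepAlive: true, maxSockets: 1 })

const server = http.createServer((req, res) => res.end('ok'))

server.listen(0, () => {
  const { port } = server.address()
  http.get({ port, agent }, (res) => {
    res.resume()
    res.on('end', () => {
      setImmediate(() => {
        console.log(Object.keys(agent.freeSockets)) // e.g. [ 'localhost:54321:' ]
        agent.destroy()
        server.close()
      })
    })
  })
})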

@mcollina
Member

mcollina commented Mar 2, 2019

Basically, Connection: Keep-Alive is always sent, with the single exception of { keepAlive: false, maxSockets: Infinity }, which makes sense: in this case the socket is never reused, so it's OK if it is closed by the server.

This matches our current behavior, but is it coherent with the meaning that we have given to the keepAlive option?

Keep sockets around even when there are no outstanding requests, so they can be used for future requests without having to reestablish a TCP connection.

This states that maxSockets should not affect whether or not we send Connection: Keep-Alive. Setting it to Infinity would not change the fact that there can be outstanding requests when the current one finishes.

@lpinca
Member

lpinca commented Mar 2, 2019

If maxSockets is set to Infinity, a new socket is created per request, so there can't be outstanding requests.
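
For illustration, a sketch of that claim (paths and the OS-assigned port are arbitrary):

'use strict'

const assert = require('assert')
const http = require('http')

// With keepAlive: false and maxSockets: Infinity (the defaults), each
// request gets its own socket, so no request ever waits for a free one.
const agent = new http.Agent({ keepAlive: false, maxSockets: Infinity })

const server = http.createServer((req, res) => res.end('ok'))

server.listen(0, () => {
  const { port } = server.address()
  const sockets = new Set()
  let done = 0

  const onEnd = () => {
    if (++done === 2) {
      assert.strictEqual(sockets.size, 2) // two distinct sockets were used
      server.close()
    }
  }

  for (const path of ['/a', '/b']) {
    http
      .get({ port, path, agent }, (res) => {
        res.resume()
        res.on('end', onEnd)
      })
      .on('socket', (sock) => sockets.add(sock))
  }
})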

@delvedor
Member Author

delvedor commented Mar 2, 2019

OK, so the naming generates this misunderstanding.
At this point, how can I disable the Connection: keep-alive header?

@lpinca
Member

lpinca commented Mar 2, 2019

@delvedor if you don't need connection pooling, don't use an agent; use the createConnection option instead. That way Connection: close is used.
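
Something along these lines, as a sketch of that suggestion (the port and path are placeholders; only host and port are forwarded to net.connect because net would treat a path option as an IPC socket path):

'use strict'

const http = require('http')
const net = require('net')

// No agent is passed and the socket comes from createConnection; per the
// comment above, the request is then sent with "Connection: close".
http
  .request({
    port: 3000,
    path: '/no-agent',
    createConnection: (opts) => net.connect(opts.port, opts.host)
  }, (res) => res.resume())
  .on('error', console.error)
  .end()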

@mcollina
Member

mcollina commented Mar 2, 2019

If maxSockets is set to Infinity, a new socket is created per request, so there can't be outstanding requests.

That's not what our docs say:

maxSockets Maximum number of sockets to allow per host. Default: Infinity.

Maybe we should update this, considering that setting keepAlive: false, maxSockets: Infinity is the default?


Possibly we should include a table in the docs to explain the different options and behaviors, and how they affect Connection: keep-alive?
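
For reference, a rough summary of the combinations discussed in this thread (based on the log and comments above, not on the docs):

  • keepAlive: false, maxSockets: Infinity (the default): connection: close, the socket is destroyed after the response.
  • keepAlive: false, maxSockets: <finite>: connection: keep-alive, the socket is reused while requests are outstanding and destroyed once there are none.
  • keepAlive: true: connection: keep-alive, idle sockets are kept in the freeSockets list (up to maxFreeSockets).
  • no agent, with createConnection: connection: close.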

lpinca added a commit to lpinca/node that referenced this issue Mar 4, 2019
BridgeAR pushed a commit to BridgeAR/node that referenced this issue Mar 12, 2019
PR-URL: nodejs#26412
Fixes: nodejs#26357
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Masashi Hirano <[email protected]>
Reviewed-By: Matteo Collina <[email protected]>
Reviewed-By: Ruben Bridgewater <[email protected]>
BethGriggs pushed a commit that referenced this issue Apr 16, 2019
PR-URL: #26412
Fixes: #26357
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Masashi Hirano <[email protected]>
Reviewed-By: Matteo Collina <[email protected]>
Reviewed-By: Ruben Bridgewater <[email protected]>