Missing response for certain POST requests #12339

Closed
kanongil opened this issue Apr 11, 2017 · 25 comments
Labels
http Issues or PRs related to the http subsystem.

Comments


kanongil commented Apr 11, 2017

  • Version: 7.8.0 (also affects 4 & 6)
  • Platform: macOS 10.12.4
  • Subsystem: http / net

When a server responds with a short payload to a POST http.request() whose large request payload has not finished uploading, the node client throws an EPIPE error on the request object. E.g. with NODE_DEBUG=http:

HTTP 90220: SOCKET ERROR: write EPIPE Error: write EPIPE
    at exports._errnoException (util.js:1034:11)
    at WriteWrap.afterWrite [as oncomplete] (net.js:812:14)

This destroys the node socket, and the http client never receives the response or any of the payload data.

Expected result

The EPIPE error should be deferred (or possibly just ignored) until the client has had a chance to handle the response and payload.

Server-side workaround

This can be mitigated server-side by consuming all the uploaded data before sending a response.
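
For illustration, a minimal sketch of that mitigation, based on the example server further below but draining the request before replying (a workaround sketch, not code from node itself):

var http = require('http');

// Drain (and discard) the uploaded body before replying, so the client
// finishes writing before the connection is torn down.
var server = http.createServer(function (req, res) {

    req.on('data', function () {});        // consume the upload
    req.on('end', function () {

        res.end('done');                   // respond only once drained
    });
});

server.listen(8080);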

Background

I need clients to be able to handle my server responding to POST requests without consuming the entire payload.

Example

I have created a failing example which, when run, will trigger the EPIPE error.

To demonstrate that this is indeed a node bug, I added an ignore_epipe option which, when set to true, ignores EPIPE errors on the socket. This results in the client behaving as I expected, properly delivering the response and payload before emitting an error.

var http = require('http');
var port = 8080;

var ignore_epipe = false;

var upload = function () {

    var req = http.request({
        port: port,
        method: 'POST'
    }, function (res) {

        console.log('received response code', res.statusCode);

        var length = 0;

        res.setEncoding('utf8');
        res.on('data', function (chunk) {
            //console.log('BODY: ' + chunk);
            length += chunk.length;
        });
        res.on('end', function () {
            console.log('received ' + length + ' bytes');
        });

        res.on('error', function (err) {

            console.log('got response error', err);
        });
    });

    req.end(new Buffer(10*1024*1024));

    req.on('error', function (err) {

        console.log('got request error', err);
    });

    if (ignore_epipe) {
        req.on('socket', function (socket) {

            var destroy = socket._destroy;
            socket._destroy = function (exception, cb) {

                if (exception && (exception.code === 'EPIPE')) {
                    return;
                }

                return destroy.call(this, exception, cb);
            };
        });
    }
};

// create server

var server = http.createServer(function (req, res) {

    res.end('done');
});

server.listen(port, function (err) {

    if (err) {
        return console.log('something bad happened', err)
    }

    console.log('server is listening on', port);

    upload();
});
@kanongil

I suspect it's the same issue here:
nodejs/node-v0.x-archive#7151 (open, marked as confirmed-bug, abandoned).

Other related issues (http upload + EPIPE):
#10415, #9085, #3803, request/request#2023

A bit of further research also reveals that it used to work in v0.8 & v0.10 (which don't fail at all), and that the regression was introduced with v0.12.0, though the error message is slightly different:

{ [Error: write EPROTOTYPE] code: 'EPROTOTYPE', errno: 'EPROTOTYPE', syscall: 'write' }

Finally, it's possible that this issue could be fixed if #947 is resolved.

@kanongil

With a slight change to the test ('connection: close' header on server response), the bug also manifests itself on v0.10.

That means the last time this worked was on v0.8.

@kanongil

@jasnell @bnoordhuis Why was nodejs/node-v0.x-archive#7151 abandoned? It still fails on v7.


Trott commented Aug 13, 2017

@nodejs/http

@bnoordhuis

I can confirm the behavior but I'm ambivalent on how (or if) it should be fixed.

The server sends a response and closes the connection, causing the client to get an EPIPE because it's still busy writing the request data.

Ignoring the EPIPE lets it read the response. The request data is lost to the ether, though. Some of it may still have been processed by the server but you don't know how much.

@kanongil What do you propose? This seems like a no-win situation.

@kanongil

@bnoordhuis The server behavior is a perfectly legal way to send a response to a client when the request payload is unnecessary to generate a response, e.g. a 401 response. As such, I believe it needs to be fully supported.

Ignoring the EPIPE lets it read the response. The request data is lost to the ether, though. Some of it may still have been processed by the server but you don't know how much.

This is irrelevant. If the server has decided to send a response, it doesn't matter how much data actually made it through. The client just needs to handle the response.

I don't know how you would go about fixing the issue. I do know that, with the async behavior and internal buffering in streams, it is not reasonable to expect a client to know when to stop writing (to avoid the EPIPE).

One idea I had was to defer emitting the EPIPE error until the response has been emitted and all of its data read. However, this breaks the existing contract, since it requires that the client start reading before writing is finished. Otherwise, no error is emitted and it stalls internally.
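
For illustration only, roughly what that deferral could look like at the user level, reusing the socket._destroy override from the ignore_epipe example above (a sketch of the idea, not a proposed patch to node internals):

// `req` is the http.ClientRequest from the example above.
var deferred = null;

req.on('socket', function (socket) {

    var destroy = socket._destroy;
    socket._destroy = function (exception, cb) {

        if (exception && (exception.code === 'EPIPE')) {
            deferred = exception;                // hold the error for later
            return;
        }

        return destroy.call(this, exception, cb);
    };
});

req.on('response', function (res) {

    res.on('end', function () {

        if (deferred) {
            req.emit('error', deferred);         // surface the deferred EPIPE
        }
    });
});

As noted, this only surfaces the error if the response body is actually read; if the client never consumes the response, 'end' never fires and the EPIPE is silently dropped.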

@bnoordhuis

This is irrelevant. If the server has decided to send a response, it doesn't matter how much data actually made it through.

With all due respect, that seems a little naive; it's a very literal reading of the spec. The real world isn't going to be so simple.

it is not reasonable to expect a client to know when to stop writing

Maybe I didn't make myself clear but the client, whether it's node or something else, cannot reliably know that in advance: the EPIPE error is the operating system telling us the connection was severed.

I thought it over for a good while but I'm not optimistic a fix exists that doesn't break more than it fixes. If you manage to come up with something, send a pull request and I'll be happy to review it.

@kanongil

From https://tools.ietf.org/html/rfc7230#section-6.6, it seems that you are somewhat right.

However, this is data that has already been received from the TCP socket but is lingering in internal buffers. AFAIK, it's not really an error, but rather a standard way to end connections.

nh2 commented Feb 23, 2018

I think what you are observing is the TCP reset problem described in the linked https://tools.ietf.org/html/rfc7230#section-6.6:

To avoid the TCP reset problem, servers typically close a connection in stages. First, the server performs a half-close by closing only the write side of the read/write connection. The server then continues to read from the connection until it receives a corresponding close by the client, or until the server is reasonably certain that its own TCP stack has received the client's acknowledgement of the packet(s) containing the server's last response. Finally, the server fully closes the connection.

Many HTTP servers trip over this, including nginx (see bug I filed here: https://trac.nginx.org/nginx/ticket/1250#comment:4).

TCP does not let you safely close() a socket as soon as you're done sending data if the higher-level protocol (e.g. HTTP) doesn't explicitly ensure that the other side will send no further data.

As I wrote on the nginx bug (slightly amended to be general beyond nginx):

  • When an in-flight packet arrives at the kernel of the machine running the HTTP server after the close(), the kernel will reply with TCP RST.
  • When the browser side receives the RST, its kernel will discard any receive buffers, even for packets that it already ACKed.
  • These receive buffers may contain the data you just sent before the close().
  • Thus the client will never see that data.

This page from 2009 describes that in detail: https://blog.netherlabs.nl/articles/2009/01/18/the-ultimate-so_linger-page-or-why-is-my-tcp-not-reliable

The correct way for a server to shut down an HTTP/1.1 connection is:

  • Send connection: close
  • Call shutdown() on the socket
  • Keep read()ing from the socket until the empty string is read
  • Call close() on the socket
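
For illustration, a minimal net-level sketch of those four steps, using a raw TCP server and a hard-coded 405 response (the port and response body are assumptions for the example; this is not something node's http server does for you today):

var net = require('net');

var server = net.createServer(function (socket) {

    socket.once('data', function () {

        // 1. Send the response with "Connection: close".
        socket.write('HTTP/1.1 405 Method Not Allowed\r\n' +
                     'Connection: close\r\nContent-Length: 0\r\n\r\n');

        // 2. shutdown() the write side; socket.end() sends the FIN.
        socket.end();

        // 3. Keep reading (and discarding) until the client closes its side...
        socket.on('data', function () {});
        socket.on('end', function () {

            // 4. ...and only then fully close the connection.
            socket.destroy();
        });
    });
});

server.listen(8080);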

@camlegleiter

Given @nh2's comment, is there a proper way forward in NodeJS to perform these steps in a safe, user-friendly way? We have a case where users could upload files of nearly any size in a browser, and I'd prefer not to have to read the entire file server-side only to fail, especially when we can determine from the headers alone that the file can't be uploaded. We do have a few alternatives for our use case, but it'd be nice for our API to cleanly short-circuit requests.

@rogierschouten

I filed this same issue as #21026, but I get ECONNRESET instead of EPIPE. This is in line with what @nh2 says.

If I understand @camlegleiter correctly, he is looking for a Node.JS fix for implementing a server. I'm looking for a way to work around this problem on the client side, as I need to talk to a broken server.

So moving forward:

  1. Server-side: would it help @camlegleiter to keep reading for a maximum amount of time, to give a (possibly non-node) client the chance of receiving the reply before the connection is reset? This would still be a race condition, but it might work in practice (a rough sketch follows after this list).
  2. Client-side: is there some way that Node.JS could tell the OS not to clear the receive buffers in this case?
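
For what it's worth, a rough sketch of option 1 (the helper name and the timeout value are made up for illustration; the race remains whenever the timer fires first):

// Drain the request for at most `ms` milliseconds, then respond anyway.
function respondAfterDrain(req, res, ms, payload) {
    var done = false;
    var finish = function () {
        if (done) return;
        done = true;
        clearTimeout(timer);
        res.end(payload);
    };
    var timer = setTimeout(finish, ms);    // give up waiting after `ms`

    req.on('data', function () {});        // discard the uploaded body
    req.on('end', finish);                 // fully drained: respond safely
}

// e.g. inside an http server handler:
// respondAfterDrain(req, res, 5000, 'Allowed method is POST');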

nh2 commented Jun 14, 2018

Given @nh2's comment, is there a proper way forward in NodeJS to perform these steps in a safe, user-friendly way? We have a case where users could upload files of nearly any size in a browser, and I'd prefer not having to read the entire file server-side only to fail, especially when we can determine the file can't be uploaded from the headers alone.

@camlegleiter Doesn't look like it. You can send connection: close and call shutdown(), and hope that the client will close reasonably soon after that. But it's under the client's control; you can't do anything but read() if you want to make sure that the client actually gets the response. Everything else would be racy, or show up as hard errors like ECONNRESET on the other side.

Client-side: is there some way that Node.JS could tell the OS not to clear the receive buffers in this case?

@rogierschouten I'm not aware of any. You'd have to ask all OS vendors to add one. But I think even that wouldn't be enough: if I remember correctly, TCP doesn't guarantee any ordering of RST packets relative to data packets with sequence numbers. So it is possible for an RST packet to overtake a data packet on the network path, and the OS may not even have the data you'd like in its buffer at all.

patoi commented Nov 19, 2018

This is the same in Node.js v10.13.0.

When I use curl for the request, the error never occurred.

patoi commented Nov 19, 2018

This code sometimes throws an error (failed), sometimes not (success):

const http = require('http')
const port = 8080

// PUT method is not allowed, always get 405
const METHOD = 'PUT'

const upload = function (i) {
  let response
  const req = http.request({
    port: port,
    method: METHOD
  }, function (res) {
    response = res
    console.log(i + '. SUCCESS request')
    // console.log('received status code:', res.statusCode)
    let data = ''
    res.setEncoding('utf8')
    res.on('data', function (chunk) {
      data += chunk
    })
    res.on('end', function () {
      // console.log('received message:', data)
    })
    res.on('error', function (err) {
      console.log('got response error:', err)
    })
  })

  req.on('abort', function (err) {
    console.log('got request abort:', err)
  })

  req.on('error', function (err) {
    console.log(i + '. Request FAILED: ' + err.message)
    // console.log(err.code)
    if (err.code === 'EPIPE') {
      if (!response) {
        // console.log(err.message)
        // console.log('Response is undefined')
      }
    }
    return
  })

  const buff = Buffer.alloc(10 * 1024 * 1024, 'x')
  req.end(buff)
}

const server = http.createServer(function (req, res) {
  if (req.method !== 'POST') {
    // never reading data
    res.statusCode = 405
    res.end('Allowed method is POST')
  } else {
    let length = 0
    req.on('data', chunk => {
      length += chunk.toString().length
    })
    req.on('end', () => {
      // console.log('request end, length:', length)
      res.end('length is ' + length)
    })
  }
  
})

server.listen(port, function (err) {
    if (err) {
        return console.log('something bad happened', err)
    }
    console.log('server is listening on', port)
    for (let index = 0; index < 10; index++) {
      upload(index + 1)
    }
})

patoi commented Nov 19, 2018

Important: the code never reads the request data when it arrives with the PUT method, so I was expecting it to always fail.

  if (req.method !== 'POST') {
    // never reading data
    res.statusCode = 405
    res.end('Allowed method is POST')
  } else {

patoi commented Nov 19, 2018

Some output:

server is listening on 8080
1. Request FAILED: write EPIPE
2. Request FAILED: write EPIPE
3. Request FAILED: write EPIPE
5. Request FAILED: write EPIPE
7. Request FAILED: write EPIPE
8. Request FAILED: write EPIPE
9. Request FAILED: write EPIPE
4. Request FAILED: write EPIPE
6. Request FAILED: write EPIPE
10. SUCCESS request
10. Request FAILED: write EPIPE
server is listening on 8080
1. Request FAILED: write EPIPE
3. Request FAILED: write EPIPE
5. Request FAILED: write EPIPE
7. Request FAILED: write EPIPE
9. Request FAILED: write EPIPE
2. SUCCESS request
4. Request FAILED: write EPIPE
6. Request FAILED: write EPIPE
8. Request FAILED: write EPIPE
10. Request FAILED: write EPIPE
2. Request FAILED: write EPIPE

Valeronlol commented Mar 4, 2019

Same issue with node v11.10.0 under Ubuntu 18.04.

As I understand it, this happens when a readable tries to pipe to an aborted writable, but the 'abort' event is not emitted and node does not handle it.

patoi commented Feb 22, 2020

It works fine in v12.16.1:

1. SUCCESS request
2. SUCCESS request
3. SUCCESS request
4. SUCCESS request
5. SUCCESS request
6. SUCCESS request
7. SUCCESS request
8. SUCCESS request
9. SUCCESS request
2. Request FAILED: write EPIPE
1. Request FAILED: write EPIPE
3. Request FAILED: write EPIPE
4. Request FAILED: write EPIPE
5. Request FAILED: write EPIPE
6. Request FAILED: write EPIPE
7. Request FAILED: write EPIPE
8. Request FAILED: write EPIPE
9. Request FAILED: write EPIPE
10. SUCCESS request
10. Request FAILED: write EPIPE
res.end('Allowed method is POST')

now sends the response data back to the client, along with an EPIPE ✅

@domharrington

This error was causing us to hit H18 errors on Heroku. What was happening is that our API was accepting large HTTP bodies in the form of file uploads, and that plus an invalid API key was causing our error handler to return a response before the HTTP body had been fully processed.

I managed to reliably reproduce in a unit test on my dev machine with something like the following:

const req = http.request('http://localhost:3000/api', {
  method: 'POST',
});

let errorCode = null;

req.once('error', err => {
  // Error code is either EPIPE or ECONNRESET
  errorCode = err.code;
});

req.once('close', () => {
  expect(errorCode).toBeNull();
});

req.end(Buffer.alloc(10 * 1024 * 1024));

The error was always either EPIPE or ECONNRESET on my M1 mac with Node.js v14.15.1.

I fixed it in my express error handler by waiting for the incoming request to end before sending a response:

function (err, req, res, next) {
  function respond() {
    // `status` and `error` are presumably set elsewhere in the real handler
    return res.status(status).send({ error });
  }

  if (req.complete) return respond();

  // Drain the remaining request body before responding
  req.once('data', () => {});
  return req.once('end', respond);
}

It is completely possible for the request to already be complete by the time it gets to this point, hence the check on req.complete first. I took inspiration for this fix from here: https://github.com/mozilla/web-ext/pull/1346/files. I hope this helps someone.

gsimko commented Nov 15, 2022

@bnoordhuis reading through this issue and others, I'm unsure what the conclusion is.
With v18 I'm still seeing the same behavior: I'm writing a client in nodejs using http.request, and if the server severs the connection because of an error, the "response" handler is not called even if the server sends a full error response. Instead, I'd love to see what the server returned so that I can fix the mistake. Is that possible?

@bnoordhuis

@gsimko So, I think that particular scenario could be fixed (if it doesn't work already) but it may not be easy because node's http client is kind of a tangled mess.

The underlying TCP socket can be in a half-open state where reading works but writing fails. I think (but am not 100% sure) the 'error' event only manifests on the client's request object, not the response. If you structure your code right, you should still be able to read the full response.
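
For what it's worth, a minimal client sketch of that kind of structure (whether the response actually arrives still depends on the node version, per the fix referenced below; it assumes a local server on port 8080 that replies before reading the body):

const http = require('http');

const req = http.request({ port: 8080, method: 'POST' }, (res) => {
  // Read the response independently of the request's write side.
  let body = '';
  res.setEncoding('utf8');
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => console.log(res.statusCode, body));
});

// Handle (or just log) the write-side failure without destroying anything.
req.on('error', (err) => console.log('request error:', err.code));

req.end(Buffer.alloc(10 * 1024 * 1024));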

If however that doesn't work, please send fixes. :p

@wa-Nadoo

A possible fix for this issue (eb19b1e) was already released with Node.js 18.11.0.

gsimko commented Nov 16, 2022

Thank you @bnoordhuis and @wa-Nadoo! I can confirm that upgrading node helped. I went directly to node 19.1.0, and luckily the upgrade seems to have gone without hiccups. I was on 18.10.0 before, so there's a chance that the fix indeed came in 18.11.0.

patoi commented Feb 27, 2023

It works fine in Node.js v18.12.1. I think this is closable because of #27916.

timscottbell commented Mar 17, 2023

I'm still facing this issue on node 18, and this is the workaround I'm doing on the server side:

  async <ReqQueryType>(
    err: Error,
    req: Request<ParamsDictionary, any, any, ReqQueryType>,
    res: Response,
    next: NextFunction
  ) => {
    if (err) {
      // Drain the rest of the request body so the client can receive the
      // response without hitting EPIPE mid-upload.
      // eslint-disable-next-line @typescript-eslint/no-unused-vars,no-unused-vars
      for await (const data of req) {
      }

      next(err);
    }
  }

Is this really the best way? We use multer for file uploads, so if the file extension doesn't match, we want to respond quickly, but then we get the EPIPE on the client as the file is still being transferred. If the file is huge (and some of our customers have large files), are we really supposed to wait for the entire file to be uploaded before responding?

I don't see a way to send connection: close and call shutdown() as @nh2 is suggesting.
