Response future not dropped on disconnect #1716

Closed
luben opened this issue Nov 19, 2018 · 14 comments · Fixed by #1723
Labels
A-http1 Area: HTTP/1 specific. A-server Area: server. C-bug Category: bug. Something is wrong. This is bad!

Comments

luben (Contributor) commented Nov 19, 2018

In certain cases, I see that the Response future is not dropped when the connection closes. I have a minimal reproduction here: https://github.com/luben/repro-hyper

The reproduction is a simplified work queue: workers wait on GET /next, clients submit work to POST /, and workers submit responses to POST /response. If a worker restarts, it is left registered but disconnected, which causes the next client request to hang. I added a small debugging helper that tells me when the futures are dropped; it confirms that the response future from /next is not dropped on worker disconnect.

Changing the workers to use HTTP/2 instead of HTTP/1.1 (uncomment https://github.com/luben/repro-hyper/blob/master/src/bin/worker.rs#L16) solves the issue.

seanmonstar (Member) commented:

I haven't really investigated; could the fix for #1717 have fixed this too?

luben (Contributor, Author) commented Nov 22, 2018

I will try and report back tomorrow.

luben (Contributor, Author) commented Nov 22, 2018

I tried. It's not fixed. The way to reproduce it:

  1. Run the broker.
  2. Run the worker.
    ...
  3. Restart the worker. The broker should report "Dropping Next()" but does not.
    The next curl request fails.

seanmonstar added the C-bug, A-client, and A-http1 labels Nov 26, 2018
luben (Contributor, Author) commented Nov 26, 2018

I think the bug is on the server, not the client: the broker is the server, and it does not drop the future on disconnect from the client (worker).

seanmonstar added the A-server label and removed the A-client label Nov 26, 2018
luben (Contributor, Author) commented Nov 26, 2018

When I run it at the trace log level, the messages after I cancel the worker are:

2018-11-26 22:57:43 TRACE [tokio_reactor] event Readable | Writable | Hup Token(4194305)
2018-11-26 22:57:43 TRACE [tokio_reactor] loop process - 1 events, 0.000s

... and nothing else. So it looks like nobody is polling the connection while the response future is waiting on the oneshot::Receiver for the Body.
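
For context, here is a minimal sketch of the pattern the repro uses (hypothetical names, futures 0.1 / hyper 0.12 style): the /next handler parks a oneshot::Sender and returns a future waiting on the Receiver, so if hyper never polls the dead connection, that future is never dropped and the parked sender leaks.

```rust
use futures::sync::oneshot;
use futures::Future;
use hyper::{Body, Response};

// Hypothetical sketch of the broker's GET /next handler: park a sender
// for a later POST /response and return a future that waits on the
// receiver. If hyper never polls the dead connection, this future is
// never dropped and the parked sender leaks.
fn next_handler(
    pending: &mut Vec<oneshot::Sender<Body>>,
) -> impl Future<Item = Response<Body>, Error = oneshot::Canceled> {
    let (tx, rx) = oneshot::channel::<Body>();
    pending.push(tx); // completed later by POST /response
    rx.map(Response::new)
}
```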

seanmonstar (Member) commented:

The connection isn't waiting on a read, due to this line: https://github.com/hyperium/hyper/blob/master/src/proto/h1/conn.rs#L231

It's basically there to apply TCP backpressure: don't start reading a second message while waiting to respond to the first. If there were a way to register for HUP on AsyncRead, that could help achieve this without potentially reading the next message.
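
To make the backpressure point concrete, here is a standalone sketch (hypothetical types, not hyper's actual internals) of what that guard amounts to:

```rust
// Hypothetical, simplified model of the guard in proto/h1/conn.rs: while
// a response to the current request is still pending ("mid message"),
// the connection skips read-polling the transport entirely, so a peer
// disconnect (EOF/HUP) is never observed.
struct Conn {
    mid_message: bool, // between reading a request head and finishing its response
}

impl Conn {
    fn should_poll_read(&self) -> bool {
        // Reading a second request before responding to the first would
        // defeat TCP backpressure, so the read is held off here.
        !self.mid_message
    }
}

fn main() {
    let conn = Conn { mid_message: true };
    // No read poll happens, hence no EOF detection while responding.
    assert!(!conn.should_poll_read());
}
```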

luben (Contributor, Author) commented Nov 26, 2018

Hm, I will look at how it is implemented on the h2 side and try to come up with a patch.

sfackler (Contributor) commented:

mio supports HUP registration via UnixReady, but that's not piped through Tokio. Adding TcpStream::poll_hup would be great though, even if it would only work on Unix. The server I work on processes long-running requests, and we have request cancellation detection but it won't work via HTTP/1 without that.
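
For reference, a rough Unix-only sketch of that HUP registration using mio 0.6's UnixReady (raw mio, since tokio didn't pipe this through at the time; watch_for_hup is a hypothetical helper):

```rust
use mio::net::TcpStream;
use mio::unix::UnixReady;
use mio::{Events, Poll, PollOpt, Ready, Token};

// Hypothetical helper: register a stream for readable + HUP and block
// until an event arrives. Tokio's TcpStream had no equivalent at the
// time, which is why hyper couldn't detect the hang-up this way.
fn watch_for_hup(stream: &TcpStream) -> std::io::Result<()> {
    let poll = Poll::new()?;
    let interest = Ready::readable() | Ready::from(UnixReady::hup());
    poll.register(stream, Token(0), interest, PollOpt::edge())?;

    let mut events = Events::with_capacity(8);
    poll.poll(&mut events, None)?;
    for event in &events {
        if UnixReady::from(event.readiness()).is_hup() {
            println!("peer hung up"); // the in-flight request could be cancelled here
        }
    }
    Ok(())
}
```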

seanmonstar (Member) commented:

In h2, the connection is always reading, since it has to for HTTP/2 bookkeeping: frames can come in related to any stream (or to the connection itself).

We could try adding a read check on the transport even when is_mid_message, and just not error if some bytes are found (see the sketch below). It should work if a client isn't using HTTP/1.1 pipelining; if it is, it'd be quite hard to detect an EOF read while a pipelined request is sitting in the buffer. Perhaps that's better than nothing (and the real world doesn't really use HTTP/1.1 pipelining anyway).
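
A rough illustration of that idea (synchronous std::io for brevity; hyper's real check would be a non-blocking poll):

```rust
use std::io::Read;

// Hypothetical, simplified version of the proposed keep-alive read check:
// read even while mid-message; 0 bytes means the peer closed its write
// side, while any bytes found (e.g. a pipelined request) are stashed
// rather than treated as an error.
fn peer_disconnected(transport: &mut impl Read, stash: &mut Vec<u8>) -> std::io::Result<bool> {
    let mut buf = [0u8; 1024];
    let n = transport.read(&mut buf)?;
    if n == 0 {
        // EOF: the client went away... or merely half-closed (see below).
        return Ok(true);
    }
    stash.extend_from_slice(&buf[..n]); // keep pipelined bytes for later
    Ok(false)
}
```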

luben (Contributor, Author) commented Nov 26, 2018

Just as a test, I disabled the is_mid_message check, but it still doesn't drop the response future. Here is the trace (with some more trace logs added):

2018-11-26 23:31:57 TRACE [tokio_reactor] event Readable | Writable | Hup Token(8388609)
2018-11-26 23:31:57 TRACE [tokio_reactor] loop process - 1 events, 0.000s
2018-11-26 23:31:57 TRACE [hyper::proto::h1::dispatch] poll_catch
2018-11-26 23:31:57 TRACE [hyper::proto::h1::dispatch] poll_inner
2018-11-26 23:31:57 TRACE [hyper::proto::h1::dispatch] poll_read
2018-11-26 23:31:57 TRACE [hyper::proto::h1::conn] read_keep_alive; is_mid_read=false
2018-11-26 23:31:57 TRACE [hyper::proto::h1::io] io.read_buf -> Ready(0)
2018-11-26 23:31:57 DEBUG [hyper::proto::h1::io] read 0 bytes
2018-11-26 23:31:57 TRACE [hyper::proto::h1::conn] try_io_read; found EOF on connection: State { reading: KeepAlive, writing: Init, keep_alive: Busy, error: None }
2018-11-26 23:31:57 TRACE [hyper::proto::h1::conn] State::close_read()
2018-11-26 23:31:57 TRACE [hyper::proto::h1::dispatch] poll_write
2018-11-26 23:31:57 TRACE [hyper::proto::h1::dispatch] poll_flush
2018-11-26 23:31:57 TRACE [hyper::proto::h1::conn] flushed({role=server}): State { reading: Closed, writing: Init, keep_alive: Disabled, error: None }
2018-11-26 23:31:57 TRACE [hyper::proto::h1::dispatch] poll_inner.is_done?
2018-11-26 23:31:57 TRACE [hyper::proto::h1::conn] conn.is_read_closed true
2018-11-26 23:31:57 TRACE [hyper::proto::h1::conn] conn.is_write_closed false
2018-11-26 23:31:57 TRACE [hyper::proto::h1::dispatch] poll_inner.is_done -> false
2018-11-26 23:31:57 TRACE [tokio_reactor] event Readable Token(4194303)
2018-11-26 23:31:57 TRACE [tokio_reactor] loop process - 1 events, 0.000s
2018-11-26 23:31:57 TRACE [tokio_reactor] loop process - 0 events, 0.000s

seanmonstar (Member) commented:

Ah sure, if it were to close the connection immediately, then connections that half-close after writing the request would never be able to read the response...
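
To illustrate the pattern that would break, a small std-only sketch of a client that half-closes after writing its request but still expects the response:

```rust
use std::io::{Read, Write};
use std::net::{Shutdown, TcpStream};

// A client that shuts down its write half after sending the request.
// The server sees read EOF immediately but must still send the
// response; closing the connection on EOF would break this client.
fn half_close_request(addr: &str) -> std::io::Result<String> {
    let mut stream = TcpStream::connect(addr)?;
    stream.write_all(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")?;
    stream.shutdown(Shutdown::Write)?; // server's read now returns EOF
    let mut response = String::new();
    stream.read_to_string(&mut response)?; // response still arrives
    Ok(response)
}
```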

luben (Contributor, Author) commented Nov 26, 2018

Yes, you are right.

seanmonstar (Member) commented:

A possible solution is to offer some sort of "support half-closed" config option (the default would need to be true to not break existing behavior); if set to false, that keep-alive read could forcibly shut the future down.

seanmonstar added a commit that referenced this issue Nov 27, 2018
This option determines whether a read EOF should close the connection
automatically. The behavior was to always allow read EOF while waiting
to respond, so this option has a default of `true`.

Setting this option to `false` will allow Service futures to be canceled
as soon as disconnect is noticed.

Closes #1716
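
For anyone landing here later, a usage sketch of the resulting option, assuming it landed on the server builder as http1_half_close (hyper 0.12-style API):

```rust
use hyper::rt::Future;
use hyper::service::service_fn_ok;
use hyper::{Body, Response, Server};

fn main() {
    let addr = ([127, 0, 0, 1], 3000).into();
    let server = Server::bind(&addr)
        // Assumed method name per the fix: with half-close support
        // disabled, a read EOF tears the connection down, dropping
        // in-flight Service futures (like the parked /next responder
        // sketched above).
        .http1_half_close(false)
        .serve(|| service_fn_ok(|_req| Response::new(Body::from("hello"))))
        .map_err(|e| eprintln!("server error: {}", e));
    hyper::rt::run(server);
}
```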
luben (Contributor, Author) commented Nov 27, 2018

thanks!
