How do I shut down a server cleanly? #1575

Closed
dylanede opened this issue Jun 21, 2018 · 8 comments
Labels
A-server Area: server. B-rfc Blocked: More comments would be useful in determining next steps.

Comments

@dylanede

After spawning a server on a tokio Runtime, then shutting down the runtime (using shutdown_now), recreating a server on the same IP and port with a new Runtime panics, claiming

error creating server listener: Only one usage of each socket address (protocol/network address/port) is normally permitted. (os error 10048)

If I instead recreate the server by restarting the process, it works fine.

This behaviour is being observed on Windows 7. I have tried sleeping the thread for up to 10 seconds before recreating the server and the same error occurs.

Other than shutting down the Runtime, is there something else I must do to cleanly close a server? The documentation mentions the concept of "shutdown" in passing, but does not elaborate on what must be done to achieve that correctly.

@seanmonstar
Member

TCP ports that are bound often have a property of not fully closing when asked to, so that the OS can handle any late packets inbound to that port. There is a socket option (SO_REUSEADDR) to tell the OS "Hey, it's really OK if I reuse the port right away". You could try setting that option to see if that's the issue (it could also just be a bug in hyper or tokio).

And... I just checked, and hyper's built-in listener builder doesn't have a way to set that option, but you could build a custom one (and if it works, we can add that method into hyper directly!):

extern crate net2;
extern crate tokio;
// hyper and stuff of course...

use std::io;
use std::net::SocketAddr;

use tokio::net::TcpListener;

fn listener() -> io::Result<TcpListener> {
    // whatever addr
    let addr = SocketAddr::from(([127, 0, 0, 1], 3030));
    net2::TcpBuilder::new_v4()?
        .reuse_address(true)?
        .bind(&addr)?
        .listen(1024).and_then(|listener| {
            TcpListener::from_std(listener, &Default::default())
        })
}

// making a server with a custom listener
let incoming = listener()?.incoming();
let opts = hyper::server::conn::Http::new();
let server = hyper::server::Builder::new(incoming, opts)
  .serve(new_service);

PS, calling Runtime::shutdown_now() is an abortive shutdown. It doesn't give any of the resources in it a chance to tell peers they're going away; it forcibly closes all sockets in whatever state they're in. Maybe that's what you're looking for, but I thought I'd point it out.

@dylanede
Author

Thank you for the quick and in-depth reply. I was aware of the existence of SO_REUSEADDR, but I wasn't sure whether it was relevant in this case. I would have expected restarting the server within the process to behave identically to restarting the whole process, which, as noted above, is not what I see.

I am using shutdown_now after doing all of the graceful shutdown I can for other tasks, because I cannot find another way of getting the Server future to complete gracefully. That is what I was hoping I might have missed in the docs, since they do mention that for Server, "the future completes when the server has been shutdown". The trouble is that I cannot find out how to achieve that.

Anyway, I tried using reuse_address, similar to what you described. The main differences are that I handle IPv6 addresses as well as IPv4, and I enable nodelay on the TcpStreams that the incoming stream yields.
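
Roughly the following shape (a hedged sketch assuming tokio 0.1 and net2; listener_v6 is an illustrative name, the actual code isn't shown in the thread):

use std::io;
use std::net::SocketAddr;

use tokio::net::TcpListener;

// Same idea as the snippet above, but built for IPv6 addresses.
fn listener_v6(addr: &SocketAddr) -> io::Result<TcpListener> {
    net2::TcpBuilder::new_v6()?
        .reuse_address(true)?
        .bind(addr)?
        .listen(1024)
        .and_then(|l| TcpListener::from_std(l, &Default::default()))
}

// Enable TCP_NODELAY on each accepted stream before handing it to hyper.
let incoming = listener_v6(&addr)?.incoming().and_then(|stream| {
    stream.set_nodelay(true)?;
    Ok(stream)
});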

With this configuration the setup succeeds; however, I find that I cannot connect to a server on a recreated socket. In other words, after running the server and then restarting it within the same process, it does not accept TCP connections. It also no longer accepts connections when bound to IPv6 addresses at all, even for the first creation of the socket.

In conclusion, I'm not convinced that the problem normally solved by SO_REUSEADDR is fully responsible here.

Thanks again for this great library, by the way.

@dylanede
Author

I see now that shutting down the server is a case of making the incoming stream end. I will see if that helps things.
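
One possible way to do that (a rough sketch assuming futures 0.1; StoppableIncoming is just an illustrative name, not a hyper or tokio API) is to wrap the accept stream so it reports completion once a oneshot signal fires:

use futures::sync::oneshot;
use futures::{Async, Future, Poll, Stream};

// Wraps an accept stream so it ends once `stop` resolves (or its sender
// is dropped), which in turn lets the server future finish.
struct StoppableIncoming<S> {
    incoming: S,
    stop: oneshot::Receiver<()>,
}

impl<S: Stream> Stream for StoppableIncoming<S> {
    type Item = S::Item;
    type Error = S::Error;

    fn poll(&mut self) -> Poll<Option<S::Item>, S::Error> {
        match self.stop.poll() {
            // No stop signal yet: keep accepting connections.
            Ok(Async::NotReady) => self.incoming.poll(),
            // Signal fired (or the sender was dropped): end the stream.
            _ => Ok(Async::Ready(None)),
        }
    }
}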

@seanmonstar
Member

Since the Server is just a Future, you could create some other signal to trigger shutdown, and then use server.select2(shutdown_signal) before spawning it on the runtime.
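
For example (a minimal sketch, assuming hyper 0.12 with futures 0.1 and a tokio 0.1 Runtime; server and rt stand in for your own Server future and Runtime):

use futures::sync::oneshot;
use futures::Future;

let (shutdown_tx, shutdown_rx) = oneshot::channel::<()>();

// Resolves as soon as either the server finishes or the signal fires.
let with_shutdown = server
    .select2(shutdown_rx)
    .map(|_| ())
    .map_err(|_| eprintln!("server error (or shutdown signal dropped)"));

rt.spawn(with_shutdown);

// Later, from anywhere else in the program:
let _ = shutdown_tx.send(());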

There is a concept of graceful shutdown in hyper, specifically on individual Connection futures, but we haven't yet come up with an API on Server to signal all connections to gracefully shutdown. I'd like for that API to exist, though!

@sfackler
Contributor

Here's some infrastructure I built out for a graceful server shutdown: https://gist.github.com/sfackler/77c02d840a9ba08b58a435cab2901809

Each Connection is wrapped, via ShutdownState::wrap_connection, in a future that will turn off keep-alive when asked. The ShutdownState::shutdown method returns a future that pokes all of the connections and then waits for them to complete.

@seanmonstar
Member

Likewise, since Conduit currently uses Connections directly instead of Server, we make use of graceful_shutdown with this drain::Watch concept.

I think something just like it could be ported to hyper's Server, bringing this to everyone (and Conduit would be one piece closer to not needing the lower-level APIs)!

seanmonstar added the A-server (Area: server) and B-rfc (Blocked: More comments would be useful in determining next steps) labels Jun 26, 2018
@ChriFo

ChriFo commented Jul 14, 2018

I have the same behavior on Windows (and AppVeyor) with a server implementation based on https://github.com/actix/actix-web, which uses tokio. Everything is fine on macOS and Linux (and Travis)!

I tried to set up a socket with the socket2 crate (set_reuse_address), but without success.
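
Roughly (a sketch of that approach, assuming socket2 0.3; reuse_listener is an illustrative name):

use std::net::{SocketAddr, TcpListener};

use socket2::{Domain, Protocol, SockAddr, Socket, Type};

// Bind with SO_REUSEADDR via socket2, then hand the std listener to tokio.
fn reuse_listener(addr: &SocketAddr) -> std::io::Result<TcpListener> {
    let socket = Socket::new(Domain::ipv4(), Type::stream(), Some(Protocol::tcp()))?;
    socket.set_reuse_address(true)?;
    socket.bind(&SockAddr::from(*addr))?;
    socket.listen(1024)?;
    Ok(socket.into_tcp_listener())
}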

@ChriFo

ChriFo commented Jul 16, 2018

The server seems to be OK. I think the problem is in the client (reqwest): tokio-rs/mio#776

seanmonstar added a commit that referenced this issue Aug 23, 2018
This adds a "combinator" method to `Server`, which accepts a user's
future to "select" on. All connections received by the `Server` will
be tracked, and if the user's future finishes, graceful shutdown will
begin.

- The listener will be closed immediately.
- The currently active connections will all be notified to start a
  graceful shutdown. For HTTP/1, that means finishing the existing
  response and using `connection: close`. For HTTP/2, the graceful
  `GOAWAY` process is started.
- Once all active connections have terminated, the graceful future
  will return.

Closes #1575
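
For anyone landing here later, a usage sketch of that combinator (hedged: this assumes the method landed as Server::with_graceful_shutdown in hyper 0.12, taking a oneshot channel as the user future; addr and new_service are placeholders):

use futures::sync::oneshot;
use futures::Future;

let (tx, rx) = oneshot::channel::<()>();

let server = hyper::Server::bind(&addr)
    .serve(new_service)
    .with_graceful_shutdown(rx)
    .map_err(|e| eprintln!("server error: {}", e));

// Later, from a shutdown handler: sending on `tx` closes the listener
// and starts the graceful shutdown of all active connections.
let _ = tx.send(());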