Fast shutdown #641
Comments
Below is a code sample of what I want to do. It forks off a thread which tries to read from a URL. This particular URL (http://0.0.0.1) doesn't work; it just hangs. So when the program exits, there's no clean way to unblock ureq. Is something like that possible?

Agent is cloneable and the clones can be shared across threads, so I would suggest implementing a method, agent.shutdown(), that immediately shuts down the connections associated with that agent, even if there's a pending request on another thread. The pending request and any subsequent request on that agent should return an error.
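A minimal sketch of the scenario, assuming ureq 2.x; the agent.shutdown() call is the proposed API, not something ureq provides, so it appears only as a comment:

```rust
use std::thread;
use std::time::Duration;

fn main() {
    // Agent is cloneable, and clones can be shared across threads.
    let agent = ureq::AgentBuilder::new().build();
    let worker_agent = agent.clone();

    let handle = thread::spawn(move || {
        // This address never answers, so the call blocks indefinitely.
        match worker_agent.get("http://0.0.0.1/").call() {
            Ok(resp) => println!("status: {}", resp.status()),
            Err(e) => println!("request ended: {}", e),
        }
    });

    thread::sleep(Duration::from_secs(1));

    // Desired (hypothetical) API: unblock the pending request so the
    // program can exit cleanly. No such method exists today.
    // agent.shutdown();

    // Without a shutdown mechanism, this join hangs along with the request.
    handle.join().unwrap();
}
```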
An interesting problem! The first couple of solutions that occur to me:
However, neither of these is really satisfying or clean. To your proposed solution of having the agent shut down all extant requests: with the current design that's a little tricky, since the agent (and specifically the agent's pool) does not retain ownership of the TcpStream once it's handed off to the user for reading. Instead, the TcpStream is wrapped in several layers before the user sees it.

There's another problem: a request consists of a DNS lookup, a TCP connect, and TCP traffic. Shutting down the stream would only interrupt the TCP traffic; the earlier phases have to be bounded by timeouts.

And unfortunately, even timeouts are not perfect: we use ToSocketAddrs to do DNS resolution. Under the hood that uses system APIs that don't provide a nice timeout mechanism, so at present timeouts don't strictly apply to DNS lookups. Instead, the system resolver's default DNS lookup timeout comes into play. We may be able to solve this by maintaining a thread pool to perform DNS lookups on, but we don't do that at present.
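For reference, a sketch of what the timeout mitigation looks like with the ureq 2.x AgentBuilder (with the caveat above that DNS resolution is not strictly bounded by these settings):

```rust
use std::time::Duration;

// Build an agent whose requests give up quickly instead of hanging.
// Note: the DNS lookup phase is still governed by the system resolver's
// own timeout, not by these settings.
fn build_agent() -> ureq::Agent {
    ureq::AgentBuilder::new()
        .timeout_connect(Duration::from_secs(5)) // cap the TCP connect
        .timeout(Duration::from_secs(30))        // cap the whole request
        .build()
}
```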
I'm doing that now in some other parts of the program, and it produces an annoying 5 second delay at exit. Here, I'm doing a long HTTP poll for a push-type system. (The other end is not mine, so I don't define the interface.) I have to make an HTTP request which will either return data at some point or, after 20-30 seconds or so of no events, return an HTTP status, after which there's another poll. A 30 second delay at shutdown is unacceptable. So I have to do something.
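A sketch of that long-poll loop, assuming a ureq 2.x agent and a caller-supplied poll URL (both placeholders here):

```rust
// Repeatedly issue a long-poll request. Each call blocks until the server
// either returns data or, after 20-30 seconds of no events, returns a
// status code, at which point we poll again.
fn poll_loop(agent: &ureq::Agent, url: &str) {
    loop {
        match agent.get(url).call() {
            Ok(resp) => {
                // Got an event; process it, then poll again.
                let body = resp.into_string().unwrap_or_default();
                println!("event: {}", body);
            }
            // ureq 2.x reports non-2xx statuses as Error::Status; treat
            // that as "no events this round" and poll again.
            Err(ureq::Error::Status(_code, _resp)) => continue,
            Err(e) => {
                // Transport-level failure: give up (or back off and retry).
                eprintln!("poll failed: {}", e);
                break;
            }
        }
    }
}
```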
Ah. Thanks. I realize it's a pain. Anything that can be done in that direction would be appreciated.
Here, DNS delay isn't a problem - the other end is going to be there if we get far enough to start the long poll.
That's what I just wrote code to do, as shown above, using a crossbeam-channel that gets closed on drop. The polling thread is still stuck in ureq, but the thread waiting on the channel sees the close and cleans up. So that works out.

(What I'm doing is a metaverse client. It has many open UDP and HTTP connections to multiple servers. If the program aborts without a clean shutdown, you have an avatar frozen in a shared virtual world, or even moving in an uncontrolled vehicle. The server side will eventually time out and force a logout, but that takes over a minute, and until it happens, users cannot log in again, which annoys them. A clean logout produces a nice little teleporting effect seen by others, and users expect that.)
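A sketch of that pattern, assuming crossbeam-channel: the main program holds a Sender, and dropping it makes recv() in a watcher thread return an error, which serves as the shutdown signal (names here are illustrative, not from the original code):

```rust
use crossbeam_channel::{unbounded, Sender};
use std::thread;

// Holding this struct keeps the channel open; dropping it closes the
// channel, which the watcher thread treats as "shut down now".
struct ShutdownHandle {
    _keep_alive: Sender<()>,
}

fn spawn_shutdown_watcher() -> ShutdownHandle {
    let (tx, rx) = unbounded::<()>();
    thread::spawn(move || {
        // recv() blocks until all Senders are dropped, then returns Err.
        let _ = rx.recv();
        // The polling thread may still be stuck inside ureq, but this
        // thread is free to send the clean logout and tidy up.
        println!("shutdown requested; cleaning up");
    });
    ShutdownHandle { _keep_alive: tx }
}
```

When the handle is dropped at program exit, the watcher runs its cleanup even though the long poll has not returned.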
Is this still an issue in ureq 3.x?
I get 10-15 second delays when the user suddenly closes my program because tasks are waiting for remote servers to answer. I'd like to have some way to force a fast shutdown. The underlying TcpStream understands shutdown, but there is no way to do that from outside ureq.
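For context, this is the std shutdown being referred to; since it takes &self, it can be called from another thread while a read on the same socket is blocked, and the blocked read then typically returns right away:

```rust
use std::io;
use std::net::{Shutdown, TcpStream};

// Shut down both halves of the connection. A read blocked on this socket
// in another thread generally returns immediately (with Ok(0) or an error).
fn force_close(stream: &TcpStream) -> io::Result<()> {
    stream.shutdown(Shutdown::Both)
}
```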
Since Agent can't be accessed from multiple threads, there's no obvious way to shut down an agent. But it's possible to share the Pool, and I'd like to be able to tell the Pool to shut down everything. Thanks.
(I have a multi-threaded program, and I'm trying to stay away from "async", because the combination of Tokio and threading gets really complicated.)