
Add OkHttpClient#close method #3372

Closed
1 task done
juretta opened this issue May 24, 2017 · 16 comments

@juretta

juretta commented May 24, 2017

What kind of issue is this?

  • Feature Request. Start by telling us what problem you’re trying to solve. Often a solution
    already exists! Don’t send pull requests to implement new features without first getting our
    support. Sometimes we leave features out on purpose to keep the project small.

We have been writing helper methods to properly close/shutdown an OkHttpClient.
The Javadoc on OkHttpClient describes how to do this, but ideally OkHttpClient would expose a close() method that does it correctly (and OkHttpClient should probably implement java.io.Closeable as well).

Are there any reasons there isn't? Otherwise I'd be happy to raise a PR adding this.

@swankjesse
Collaborator

We want to encourage users to share the connection pool and dispatcher between clients. But if you do, you can't just tear everything down.

@juretta
Author

juretta commented May 24, 2017

That's a fair use case. Would it make sense to provide a utility method that correctly cleans up a client (for clients where the lifetime of the client, dispatcher and pool match)?

@swankjesse
Collaborator

If you don’t mind, lemme step back a bit. Why do you want to close your OkHttpClient?

@juretta
Author

juretta commented May 29, 2017

Why do you want to close your OkHttpClient?

We're using OkHttp in a service environment (i.e. not Android/Mobile). We're using one client with its own connection pool per downstream service.

We want to cleanup the resources attached to the client in a couple of places:

  • In tests where we create a new client per test case
  • In integration tests where we also check that we don't leak threads
  • For clients where the lifetime of the client does not match the lifetime of the application (e.g. where we create a new client in certain cases and abandon the old one)

@swankjesse
Collaborator

Got it. Creating a new OkHttpClient is relatively expensive and I’m reluctant to make it easy to create and destroy clients because it may encourage inefficient use cases. You’re free to create and destroy clients for each test, but I want you to write the code to do set up and tear down so that you can take responsibility for the performance cost of that.

(The performance cost is basically the cost of creating an ExecutorService for the connection pool and another for the Dispatcher. I’ve found in my own server applications that the cost of creating and destroying thread pools is a substantial fraction of the overall cost of running a test.)
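The overhead being described here can be seen with the JDK alone. Below is a minimal sketch (hypothetical class name, plain java.util.concurrent, no OkHttp involved) that times repeated executor-service creation and teardown:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadPoolChurn {
  // Create and destroy `cycles` executor services, returning elapsed milliseconds.
  public static long churn(int cycles) throws InterruptedException {
    long start = System.nanoTime();
    for (int i = 0; i < cycles; i++) {
      ExecutorService pool = Executors.newCachedThreadPool();
      pool.execute(() -> { });                 // force at least one thread to spin up
      pool.shutdown();
      pool.awaitTermination(1, TimeUnit.SECONDS);
    }
    return (System.nanoTime() - start) / 1_000_000;
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println("100 create/destroy cycles: " + churn(100) + " ms");
  }
}
```

Each cycle pays for platform-thread startup and shutdown; sharing a single Dispatcher and ConnectionPool across clients amortizes exactly this cost.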

@juretta
Author

juretta commented May 30, 2017

but I want you to write the code to do set up and tear down so that you can take responsibility for the performance cost of that.

I'm not quite sure how making me write code to properly close a client makes me take responsibility for the performance cost of creating new clients all the time.

The Javadoc on OkHttpClient clearly states that

OkHttpClients should be shared

and that creating new clients all over the place should be avoided.

So IMHO that fact is already clearly communicated.

From our own usage I've observed that we have several utility methods to close an OkHttpClient (admittedly in a way that assumes one client owns one pool and one dispatcher). They vary slightly in how thoroughly they shut down the executor service and in whether they also close the cache (if set). Providing a canonical way to fully shut down an OkHttpClient, essentially codifying the description in the "Shutdown isn't necessary" Javadoc section, seemed beneficial to me. Apparently it isn't, and that's fine.
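For illustration, a utility of the kind described might look like the following. This is a hypothetical sketch, not OkHttp API: in real usage the three parts would come from client.dispatcher().executorService(), client.connectionPool()::evictAll, and client.cache(), but they are passed in directly here so the example compiles against the JDK alone:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public final class ClientShutdown {
  // Hypothetical helper; assumes the client exclusively owns its dispatcher
  // executor, connection pool, and cache (i.e. none of them are shared).
  public static void shutdown(ExecutorService dispatcherExecutor,
                              Runnable evictConnections,
                              Closeable cache)
      throws IOException, InterruptedException {
    dispatcherExecutor.shutdown();                      // stop accepting new calls
    dispatcherExecutor.awaitTermination(5, TimeUnit.SECONDS);
    evictConnections.run();                             // drop idle connections
    if (cache != null) {
      cache.close();                                    // the cache is optional
    }
  }
}
```

The null check on the cache matters because a client configured without a cache has none to close.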

I'll close this. Thanks for your feedback.

@juretta juretta closed this as completed May 30, 2017
@swankjesse
Collaborator

For maximum efficiency, you'll have one ConnectionPool and one Dispatcher in your process. These can be shared by many OkHttpClient instances.

If you shut down the Dispatcher, none of the OkHttpClient instances that use it will work. The same goes for shutting down the ConnectionPool.

An analogy: imagine unplugging your residential modem and WiFi router when you're done using Firefox. It's easy enough to plug them back in when you need them. And it's handy to know what to shut off if you're not going to use Firefox ever again.

But I still think it’d be a misfeature for Firefox to shut down these devices when it exits, or even to offer an option to do so.

@bpappin

bpappin commented Oct 11, 2017

Here is another use case.

We are using OkHttpClient in a mobile environment, and we need to clean up the client any time a user logs out, to make sure we don't retain any keys, sessions, or connections while the user is not authenticated.

We don't just want a "close()" method, we want a "destroy()" method, so we know it's not usable.

@swankjesse
Collaborator

The problem with any OkHttpClient.close() method is that it assumes the closed OkHttpClient doesn’t share state with other OkHttpClient instances. In practice we expect applications to share dispatchers, connection pools, and cache between clients. It’d be weird if closing one client broke another.

Instead I recommend closing the three things the docs describe:

     client.dispatcher().executorService().shutdown();
     client.connectionPool().evictAll();
     client.cache().close();

That way if your application has a shared cache but not a shared connection pool or dispatcher you can close all the parts you want to close.

@bpappin

bpappin commented Oct 11, 2017

Is that going to be enough to ensure I don't get a memory leak, when I toss the OkHttpClient into the ether?

@swankjesse
Collaborator

Mostly.

There’s an executor service in the connection pool that stays alive for 60 seconds, so if you’re creating and discarding OkHttpClients more often than once per minute, these will eventually stack up.

My recommendation is to create a single OkHttpClient and to either reuse it, or to build derivative OkHttpClient instances from it by calling .newBuilder().

@szakal

szakal commented Feb 13, 2020

Hi.

Regarding calling:
client.connectionPool().evictAll();
I guess it will remove only idle connections, right? Will the active connections remain in the pool for a long time (depending on the pool configuration)?

In my case, I keep connections open for 24 hours (due to some business requirements)
new ConnectionPool(1, 24, TimeUnit.HOURS).

Should I wait until all connections are idle to effectively shut down the pool?

@swankjesse
Collaborator

Connections currently carrying requests and responses won't be evicted. Everything else will be, whether or not the pool duration has elapsed.

@dadoonet

Hey team

I went to this issue as I was looking for a way to definitively stop the client and its underlying resources.
My use case is not a business one, just a technical one, so maybe I shouldn't try to fix it.

I'm using 4.12.0.

I'm using the RandomizedTesting framework for my tests. It automatically detects zombie threads:

@RunWith(RandomizedRunner.class)
@ThreadLeakScope(ThreadLeakScope.Scope.SUITE)
@ThreadLeakLingering(linger = 1000) // 1s lingering
public class ZombieOkHttpIT {

    @Test
    public void testZombie() throws Exception {
        OkHttpClient httpClient = new OkHttpClient.Builder()
                .connectionPool(new ConnectionPool(1, 100, TimeUnit.MILLISECONDS))
                .connectTimeout(100, TimeUnit.MILLISECONDS)
                .writeTimeout(100, TimeUnit.MILLISECONDS)
                .readTimeout(100, TimeUnit.MILLISECONDS)
                .callTimeout(100, TimeUnit.MILLISECONDS)
                .build();
        Request request = new Request.Builder()
                .url("http://127.0.0.1:8080")
                .build();

        try (Response response = httpClient.newCall(request).execute()) {
            System.out.println("response = " + response.body().string());
        } catch (ConnectException e) {
            System.out.println("You need to start a web server first, e.g.:");
            System.out.println("docker run -p 8080:80 httpd");
        }

        httpClient.dispatcher().executorService().shutdown();
        httpClient.connectionPool().evictAll();
    }
}

This gives:

response = <html><body><h1>It works!</h1></body></html>

sept. 19, 2024 11:18:26 AM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
WARNING: Will linger awaiting termination of 2 leaked thread(s).
sept. 19, 2024 11:18:27 AM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
SEVERE: 2 threads leaked from SUITE scope at fr.pilato.test.zombie.okhttp.ZombieOkHttpIT: 
   1) Thread[id=32, name=Okio Watchdog, state=TIMED_WAITING, group=TGRP-ZombieOkHttpIT]
        at java.base/jdk.internal.misc.Unsafe.park(Native Method)
        at java.base/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:269)
        at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1847)
        at okio.AsyncTimeout$Companion.awaitTimeout$okio(AsyncTimeout.kt:308)
        at okio.AsyncTimeout$Watchdog.run(AsyncTimeout.kt:186)
   2) Thread[id=33, name=OkHttp TaskRunner, state=TIMED_WAITING, group=TGRP-ZombieOkHttpIT]
        at java.base/jdk.internal.misc.Unsafe.park(Native Method)
        at java.base/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:410)
        at java.base/java.util.concurrent.LinkedTransferQueue$DualNode.await(LinkedTransferQueue.java:452)
        at java.base/java.util.concurrent.SynchronousQueue$Transferer.xferLifo(SynchronousQueue.java:194)
        at java.base/java.util.concurrent.SynchronousQueue.xfer(SynchronousQueue.java:233)
        at java.base/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:336)
        at java.base/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        at java.base/java.lang.Thread.run(Thread.java:1583)
sept. 19, 2024 11:18:27 AM com.carrotsearch.randomizedtesting.ThreadLeakControl tryToInterruptAll
INFO: Starting to interrupt leaked threads:
   1) Thread[id=32, name=Okio Watchdog, state=TIMED_WAITING, group=TGRP-ZombieOkHttpIT]
   2) Thread[id=33, name=OkHttp TaskRunner, state=TIMED_WAITING, group=TGRP-ZombieOkHttpIT]
sept. 19, 2024 11:18:30 AM com.carrotsearch.randomizedtesting.ThreadLeakControl tryToInterruptAll
SEVERE: There are still zombie threads that couldn't be terminated:
   1) Thread[id=32, name=Okio Watchdog, state=TIMED_WAITING, group=TGRP-ZombieOkHttpIT]
        at java.base/jdk.internal.misc.Unsafe.park(Native Method)
        at java.base/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:269)
        at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1847)
        at okio.AsyncTimeout$Companion.awaitTimeout$okio(AsyncTimeout.kt:308)
        at okio.AsyncTimeout$Watchdog.run(AsyncTimeout.kt:186)
   2) Thread[id=33, name=OkHttp TaskRunner, state=TIMED_WAITING, group=TGRP-ZombieOkHttpIT]
        at java.base/jdk.internal.misc.Unsafe.park(Native Method)
        at java.base/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:410)
        at java.base/java.util.concurrent.LinkedTransferQueue$DualNode.await(LinkedTransferQueue.java:452)
        at java.base/java.util.concurrent.SynchronousQueue$Transferer.xferLifo(SynchronousQueue.java:194)
        at java.base/java.util.concurrent.SynchronousQueue.xfer(SynchronousQueue.java:233)
        at java.base/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:336)
        at java.base/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        at java.base/java.lang.Thread.run(Thread.java:1583)

com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at fr.pilato.test.zombie.okhttp.ZombieOkHttpIT (same two stack traces as above)

	at __randomizedtesting.SeedInfo.seed([DEB6652E3A8E5B61]:0)


com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated (same two stack traces as above)

	at __randomizedtesting.SeedInfo.seed([DEB6652E3A8E5B61]:0)


Process finished with exit code 255

I tried to use the default client and define some timeouts (as per the shared code), but I can't find a way to properly close everything. I tried what's written in the documentation:

httpClient.dispatcher().executorService().shutdown();
httpClient.connectionPool().evictAll();

But still no luck.

Any idea of what should be done to close the Okio Watchdog and OkHttp TaskRunner threads?
Should I open another issue about this?

Full demo code available at https://github.com/dadoonet/demo-ssh-mino/blob/master/src/test/java/fr/pilato/test/zombie/okhttp/ZombieOkHttpIT.java

@JakeWharton
Collaborator

There are many issues discussing this in the past (such as this one!).

For Okio, the thread is global and shuts itself down automatically after a timeout. See square/okio#738.

I can't remember if the task runner is global or per-instance within OkHttp, but I would expect it to behave similarly.

@dadoonet

But I tried adding a wait (just for test purposes), and after more than a minute the threads were still there.
How should I create the client so that it eventually closes all its threads?
