
fix #4988: ensuring the previous response is closed #4990

Merged · 1 commit merged into fabric8io:master on Apr 7, 2023

Conversation

@shawkins (Contributor) commented Mar 22, 2023

Description

Fix #4988

The prior location of the retry code did not require explicit closure because it sat on top of consuming the whole auto-closed response. Now that it has been moved, we have to account for the closure before issuing the retry.
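
As context for the fix, here is a minimal, self-contained sketch of the pattern, using the JDK's java.net.http client rather than the project's own HttpClient abstraction (the URL, status check, and retry condition are placeholders): before a streaming response is retried, the previous response body must be closed so the underlying connection is released.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RetryWithClose {
  public static void main(String[] args) throws IOException, InterruptedException {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com")).build();

    // First attempt: the body is an InputStream backed by the connection.
    HttpResponse<InputStream> previous =
        client.send(request, HttpResponse.BodyHandlers.ofInputStream());

    if (previous.statusCode() >= 500) {
      // The previous response was never consumed, so close it explicitly
      // before retrying; otherwise the connection behind it is leaked.
      previous.body().close();
      HttpResponse<InputStream> retried =
          client.send(request, HttpResponse.BodyHandlers.ofInputStream());
      try (InputStream body = retried.body()) {
        body.readAllBytes(); // consume the retried response
      }
    } else {
      try (InputStream body = previous.body()) {
        body.readAllBytes(); // normal path: consume and auto-close
      }
    }
  }
}
```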

@manusa @rohanKanojia should we put in reference tracking cleanup as a fail safe?

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • Feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Chore (non-breaking change which doesn't affect codebase;
    test, version modification, documentation, etc.)

Checklist

  • Code contributed by me aligns with current project license: Apache 2.0
  • I have added a CHANGELOG entry regarding this change
  • I have implemented unit tests to cover my changes
  • I have added/updated the javadocs and other documentation accordingly
  • No new bugs, code smells, etc. in SonarCloud report
  • I tested my code in Kubernetes
  • I tested my code in OpenShift

@shawkins changed the title from "fix #4988: ensuring the previous" to "fix #4988: ensuring the previous response is closed" on Mar 22, 2023
@manusa (Member) commented Mar 23, 2023

We keep hitting this issue again and again. I'm not sure if reference tracking is the way to go, but I do think that we need to mitigate this with something more permanent and scalable.

@manusa (Member) commented Mar 23, 2023

The current fix seems to stall the tests at some point. I'm stopping the CI workflow jobs now; they've been running for more than 5 hours.

OK, I just saw that Rohan retriggered them after the previous iteration was cancelled after running for more than 6 hours.

@rohanKanojia (Member) commented

I've restarted these a few times, but I keep seeing failures.

@shawkins (Contributor, Author) commented

> but I do think that we need to mitigate this with something more permanent and scalable

Right, while all current cases are now covered (interceptors, retry, token refresh requests, etc.), we may hit this again if the logic is moved. There's also still the possibility for users to obtain exec and log streams, start informers, and not close them before GC is performed - OkHttp will warn about this and ultimately close the connection based upon its reference tracking; I'm not entirely sure what the behavior of the other clients is.

Other than our own reference tracking the options are:

  1. a scheduled close - but that would have to be modified for any streaming cases (upload or download), and the logic down at the StandardHttpClient layer isn't currently aware of them.
  2. refactor to disallow simultaneous calls within the same invocation of an httpclient instance method - this would have caught the issue with interceptors and the retry logic, but of course won't do anything about the user level issue of opening things and never closing them. Since the existing interceptors work off of new or derived httpclient instances, this would not prevent token refreshes from working.
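
For what the reference-tracking fail-safe mentioned above could look like, here is an illustrative sketch only (not the fabric8 implementation; the class and field names are invented): a java.lang.ref.Cleaner registration that closes a response the caller forgot to close once it becomes unreachable, similar in spirit to OkHttp's own leak detection.

```java
import java.io.Closeable;
import java.io.IOException;
import java.lang.ref.Cleaner;

// Hypothetical wrapper: tracks a response body and closes it at GC time
// if the caller never closed it explicitly.
public final class TrackedResponse implements AutoCloseable {
  private static final Cleaner CLEANER = Cleaner.create();

  private final CloseAction action;
  private final Cleaner.Cleanable cleanable;

  public TrackedResponse(Closeable body) {
    // The action must not reference `this`, otherwise the response never
    // becomes phantom-reachable and the cleaner never runs.
    this.action = new CloseAction(body);
    this.cleanable = CLEANER.register(this, action);
  }

  @Override
  public void close() {
    action.explicit = true; // mark as properly closed by the caller
    cleanable.clean();      // runs the action at most once
  }

  private static final class CloseAction implements Runnable {
    final Closeable body;
    volatile boolean explicit;

    CloseAction(Closeable body) {
      this.body = body;
    }

    @Override
    public void run() {
      if (!explicit) {
        // Reached via the cleaner thread: the caller leaked the response.
        System.err.println("HTTP response was never closed; closing as a fail-safe");
      }
      try {
        body.close();
      } catch (IOException ignored) {
        // best effort only
      }
    }
  }
}
```

The trade-off is that cleanup only happens whenever GC notices the object is unreachable, so it is a fail-safe and a leak detector rather than a replacement for closing responses explicitly.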

sonarqubecloud bot commented Apr 6, 2023

Kudos, SonarCloud Quality Gate passed!

  • Bugs: 0 (rating A)
  • Vulnerabilities: 0 (rating A)
  • Security Hotspots: 0 (rating A)
  • Code Smells: 0 (rating A)
  • Coverage: 100.0%
  • Duplication: 0.0%

@manusa manusa merged commit 1020516 into fabric8io:master Apr 7, 2023
Successfully merging this pull request may close these issues:

  • OkHttp reports leaked connections with versions 6.5.0 & 6.5.1 (#4988)