Potential connection leak #147

Closed
minborg opened this issue Jun 7, 2021 · 7 comments

minborg commented Jun 7, 2021

A customer reports connections being leaked with Spring Boot, Postgres, and Hikari.

minborg added the bug label Jun 7, 2021

minborg commented Jun 8, 2021

Customer reports:

" I solved the issue by manual open and close jpastream every query. Connection leak don't happening and everything on server is good working . But I hope you update version for fix this issue (connection leak)."

julgus added and removed the next label Sep 1, 2022
rstechgeek commented

Yes, I also tried the latest 1.1.0 and the connection leak is still there with Spring Boot and Hikari.

julgus commented Jun 7, 2023

Considering these reports are old, they may or may not still be relevant, but any clues on how to reproduce this would be helpful. For now, I have tried running the jpastreamer-demo Spring Boot application with Hikari: 100,000 consecutive requests all succeed without any issues. I am not closing JPAStreamer in between calls.

What symptoms do you see that indicate that there is a connection leak?
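For reference, one way to surface such symptoms is HikariCP's built-in leak detection: setting a leak detection threshold makes Hikari log a warning whenever a connection is held longer than the threshold without being returned to the pool. A minimal sketch, assuming a local Postgres database with the demo schema:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class LeakDetectionConfig {

    public static HikariDataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/sakila"); // assumed local database
        config.setUsername("postgres");                               // assumed credentials
        config.setPassword("postgres");
        config.setMaximumPoolSize(20);
        // Log a connection-leak warning if a connection is held for more than
        // 10 seconds without being returned to the pool.
        config.setLeakDetectionThreshold(10_000);
        return new HikariDataSource(config);
    }
}
```

In a Spring Boot application the same setting is available as the property spring.datasource.hikari.leak-detection-threshold.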

julgus commented Jun 7, 2023

Okay, I found the crack. The single connection leak that springs from a regular JPAStreamer Stream does not block the application and only shows up once, in the beginning. Interestingly, no connection leak appears when I only perform joined queries, so I assume the issue is associated with JPAStreamer's caching of simple StreamConfigurations (no joins or projections).

In contrast, triggering a reset of JPAStreamer on every request (jpaStreamer.resetStreamer(Film.class)) results in every connection being leaked, and the application becomes blocked after 21 requests (with a maximum of 20 connections in the pool).
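For the record, the reset-per-request reproduction looks roughly like the sketch below. The controller is hypothetical; the injected JPAStreamer bean and the Film entity assume the Spring Boot demo setup.

```java
import com.speedment.jpastreamer.application.JPAStreamer;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;
import java.util.stream.Collectors;

@RestController
public class FilmController {

    private final JPAStreamer jpaStreamer;

    public FilmController(JPAStreamer jpaStreamer) {
        this.jpaStreamer = jpaStreamer;
    }

    @GetMapping("/films")
    public List<Film> films() {
        // Dropping the cached Streamer on every request forces a new Streamer
        // (and a new EntityManager) per call; the previous EntityManager is
        // never closed, so each request leaks one Hikari connection.
        jpaStreamer.resetStreamer(Film.class);
        return jpaStreamer.stream(Film.class)
                .limit(10)
                .collect(Collectors.toList());
    }
}
```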

I'll keep digging.

julgus commented Jun 7, 2023

Disabling JPAStreamer's internal Stream cache resolves the issue. It would, however, be nice to get to the bottom of why that cache is problematic.

julgus self-assigned this Jun 7, 2023

julgus commented Jun 7, 2023

PLEASE SEE UPDATED ANALYSIS IN NEXT COMMENT

My takeaway is this: we cache simple StreamConfigurations, such as StreamConfiguration.of(Film.class), in a Map<StreamConfiguration<?>, Streamer<?>>. The first call to jpaStreamer.stream(Film.class) automatically creates such a StreamConfiguration under the hood. Repeated calls to .stream(Film.class) reuse the StreamConfiguration from the first call, and this results in the Stream not being closed.

The reason I only see one connection leak is probably that the loop of consecutive calls to my local endpoint is not enough load to trigger the use of more than one connection. The reason the application never blocks is that the first connection is not really lost: it keeps serving requests but is never released back to the connection pool between requests, and it is therefore registered as a leak.

Resetting JPAStreamer using jpaStreamer.resetStreamer(Film.class) merely removes the StreamConfigurations from the Map, forcing the creation of a new StreamConfiguration on the next request. However, the previous connection is never closed, which explains why every connection is leaked. This is clearly a problem.
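To illustrate the behavior described above, here is a deliberately simplified sketch. It is not the actual JPAStreamer source, and the real cache is keyed on the StreamConfiguration rather than the entity class, but it captures why resetting leaks connections.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified illustration of the caching behavior described above.
final class StreamerCacheSketch {

    private final Map<Class<?>, CachedStreamer> cache = new ConcurrentHashMap<>();

    CachedStreamer streamerFor(Class<?> entityClass) {
        // Simple configurations are cached, so repeated calls reuse the same
        // Streamer and hence the same EntityManager and pooled connection.
        return cache.computeIfAbsent(entityClass, c -> new CachedStreamer());
    }

    void resetStreamer(Class<?> entityClass) {
        // Current behavior: the entry is removed, but the evicted Streamer's
        // EntityManager is never closed, so its connection is leaked.
        cache.remove(entityClass);
    }

    static final class CachedStreamer {
        // Holds (via a Renderer) an EntityManager, and thereby a pooled connection.
    }
}
```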

julgus commented Jun 15, 2023

My previous comment is not entirely wrong, but returning to this issue with fresh eyes helped me see the underlying problem more clearly.

Hikari connections are coupled to the lifecycle of an EntityManager. From JPAStreamer's perspective, Hikari will not release a connection until we close the EntityManager we have obtained.

In the case where an application repeatedly calls e.g. jpaStreamer.stream(Film.class), JPAStreamer caches the underlying Streamer associated with the Film entity to save resources. The Streamer holds a reference to a Renderer, which in turn holds a reference to an EntityManager. As we reuse the same Streamer over and over again, we also reuse the same EntityManager, and it stays active until we close down JPAStreamer altogether (or reset the cache, see the final paragraph). This also means that Hikari reserves a connection for each Streamer in the cache, which creates leaks if a user creates a bunch of Streamers that are then never reused. Hikari also reports the connection as a leak even while it is in active use, since it is never returned to the pool.

Uncached Streamers associated with more complex StreamConfigurations (joins, projections, etc.) close the underlying EntityManager upon completion, as the Streamer is discarded, so these are leakproof. (They are exempted from the cache simply because the likelihood of reusing a more complex configuration is lower, which of course may not always hold if a single joined query is used repeatedly.)
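For clarity, such an uncached, more complex configuration is for example a joined query along the lines of the sketch below (the generated Film$ metamodel and its actors field are assumptions from the demo project):

```java
import com.speedment.jpastreamer.application.JPAStreamer;
import com.speedment.jpastreamer.streamconfiguration.StreamConfiguration;

import java.util.List;
import java.util.stream.Collectors;

public class JoinedQueryExample {

    public List<Film> filmsWithActors(JPAStreamer jpaStreamer) {
        // A joined StreamConfiguration like this is not cached, so the Streamer
        // (and its EntityManager) is discarded and closed once the Stream completes.
        return jpaStreamer.stream(StreamConfiguration.of(Film.class).joining(Film$.actors))
                .limit(10)
                .collect(Collectors.toList());
    }
}
```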

The way I see it, it is often desirable to reuse an EntityManager to save resources, so the cache is motivated. However, we need to offer the possibility to manually close the underlying EntityManager to release the connection, or alternatively to disable the cache altogether.

A while ago I introduced the option of manually removing a Streamer instance from the cache by calling jpaStreamer.resetStreamer(Class<?>... entityClasses). What I overlooked at that point was that the underlying EntityManager is never closed. Thus the current implementation of this method does not solve the problem but rather leaks even more connections. This will be fixed in my next commit.
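A minimal sketch of the direction such a fix could take, assuming the cached Streamer (or whatever holds the EntityManager) exposes a way to close it; this is illustrative, not the actual patch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a resetStreamer that closes what it evicts instead of merely
// forgetting about it. Names are illustrative only.
final class ClosingStreamerCacheSketch {

    interface CloseableStreamer extends AutoCloseable {
        @Override
        void close(); // closes the underlying EntityManager, releasing the connection
    }

    private final Map<Class<?>, CloseableStreamer> cache = new ConcurrentHashMap<>();

    void resetStreamer(Class<?>... entityClasses) {
        for (Class<?> entityClass : entityClasses) {
            CloseableStreamer evicted = cache.remove(entityClass);
            if (evicted != null) {
                // Fixed behavior: closing the evicted Streamer closes its
                // EntityManager, so Hikari gets the connection back.
                evicted.close();
            }
        }
    }
}
```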
