
Memory leak in ResteasyReactiveOutputStream #19257

Closed
johnoliver opened this issue Aug 5, 2021 · 5 comments
Labels
area/rest kind/bug Something isn't working
Milestone

Comments

@johnoliver
Contributor

Describe the bug

We are seeing quite a serious memory leak that seems to ultimately be coming from ResteasyReactiveOutputStream:

Caused by: java.lang.OutOfMemoryError: Direct buffer memory
	at java.base/java.nio.Bits.reserveMemory(Bits.java:175)
	at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:118)
	at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:317)
	at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:755)
	at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:731)
	at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:247)
	at io.netty.buffer.PoolArena.allocate(PoolArena.java:215)
	at io.netty.buffer.PoolArena.allocate(PoolArena.java:147)
	at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:356)
	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:173)
	at org.jboss.resteasy.reactive.server.vertx.ResteasyReactiveOutputStream.write(ResteasyReactiveOutputStream.java:211)

Enabling -Dio.netty.leakDetection.level=paranoid, we get:

ERROR [io.net.uti.ResourceLeakDetector] (vert.x-eventloop-thread-4) LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
Created at:
        io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:402)
        io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
        io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:173)
        org.jboss.resteasy.reactive.server.vertx.ResteasyReactiveOutputStream.write(ResteasyReactiveOutputStream.java:211)
        java.base/sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
        java.base/sun.nio.cs.StreamEncoder.implClose(StreamEncoder.java:337)
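For reference, the same leak detector can also be enabled programmatically, as long as the property is set before Netty's ResourceLeakDetector class is initialized. A minimal sketch (the class name here is made up; only the io.netty.leakDetection.level property is real):

```java
public class EnableLeakDetection {
    public static void main(String[] args) {
        // Equivalent to passing -Dio.netty.leakDetection.level=paranoid on the
        // command line; must run before Netty reads the property at class init.
        System.setProperty("io.netty.leakDetection.level", "paranoid");
        System.out.println(System.getProperty("io.netty.leakDetection.level"));
    }
}
```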

Looking at the code at https://github.com/quarkusio/quarkus/blob/main/independent-projects/resteasy-reactive/server/vertx/src/main/java/org/jboss/resteasy/reactive/server/vertx/ResteasyReactiveOutputStream.java#L211, I would like to understand how this is supposed to work. I believe buffers obtained from PooledByteBufAllocator.DEFAULT.directBuffer() should ultimately be freed via a buffer.release() call, but from my reading of the code this only happens on the exception path; under normal execution, release is never called. My questions are:

  • Should there be a release call under normal circumstances?
  • If so, where is the code responsible for making that call?
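To illustrate the ownership question, here is a minimal toy model of Netty-style reference counting (ToyByteBuf, leakyWrite, and safeWrite are hypothetical sketches, not the Quarkus code): whoever allocates a reference-counted buffer must either call release() on every path or explicitly hand ownership to a component that will.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for a pooled Netty ByteBuf: starts with refCnt == 1
class ToyByteBuf {
    private final AtomicInteger refCnt = new AtomicInteger(1);
    int refCnt() { return refCnt.get(); }
    boolean release() { return refCnt.decrementAndGet() == 0; }
}

public class LeakSketch {

    // Leaky pattern: the buffer is allocated but release() is never called
    // on the normal path, so the pool never gets its memory back.
    static ToyByteBuf leakyWrite() {
        ToyByteBuf buf = new ToyByteBuf();
        // ... fill buf and flush it to the channel ...
        return buf; // refCnt is still 1 and nobody is on the hook to release it
    }

    // Safe pattern: release in finally, so the buffer is returned to the pool
    // on both the normal and the exception path.
    static boolean safeWrite() {
        ToyByteBuf buf = new ToyByteBuf();
        boolean freed = false;
        try {
            // ... fill buf and flush it to the channel ...
        } finally {
            freed = buf.release(); // true when the count reaches zero
        }
        return freed;
    }

    public static void main(String[] args) {
        System.out.println(leakyWrite().refCnt()); // prints 1: leaked
        System.out.println(safeWrite());           // prints true: released
    }
}
```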

Expected behavior

Not to leak memory

Actual behavior

Leaks memory

How to Reproduce?

Will look at producing a reproducer if needed

Output of uname -a or ver

No response

Output of java -version

No response

GraalVM version (if different from Java)

No response

Quarkus version or git rev

2.1.0.Final

Build tool (ie. output of mvnw --version or gradlew --version)

No response

Additional information

No response

@johnoliver johnoliver added the kind/bug Something isn't working label Aug 5, 2021
@quarkus-bot

quarkus-bot bot commented Aug 5, 2021

/cc @FroMage, @stuartwdouglas

@geoand
Contributor

geoand commented Aug 5, 2021

Thanks for reporting.

This is interesting as we've never seen such a leak. Is there an easy way to reproduce this?

@johnoliver
Contributor Author

@geoand https://github.com/johnoliver/quarkus-memoryleak-reproducer: run the tests on that project and you should see the Netty leak warnings. Run them with enough data and you will start to hit the OOM.

@geoand
Contributor

geoand commented Aug 5, 2021

Great, thanks!

I'll be on PTO for a few weeks, so hopefully one of the other folks will pick this up before I can check it out

@johnoliver
Contributor Author

Put a possible fix in as a PR. I believe this is only triggered when the app is under load and needs to return more data than fits in a single buffer; those specific circumstances may explain why you have not seen this issue before.
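The multi-buffer scenario described above can be sketched like this (Chunk and the method names are hypothetical illustrations, not the actual PR): when a payload spans several buffers, each filled buffer must be released or handed off before the next one is allocated, otherwise every chunk leaks under load.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical ref-counted chunk standing in for a pooled direct ByteBuf
class Chunk {
    final AtomicInteger refCnt = new AtomicInteger(1);
    void release() { refCnt.decrementAndGet(); }
}

public class MultiBufferSketch {
    static final int CHUNK_SIZE = 8;

    // Writes payloadLen bytes in CHUNK_SIZE slices; returns all allocated
    // chunks so the caller can inspect their reference counts afterwards.
    static List<Chunk> write(int payloadLen, boolean releaseEachChunk) {
        List<Chunk> allocated = new ArrayList<>();
        int remaining = payloadLen;
        while (remaining > 0) {
            Chunk chunk = new Chunk();      // allocate from the "pool"
            allocated.add(chunk);
            // ... copy up to CHUNK_SIZE bytes and flush to the channel ...
            remaining -= Math.min(CHUNK_SIZE, remaining);
            if (releaseEachChunk) {
                chunk.release();            // return it to the pool promptly
            }
            // without the release above, every chunk keeps refCnt == 1
        }
        return allocated;
    }

    static long leakedCount(List<Chunk> chunks) {
        return chunks.stream().filter(c -> c.refCnt.get() > 0).count();
    }

    public static void main(String[] args) {
        // 20 bytes over 8-byte chunks -> 3 allocations
        System.out.println(leakedCount(write(20, false))); // prints 3: all leaked
        System.out.println(leakedCount(write(20, true)));  // prints 0
    }
}
```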

@gsmet gsmet added this to the 2.1.2.Final milestone Aug 11, 2021