Add virtual-threads support in Quarkus #24942
Conversation
Great stuff!
I've added some comments
...daptor/deployment/src/main/java/io/quarkus/netty/loom/adaptor/NettyLoomAdaptorProcessor.java (resolved, outdated)
import io.quarkus.netty.deployment.MinNettyAllocatorMaxOrderBuildItem;
import io.smallrye.common.annotation.RunOnVirtualThread;

public class NettyLoomAdaptorProcessor {
I think we'll need some comments on what this does, otherwise it will be hard to maintain this ASM :)
Thanks so much for your comments @geoand.
I'm trying to comment this mess; would it make sense to display the actual Netty method and the desired one, so the reader can see where we're going?
It would definitely make sense to display the actual result of the transformation in addition to why we need it.
Hey @geoand, I added some comments explaining what we do.
If it looks good to you, I'll squash everything and rebase to make it merge-ready.
Great, thanks
...time/src/main/java/io/quarkus/resteasy/reactive/server/runtime/ResteasyReactiveRecorder.java (resolved, outdated)
...ain/java/org/jboss/resteasy/reactive/server/processor/ResteasyReactiveDeploymentManager.java (resolved, outdated)
...c/main/java/org/jboss/resteasy/reactive/server/handlers/VirtualThreadNonBlockingHandler.java (resolved, outdated)
Short note: I recommend not using "Loom" for class or option names. While very prominent now, the name is just an artifact of the OpenJDK development process and doesn't exist "in code". Also, it will soon be just an obscure historic detail (cf. "Coin" or "Jigsaw"). I'd propose
@nipafx Thanks! Yes, definitely. It's actually not about using Loom; it's to open some modules we need (JPMS stuff). The name was rushed and does not convey what it does (and fortunately, we will make this obscure option go away). @anavarr what about open-xxx-package (I can't remember the one we need, so replace xxx)? Also, can't we detect and pass the option without the user having to do it explicitly?
Quick disclaimer and questions about the "openLoom" flag: it is used for reflection. We open the java.lang package to be able to access the VirtualThreadBuilder class and use a custom executor (this was possible before, and is impossible now) to dispatch virtual threads on carrier threads.

Our hypothesis was that the current ForkJoinPool-like executor was going to spawn too many carrier threads and increase the memory footprint of the application. Recent benchmarks we did indicate that this choice didn't affect the memory footprint of the application but reduced its scalability.

We were thinking about switching back to the "default" executor, but I am afraid the API will change. Shall we keep the FixedThreadPool executor for now or switch to newVirtualThreadPerTaskExecutor?
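For context, the executor choice discussed here can be illustrated with the public API alone. This is a minimal sketch (Java 21+), not the PR's implementation; the custom-carrier-pool variant has no public equivalent:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadExecutors {
    public static void main(String[] args) throws Exception {
        // Public, stable API (Java 21+): one new virtual thread per task,
        // scheduled by the JDK's built-in ForkJoinPool-based scheduler.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            boolean onVirtual = executor.submit(() -> Thread.currentThread().isVirtual()).get();
            System.out.println(onVirtual); // true
        }
        // Supplying your own scheduler (the custom-executor variant discussed
        // above) has no public API and requires reflective access to
        // java.lang.Thread$Builder internals, hence flags like
        // --add-opens java.base/java.lang=ALL-UNNAMED
    }
}
```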
I'm neither a virtual thread expert nor a Quarkus expert, but I'd recommend not hacking into these systems on the unreported hypothesis that there may be a memory issue, particularly since some of these things can be controlled by the user with command line flags. From the
I hope we can start the journey towards virtual threads with small steps, each one allowing time for feedback to come in, instead of taking several steps at once.
@nipafx you're right. I guess it was a useful hack to explore different leads, but our experiments didn't give results good enough for us to continue down this path?
This is good to go from my perspective. If everything is also good for @cescoffier, let's squash and merge it.
All good for me!
- when an endpoint has the annotation @RunOnVirtualThread, its blockingHandler will use a special Executor spawning a new virtual thread for every request
- added an isRunOnVirtualThread method to check that @RunOnVirtualThread is possible (it must be on a @Blocking endpoint; the user is warned if the JDK they use to compile is not Loom-compliant)
- an endpoint with the @Transactional annotation will override the @RunOnVirtualThread annotation -> arbitrary choice of caution (@Transactional might use thread locals, for instance)
- the Loom-related pieces are called by reflective calls to avoid dependencies on Java 19 (if the runtime is not compliant, the endpoint falls back to traditional workers)

Add a new extension, netty-loom-adaptor, that performs bytecode manipulation on Netty's PooledByteBufAllocator to use a ConcurrentHashMap instead of ThreadLocals:
- when creating a DirectBuffer, we check whether the current Thread is virtual
- if it is, we get its carrier and check whether the threadCaches map has the carrier's name as a key
- if so, we return the PoolThreadCache associated with the carrier
- else, we create a new PoolThreadCache (wip) and return it
- after that, we use this cache instead of the PoolThreadLocalCache

- added static fields to cache the isVirtual() method (instead of using reflection to get it for every request)
- added static fields to cache the currentCarrierThread method (same goal as for the isVirtual field)
- tested quarkus-loom with the application --> from 20,000 reqs during benchmark to ~160,000 requests during benchmark (9.5% of Quarkus perf to ~71% of Quarkus perf)
- the netty-loom-adaptor extension will do its magic only if @RunOnVirtualThread annotations are present in the combinedIndexBuildItem; otherwise it won't do anything

When using Quarkus dev services with Loom code, the user must specify -Dopen-lang-package. If they don't, the reflective operations will throw exceptions.
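The carrier-keyed caching idea described in the commit message can be sketched roughly as follows. All names here (CarrierKeyedCache, cachesByCarrier) are illustrative, not Netty's or the PR's actual code; a real key would come from the JDK-internal currentCarrierThread(), which this sketch takes as a plain string:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical sketch of replacing a per-thread cache (ThreadLocal) with a
// per-carrier cache, so that many virtual threads mounted on the same
// carrier thread share one cache entry instead of each creating their own.
public class CarrierKeyedCache<T> {
    private final Map<String, T> cachesByCarrier = new ConcurrentHashMap<>();
    private final Supplier<T> cacheFactory;

    public CarrierKeyedCache(Supplier<T> cacheFactory) {
        this.cacheFactory = cacheFactory;
    }

    // For a virtual thread, carrierName would be derived from its carrier;
    // for a platform thread it would just be the thread's own name.
    public T get(String carrierName) {
        return cachesByCarrier.computeIfAbsent(carrierName, k -> cacheFactory.get());
    }

    public static void main(String[] args) {
        CarrierKeyedCache<StringBuilder> caches = new CarrierKeyedCache<>(StringBuilder::new);
        // Two lookups for the same carrier share one cache instance...
        System.out.println(caches.get("carrier-0") == caches.get("carrier-0")); // true
        // ...while a different carrier gets its own.
        System.out.println(caches.get("carrier-0") == caches.get("carrier-1")); // false
    }
}
```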
Squashed and rebased, checked diff with main, should be good to go.
🎉
Nice to see this coming, disciple @anavarr 👍
Any official doc regarding how to use this in Quarkus?
Not yet, as it's very experimental and subject to change.
Is there a global config to run all endpoints on virtual threads or platform threads?
@JackyAnn No, and the reason is that there are too many possibilities of pinning or monopolizing the carrier threads, which would be terrible. Thus, switching to a virtual thread is something you need to be aware of for now. Later, once Loom improves, and the Java ecosystem adapts, it will become the default. But for now, the risk is too high.
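Pinning, mentioned above, is easy to trigger accidentally. A minimal sketch (Java 21+): on the JDKs current at the time, blocking inside a synchronized block keeps the virtual thread mounted on its carrier instead of unmounting it, so a carrier is monopolized for the duration of the block (newer JDKs are expected to lift this limitation):

```java
// Minimal illustration of carrier-thread pinning: sleeping inside a
// synchronized block prevents the virtual thread from unmounting, so its
// carrier thread is occupied for the whole sleep.
public class PinningExample {
    private static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().start(() -> {
            synchronized (LOCK) {
                try {
                    // Blocking here would normally unmount the virtual thread;
                    // inside synchronized it stays pinned to its carrier.
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        vt.join();
        System.out.println("done");
    }
}
```

Running with -Djdk.tracePinnedThreads=full (a JDK diagnostic flag) reports such pinning events at runtime.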
This could also allow dispatching Kotlin coroutines onto Loom virtual threads, by having that annotation cooperate with