Vertx pooled allocator should be the same as Netty #5168
Comments
With this as is, it is impossible to use the new adaptive allocator too. @franz1981 this might affect Quarkus too. Context:
The reality seems more complex, see vert.x/vertx-core/src/main/java/io/vertx/core/net/impl/NetServerImpl.java, lines 516 to 518 in 7fdc398.
In short: with SSL we use a different allocator (here -> https://github.com/eclipse-vertx/vert.x/blob/master/vertx-core/src/main/java/io/vertx/core/buffer/impl/PartialPooledByteBufAllocator.java), which uses the custom vertx allocator mentioned at #5168 (comment); without SSL, instead, we just use the default Netty allocator. To enable the adaptive one we should "at least" move to a different entry point that picks up the Netty-configured "default" one, i.e. https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/ByteBufAllocator.java#L24
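For context, a minimal sketch of the two entry points mentioned above (assuming Netty 4.1+ on the classpath): `ByteBufAllocator.DEFAULT` honours the `io.netty.allocator.type` system property, while `PooledByteBufAllocator.DEFAULT` is always the pooled implementation, so only the former can hand out the adaptive allocator on Netty versions that ship it.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocator;

public class AllocatorEntryPoints {
    public static void main(String[] args) {
        // ByteBufAllocator.DEFAULT is chosen from -Dio.netty.allocator.type
        // ("pooled", "unpooled", and "adaptive" on Netty versions that ship the
        // adaptive allocator), while PooledByteBufAllocator.DEFAULT is always the
        // pooled implementation, regardless of that property.
        ByteBufAllocator configured = ByteBufAllocator.DEFAULT;
        ByteBufAllocator pooledOnly = PooledByteBufAllocator.DEFAULT;

        System.out.println("configured: " + configured.getClass().getSimpleName());
        System.out.println("pooled-only: " + pooledOnly.getClass().getSimpleName());

        ByteBuf buf = configured.directBuffer(256);
        buf.release();
    }
}
```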
@vietj @cescoffier this is something to fix on Quarkus as well, I think. This can affect Quarkus if we don't reuse the allocator configured on the Netty instance, which could be different from the singleton. @vietj @geoand
I am not sure I follow this... What exactly are you after here?
Yep, so, in short:
To solve this, we should obtain from vertx itself which configured allocator it uses, so we won't duplicate the RSS. Makes sense @geoand ?
Sure, that makes sense, but it won't really involve any Quarkus APIs or anything - we just need the extensions code to do the proper thing.
IDK @geoand, it probably requires some change in the Vert.x API, to make sure we can query the configured allocator from vertx. Or do you have a better idea in mind?
Sounds reasonable
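As an illustration of the "reuse the configured allocator" idea discussed above (a hypothetical Netty handler, not an actual Vert.x or Quarkus API): allocating through the channel's own allocator avoids standing up a second set of arenas next to a static singleton.

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class EchoHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf in = (ByteBuf) msg;
        try {
            // Reuse whatever allocator the channel was configured with, instead of
            // reaching for a static singleton such as PooledByteBufAllocator.DEFAULT,
            // so no second set of arenas / thread-local caches gets created.
            ByteBuf out = ctx.alloc().directBuffer(in.readableBytes());
            out.writeBytes(in);
            ctx.writeAndFlush(out);
        } finally {
            in.release();
        }
    }
}
```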
I've tried a different, less invasive approach to this: see #5262
@zekronium are you a Quarkus or Vert.x user? Just to understand whether I should ask for your feedback, in case you want to give adaptive a shot :)
98% a Vert.x user, with BoringSSL (tc-native) and JDK 21 (unsafe enabled with opens and useReflection=true) on top of that too. In Quarkus, OpenSSL/BoringSSL is currently not supported per se, from what I've seen in the issues, right?
Exactly @zekronium - probably the only benefit you could have is if you do things on your own using the vertx allocator - on VT.
@franz1981 Is Quarkus Jackson JSON encoding different from the Vert.x one?
Yes @zekronium, and it will get better with time, like wine :) since we bytecode-generate the serializers to avoid reflection ;)
Ah, reminds me of jsoniter-dynamic; they do similar things, but it's old and unmaintained now. Waiting for the backport :)
I have benchmarked our client application. Which method would you suggest looking at as a common hot path for both the adaptive and default allocators, so the comparison is apples to apples? I looked at newDirectBuffer, which I think is a good candidate since it universally covers IO with our settings. This test was run as a simple PoC with quite light load for 2 minutes. With it, the adaptive allocator seems to be about a ~3x improvement! Some parts seem to consume slightly more CPU time, but that might be by design?
To better understand @zekronium :
This was a simple "feeler" test I ran using our request client for two minutes. I simply attached the IntelliJ profiler on my M1 Max Mac. It was a realistic load run, about 1k rps out, 1k connections. Rough usage was about 2 GB with unsafe. I will come back with a proper benchmark on proper hardware, but I hope this info is enough for now.
About your finding on the cost of buffer operations: it is expected, because the adaptive buffers piggyback on the usual Netty ones, but it is still way too much. I will run some microbenchmarks on Netty itself to see if we can improve there and verify it.
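A minimal JMH sketch of the kind of microbenchmark mentioned here, assuming Netty and JMH are on the classpath; the allocator field and parameter sizes are illustrative placeholders, and the idea is to run it once per allocator configuration (e.g. pooled vs. `-Dio.netty.allocator.type=adaptive`) to compare direct-buffer allocation costs.

```java
import java.util.concurrent.TimeUnit;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class DirectBufferAllocationBenchmark {

    // Allocator under test: ByteBufAllocator.DEFAULT picks up the configured
    // implementation, so the same benchmark can be run against pooled and
    // adaptive allocators by changing -Dio.netty.allocator.type.
    private final ByteBufAllocator allocator = ByteBufAllocator.DEFAULT;

    @Param({"256", "4096", "65536"})
    public int size;

    @Benchmark
    public boolean allocateAndRelease() {
        ByteBuf buf = allocator.directBuffer(size);
        return buf.release();
    }
}
```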
…eap JDK SSL heap buffer pooling (Fixes eclipse-vertx#5168)
Closed via #5292
Currently, in vertx, the pooled allocator (vert.x/src/main/java/io/vertx/core/buffer/impl/VertxByteBufAllocator.java, line 25 in adbe976) is not PooledByteBufAllocator.DEFAULT, causing the creation of more thread-local direct buffers and arenas and enlarging the RSS footprint of a vertx application for no reason. What's the reason behind this choice @vietj?
The reason why it should be changed is to "ease" the life of users and libraries which allocate Netty direct buffers through the Netty allocator and can end up allocating new arenas because of this.
If the aforementioned pool re-used the Netty one, it would clearly pick up some additional contention, but it would save memory, which seems a reasonable trade-off.
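A small sketch of the duplication being described (class names are illustrative): two distinct pooled allocators each maintain their own arenas and thread-local caches, so exercising both from the same application roughly doubles the pooled memory kept resident.

```java
import io.netty.buffer.PooledByteBufAllocator;

public class TwoPoolsExample {
    public static void main(String[] args) {
        // Netty's default pooled allocator and a separate pooled allocator:
        // each keeps its own direct arenas and thread-local caches, so using
        // both from the same threads duplicates the pooled memory footprint.
        PooledByteBufAllocator nettyDefault = PooledByteBufAllocator.DEFAULT;
        PooledByteBufAllocator separatePool = new PooledByteBufAllocator(true);

        nettyDefault.directBuffer(1024).release();
        separatePool.directBuffer(1024).release();

        System.out.println("default pool direct arenas:  " + nettyDefault.metric().numDirectArenas());
        System.out.println("separate pool direct arenas: " + separatePool.metric().numDirectArenas());
    }
}
```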