[RestEasy Reactive Client] Use AsyncInputStream for Posting InputStream #37308
Note 2: POST is about 4 times slower than GET, but still better than nothing. POST: up to 950 MB/s; using Netty directly as the client, up to 1250 MB/s.
@vietj can you have a look at the Vert.x InputStream-to-ReadStream class?
Very nice. This will take some time to review, more than the few minutes I currently have on my phone 😎
Just a side note: when using the Netty client directly, outside of Vert.x or Quarkus, I used HttpChunkedInput with a ChunkedWriteHandler to get maximum performance, but I didn't find a way to use it within the reactive client, even though Netty is underneath...
I remember @vietj saying we should use...
A big improvement by fetching 3 buffers instead of 1 at startup (so little memory consumption) and fetching again each time a buffer is consumed (this seems to enable better pipelining between consumption and the network): POST goes from 950 MB/s to 1200 MB/s, compared to Netty directly as client at up to 3600 MB/s. Note that increasing the startup fetch from 2 to 5 still improves the result, with still no noticeable memory impact:
My conviction is that 3 should be the conservative option to get the best average performance while minimizing the memory impact (even if not noticeable). 4 might be the best choice, however. (Maybe an option for that?)
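The credit-based prefetch strategy described above (an initial credit of 3 buffers, then one more credit per buffer consumed) can be modeled with stdlib primitives. This is a hypothetical sketch of the idea, not the PR's actual code: the producer can only ever be `initialCredit` buffers ahead of the consumer, which bounds memory while keeping read and consume overlapped.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Model of the prefetch strategy discussed above (names are illustrative):
// the consumer grants an initial credit of N buffers, and grants one more
// credit each time it consumes a buffer, so the producer stays at most
// N buffers ahead (bounded memory, pipelined production and consumption).
public class PrefetchModel {
    public static List<Integer> run(int totalChunks, int initialCredit)
            throws InterruptedException {
        BlockingQueue<Integer> credits = new ArrayBlockingQueue<>(totalChunks + initialCredit);
        BlockingQueue<Integer> buffers = new ArrayBlockingQueue<>(initialCredit);
        for (int i = 0; i < initialCredit; i++) credits.put(1); // initial fetch(3)

        Thread producer = new Thread(() -> {
            try {
                for (int chunk = 0; chunk < totalChunks; chunk++) {
                    credits.take();     // back pressure: wait for a credit
                    buffers.put(chunk); // "read" one chunk and hand it over
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        List<Integer> consumed = new ArrayList<>();
        for (int i = 0; i < totalChunks; i++) {
            consumed.add(buffers.take()); // consume one buffer...
            credits.put(1);               // ...then fetch(1) more
        }
        producer.join();
        return consumed;
    }
}
```

With a larger initial credit (4 or 5, as measured above) the producer can run further ahead, trading a slightly larger in-flight window for throughput.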
@vietj can you have a look at this, as it is very intense on the Vert.x side? Thanks
@fredericBregier I had a quick look: as far as I understood, the AsyncInputStream will always be using the VertxBlockingInput, so do we have the guarantee that using the input stream never blocks?
@vietj This implementation is inspired by various projects (not mine). However, if there is another way, I can try it if you have any idea. The issue right now is that sending an InputStream (POST) from a client leads to all bytes being held in memory, which leads to OOME. In the reverse direction, getting an InputStream (GET) works (modulo some optimizations that I proposed too, but those can be split into 2 MRs).
@vietj And indeed, only in a double POST (client to server to another server) is it VertxBlockingInput that seems to block. But as it is under the hood, it seems difficult to bypass. Any idea?
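The core idea being debated, wrapping a blocking InputStream so that reads happen off the event loop and delivery is demand-driven, can be sketched with stdlib types only. This is an illustrative model (the names `InputStreamPump`, `resume`, and the semaphore-based demand are assumptions, not the PR's AsyncInputStream): a worker thread does the blocking `read()` calls, and a semaphore holds the chunks back until the consumer signals demand, so the delivering side never blocks on I/O.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.Semaphore;
import java.util.function.Consumer;

// Hypothetical sketch of an InputStream-to-async-chunks adapter: a worker
// thread reads fixed-size chunks from a blocking InputStream and pushes
// them to a handler, while a semaphore implements demand signaling so the
// consumer controls the flow (pause = stop releasing permits).
public class InputStreamPump {
    private final Semaphore demand = new Semaphore(0);

    /** Grant demand for n more chunks (resume-like operation). */
    public void resume(int n) { demand.release(n); }

    /** Start pumping; returns the worker thread so callers can await completion. */
    public Thread pump(InputStream in, int chunkSize,
                       Consumer<byte[]> onChunk, Runnable onEnd) {
        Thread worker = new Thread(() -> {
            try {
                byte[] buf = new byte[chunkSize];
                int n;
                while ((n = in.read(buf)) != -1) { // blocking read, off the event loop
                    demand.acquire();              // back pressure: wait for demand
                    byte[] chunk = new byte[n];
                    System.arraycopy(buf, 0, chunk, 0, n);
                    onChunk.accept(chunk);         // deliver exactly one chunk
                }
                onEnd.run();                       // signal end-of-stream once
            } catch (IOException | InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
        return worker;
    }
}
```

In the real Vert.x setting, delivery would additionally have to be marshalled onto the owning context; this sketch deliberately leaves that out.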
Sorry for the spam-like messages... I tried the following changes. I added a test with concurrent clients (10 threads). With the previous implementation based on VertxBlockingStream, it was always in error (length not OK). With this version, 8/10 threads are OK, 2 still are not. So I hope I'm not so far from a correct answer. No blocking, but early closing of the InputStream... Still investigating. (Applied to all 3 versions with small adaptations to context.)
Stopping research for now: debugging tells me that Http1xServerConnection.onEnd seems to be called too early, such that there are missing chunks, but I don't get why.
@vietj I ended up with a new version which seems far more stable, both in sequential and concurrent usage.
@vietj Hi, I know the end and beginning of the year are not the most favorable time to review ;-)
@fredericBregier I need to dedicate time to this, now seems more favorable indeed! |
Any news? I guess the next Quarkus release is probably taking all available time...
InputStream was not taken into account when sending it with the RestEasy client to a REST service. Why: The previous implementation was putting all bytes in one buffer, leading either to OOME or to a limit on the acceptable Buffer size. Change: Add async InputStream support such that it does not fill memory and respects back pressure.
Set maxChunkSize to the current max chunk size, at least 8192
Works both on sending and retrieving (server, client) an InputStream. Fetch more than 1 at startup (3, to still have a limit on memory consumption). Speedup: - POST: from 900 MB/s to 1200 MB/s (close to Netty native POST at 1400 MB/s) - GET: from 900 MB/s to 2000 MB/s (still half of Netty native GET at 4400 MB/s)
In particular, improve VertxBlockingInput
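The commit messages above describe the essence of the fix: replace "all bytes in one buffer" with a bounded, chunked copy. A minimal stdlib illustration of that contrast (not the PR's actual code; the 8192 chunk size matches the minimum mentioned above):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustration of the change described above: instead of materializing the
// whole InputStream in one buffer (readAllBytes-style, which risks OOME or
// hitting a maximum Buffer size for large bodies), copy it in bounded
// chunks so memory use stays at one chunk regardless of body size.
public class ChunkedCopy {
    public static long copyChunked(InputStream in, OutputStream out, int chunkSize)
            throws IOException {
        byte[] buf = new byte[chunkSize]; // constant memory footprint
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n); // each chunk handed downstream before the next read
            total += n;
        }
        return total;
    }
}
```

In the real client the downstream write is asynchronous, so the loop must additionally wait for demand (back pressure) before reading the next chunk, as the earlier prefetch discussion describes.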
Hi @vietj
@fredericBregier Sorry, no time for that; I am very overloaded with the recent releases we have done. I will try to have a look soon (hint: this code is quite complex).
I understand. I know the recent releases were huge!! No problem.
Hi @vietj
@vietj OK, I found out that the next commit was fixing the header issue (I was at the commit level, so I missed this one). I have to test, however. Do you know when those commits will be put in an official release, such that we can easily close this MR?
Those commits are already part of Quarkus 3.8.3. I am sorry that I had not realized my PR was essentially doing the same thing as yours. |
@geoand No issue: you've found another way that is far more readable! That's perfect! Thanks a lot! I checked again using the last commits from main (partially imported so as not to pull in everything). This works fine! I think we can close this MR then, and I will probably later propose some optimizations on VertxXXXInputStream, and you'll see if it is interesting.
Glad to hear it works for you! Improvements are absolutely welcome 😁 |
You are right, that was my fault for not properly labeling the PR. It should be part of the next 3.8 release |
No problem. As soon as I saw it, I sent this message in order to fix it in the next release.
Thank you for raising the issue! |
InputStream was not taken into account when sending it with the RestEasy client to a REST service.
Why:
The previous implementation was putting all bytes in one buffer, leading either to OOME or to a limit on the acceptable Buffer size.
Change:
Add async InputStream support such that it does not fill memory and respects back pressure.
Note that, for this to work, quarkus.http.limits.max-body-size shall be set to an appropriate value (e.g. 0), since currently VertxInputStream has a soft limit based on this value. The behavior is not changed here, but it could be if necessary, since the body is no longer held in memory (not one buffer, but chunked mode with back pressure). Setting the value to 0, or high enough, works.
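For concreteness, the configuration change described above might look like this in `application.properties` (a sketch based on the comment above; the exact value should be checked against the Quarkus configuration reference for your version):

```properties
# Raise the request-body cap so large streamed uploads are not rejected
# by VertxInputStream's soft limit (the comment above suggests 0 also works).
quarkus.http.limits.max-body-size=10G
```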
Add a test to check correctness (speed and memory), but this can be improved since it only checks the status and/or the size of the InputStream at the end.