Jetty connector client response buffer is hard limited to 2MB #4646
The third option seems best for Jersey, since any changes in the buffering mechanics would be handled by Jetty itself.
Thank you for the response. I will check with the Jetty project and come back once anything is determined.
Where I'm at:
I think either option 1 or option 2 from the top of the issue is the way to go. Option 1 lets you set a different maximum limit; option 2 is more aligned with how the async path currently behaves. Ideally the sync path would just be rewritten to behave the same way as the async path, but I suppose that could be a breaking change in some respects, since it would no longer throw on large payloads. Sync and async should ideally behave the same. I dislike having a property that only takes effect on the sync path, but even with option 2 some kind of property will be needed if a different code path must be taken. Not sure what to call it yet; I'll figure it out along the way.
Jetty's HttpClient uses a fixed 2MB maximum buffer size when receiving responses, and there is currently no way to override this in Jersey. The problem is that this is not configurable on the Jetty HttpClient itself: you have to do it by instantiating a response listener at request time, so supporting this requires a change in the connector code.
One proposal is this patch, which introduces a new Jetty client property to override the maximum buffer size and creates a new listener per request (I only implemented the synchronous part for testing):
cen1@680d63b
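The per-request override could look roughly like the sketch below. This is not the actual patch: it shows Jetty's own `BufferingResponseListener`, whose constructor takes a per-request maximum buffer size (the default listener is capped at 2 MiB). The URL and the 16 MiB limit are illustrative assumptions.

```java
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;

import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.Result;
import org.eclipse.jetty.client.util.BufferingResponseListener;

public class LargeBufferRequest {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        client.start();
        CountDownLatch done = new CountDownLatch(1);
        try {
            // BufferingResponseListener accepts a max buffer size per request,
            // overriding the 2 MiB default that this issue is about.
            int maxLength = 16 * 1024 * 1024; // illustrative override
            client.newRequest("http://localhost:8080/big") // hypothetical endpoint
                  .send(new BufferingResponseListener(maxLength) {
                      @Override
                      public void onComplete(Result result) {
                          if (result.isSucceeded()) {
                              String body = getContentAsString(StandardCharsets.UTF_8);
                              System.out.println("Received " + body.length() + " chars");
                          } else {
                              result.getFailure().printStackTrace();
                          }
                          done.countDown();
                      }
                  });
            done.await();
        } finally {
            client.stop();
        }
    }
}
```

A connector-level property would essentially plumb a user-supplied value into that `maxLength` argument.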
A second possible approach is what I observed in Camel (apache/camel@41cd808), where an InputStreamListener is used instead to read arbitrarily large responses. However, this opens a whole new set of problems for huge payloads that do not fit in RAM (there is a comment on the merge about this which I am not sure was addressed), so perhaps this is not the better idea.
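For reference, the streaming alternative in Jetty's client API is `InputStreamResponseListener`, which consumes the body as a stream instead of buffering it, so no fixed-size limit applies. A minimal sketch (the URL and timeout are illustrative assumptions):

```java
import java.io.InputStream;
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.Response;
import org.eclipse.jetty.client.util.InputStreamResponseListener;

public class StreamingResponse {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        client.start();
        try {
            InputStreamResponseListener listener = new InputStreamResponseListener();
            client.newRequest("http://localhost:8080/big").send(listener); // hypothetical endpoint

            // Block until the response headers arrive, then read the body
            // incrementally; memory use stays bounded by the read buffer.
            Response response = listener.get(15, TimeUnit.SECONDS);
            if (response.getStatus() == 200) {
                long total = 0;
                byte[] buf = new byte[8192];
                try (InputStream in = listener.getInputStream()) {
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        total += n;
                    }
                }
                System.out.println("Streamed " + total + " bytes");
            }
        } finally {
            client.stop();
        }
    }
}
```

The caveat mentioned above still applies: whatever consumes the stream must avoid collecting the whole payload into memory, otherwise the RAM problem just moves elsewhere.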
Perhaps an even better (best?) idea would be for Jetty to expose this as part of the HttpClient configuration. That way no code change would be needed in Jersey, and this could be solved when building the JAX-RS client instead of at request time inside the connector. I am leaning towards this one, but I would like some feedback before going upstream to the Jetty project.