Messages over 1MiB stop all communication for a NIO UDS #118
We currently use ThreadLocal-defined direct byte buffers to allow callers to use non-direct buffers where we really need direct ones. The current maximum limit is 1 MB, which breaks support for larger datagrams. Raise the limit from 1 MB to 8 MB, and allow configuration via a system property, org.newsclub.net.unix.thread-local-buffer.max-capacity, which takes the maximum capacity in bytes, or 0 for "unlimited". Using 0 is highly discouraged, as it may effectively block large chunks of memory. #118
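For readers unfamiliar with the pattern, here is a minimal sketch of such a ThreadLocal direct-buffer cache with a configurable cap. Only the system property name comes from the commit; the class and method names are hypothetical, and whether the capped version rejected or hung on oversized requests is not shown here (the guard below is illustrative, not junixsocket's actual code):

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of a per-thread direct-buffer cache with a configurable cap.
final class ThreadLocalBufferSketch {
  // Maximum capacity in bytes; 0 means "unlimited" (discouraged).
  static final int MAX_CAPACITY = Integer.getInteger(
      "org.newsclub.net.unix.thread-local-buffer.max-capacity", 8 * 1024 * 1024);

  private static final ThreadLocal<ByteBuffer> CACHE = new ThreadLocal<>();

  static ByteBuffer acquire(int capacity) {
    if (MAX_CAPACITY != 0 && capacity > MAX_CAPACITY) {
      // Illustrative guard: under a hard cap, oversized datagrams cannot be served.
      throw new IllegalStateException("Datagram larger than max-capacity: " + capacity);
    }
    ByteBuffer buf = CACHE.get();
    if (buf == null || buf.capacity() < capacity) {
      // Allocate a direct buffer large enough and cache it for reuse by this thread.
      buf = ByteBuffer.allocateDirect(capacity);
      CACHE.set(buf);
    }
    buf.clear();
    buf.limit(capacity);
    return buf;
  }
}
```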
Hi @TW-Goldencode, thanks for reporting! Please try the above commit. Let me know if you actually managed to get datagrams larger than 8 MB, or if that really is a good upper limit. Out of curiosity, can you explain what you need these humongous datagrams for, and how they perform compared to smaller ones? Cheers,
@kohlschuetter Fantastic, thank you for this fix.
@kohlschuetter Tested, it's all good. Would it be possible to create a release? Side notes: During the bootstrap, messages with a max of around 25 MiB were transmitted. The size is not capped. The only safe way for us right now is max-capacity=0. Thanks again, we'll wait for the release.
The limit on how large a datagram can be seems to have no feasibly low bound (25 MB datagrams have been reported to work). This imposes a challenge on caching/reuse strategies for direct byte buffers (a shared, reusable pool that is not thread-specific could be an alternative, but comes at the cost of complexity). At the cost of performance, revert the per-thread limit to 1 MB, and return newly allocated direct byte buffers instead of cached ones whenever the limit is exceeded. Users of such unexpectedly large datagrams can either still force a higher (or unbounded) limit via the system property "org.newsclub.net.unix.thread-local-buffer.max-capacity", or, better, use direct byte buffers in the calling code, obviating the need for this cache in the first place. #118
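A sketch of this revised strategy, extending the hypothetical ThreadLocalBufferSketch above (again, illustrative names, not junixsocket's actual code): requests within the per-thread limit reuse the cached buffer, while larger requests fall back to a fresh, uncached allocation, which is slower but unbounded and garbage-collectable.

```java
// Revised sketch: never reject oversized requests, just bypass the cache.
static ByteBuffer acquireLenient(int capacity) {
  if (MAX_CAPACITY != 0 && capacity > MAX_CAPACITY) {
    // Over the limit: allocate on demand and do not cache, so no thread
    // permanently pins a huge direct allocation.
    return ByteBuffer.allocateDirect(capacity);
  }
  ByteBuffer buf = CACHE.get();
  if (buf == null || buf.capacity() < capacity) {
    buf = ByteBuffer.allocateDirect(capacity);
    CACHE.set(buf); // cache for reuse by this thread
  }
  buf.clear();
  buf.limit(capacity);
  return buf;
}
```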
Thanks for your feedback @TW-Goldencode. I think it becomes clear that 8 MB is not a realistic upper limit, since 25 MB datagrams seem to work for you as well. Please try the latest changes on that branch (including commit bf9fb50). That change lowers the limit back to 1 MB, but it should work correctly (albeit perhaps a tad slower) for arbitrarily large capacities. Please let me know (after removing the max-capacity system property override from your VM config) how the new change performs compared to max-capacity=0. Please (if possible) also test the scenario where you use direct byte buffers in the code that uses junixsocket (e.g., look for ByteBuffer.allocate and replace it with ByteBuffer.allocateDirect).
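To illustrate that last suggestion: switching the calling code from heap buffers to direct buffers lets junixsocket hand the buffer to the native layer without copying it through the thread-local cache. A minimal, self-contained example (the 25 MiB size is illustrative):

```java
import java.nio.ByteBuffer;

public class DirectBufferExample {
  public static void main(String[] args) {
    // Heap buffer: junixsocket must copy it into a cached direct buffer internally.
    ByteBuffer heap = ByteBuffer.allocate(25 * 1024 * 1024);

    // Direct buffer: can be passed to the native layer as-is, bypassing the cache.
    ByteBuffer direct = ByteBuffer.allocateDirect(25 * 1024 * 1024);

    System.out.println("heap.isDirect()=" + heap.isDirect()
        + ", direct.isDirect()=" + direct.isDirect());
  }
}
```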
@kohlschuetter I agree the new approach is far superior, thank you. The issue already is a
@kohlschuetter Note: It's possible
@kohlschuetter Tested bf9fb50, it fixes the issue (with the 1 MiB default limit, no system properties). Much preferred; performance is good. Thanks again, we'll wait for the release.
junixsocket 2.5.2 has been released. Please re-open if you encounter further issues. Thanks again for reporting and testing, @TW-Goldencode!
Thank you @kohlschuetter, no issues yet. Upped our Gradle dependency to 2.5.2.
Describe the bug:
Sending a message larger than 1MiB from server to client results in the following infinite loop:
Environment:
Fix:
The cause is: DATAGRAMPACKET_BUFFER_MAX_CAPACITY in https://github.com/kohlschutter/junixsocket/blob/main/junixsocket-common/src/main/java/org/newsclub/net/unix/AFCore.java is in effect. This seems strange, since it's not a datagram at all.
The OS has plenty of protection via sysctl net.core.wmem_max and sysctl net.core.rmem_max, and within these limits the buffers can be tuned by, for example, this.channel.setOption(java.net.StandardSocketOptions.SO_SNDBUF, 8388608) and this.channel.setOption(java.net.StandardSocketOptions.SO_RCVBUF, 8388608).
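Putting that tuning snippet into a compilable form, a minimal sketch, assuming a connected NIO SocketChannel (the reporter's code uses the same setOption calls, so for junixsocket an AFUNIXSocketChannel would be tuned the same way; the class and method names below are illustrative):

```java
import java.io.IOException;
import java.net.StandardSocketOptions;
import java.nio.channels.SocketChannel;

final class BufferTuning {
  // Request 8 MiB socket buffers; the kernel clamps these values to
  // net.core.wmem_max / net.core.rmem_max (see the sysctls above).
  static void tune(SocketChannel channel) throws IOException {
    channel.setOption(StandardSocketOptions.SO_SNDBUF, 8388608);
    channel.setOption(StandardSocketOptions.SO_RCVBUF, 8388608);
  }
}
```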
I did a custom build which removed the artificial hardcoded limit. After that it works fine.
Please indicate if you prefer a pull request for your fine library.
I'd suggest using a system property, with 0 meaning unlimited. Do you have other ideas?