[Java][FlightRpc] server zero-copy doesn't work if padding buffers are needed to serialise response #40039
In fact, the same problem is present on the non-zero-copy path as well.

This Encoder uses a CoalescingBufferQueue that merges small buffers to optimize writes, but there is a special optimisation if the buffer is a […]. This code assumes that […]. This call to […]. So to finalize: the current logic of setting […] I think that the correct solution might be to set […]. I believe it might be one of the problems in Java performance described here: #13980
…ry memory copies (#40042)

### Rationale for this change

Described in detail in the issue: #40039

Summary: class ArrowMessage uses CompositeByteBuf to avoid memory copies, but `maxNumComponents` for it is calculated incorrectly; as a result, memory copies are still performed, which significantly affects the performance of the server.

### What changes are included in this PR?

Changing `maxNumComponents` to `Integer.MAX_VALUE`, because we never want to silently merge large buffers into one. The user can set `useZeroCopy=false` (the default), and then the library will copy data into a new buffer before sending it to Netty for writing.

### Are these changes tested?

**TestPerf: 30% throughput boost**

```
BEFORE
Transferred 100000000 records totaling 3200000000 bytes at 877.812629 MiB/s. 28764164.218015 record/s. 7024.784185 batch/s.

AFTER
Transferred 100000000 records totaling 3200000000 bytes at 1145.333893 MiB/s. 37530301.022096 record/s. 9165.650116 batch/s.
```

Also tested with a simple client-server application, where I saw an even more significant performance boost when padding isn't needed. Two tests with zero-copy set to true:

**50 batches, 30 columns (Int32), 199999 rows in each batch**
- before change: throughput ~25 Gbit/s (memory copy happens in `grpc-nio-worker-ELG-*`)
- after change: throughput ~32 Gbit/s (20% boost)

**50 batches, 30 columns (Int32), 200k rows in each batch**
- before change: throughput ~15 Gbit/s (much slower than with 199999, because the memory copy happens in a `flight-server-default-executor-*` thread and blocks the server from writing the next batch)
- after change: throughput ~32 Gbit/s (**115% boost**)

* Closes: #40039

Authored-by: Lev Tolmachev <[email protected]>
Signed-off-by: David Li <[email protected]>
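The miscounted limit described above can be illustrated with a small self-contained sketch. It does not use Netty or the actual ArrowMessage code; the class and method names are hypothetical, and it only models how one extra padding entry per data buffer pushes the component count past a `backingBuffers.size() + 1` limit, which is the condition that triggers consolidation.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the component-count mismatch; names are illustrative,
// not the real ArrowMessage / CompositeByteBuf implementation.
public class ComponentCountSketch {

    // Count the components handed to the composite buffer when each of the
    // N backing buffers may be followed by an extra padding buffer.
    static int componentCount(int backingBuffers, boolean paddingNeeded) {
        List<String> components = new ArrayList<>();
        for (int i = 0; i < backingBuffers; i++) {
            components.add("data-" + i);
            if (paddingNeeded) {
                components.add("padding-" + i); // one extra component per buffer
            }
        }
        return components.size();
    }

    public static void main(String[] args) {
        int backing = 30;                   // e.g. 30 Int32 columns
        int maxNumComponents = backing + 1; // the buggy limit from the issue

        // Without padding the limit holds, so no consolidation (no copy).
        System.out.println(componentCount(backing, false) <= maxNumComponents);
        // With padding the limit is exceeded, so the composite buffer would
        // consolidate, silently copying all data into one new buffer.
        System.out.println(componentCount(backing, true) <= maxNumComponents);
    }
}
```

Running this prints `true` then `false`, matching the report that the silent copy only kicks in once padding buffers are added to the component list.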
Describe the bug, including details regarding any error messages, version, and platform.
`ArrowBufRetainingCompositeByteBuf` isn't supposed to copy data into new Netty buffers. To make that work, it extends `CompositeByteBuf` and passes the existing Arrow buffers as components.

But the `CompositeByteBuf` constructor accepts two parameters: the maximum count of components and the list of components (buffers). If the count of buffers is above `maxNumComponents`, it will consolidate, merging some buffers into a new buffer.

`ArrowBufRetainingCompositeByteBuf` passes `maxNumComponents=backingBuffers.size() + 1` and not `buffers.size() + 1`. When padding is used, `buffers` contains additional byte buffers for the padding, and as a result `buffers.size() > backingBuffers.size() + 1`. Consequently, zero-copy doesn't work and a new copy of the data is created by `CompositeByteBuf.consolidateIfNeeded()`.

Fun fact: I found this when I was trying to debug why a simple client-server benchmark works exactly 2x faster when the result has 199999 rows than when it has 200000 rows. The number of columns didn't matter, only the number of rows.

Fun fact 2: It is the zero-copy version that works slower, not the version that does the additional memory copy. If I remove `listener.setUseZeroCopy(true);` from the producer implementation, both versions start showing the same results.

Component(s)

FlightRPC, Java
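The row-count sensitivity is consistent with Arrow's buffer alignment: the IPC format pads each buffer to an 8-byte boundary, so whether a padding buffer is needed at all depends on the byte length of the column data. A minimal sketch of that arithmetic, assuming 4-byte Int32 values (`paddingBytes` is a hypothetical helper, not part of the Arrow Java API):

```java
// Illustrative sketch of the 8-byte buffer alignment arithmetic that
// decides whether a padding buffer must be appended to a column.
public class PaddingSketch {

    // Bytes needed to pad a buffer of the given length to an 8-byte boundary.
    static long paddingBytes(long bufferLength) {
        return (8 - (bufferLength % 8)) % 8;
    }

    public static void main(String[] args) {
        long int32 = 4; // bytes per Int32 value

        // 199999 rows -> 799996 bytes -> 4 padding bytes are needed,
        // so an extra padding buffer is appended for the column.
        System.out.println(paddingBytes(199999 * int32)); // 4

        // 200000 rows -> 800000 bytes -> already 8-byte aligned,
        // so no padding buffer is appended.
        System.out.println(paddingBytes(200000 * int32)); // 0
    }
}
```

This is why changing the row count by one flips the serialisation between the padded and unpadded paths, while the number of columns only scales the effect.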