Is your feature request related to a problem or challenge?
When writing to a parquet file in parallel, the implementation in #7562 will potentially buffer parquet data faster than it can be written to the final output, as there is no back pressure and the intermediate files are all buffered in memory.
As described by @devinjdangelo in #7562 (comment):
I think the best possible solution would consume the sub parquet files incrementally from memory as they are produced, rather than buffering the entire file.
And in #7562 (comment):
Ultimately, I'd like to be able to call SerializedRowGroupWriter.append_column as soon as possible -- before any parquet file has been completely serialized in memory. That is, as soon as a parallel task finishes encoding a single column for a single row group, eagerly flush those bytes to the concatenation task, which then flushes them to the ObjectStore and discards them from memory. If the concatenation task can keep up with all of the parallel serializing tasks, we could avoid ever buffering an entire row group in memory.
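For illustration only, here is a rough sketch (not the DataFusion implementation) of how a bounded channel between the parallel encoding tasks and a single concatenation task could provide the missing back pressure. `EncodedColumn`, the channel capacity, and the dummy payloads are hypothetical placeholders; the real consumer would call `SerializedRowGroupWriter::append_column` and stream the result to the ObjectStore.

```rust
use tokio::sync::mpsc;

/// Hypothetical payload produced by one parallel encoding task:
/// the serialized bytes of a single column chunk for a single row group.
struct EncodedColumn {
    row_group: usize,
    column: usize,
    bytes: Vec<u8>,
}

#[tokio::main]
async fn main() {
    // A bounded channel: once `capacity` encoded columns are queued,
    // `send().await` suspends the encoding tasks, so memory use is capped
    // instead of growing with every finished column chunk.
    let (tx, mut rx) = mpsc::channel::<EncodedColumn>(4);

    // Parallel encoding tasks (stand-ins for per-column serialization).
    let mut handles = Vec::new();
    for column in 0..8 {
        let tx = tx.clone();
        handles.push(tokio::spawn(async move {
            let encoded = EncodedColumn {
                row_group: 0,
                column,
                bytes: vec![0u8; 1024], // placeholder for real encoded pages
            };
            // Back pressure happens here if the writer falls behind.
            tx.send(encoded).await.unwrap();
        }));
    }
    drop(tx);

    // Concatenation task: in the real design this would append each column
    // chunk to the final file (e.g. via SerializedRowGroupWriter::append_column),
    // flush the bytes to the ObjectStore, and then discard them from memory.
    while let Some(col) = rx.recv().await {
        println!(
            "row group {} column {}: {} bytes",
            col.row_group,
            col.column,
            col.bytes.len()
        );
    }

    for h in handles {
        h.await.unwrap();
    }
}
```

With a bounded channel like this, the total data held in memory is limited to the channel capacity plus whatever each task is currently encoding, rather than every intermediate file produced so far.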
Describe the solution you'd like
I would like the output row groups to be written as they are produced, rather than all buffered and written after the fact, as suggested by @devinjdangelo.
Describe alternatives you've considered
No response
Additional context
No response