Use custom thrift decoder to improve speed of parsing parquet metadata #5854
Comments
FWIW, another possibility is to hand-write a thrift decoder for the parquet metadata rather than relying on a code generator. That would likely result in the fastest decode time, but would also be the hardest to maintain.
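A hand-written decoder for the compact protocol ultimately reduces to a handful of primitive reads. A minimal sketch, assuming the standard ULEB128 varint, zigzag, and delta-encoded field headers that the thrift compact protocol specifies (not code from any existing crate):

```rust
/// Read a ULEB128-encoded unsigned varint, advancing `pos`.
fn read_varint(buf: &[u8], pos: &mut usize) -> Option<u64> {
    let mut result = 0u64;
    let mut shift = 0;
    loop {
        let byte = *buf.get(*pos)?;
        *pos += 1;
        result |= u64::from(byte & 0x7F) << shift;
        if byte & 0x80 == 0 {
            return Some(result);
        }
        shift += 7;
    }
}

/// Zigzag decoding maps the unsigned varint back to a signed integer.
fn zigzag_decode(value: u64) -> i64 {
    ((value >> 1) as i64) ^ -((value & 1) as i64)
}

/// A compact-protocol field header packs a field-id delta (upper nibble) and a
/// type tag (lower nibble) into one byte; a delta of zero means the absolute
/// field id follows as a zigzag varint.
fn read_field_header(buf: &[u8], pos: &mut usize, last_field_id: i16) -> Option<(i16, u8)> {
    let byte = *buf.get(*pos)?;
    *pos += 1;
    if byte == 0 {
        // STOP marker: end of the enclosing struct.
        return None;
    }
    let type_id = byte & 0x0F;
    let delta = byte >> 4;
    let field_id = if delta == 0 {
        zigzag_decode(read_varint(buf, pos)?) as i16
    } else {
        last_field_id + i16::from(delta)
    };
    Some((field_id, type_id))
}
```

Struct, list, and string decoding are all built from these primitives, which is why a specialized decoder can skip the dynamic dispatch and intermediate buffering of a generic protocol implementation.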
Thanks @alamb for creating this tracking issue. I've slowly continued working on my code at jhorstmann/compact-thrift and the benchmarks are looking good. So good, in fact, that when adapting it on top of #5777 the performance hotspot shifts to the conversion functions from the generated thrift types to the internal types. I would love to get some feedback on the code, and on whether there would be a preference to integrate the parquet definitions into the arrow-rs repo or publish them separately. The generated and runtime code is also structured in a way that it would not be too crazy to write bindings to custom types by hand. Direct links to the generated code and to the runtime library.
FWIW, by simply moving this field to the heap (i.e., arrow-rs/parquet/src/format.rs, line 3407 in 087f34b) …
I think this example motivates custom parquet type definitions and, thus, a custom thrift decoder.
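To make the heap-field point concrete, here is a hedged sketch with hypothetical type and field names (not the actual generated format.rs definitions): boxing a large, rarely-populated field keeps the containing struct small, so creating, moving, and default-initializing it during decode stays cheap.

```rust
// Hypothetical stand-ins for the generated parquet thrift types, illustrating
// the effect of moving a heavyweight optional field to the heap.
#[derive(Default)]
struct Statistics {
    min_value: Vec<u8>,
    max_value: Vec<u8>,
    null_count: Option<i64>,
    distinct_count: Option<i64>,
}

#[derive(Default)]
struct ColumnChunkMetadata {
    file_offset: i64,
    num_values: i64,
    // Inline, every ColumnChunkMetadata pays for the full size of Statistics
    // even when the field is absent:
    //     statistics: Option<Statistics>,
    // Boxed, the struct stays a few words wide and the payload is only
    // allocated when the field is actually present:
    statistics: Option<Box<Statistics>>,
}
```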
Hi @jhorstmann -- I had a look at https://github.com/jhorstmann/compact-thrift/tree/main (very cool). Some initial reactions: I keep thinking that if we are going to have a parquet-rs-specific implementation, why use a code generator at all? Maybe we could simply hand-code a decoder that uses the runtime library directly. Given how infrequently the parquet spec changes, a hand-rolled parser might be reasonable (though I will admit that the generated format.rs is substantial 🤔). We can probably ensure compatibility with round-trip testing of the generated rust code 🤔
I agree; in the context of arrow-rs this is probably a bigger barrier to contribute than the existing C++-based thrift code generator. Maybe the amount of code could be simplified and made easier to change by hand with the use of some macros. The most tricky part of the code generation, difficult to replicate in a macro, might be the decision of which structs require lifetime annotations.
The more I think about this, the more I am convinced that the fastest thing to do would be to decode directly from thrift --> the parquet-rs structs. Perhaps we could follow the tape decoding model of the csv or json parsers in this repo 🤔 Decoding to intermediate thrift structures which are then thrown away seems like an obvious source of improvement.
It occurred to me that the thrift definitions consist entirely of valid rust tokens, and so should be parseable using declarative macros. The result of that experiment can be seen in #5909; the complete macro can be found at https://github.com/jhorstmann/compact-thrift/blob/main/src/main/rust/runtime/src/macros.rs
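To make the trick concrete, here is a much-simplified sketch of the idea, restricted to required scalar fields; the macros.rs linked above handles the full IDL (optional fields, containers, lifetimes, and the decoding itself):

```rust
// Because thrift IDL happens to consist of valid Rust tokens, a declarative
// macro can consume a (heavily restricted) struct definition directly.
macro_rules! thrift_struct {
    (struct $name:ident {
        $($id:literal : required $ty:ident $field:ident ;)*
    }) => {
        #[derive(Debug, Default, PartialEq)]
        pub struct $name {
            $(pub $field: $ty,)*
        }
    };
}

// PageLocation's fields are taken from parquet.thrift (semicolons added to
// fit this simplified grammar); it only has required scalar fields.
thrift_struct! {
    struct PageLocation {
        1: required i64 offset;
        2: required i32 compressed_page_size;
        3: required i64 first_row_index;
    }
}
```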
That is really (really) cool @jhorstmann. Maybe we could even use the declarative macros to create a parser that avoids intermediates, by providing callbacks rather than building structs 🤔
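Sketching what that could look like (the visitor trait and the RowGroup field ids below are assumptions for illustration, not an existing arrow-rs API): the decoder walks the thrift fields and hands each value to a callback, so the caller builds parquet-rs types, or skips fields, without intermediate thrift structs.

```rust
/// Hypothetical callback interface the decoder would drive while walking a
/// thrift struct; a real version would cover all compact-protocol types.
trait FieldVisitor {
    fn on_i64(&mut self, field_id: i16, value: i64);
    fn on_binary(&mut self, field_id: i16, value: &[u8]);
}

/// Builds the target type directly, with no intermediate thrift struct.
#[derive(Default)]
struct RowGroupBuilder {
    total_byte_size: i64,
    num_rows: i64,
}

impl FieldVisitor for RowGroupBuilder {
    fn on_i64(&mut self, field_id: i16, value: i64) {
        // Field ids as defined for RowGroup in parquet.thrift (illustrative).
        match field_id {
            2 => self.total_byte_size = value,
            3 => self.num_rows = value,
            _ => {} // unknown fields are simply skipped
        }
    }

    fn on_binary(&mut self, _field_id: i16, _value: &[u8]) {}
}
```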
Is your feature request related to a problem or challenge? Please describe what you are trying to do.
Part of #5853
Parsing the parquet metadata takes substantial time, and most of that time is spent decoding the thrift format (@XiangpengHao is quantifying this in #5770).
Describe the solution you'd like
Improve the thrift decoder speed
Describe alternatives you've considered
@jhorstmann reports on #5775 that he made a prototype of this:
The current output is still doing allocations for string and binary, but running the benchmarks from https://github.com/tustvold/arrow-rs/tree/thrift-bench shows some nice improvements. This is the comparison with current arrow-rs code, so both versions should be doing the same amount of allocations:
So incidentally very close to that 2x improvement.
The main difference in the code should be avoiding most of the abstractions from TInputProtocol, and avoiding stack moves by directly writing into default-initialized structs instead of moving from local variables.
Originally posted by @jhorstmann in #5775 (comment)
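For context, a hedged illustration of that last point, with made-up field names: rather than assembling decoded values in locals and constructing (and moving) the struct at the end, the decoder starts from Default and writes each field in place.

```rust
#[derive(Default)]
struct FileMeta {
    version: i32,
    num_rows: i64,
    created_by: Option<String>,
}

// Pattern avoided: locals are assembled and the whole struct is built (and
// potentially moved again by the caller) at the end.
fn decode_by_move() -> FileMeta {
    let version = 2;
    let num_rows = 1_000;
    let created_by = Some("parquet-rs".to_string());
    FileMeta { version, num_rows, created_by }
}

// Pattern described above: the struct is default-initialized once and each
// decoded field is written directly into it.
fn decode_in_place() -> FileMeta {
    let mut meta = FileMeta::default();
    meta.version = 2;
    meta.num_rows = 1_000;
    meta.created_by = Some("parquet-rs".to_string());
    meta
}
```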
Additional context