
Use custom thrift decoder to improve speed of parsing parquet metadata #5854

Open
Tracked by #5853
alamb opened this issue Jun 7, 2024 · 9 comments
Labels
enhancement Any new improvement worthy of an entry in the changelog; parquet Changes to the parquet crate

Comments

@alamb
Contributor

alamb commented Jun 7, 2024

Is your feature request related to a problem or challenge? Please describe what you are trying to do.
Part of #5853

Parsing the parquet metadata takes substantial time and most of that time is spent in decoding the thrift format (@XiangpengHao is quantifying this in #5770)

Describe the solution you'd like
Improve the thrift decoder speed

Describe alternatives you've considered
@jhorstmann reports on #5775 that he made a prototype of this:

I had an attack of "not invented here" syndrome the last few days 😅 and worked on an alternative code generator for thrift, that would allow me to more easily try out some changes to the generated code. The repo can be found at <https://github.com/jhorstmann/compact-thrift/> and the output for `parquet.thrift` at <https://github.com/jhorstmann/compact-thrift/blob/main/src/main/rust/tests/parquet.rs>.

The current output is still doing allocations for string and binary, but running the benchmarks from https://github.com/tustvold/arrow-rs/tree/thrift-bench shows some nice improvements. This is the comparison with current arrow-rs code, so both versions should be doing the same amount of allocations:

decode metadata      time:   [32.592 ms 32.645 ms 32.702 ms]

decode metadata new  time:   [17.440 ms 17.476 ms 17.532 ms]

So incidentally very close to that 2x improvement.

The main difference in the code should be avoiding most of the abstractions from TInputProtocol and avoiding stack moves by directly writing into default-initialized structs instead of moving from local variables.

Originally posted by @jhorstmann in #5775 (comment)
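To make the "avoiding stack moves" point above concrete, here is a minimal sketch (not the actual compact-thrift output; all names are hypothetical) of decoding field-by-field into a default-initialized struct instead of assembling it from local variables, which the compiler may otherwise copy across the stack:

```rust
// Stand-in for a generated parquet thrift struct (hypothetical fields).
#[derive(Default, Debug)]
struct ColumnChunk {
    file_offset: i64,
    num_values: i64,
}

// Instead of:
//   let file_offset = read_i64(...);
//   let num_values = read_i64(...);
//   ColumnChunk { file_offset, num_values }  // moves locals into the struct
// write directly into the default-initialized output value:
fn fill_column_chunk(out: &mut ColumnChunk, fields: &[(i16, i64)]) {
    // `fields` stands in for a real thrift field stream: (field id, value).
    for &(field_id, value) in fields {
        match field_id {
            1 => out.file_offset = value,
            2 => out.num_values = value,
            _ => {} // skip unknown fields, as thrift requires
        }
    }
}

fn main() {
    let mut chunk = ColumnChunk::default();
    fill_column_chunk(&mut chunk, &[(1, 4096), (2, 1000)]);
    println!("{:?}", chunk);
}
```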

Additional context

@alamb alamb added the enhancement Any new improvement worthy of an entry in the changelog label Jun 7, 2024
@alamb alamb added the parquet Changes to the parquet crate label Jun 7, 2024
@alamb
Contributor Author

alamb commented Jun 7, 2024

FWIW another potential possibility is to hand-write a thrift decoder for the parquet metadata rather than relying on a code generator. That would likely result in the fastest decode time, but would also be the hardest to maintain.

@jhorstmann
Contributor

Thanks @alamb for creating this tracking issue. I've slowly continued working on my code at jhorstmann/compact-thrift and the benchmarks are looking good. So good, in fact, that after adapting it on top of #5777, the performance hotspot shifts to the conversion functions from the generated thrift types to the internal types.

I would love to get some feedback on the code, and whether there would be a preference to integrate the parquet definitions into the arrow-rs repo, or publish them separately.

The generated and runtime code is also structured in a way that it would not be too crazy to write bindings to custom types by hand.

Direct links to the generated code and to the runtime library.

@XiangpengHao
Contributor

FWIW, by simply moving this field to the heap (i.e., Option<Statistics> -> Option<Box<Statistics>>), we can get a 30% performance improvement (as will be shown in the blog post, #5770).

pub statistics: Option<Statistics>,

The Option<Statistics> occupies 136 bytes even if the file does not have stats at all (i.e., the field is None); this not only slows down decoding (due to poor memory locality) but also causes high memory consumption when decoding metadata (parquet-rs consumes 10MB memory per MB of metadata).

I think this example motivates custom parquet type definitions and, thus, custom thrift decoder.
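A short sketch of the size effect described above. The `Statistics` struct here is a stand-in with a similar shape to the generated parquet thrift type, not the real definition, so the exact byte counts differ; the point is that `Option<Box<T>>` is a single nullable pointer (Rust's null-pointer niche optimization), while the inline `Option<Statistics>` carries the full struct footprint even when it is `None`:

```rust
use std::mem::size_of;

// Hypothetical stand-in for the generated parquet `Statistics` type.
#[derive(Default)]
struct Statistics {
    max: Option<Vec<u8>>,
    min: Option<Vec<u8>>,
    null_count: Option<i64>,
    distinct_count: Option<i64>,
    max_value: Option<Vec<u8>>,
    min_value: Option<Vec<u8>>,
}

fn main() {
    // The inline option reserves space for the whole struct, stats or not...
    let inline = size_of::<Option<Statistics>>();
    // ...while the boxed option is one pointer (None is the null pointer).
    let boxed = size_of::<Option<Box<Statistics>>>();
    println!("inline: {inline} bytes, boxed: {boxed} bytes");
}
```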

@alamb
Contributor Author

alamb commented Jun 10, 2024

FWIW, by simply moving this field to the heap (i.e., Option<Statistics> -> Option<Box<Statistics>>), we can get a 30% performance improvement (as will be shown in the blog post, #5770).

This code is in #5856 for anyone who is curious

@alamb
Contributor Author

alamb commented Jun 10, 2024

I would love to get some feedback on the code, and whether there would be a preference to integrate the parquet definitions into the arrow-rs repo, or publish them separately.

Hi @jhorstmann -- I had a look at https://github.com/jhorstmann/compact-thrift/tree/main (very cool)

Some initial reactions:
Writing a code generator in Kotlin is a neat idea, but I think it might raise the barrier to contribution (now people need to know both Rust and Kotlin, and the associated toolchains, etc.)

Also, I keep thinking if we are going to have a parquet-rs specific implementation, why use a code generator at all? Maybe we could simply hand code a decoder directly that uses the runtime library

Given how infrequently the parquet spec changes, a hand-rolled parser might be reasonable (though I will admit that the generated format.rs is substantial 🤔). We can probably ensure compatibility with round-trip testing against generated Rust code 🤔

@jhorstmann
Contributor

Writing a code generator in Kotlin is a neat idea, but I think it might make the barrier to contribution high (now people need to know Rust and Kotlin (and the associated toolchains, etc)

I agree; in the context of arrow-rs this is probably a bigger barrier to contribution than the existing C++ based thrift code generator.

Maybe the amount of code could be simplified and made easier to change by hand with the use of some macros. The most tricky part of the code generation, difficult to replicate in a macro, might be the decision of which structs require lifetime annotations.

@alamb
Contributor Author

alamb commented Jun 13, 2024

The more I think about this the more I am convinced that the fastest thing to do would be to decode directly from thrift --> the parquet-rs structs. Perhaps we could follow the tape decoding model of the csv or json parsers in this repo 🤔

Decoding to intermediate thrift structures which are then thrown away seems like an obvious source of improvement.
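A hedged sketch of that direction: instead of decoding into generated thrift types and then converting, the decoder could hand each field to a callback so the caller builds its final types directly. All names here are hypothetical illustrations, not arrow-rs APIs:

```rust
// Hypothetical decoded field value; a real decoder would cover all
// thrift compact-protocol types.
enum FieldValue<'a> {
    I64(i64),
    Bytes(&'a [u8]),
}

// Stand-in for a thrift struct decoder: the slice plays the role of a real
// input stream of (field id, value) pairs, and `on_field` is the callback
// that consumes each field without any intermediate generated struct.
fn decode_struct<'a>(
    fields: &[(i16, FieldValue<'a>)],
    mut on_field: impl FnMut(i16, &FieldValue<'a>),
) {
    for (id, value) in fields {
        on_field(*id, value);
    }
}

fn main() {
    // Build the final representation directly from the callback.
    let mut num_rows = 0i64;
    decode_struct(
        &[(3, FieldValue::I64(42)), (4, FieldValue::Bytes(b"schema"))],
        |id, value| {
            if id == 3 {
                if let FieldValue::I64(n) = value {
                    num_rows = *n;
                }
            }
        },
    );
    println!("num_rows = {num_rows}");
}
```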

@jhorstmann
Contributor

It occurred to me that the thrift definitions consist entirely of valid rust tokens, and so should be parseable using declarative macros. The result of that experiment can be seen in #5909, the complete macro can be found at https://github.com/jhorstmann/compact-thrift/blob/main/src/main/rust/runtime/src/macros.rs
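To illustrate the observation (a toy version only; the real macro in jhorstmann/compact-thrift also handles optionality, nested types, and generates decoding impls): because a thrift struct definition like `struct FileMetaData { 1: required i32 version; 2: required i64 num_rows; }` tokenizes as valid Rust, a `macro_rules!` macro can match it directly and emit a struct definition:

```rust
// Toy macro: matches a thrift-IDL-shaped token stream and emits a Rust
// struct. Only handles `required` fields with identifier types (i32, i64).
macro_rules! thrift_struct {
    (struct $name:ident { $($id:literal : required $ty:ident $field:ident ;)* }) => {
        #[derive(Default, Debug)]
        struct $name {
            $($field: $ty,)*
        }
    };
}

// The macro input below is, token for token, a thrift definition.
thrift_struct! {
    struct FileMetaData {
        1: required i32 version;
        2: required i64 num_rows;
    }
}

fn main() {
    let meta = FileMetaData { version: 2, num_rows: 100 };
    println!("{:?}", meta);
}
```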

@alamb
Contributor Author

alamb commented Jun 18, 2024

It occurred to me that the thrift definitions consist entirely of valid rust tokens, and so should be parseable using declarative macros.

That is really (really) cool @jhorstmann

Maybe we could even use the declarative macros to create a parser that avoids intermediates, by providing callbacks rather than building structs 🤔
