Big functions make it difficult to parallelize compilation #33
Comments
It's one huge state machine, so optimizing separate functions seems counter-intuitive IMO.
Related to macro expansion: there may be code-size vs. performance tradeoffs to look at. The way I wrote the static Huffman decoding was with a lot of macro-generated code, covering many combinations. There's some commented-out code for building a dynamic Huffman decoder with the appropriate data for the static one; getting that to the same level of performance would probably be a huge win.
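For context, the table-driven alternative being described boils down to running the same canonical-code construction a dynamic Huffman decoder uses, but fed with DEFLATE's fixed code lengths. This is a minimal sketch of that idea (based on the algorithm in RFC 1951, section 3.2.2), not the crate's actual code; `canonical_codes` is an illustrative name:

```rust
// Sketch: build canonical Huffman codes from a list of code lengths --
// the same routine a dynamic-table builder uses -- applied here to the
// fixed (static) DEFLATE literal/length code lengths.
fn canonical_codes(lengths: &[u8]) -> Vec<u16> {
    let max_len = *lengths.iter().max().unwrap() as usize;
    // Count how many codes there are of each length.
    let mut bl_count = vec![0u16; max_len + 1];
    for &l in lengths {
        if l > 0 {
            bl_count[l as usize] += 1;
        }
    }
    // Compute the smallest code value for each length (RFC 1951, 3.2.2).
    let mut next_code = vec![0u16; max_len + 1];
    let mut code = 0u16;
    for bits in 1..=max_len {
        code = (code + bl_count[bits - 1]) << 1;
        next_code[bits] = code;
    }
    // Assign consecutive codes to symbols of the same length, in symbol order.
    lengths
        .iter()
        .map(|&l| {
            if l == 0 {
                0
            } else {
                let c = next_code[l as usize];
                next_code[l as usize] += 1;
                c
            }
        })
        .collect()
}

fn main() {
    // Fixed DEFLATE literal/length lengths: symbols 0..=143 are 8 bits,
    // 144..=255 are 9 bits, 256..=279 are 7 bits, 280..=287 are 8 bits.
    let mut lengths = vec![8u8; 144];
    lengths.extend(vec![9u8; 112]);
    lengths.extend(vec![7u8; 24]);
    lengths.extend(vec![8u8; 8]);
    let codes = canonical_codes(&lengths);
    // Per RFC 1951, symbol 0 gets 0b00110000 and symbol 256 gets 0b0000000.
    assert_eq!(codes[0], 0b0011_0000);
    assert_eq!(codes[256], 0);
    println!("ok");
}
```

Replacing macro-expanded match arms with one table built this way trades a little decode-time indirection for far less generated code, which is the code-size side of the tradeoff mentioned above.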
Just ran cargo-bloat on a project using this crate (through image) and this was at the top:
So, yeah. That's way larger than I expected it to be.
@killercup See #34 - it'd be nice to close this issue (do we need to publish a new release?)
@killercup Well, I'm not sure the issue is relevant anymore now that the huge macro-expanded part is gone - I expect compile times to be much more reasonable now. cc @bvssvni for releasing a new patch version
@bvssvni can you make me the owner of the crate?
@nwin Done. |
Ok, I published an update to the crate. |
Rust is adding support for compiling and optimizing at link time in parallel. The units of parallelism seem to be functions, or at least the compiler can't parallelize at a finer granularity than functions.
Because this crate has one big function relative to the rest of the code, compile times suffer.
Ideally (and, I would argue, it may also make the code easier to maintain) that function could be broken up into smaller ones. Please see this awesome technical explanation.
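To illustrate the suggestion: a big state machine written as one giant function with inline per-state logic can usually be split so each state's transition lives in its own small function, giving the compiler smaller, independently optimizable pieces. This is a hedged toy sketch, not the crate's actual decoder; all names (`State`, `step_header`, `step_data`, `run`) are illustrative:

```rust
// Toy state machine split into small per-state functions instead of one
// giant function containing every state's logic inline.
enum State {
    Header,
    Data,
    Done,
}

// Each state's transition logic is its own small function.
fn step_header(byte: u8) -> State {
    if byte == b'\n' { State::Data } else { State::Header }
}

fn step_data(byte: u8) -> State {
    if byte == 0 { State::Done } else { State::Data }
}

// The driver loop only dispatches; it stays small even as states grow.
fn run(input: &[u8]) -> bool {
    let mut state = State::Header;
    for &b in input {
        state = match state {
            State::Header => step_header(b),
            State::Data => step_data(b),
            State::Done => break,
        };
    }
    matches!(state, State::Done)
}

fn main() {
    assert!(run(b"header\npayload\0"));
    assert!(!run(b"no newline"));
    println!("ok");
}
```

The dispatch loop stays compact while the per-state helpers become separate items the compiler can place in different codegen units, which is what enables parallel optimization.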