
Big functions make it difficult to parallelize compilation #33

Open
JordiPolo opened this issue Oct 26, 2017 · 9 comments

@JordiPolo

Rust is adding support for compiling and optimizing at link time in parallel. The code units seem to be functions, or at least it can't parallelize at a finer grain than functions.
Because this crate has a single big function compared with the rest of the code, compile times suffer.
Ideally (and I would argue it may also make the code easier to maintain) that function could be broken up into smaller ones. Please see this awesome technical explanation.
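
For context, the parallelism in question is controlled by Cargo's codegen-units setting: the crate is split into that many units that LLVM can work on concurrently, but a single function always lives in one unit. A minimal illustration (the value shown is only an example, not a recommendation for this crate):

    # Cargo.toml
    [profile.release]
    codegen-units = 16   # more units -> more parallel LLVM work, possibly worse optimization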

@eddyb
Member

eddyb commented Oct 26, 2017

It's one huge state machine, so optimizing separate functions seems counter-intuitive IMO.
I guess you could try splitting it into even more methods, for nested state machines - but we already do this to some extent. Codegen units aren't function-level; LLVM can't optimize individual functions in parallel - it was never designed with that in mind and there are too many hazards.
Also, isn't compile time mostly spent in macro expansion for this crate?
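
A rough sketch of the "more methods for nested state machines" idea, using hypothetical names rather than the crate's actual types: instead of one giant match body inside next_state, each sub-machine gets its own method, so rustc and LLVM see several smaller functions.

    enum State { Header, Block, Checksum }

    struct Stream { state: State }

    impl Stream {
        fn next_state(&mut self, data: &[u8]) -> usize {
            // Dispatch to a smaller per-state method instead of inlining
            // every sub-state machine into one huge function body.
            match self.state {
                State::Header => self.run_header(data),
                State::Block => self.run_block(data),
                State::Checksum => self.run_checksum(data),
            }
        }

        fn run_header(&mut self, data: &[u8]) -> usize { data.len() }
        fn run_block(&mut self, data: &[u8]) -> usize { data.len() }
        fn run_checksum(&mut self, data: &[u8]) -> usize { data.len() }
    }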

@eddyb
Member

eddyb commented Oct 26, 2017

Related to macro expansion: there may be code-size vs. performance tradeoffs to look at. The way I wrote the static Huffman decoding uses a lot of macro-generated code, covering many combinations.

There's some commented-out code for building a dynamic Huffman decoder with the appropriate data for the static one; getting that to the same level of performance would probably be a huge win.
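
To make the tradeoff concrete: the fixed (static) Huffman code in RFC 1951 is fully specified by its code lengths, so instead of macro-expanding a dedicated decoder, those lengths could be produced at runtime and fed to the same table builder used for dynamic blocks. A sketch of just the length generation (the surrounding table builder is assumed, not shown, and this is not the crate's actual code):

    /// RFC 1951, section 3.2.6: code lengths for the fixed literal/length code.
    fn fixed_lit_len_code_lengths() -> [u8; 288] {
        let mut lens = [0u8; 288];
        for (i, len) in lens.iter_mut().enumerate() {
            *len = match i {
                0..=143 => 8,
                144..=255 => 9,
                256..=279 => 7,
                _ => 8, // 280..=287
            };
        }
        lens
    }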

@killercup

Just ran cargo-bloat on a project using this crate (through image) and this was at the top:

 5.9%  10.3% 178.4KiB inflate::InflateStream::next_state

So, yeah. That's way larger than I expected it to be.
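
For reference, a measurement like the one above can be reproduced roughly as follows (the exact flags used aren't stated in the comment; these are illustrative):

    cargo install cargo-bloat
    cargo bloat --release -n 20   # list the largest functions in the release binary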

@eddyb
Member

eddyb commented Jan 28, 2018

@killercup See #34 - it'd be nice to close this issue (do we need to publish a new inflate version?).

@killercup

Ah yes, 0.3.3 is the latest version, but it doesn't contain #35.

@eddyb Since this issue is about making use of codegen units, I don't think it's affected by that at all? (But I also think crates should be optimized for this…)

@eddyb
Member

eddyb commented Jan 28, 2018

@killercup Well, I'm not sure the issue is relevant anymore now that the huge macro-expanded part is gone - I expect compile times to be much more reasonable now.

cc @bvssvni for releasing a new patch version

@nwin
Contributor

nwin commented Feb 3, 2018

@bvssvni can you make me the owner of inflate as well? Just tried to publish myself, didn’t work 😄

@bvssvni
Contributor

bvssvni commented Feb 3, 2018

@nwin Done.

@nwin
Contributor

nwin commented Feb 3, 2018

Ok, I published an update to the crate.
