
inference: IPO constant propagation #24362

Merged: 3 commits from jn/infer-constants into master on Dec 5, 2017

Conversation

vtjnash
Member

@vtjnash vtjnash commented Oct 27, 2017

Implements inter-procedural constant propagation.

fixes #5560
fixes #24011
fixes #14324
(based on some commits in #24337)
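For illustration, here is a minimal sketch (not taken from the PR itself; the function names are hypothetical) of the kind of code this optimization helps: a call whose result type depends on the *value* of a constant argument, which inference can now resolve across the function boundary by propagating the constant into the callee.

```julia
# pick's return type depends on the value (not just the type) of its
# argument: for an unknown Bool, inference can only conclude
# Union{Int, Float64}.
pick(flag::Bool) = flag ? 1 : 2.0

# With inter-procedural constant propagation, the literal `true` is
# propagated into `pick`, so this call site infers a concrete Int.
f() = pick(true)

# Inspect what inference concludes for f():
Base.return_types(f, Tuple{})
```

On a Julia build with this optimization, the inferred return type of `f` is the concrete `Int` rather than the union.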

@StefanKarpinski
Member

Well, that's a terrifyingly large diff to inference, but it's a very nice optimization to have!

@oscardssmith
Member

It would be great to have nandolier on this type of thing

@yuyichao
Contributor

It would be great to have nandolier on this type of thing

Isn't it more like "It would be great to have nandolier" :trollface:

@JeffBezanson JeffBezanson added the compiler:inference Type inference label Oct 27, 2017
@JeffBezanson
Member

Looks pretty good 👍

How long do those cache arrays get?

@ararslan ararslan added needs news A NEWS entry is required for this change needs tests Unit tests are required for this change labels Oct 27, 2017
@vtjnash
Member Author

vtjnash commented Nov 6, 2017

How long do those cache arrays get?

During sysimg build, the largest was 4728 items. The average size during a lookup was 470 items. I didn't collect any other stats (like the number of times we entered inference) for comparison though.

@vtjnash vtjnash removed needs news A NEWS entry is required for this change needs tests Unit tests are required for this change labels Nov 7, 2017
@vtjnash
Member Author

vtjnash commented Nov 7, 2017

@nanosoldier runbenchmarks(ALL, vs=":master")

@nanosoldier
Collaborator

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan

@JeffBezanson
Member

It'd be good to have the runtime intrinsics fixes in a separate PR and merge that first.

@vtjnash
Member Author

vtjnash commented Nov 8, 2017

Sure. Other than that, this is all ready to go. (It's passing all tests and nanosoldier is happy – the noise in the sparse test also appears to happen on master locally.)

@vtjnash
Member Author

vtjnash commented Nov 9, 2017

I guess I'll merge this tomorrow, unless there's any requests to defer it.

@vtjnash
Member Author

vtjnash commented Nov 9, 2017

Would have been really nice if nanosoldier nightlies were working first though.

@JeffBezanson
Member

The compiler performance impact of this is surprisingly small, but there's still about a 30 second regression in sysimg build time for me (5'16", while master is about 4'45" and 0.6 is < 3'). We should find some way to moderate this.

@vtjnash
Member Author

vtjnash commented Nov 9, 2017

Yeah, I was mildly surprised too, since I didn't add any particular limits on it (though I had some expectations from past analysis). How much should it back off from attempting better optimization?

@vtjnash
Member Author

vtjnash commented Nov 14, 2017

Since I find that timing the sysimg build is often not a representative workload, I've also timed running all of the tests in sequence. Looks like the net impact (on this inference heavy workload) is about 18%, so I think this is good to merge now.

master:
7808.731699 seconds (2.61 M allocations: 131.345 MiB, 0.00% gc time)
PR:
9249.809467 seconds (4.68 M allocations: 230.148 MiB, 0.00% gc time)

@JeffBezanson
Member

The sysimg build time regression described above is about 11%, so that's worse. I'm not convinced the trade-off is worthwhile as-is. There is definitely some code out there that benefits from this, but most of the code we're testing here probably doesn't, yet pays a hefty 18% cost.

@vtjnash
Member Author

vtjnash commented Nov 15, 2017

Alright, I'll stop working on improving inference's capabilities here. Should we close the existing bug reports (#5560, #24011, and #14324) as wont-fix?

@oscardssmith
Member

oscardssmith commented Nov 15, 2017

How difficult would it be to add a limit so that we only propagate "easy" cases? My guess is that this would yield much of the benefit while letting us bail out earlier in most cases. Alternatively, could IPO constant propagation be opt-in?

@vchuravy
Member

Is this good to be merged?

@ararslan
Member

@nanosoldier runbenchmarks(ALL, vs=":master")

@vtjnash
Member Author

vtjnash commented Nov 26, 2017

We can't merge any features while CI is broken, but yes, this should be ready to go. Hopefully folks deal with the CI issues soon, as they're likely to delay our desired freeze date.

@nanosoldier
Collaborator

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan

@JeffBezanson
Member

CI is now relatively healthy. Rebase, test, and merge?

Create a separate type for results that holds just the result of running
some configuration of inference, and does local caching of the results.
@vtjnash
Member Author

vtjnash commented Dec 4, 2017

Rebased, and CI looks happy with this.

@vchuravy
Member

vchuravy commented Dec 4, 2017

Let's ask nanosoldier one last time?
@nanosoldier runbenchmarks(ALL, vs=":master")

@pabloferz
Contributor

It seems that nanosoldier didn't start

@nanosoldier runbenchmarks(ALL, vs=":master")

@ararslan
Member

ararslan commented Dec 4, 2017

The Nanosoldier webhook log on this repo shows a 400 from both of your requests. I have no idea why but maybe third time's a charm?

@nanosoldier runbenchmarks(ALL, vs=":master")

@ararslan
Member

ararslan commented Dec 4, 2017

Nanosoldier gave me a 400 as well so I put him in a time out (AKA I restarted the server).

@nanosoldier runbenchmarks(ALL, vs=":master")

@nanosoldier
Collaborator

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan

@vchuravy vchuravy merged commit 9fe9d2c into master Dec 5, 2017
@vchuravy vchuravy deleted the jn/infer-constants branch December 5, 2017 13:52
@quinnj
Member

quinnj commented Dec 5, 2017

woohoo!
