
Make tuple t[2:end] inferrable through constant prop #31138

Merged 1 commit into master on May 4, 2020

Conversation

mbauman
Member

@mbauman mbauman commented Feb 21, 2019

Spurred by #31132, I was curious if our constant propagation was robust enough to handle a peephole optimization here -- and it is! This removes the need to export front/tail by simply allowing folks to get inferrable results when they spell these operations as t[2:end] and t[1:end-1]. I additionally allowed for t[3:end] and t[1:end-2] (as well as the trivial t[1:end]) as I have used tail(tail(t)) in the past.
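To make the correspondence concrete, here is an illustrative sketch (the `Base.tail` and `Base.front` helpers are real unexported functions in Base; the wrapper names below are hypothetical):

```julia
# Illustrative only: the spellings this PR makes inferrable, next to the
# unexported Base helpers they correspond to.
drop_first(t) = t[2:end]      # equivalent to Base.tail(t)
drop_last(t)  = t[1:end-1]    # equivalent to Base.front(t)

@assert drop_first((1, "two", 3.0)) == ("two", 3.0)
@assert drop_last((1, "two", 3.0))  == (1, "two")
@assert Base.tail((1, 2, 3)) == (2, 3)
```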

cc @timholy — this is the spiritual successor to #20370. Two years later and we have our holy grail!

@tkf
Member

tkf commented Feb 22, 2019

Previous discussion: #30386

@mbauman
Member Author

mbauman commented Feb 22, 2019

Oh, thanks for the pointer @tkf — I had somehow missed that issue/PR. This implementation should be much less taxing on the compiler. It's less general, too, but it should hit the most common uses of a constant value.

Member

@timholy timholy left a comment


This is lovely! I learned something new from the fact that the @nospecialize doesn't break inference (EDIT: ah, that's surely because of the @inline.)

@timholy
Member

timholy commented Jan 22, 2020

I'd like to merge this, but just want to point out that it doesn't fix all tuple indexing, it just special-cases the 4 situations @mbauman mentions, t[2:end], t[3:end], t[1:end-1], and t[1:end-2]. So the counter-argument is that by optimizing these cases we make it more surprising when this doesn't work in general. I am personally fine with that, as I think these cases make up the majority of uses.

But I'll leave this open for another couple of days in case people disagree.
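For reference, a minimal sketch of the distinction being drawn (the function names are placeholders; only the indexing expressions matter):

```julia
# Special-cased by this PR (constant-foldable to concrete tuple types):
a(t) = t[1:end]
b(t) = t[2:end]
c(t) = t[3:end]
d(t) = t[1:end-1]
e(t) = t[1:end-2]

# Not special-cased: falls back to the generic (allocating) path.
g(t) = t[3:5]
```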

@perrutquist
Contributor

I recently wrote a proposal on Discourse about branching on whether a value is known to the compiler.
I don't know enough about the compiler internals to say whether it could be implemented, but if the feature existed, it would make tuple indexing in the general case very easy to handle.

@mbauman mbauman closed this Feb 26, 2020
@mbauman mbauman reopened this Feb 26, 2020
@mbauman
Member Author

mbauman commented Apr 29, 2020

I've rebased this and run a few quick benchmarks just to make sure this is still needed and doing what we want. It is.

Before:

julia> f(t) = t[2:end]
f (generic function with 1 method)

julia> @btime f($(1,2,3,))
  149.206 ns (2 allocations: 128 bytes)
(2, 3)

julia> @btime f($(1,2,3,4,5,6,7,8))
  261.398 ns (2 allocations: 208 bytes)
(2, 3, 4, 5, 6, 7, 8)

julia> @btime f($(1,"two",3.0,4//1,0x5,0x0006))
  286.491 ns (9 allocations: 544 bytes)
("two", 3.0, 4//1, 0x05, 0x0006)

julia> f(t) = t[3:5]
f (generic function with 1 method)

julia> @btime f($(1,2,3,4,5,6,7,8))
  171.353 ns (2 allocations: 144 bytes)
(3, 4, 5)

julia> @btime f($(1,"two",3.0,4//1,0x5,0x0006))
  222.964 ns (7 allocations: 400 bytes)
(3.0, 4//1, 0x05)

After:

julia> f(t) = t[2:end]
f (generic function with 1 method)

julia> @btime f($(1,2,3,))
  0.035 ns (0 allocations: 0 bytes)
(2, 3)

julia> @btime f($(1,2,3,4,5,6,7,8))
  0.035 ns (0 allocations: 0 bytes)
(2, 3, 4, 5, 6, 7, 8)

julia> @btime f($(1,"two",3.0,4//1,0x5,0x0006))
  2.366 ns (0 allocations: 0 bytes)
("two", 3.0, 4//1, 0x05, 0x0006)

julia> f(t) = t[3:5] # still not special-cased
f (generic function with 1 method)

julia> @btime f($(1,2,3,4,5,6,7,8))
  172.339 ns (2 allocations: 144 bytes)
(3, 4, 5)

julia> @btime f($(1,"two",3.0,4//1,0x5,0x0006))
  225.719 ns (7 allocations: 400 bytes)
(3.0, 4//1, 0x05)

@code_warntype also shows what you'd expect, with all the downstream impacts that has.
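One way to reproduce the inference check yourself (standard `Test.@inferred` tooling; whether it passes depends on running a build that includes this PR):

```julia
using Test

f(t) = t[2:end]

# On a build with this PR, the return type is concretely inferred and
# @inferred returns the result; on older builds it throws an error.
@inferred f((1, 2, 3))
```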

@mbauman mbauman merged commit bf22b5a into master May 4, 2020
@mbauman mbauman deleted the mb/tuple-loses-its-tail branch May 4, 2020 02:59