Make tuple t[2:end] inferrable through constant prop #31138
Conversation
Previous discussion: #30386
Oh thanks for the pointer @tkf — I had somehow missed that issue/PR. This implementation should put much less stress on the compiler. It's less general, too, but it should hit the most common uses of a constant value.
This is lovely! I learned something new from the fact that the @nospecialize doesn't break inference (EDIT: ah, that's surely because of the @inline).
I'd like to merge this, but just want to point out that it doesn't fix all tuple indexing; it just special-cases the 4 situations @mbauman mentions. But I'll leave this open for another couple of days in case people disagree.
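To illustrate the distinction, here is a small sketch (assuming a Julia build that includes this change; `covered`/`uncovered` are hypothetical names for this example): the `end`-anchored patterns infer to a concrete tuple type, while an arbitrary literal range like `t[3:5]` still does not.

```julia
using Test

covered(t)   = t[2:end]   # one of the special-cased patterns
uncovered(t) = t[3:5]     # not special-cased; still abstractly inferred

t = (1, "two", 3.0, 4//1, 0x5, 0x0006)

# On builds with this change, @inferred passes for the covered pattern:
@inferred covered(t)
println(covered(t))
```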
I recently wrote a proposal on discourse about branching on whether a value is known to the compiler. |
Spurred by #31132, I was curious if our constant propagation was robust enough to handle a peephole optimization here -- and it is! This removes the need to export front/tail by simply allowing folks to get inferrable results when they spell these operations as `t[2:end]` and `t[1:end-1]`. I additionally allowed for `t[3:end]`, `t[1:end-2]`, and the trivial `t[1:end]` as I have used `tail(tail(t))` in the past.
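To make the mechanism concrete, here is a hedged sketch of how such a peephole special-case can work — this is not Base's exact code, and `slice` is a hypothetical name. The key is that with `@inline`, constant propagation can see the concrete value of `r` at the call site (e.g. `2:3` for `t[2:end]` on a 3-tuple), so the branch folds away and the result type is concrete.

```julia
# Hypothetical sketch of the special-casing technique (not Base's implementation).
@inline function slice(t::Tuple, r::UnitRange{Int})
    1 <= first(r) && last(r) <= length(t) || throw(BoundsError(t, r))
    if first(r) == 2 && last(r) == length(t)
        return Base.tail(t)            # the t[2:end] pattern
    elseif first(r) == 1 && last(r) == length(t) - 1
        return Base.front(t)           # the t[1:end-1] pattern
    end
    # Generic fallback: type-unstable for heterogeneous tuples
    # when r is not known to the compiler.
    return ntuple(i -> t[first(r) + i - 1], last(r) - first(r) + 1)
end
```

When `r` is a compile-time constant, inference takes one of the first two branches and the call reduces to `Base.tail`/`Base.front`, which are themselves inferrable.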
Force-pushed from a015838 to cc1ab0f.
I've rebased this and run a few quick benchmarks just to make sure this is still needed and doing what we want. It is.

Before:

```julia
julia> f(t) = t[2:end]
f (generic function with 1 method)

julia> @btime f($(1,2,3,))
  149.206 ns (2 allocations: 128 bytes)
(2, 3)

julia> @btime f($(1,2,3,4,5,6,7,8))
  261.398 ns (2 allocations: 208 bytes)
(2, 3, 4, 5, 6, 7, 8)

julia> @btime f($(1,"two",3.0,4//1,0x5,0x0006))
  286.491 ns (9 allocations: 544 bytes)
("two", 3.0, 4//1, 0x05, 0x0006)

julia> f(t) = t[3:5]
f (generic function with 1 method)

julia> @btime f($(1,2,3,4,5,6,7,8))
  171.353 ns (2 allocations: 144 bytes)
(3, 4, 5)

julia> @btime f($(1,"two",3.0,4//1,0x5,0x0006))
  222.964 ns (7 allocations: 400 bytes)
(3.0, 4//1, 0x05)
```

After:

```julia
julia> f(t) = t[2:end]
f (generic function with 1 method)

julia> @btime f($(1,2,3,))
  0.035 ns (0 allocations: 0 bytes)
(2, 3)

julia> @btime f($(1,2,3,4,5,6,7,8))
  0.035 ns (0 allocations: 0 bytes)
(2, 3, 4, 5, 6, 7, 8)

julia> @btime f($(1,"two",3.0,4//1,0x5,0x0006))
  2.366 ns (0 allocations: 0 bytes)
("two", 3.0, 4//1, 0x05, 0x0006)

julia> f(t) = t[3:5] # still not special-cased
f (generic function with 1 method)

julia> @btime f($(1,2,3,4,5,6,7,8))
  172.339 ns (2 allocations: 144 bytes)
(3, 4, 5)

julia> @btime f($(1,"two",3.0,4//1,0x5,0x0006))
  225.719 ns (7 allocations: 400 bytes)
(3.0, 4//1, 0x05)
```
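The sub-nanosecond, zero-allocation timings indicate the slice is computed at compile time. One way to confirm this directly (a sketch, assuming a Julia build with this change) is to ask what inference concludes for the return type:

```julia
f(t) = t[2:end]

# On builds with this optimization, inference returns a concrete tuple
# type rather than an abstract Tuple{Vararg}, so the slice can be
# constructed with no runtime allocation.
rt = Base.return_types(f, (Tuple{Int,Int,Int},))[1]
println(rt)
```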
cc @timholy — this is the spiritual successor to #20370. Two years later and we have our holy grail!