reduce with init x2 to x100 times slower #49763
No. That being said, I agree that it would be nice to re-arrange the methods here so that we still call a higher-performance implementation even when given `init`.
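One way to picture that rearrangement (a minimal sketch, not Base's actual method layout; `reduce_with_init` is a hypothetical name, and it assumes `op` is associative):

```julia
# Hypothetical sketch: reuse the optimized no-init reduction and fold
# `init` in once at the end. Only valid when `op` is associative and
# combining with `init` is order-insensitive (e.g. -Inf for max).
function reduce_with_init(op, A; init)
    isempty(A) && return init       # empty input: init is the answer
    return op(init, reduce(op, A))  # delegate to the fast pairwise path
end
```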
It depends on the element type. For […] For […] Would it make sense to always use […]?
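To see the element-type dependence concretely, a benchmark sketch (sizes and types are illustrative assumptions, not taken from the thread):

```julia
using BenchmarkTools

vf = rand(Float64, 10^4)
vi = rand(1:10^6, 10^4)

# The cost of passing `init` can differ by element type, since the
# no-init path may vectorize differently for floats vs. integers.
@btime reduce(max, $vf)
@btime reduce(max, $vf; init = -Inf)
@btime reduce(max, $vi)
@btime reduce(max, $vi; init = typemin(Int))
```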
@matthias314 thanks for your tests.

```julia
julia> @btime @fastmath reduce(max, $v1)
  994.316 ns (0 allocations: 0 bytes)
0.9995914743168539

julia> @btime reduce(max, $v1)
  842.732 ns (0 allocations: 0 bytes)
0.9995914743168539
```
Yes, I've also noticed this. I think it would be good to understand how the code with […] On my machine at least, the fast versions of […]. For instance, one could change […].
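A hedged illustration of the divergence under discussion, assuming the with-init path lowers to a left fold as the issue body below suggests (the vector size is an arbitrary choice):

```julia
using BenchmarkTools

v = rand(10^4)

# The no-init call uses the pairwise reduction; per the report, passing
# `init` routes through a sequential fold instead.
@btime reduce(max, $v)               # pairwise fast path
@btime foldl(max, $v; init = -Inf)   # roughly the with-init behavior
```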
Duplicate of #47216?

Yes, I'm pretty sure this ticket is a duplicate of the other one. Closing this to continue the discussion in a single place instead of spreading it over multiple pages.
At least with `sum` and `max`, specifying the `init` argument makes the code twice as slow for vectors. The same happens using `maximum(array)` or `sum(array)`. I guess the implementations diverge at reducedim.jl: if `init` is specified, it's computed with a fold instead of a reduce.

With `SparseVector`s it's even worse (~100x slower).

Would it make sense to change it to something like this to obtain the same performance?

I could reproduce it on Julia 1.8.5 and master (1.10).
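A minimal reproduction sketch along the lines described above (sizes, density, and `init` values are assumptions; the slowdown factors are the ones reported, not re-measured):

```julia
using BenchmarkTools, SparseArrays

v = rand(10^4)
@btime maximum($v)               # fast no-init path
@btime maximum($v; init = -Inf)  # reported ~2x slower

sv = sprand(10^4, 0.1)
@btime sum($sv)                  # sparse-aware path
@btime sum($sv; init = 0.0)      # reported ~100x slower on a SparseVector
```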