
Remove fast paths when scaling arrays #29726

Merged
merged 2 commits into from
Oct 25, 2018
Conversation

andreasnoack
Member

@andreasnoack andreasnoack commented Oct 19, 2018

As pointed out in #29691 (comment): if the scaled array contains Inf or NaN, the fast path wasn't valid. The same bug was present in BLAS, so two array scaling methods no longer call BLAS for large arrays. I couldn't measure any slowdown, which isn't surprising for such a simple function.

This reverts commit 0ba31aa but still fixes #29681.
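For context on why a zero-scale fast path is invalid, a minimal illustration of the IEEE 754 semantics involved (an illustrative snippet, not code from this PR):

```julia
# IEEE 754: a fast path that fills the output with zeros when the scalar
# is 0.0 disagrees with true elementwise multiplication whenever the
# array contains NaN or Inf.
NaN * 0.0  # NaN, not 0.0
Inf * 0.0  # NaN, not 0.0
1.0 * 0.0  # 0.0 — the fast path is only correct for finite entries
```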

@andreasnoack andreasnoack added linear algebra Linear algebra bugfix This change fixes an existing bug backport pending 1.0 labels Oct 19, 2018
```diff
     εδ = eps(one(nrm))/δ
     rmul!(v, εδ)
     rmul!(v, inv(nrm*εδ))
 end

-    return v
+    v
```
Member
-1 for removing explicit returns.

Member
I'm ok for both style. But I want the consistency of the same module.

Member Author

I added the return in the commit I'm reverting and I'm no longer touching that function. I also prefer explicit return statements.

Member
Oops, sorry.

@KristofferC
Member

```julia
julia> A = randn(10^8);

julia> @time rmul!(A, 0.5);
  0.087498 seconds (4 allocations: 160 bytes)

julia> @time LinearAlgebra.generic_rmul!(A, 0.5);
  0.119145 seconds (4 allocations: 160 bytes)
```

suggests there is a performance difference?

@andreasnoack
Member Author

That is a long vector; I only tested up to 10^6.

```julia
julia> @btime LinearAlgebra.generic_rmul!(y, 0.4) setup=(y=randn(10^6));
  305.301 μs (0 allocations: 0 bytes)

julia> @btime BLAS.scal!(length(y), 0.4, y, 1) setup=(y=randn(10^6));
  380.453 μs (0 allocations: 0 bytes)
```

Maybe multithreading has a small effect for very long vectors.

@KristofferC
Member

I get

```julia
julia> @btime LinearAlgebra.generic_rmul!(y, 0.4) setup=(y=randn(10^6));
  439.046 μs (0 allocations: 0 bytes)

julia> @btime BLAS.scal!(length(y), 0.4, y, 1) setup=(y=randn(10^6));
  394.152 μs (0 allocations: 0 bytes)
```

on a 2015 MBP, so I guess it depends on the CPU.

@andreasnoack
Member Author

I'll merge. There might be some architectures where BLAS is a bit faster than us, but it looks like the opposite also happens, and for smaller arrays I think we are generally as fast or faster. This is not an operation where we should expect BLAS to do better than LLVM.
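For intuition on why LLVM keeps up here, a hedged sketch of what a generic scaling loop looks like (hypothetical; `my_rmul!` is illustrative, not the actual Base implementation of `generic_rmul!`):

```julia
# Hypothetical sketch: elementwise scaling is a plain loop that LLVM
# auto-vectorizes well, leaving little room for BLAS to do better.
function my_rmul!(X::AbstractArray, s::Number)
    @inbounds @simd for i in eachindex(X)
        X[i] *= s
    end
    return X
end
```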

@andreasnoack andreasnoack merged commit 534fe44 into master Oct 25, 2018
@andreasnoack andreasnoack deleted the an/normalize branch October 25, 2018 12:50
KristofferC pushed a commit that referenced this pull request Oct 29, 2018
* Revert "Handle case when norm==Inf in normalize (#29691)"

This reverts commit 0ba31aa.

* Remove fast paths when scaling arrays. Also stop using BLAS since
the Julia implementations are as fast. Clean up.

(cherry picked from commit 534fe44)
@KristofferC KristofferC added needs news A NEWS entry is required for this change minor change Marginal behavior change acceptable for a minor release and removed backport pending 1.0 labels Nov 2, 2018
@KristofferC
Member

Too big of a behavior change to backport.

@StefanKarpinski
Member

What is breaking about this change? The description and discussion make it seem like the removal of an unhelpful fast path, but the "minor change" label and the fact that this seems to have broken BandedMatrices and BlockArrays indicate that it's breaking.

@fredrikekre
Member

On Julia 1.0.2:

```julia
julia> rmul!(fill(NaN, 2, 2), 0.0)
2×2 Array{Float64,2}:
 0.0  0.0
 0.0  0.0
```

while on master (due to this PR):

```julia
julia> rmul!(fill(NaN, 2, 2), 0.0)
2×2 Array{Float64,2}:
 NaN  NaN
 NaN  NaN
```

(which is the reason both of those packages fail).

@StefanKarpinski
Member

The new behavior seems like a bug fix...

@andreasnoack
Member Author

It is a bug fix. I'll take a look at the packages that break because of this to figure out what is going on.

@StefanKarpinski
Member

Even though it's a bug fix I think we should lean conservative with point releases and not backport.

@KristofferC
Member

This was removed from the backport list for 1.0.2 because it broke packages. (#29726 (reference))

@andreasnoack
Member Author

This broke a single test in each of the two packages we know are affected, and the maintainer is aware of the change. Both tests specifically tested the buggy behavior, i.e. the failures weren't indirect, via other functions relying on this behavior. We are not aware of any other cases where this change has had an effect.

@KristofferC
Member

So what you are saying is that you want this in 1.0.3?

@andreasnoack
Member Author

Never mind. I read this as a discussion of exclusion from 1.1 and missed that Stefan said point release.

Labels
bugfix This change fixes an existing bug linear algebra Linear algebra minor change Marginal behavior change acceptable for a minor release needs news A NEWS entry is required for this change

Successfully merging this pull request may close these issues.

normalize(v) for vector v with Inf entry returns zero vector
5 participants