add the iternz API #167
Conversation
Codecov Report

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #167      +/-   ##
==========================================
+ Coverage   92.11%   92.14%   +0.02%
==========================================
  Files          11       11
  Lines        7203     7230      +27
==========================================
+ Hits         6635     6662      +27
  Misses        568      568
==========================================
```

☔ View full report in Codecov by Sentry.
This is great. I have a prototype version based on ArrayIteration.jl, but having a version with no dependencies is very valuable.
@ViralBShah What do you think about implementing the same for all special matrices (Symmetric, Transpose, Diagonal, etc.)? This could give rise to fast generic implementations for many functions like `dot`. If you think it's a good idea, where should I submit a pull request?
That should almost certainly be experimented with in a separate package first.
Agree. I would put that in a separate package.
I made https://github.com/SobhanMP/SparseExtra.jl for now; guess I'll add a bunch of other stuff while I'm at it...
I would love to understand why this is faster. Will we use this iterator across other parts of SparseArrays? Maybe we revisit once your SparseExtra.jl package has more stuff in it, and we can then decide if it makes sense to bring it in here or register it separately.
Note that it's not faster than the last function (the pattern used almost everywhere in SparseArrays). Accessing via `getcolptr` is faster than naming a variable (I don't know why). It could be used to simplify a lot of code. For instance,

```julia
const S_or_SS{Tv, Ti} = Union{
    AbstractSparseMatrixCSC{Tv, Ti},
    [W{Tv, <:AbstractSparseMatrixCSC{Tv, Ti}} for W in
     [Symmetric, LowerTriangular, UpperTriangular, Transpose, Adjoint, Hermitian]]...}

function dot(x::AbstractVector,
             A::S_or_SS{Tv},
             y) where Tv
    acc = zero(promote_type(eltype(x), eltype(A), eltype(y)))
    for (v, i, j) in iternz(A)
        acc += x[i] * v * y[j]
    end
    return acc
end
```

would be a correct implementation for 1-indexed `x` and `y` and all sorts of special matrices. It could also be used for the dense version, but the dense one is about twice as slow. Admittedly, my implementation is missing a lot of the functions for this to work right now, but that's the idea.
@ViralBShah I've updated the readme of my repo, feel free to check it out.
@ViralBShah Should this stay in SparseExtra.jl (and close this), or should I make this a real pull request (in contrast to a draft)? This would allow closing #23 and #83, and would maybe close #50.
If the design is ready, let's bring it in. I will look at the SparseExtra repo soon.
@Wimmerer Can you review too? If good enough to bring here, we can do finer reviews in a PR (if necessary).
I am persuaded enough by the really nice documentation. Let's open the PR. The 1.9 feature freeze may be soon, so let's get it in. Let's also get the design doc in. The real moment of truth will be when we start using it throughout the library. @fredrikekre @KristofferC @dkarrasch Any comments on the API in https://sobhanmp.github.io/SparseExtra.jl/iternz/, which we are discussing bringing into SparseArrays.jl?
I like it. A lite version of ArrayIteration.jl; it should simplify the code base a lot. Thoughts on exporting? I lean towards no for now.
In terms of API it's done (at least the outward-facing part), but there are three things worth considering.
We can keep it internal for a whole release cycle and export it later. My thought is to get something in that is small and addresses the current data structure, use it widely in the package, and then expand it to be more general.
> @fredrikekre @KristofferC @dkarrasch Any comments on the API in https://sobhanmp.github.io/SparseExtra.jl/iternz/, which we are discussing bringing into SparseArrays.jl?
In the link, the API shows just `iternz`, which feels nice enough to bring in.
I hadn't seen the rest of the API here: https://sobhanmp.github.io/SparseExtra.jl/. I'm not sure it makes sense to bring all of it into SparseArrays.jl. What's the thinking behind bringing all of it in?
I don't think it makes sense to bring that into SparseArrays. The parallel
So I finally took a look at SparseExtra.jl. It's very nice that it composes with outer wrappers! What seems to be currently missing is correct handling of the `Symmetric` and `Hermitian` wrappers. Unfortunately, I wasn't able to follow all the details to propose how to fix this. I believe that these are the only two wrappers that have non-trivial manipulations, so when we get those right, this could be ported here and measured for performance. Also, x-ref JuliaLang/julia#40103.
@dkarrasch The links were very helpful. I think that this works as intended now.

```julia
julia> A = Symmetric([randn(20, 20) for _ in 1:20, _ in 1:20]);

julia> sum(sum(A[i, j] - v for (v, i, j) in iternz(A))) == 0
true

julia> A = Hermitian([randn(20, 20) + randn(20, 20) * 1im for _ in 1:20, _ in 1:20]);

julia> sum(sum(A[i, j] - v for (v, i, j) in iternz(A))) == 0
true
```

Is there still interest in this? Should I update the pull request? Do we want more tests?
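For intuition, here is a toy sketch of the wrapper handling being tested above (hypothetical code, not what SparseExtra.jl does internally): for a `Symmetric` parent with `uplo = :U`, each off-diagonal entry of the stored triangle must be emitted a second time, mirrored across the diagonal; `Hermitian` would additionally conjugate the mirrored value.

```julia
using LinearAlgebra

# Toy iterator over a real Symmetric wrapper of a dense matrix:
# walk the upper triangle of the parent and mirror off-diagonal entries.
function toy_iternz(S::Symmetric{<:Real})
    P = parent(S)
    return Channel{Tuple{eltype(S), Int, Int}}() do ch
        for j in axes(P, 2), i in 1:j
            put!(ch, (P[i, j], i, j))
            i != j && put!(ch, (P[i, j], j, i))  # mirrored entry
        end
    end
end

S = Symmetric(randn(4, 4))
sum(abs(S[i, j] - v) for (v, i, j) in toy_iternz(S)) == 0  # every entry matches
```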
I thought the main issue here was the significant performance gap. I would be supportive of bringing the iterator in - it is a nice API, but perhaps we can't migrate the internals to it until it is on par or faster. However, it can still be made available to users (with an appropriate note). @rayegun @dkarrasch @jishnub @fredrikekre Thoughts?
I don't think that this is an issue; look at the plots in the documentation. The loss of performance is fairly small. IMHO, even if we do not replace existing code, we can at least add fast fallbacks for all methods with this.
I am on board with introducing it, but it would be good to at least hear from @rayegun.
@rayegun (ping)
I couldn't find anything like this, so I thought I'd make a small implementation. The idea is that this makes iteration on sparse arrays much easier: no need to deal with `colptr` & friends. It's still pretty fast. But this begs the question: has this been in the API all along and did I miss it?
This should make adding special matrix operations easy.
Though my problem is that it makes no sense that it's faster than manual iteration :/
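To illustrate the "colptr & friends" point: below is the standard manual CSC traversal this iterator is meant to hide, using only documented SparseArrays helpers (the commented-out loop shows the proposed `iternz` style for comparison):

```julia
using SparseArrays

# Manual idiom used throughout SparseArrays: walk each column's
# stored-entry range and index into rowvals/nonzeros by hand.
function manual_weighted_sum(A::SparseMatrixCSC)
    s = zero(eltype(A))
    for j in axes(A, 2), k in nzrange(A, j)
        i = rowvals(A)[k]
        v = nonzeros(A)[k]
        s += v * i * j
    end
    return s
end

# With the proposed iterator, the index bookkeeping disappears:
#   for (v, i, j) in iternz(A)
#       s += v * i * j
#   end

A = sprandn(10, 10, 0.3)
manual_weighted_sum(A)
```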