LieAlgebras: Use sparse data structures #2844
Conversation
Force-pushed from 871a29e to fd011a1.
What kind of dimensions are you thinking of working with?
The Lie algebras are not that large, dimension 10-50. Modules can easily get into the hundreds. And in most cases, I have very few non-zero entries.
From your comment in #2818, I concluded that I should use a row/vector type for my coefficient vectors (instead of a 1×dim MatElem). But as far as I know, we do not have a fast vector type (fast as in backed by FLINT or similar).
As for the hom and action matrices, they are sparse basically every time.
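To make the comparison concrete, here is a minimal sketch (not part of the PR; the dimension and entries are made up) contrasting the two representations discussed above: a 1×dim MatElem coefficient vector versus Hecke's sparse row/matrix types, assuming sparse_row, sparse_matrix, and push! behave as in Hecke's documented API.

using Hecke  # re-exports Nemo: ZZ, ZZRingElem, zero_matrix, ...

dim = 50  # hypothetical Lie algebra dimension

# Dense representation: a 1 x dim MatElem that is mostly zeros.
dense_coeffs = zero_matrix(ZZ, 1, dim)
dense_coeffs[1, 3] = 7
dense_coeffs[1, 20] = -1

# Sparse representation: an SRow storing only the non-zero positions.
sparse_coeffs = sparse_row(ZZ, [3, 20], ZZRingElem[7, -1])

# A sparse matrix (e.g. a hom or action matrix) assembled row by row.
S = sparse_matrix(ZZ)
push!(S, sparse_coeffs)

The benchmarks further down check whether such sparse types are fast enough for the coefficient vectors.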
Codecov Report
@@            Coverage Diff             @@
##           master    #2844      +/-   ##
==========================================
+ Coverage   80.61%   80.67%   +0.06%
==========================================
  Files         456      456
  Lines       64748    64756       +8
==========================================
+ Hits        52194    52245      +51
+ Misses      12554    12511      -43
Force-pushed from 2166bb9 to a495841.
After doing some benchmarks on basic operations of "row vectors" (see below): @thofma, can we talk about this sometime tomorrow? Copying some of the benchmark output here to save it for later:
julia> a = ZZRingElem[iseven(i) ? i : 0 for i in 1:1000]; # not that sparse
julia> julia_a = a;
julia> dense_a = matrix(ZZ, 1, 1000, a);
julia> sparse_a = sparse_row(dense_a);
julia> b = ZZRingElem[i == 3 ? i : 0 for i in 1:1000]; # very sparse
julia> julia_b = b;
julia> dense_b = matrix(ZZ, 1, 1000, b);
julia> sparse_b = sparse_row(dense_b);
# current implementation
julia> @btime dense_a + dense_a; # 3.685 μs (3 allocations: 7.90 KiB)
julia> @btime dense_a + dense_b; # 4.163 μs (3 allocations: 7.90 KiB)
julia> @btime dense_b + dense_a; # 3.860 μs (3 allocations: 7.90 KiB)
julia> @btime dense_b + dense_b; # 4.501 μs (3 allocations: 7.90 KiB)
julia> @btime 3 * dense_a; # 3.902 μs (3 allocations: 7.90 KiB)
julia> @btime 3 * dense_b; # 3.790 μs (3 allocations: 7.90 KiB)
# slower by a factor of 3x-6x, linear number of allocations instead of constant
julia> @btime julia_a + julia_a; # 19.442 μs (1001 allocations: 23.56 KiB)
julia> @btime julia_a + julia_b; # 18.646 μs (1001 allocations: 23.56 KiB)
julia> @btime julia_b + julia_a; # 18.852 μs (1001 allocations: 23.56 KiB)
julia> @btime julia_b + julia_b; # 18.765 μs (1001 allocations: 23.56 KiB)
julia> @btime 3 * julia_a; # 17.757 μs (1001 allocations: 23.56 KiB)
julia> @btime 3 * julia_b; # 18.575 μs (1001 allocations: 23.56 KiB)
# speed heavily dependent on the sparsity
julia> @btime sparse_a + sparse_a; # 13.663 μs (8 allocations: 15.97 KiB)
julia> @btime sparse_a + sparse_b; # 9.902 μs (8 allocations: 8.22 KiB)
julia> @btime sparse_b + sparse_a; # 8.053 μs (8 allocations: 8.22 KiB)
julia> @btime sparse_b + sparse_b; # 203.501 ns (8 allocations: 384 bytes)
julia> @btime 3 * sparse_a; # 26.903 μs (1013 allocations: 30.80 KiB)
julia> @btime 3 * sparse_b; # 195.782 ns (9 allocations: 400 bytes)
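For completeness, these timings presumably ran with the following setup (my assumption; the original comment does not show it):

using Hecke            # provides ZZ, ZZRingElem, matrix, sparse_row
using BenchmarkTools   # provides @btime

The numbers are consistent with the sparse operations scaling in the number of stored entries: the very sparse sparse_b cases run in nanoseconds, while the half-full sparse_a cases are somewhat slower than the dense MatElem versions.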
As announced in #2818 (comment).
This PR needs thofma/Hecke.jl#1221, thofma/Hecke.jl#1225, thofma/Hecke.jl#1227, thofma/Hecke.jl#1228, and thofma/Hecke.jl#1229 to be available.
coerce_to_lie_algebra_elem
In the end, decide which accessors should remain. (I am adding many more for some intermediate working versions.)