
LieAlgebras: Use sparse data structures #2844

Closed · wants to merge 1 commit

Conversation


@lgoettgens lgoettgens commented Sep 21, 2023

As announced in #2818 (comment).

This PR needs thofma/Hecke.jl#1221, thofma/Hecke.jl#1225, thofma/Hecke.jl#1227, thofma/Hecke.jl#1228, thofma/Hecke.jl#1229 to be available.

  • LieAlgebraElem coefficient vector
  • LieAlgebraModuleElem coefficient vector
  • LieAlgebraModule transformation matrices
  • LieAlgebraHom matrix
  • LieAlgebraModuleHom matrix
  • LieAlgebraIdeal basis_matrix
  • LieSubAlgebra basis_matrix
  • Remove coerce_to_lie_algebra_elem

In the end, decide which accessors should remain. (I am adding many more for some intermediate working versions.)
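For orientation, the kind of sparse row representation this PR moves towards (Hecke's `SRow` stores the positions and values of the non-zero entries) can be sketched in plain Julia. The type and function names below are illustrative only, not the PR's actual code:

```julia
# Minimal sketch of a sparse coefficient vector: parallel arrays of
# positions and values, mirroring the idea behind Hecke's SRow.
# All names here are hypothetical, not the PR's API.
struct SparseCoeffs{T}
    len::Int              # nominal length of the vector
    pos::Vector{Int}      # sorted positions of non-zero entries
    val::Vector{T}        # the corresponding non-zero values
end

# Convert a dense vector, keeping only the non-zeros.
function to_sparse(v::Vector{T}) where {T}
    idx = findall(!iszero, v)
    return SparseCoeffs{T}(length(v), idx, v[idx])
end

# Sparse addition: merge the two position lists, dropping entries
# that cancel to zero.
function add(a::SparseCoeffs{T}, b::SparseCoeffs{T}) where {T}
    acc = Dict{Int,T}()
    for (p, x) in zip(a.pos, a.val)
        acc[p] = x
    end
    for (p, x) in zip(b.pos, b.val)
        acc[p] = get(acc, p, zero(T)) + x
    end
    ps = sort!(collect(keys(acc)))
    vs = [acc[p] for p in ps]
    keep = findall(!iszero, vs)
    return SparseCoeffs{T}(a.len, ps[keep], vs[keep])
end
```

The cost of `add` scales with the number of non-zero entries rather than with the nominal length, which is the whole point of the switch.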

@lgoettgens added the enhancement (New feature or request), needs hecke update, and WIP (NOT ready for merging) labels on Sep 21, 2023

thofma commented Sep 21, 2023

What kind of dimensions are you thinking of working with?


lgoettgens commented Sep 21, 2023

> What kind of dimensions are you thinking of working with?

Lie algebras are not that large, dimension 10-50. Modules can easily reach dimensions in the hundreds. And in most cases, I have very few non-zero entries.
For the action matrices and homs, most of the time each row has at most one non-zero entry.
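To put rough numbers on that: for a dim × dim action matrix with at most one non-zero per row, position/value storage needs at most dim entries instead of dim². A back-of-the-envelope sketch (helper names are hypothetical):

```julia
# For a dim × dim action matrix with at most one non-zero entry per row,
# sparse row storage keeps at most `dim` entries instead of `dim^2`.
# Illustrative helpers, not part of the PR.
dense_entries(dim) = dim^2
sparse_entries(dim) = dim            # upper bound: one entry per row
savings(dim) = 1 - sparse_entries(dim) / dense_entries(dim)
```

For example, at dim = 100 the entry count drops from 10000 to at most 100, a 99% reduction.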

@lgoettgens
Member Author

From your comment in #2818, I concluded that I should use a row/vector type for my coefficient vectors (instead of a 1×dim MatElem). But as far as I know, we do not have a fast vector type (fast as in backed by flint or similar).


thofma commented Sep 21, 2023

The SRow type is not implemented in flint either. It is pure Julia and thus not faster than Vector (unless, of course, everything is sparse). On the other hand, we have fast methods for Vector * Oscar matrix products, implemented e.g. in flint whenever the coefficients come from flint.
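The vector-times-matrix product referred to here computes `(v * M)[j]` as the sum over `i` of `v[i] * M[i, j]`. A plain-Julia reference sketch, with `BigInt` standing in for `ZZRingElem` and the function name being hypothetical:

```julia
# Reference implementation of a row vector times a matrix:
# for v of length m and M of size m × n, (v * M)[j] = sum_i v[i] * M[i, j].
# BigInt stands in for ZZRingElem; in Oscar the flint-backed method
# avoids this generic loop entirely.
function row_times_matrix(v::Vector{T}, M::Matrix{T}) where {T}
    m, n = size(M)
    @assert length(v) == m
    w = [zero(T) for _ in 1:n]
    for i in 1:m, j in 1:n
        w[j] += v[i] * M[i, j]
    end
    return w
end
```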

@lgoettgens
Member Author

For the hom and action matrices, they are sparse basically every time.
And in my use cases, most coefficient vectors are sparse as well.
I would like to finish this PR and then do some speed comparisons to decide whether to merge it or not.

codecov bot commented Sep 23, 2023

Codecov Report

Merging #2844 (a495841) into master (c93b2fb) will increase coverage by 0.06%.
The report is 1 commit behind head on master.
The diff coverage is 90.12%.

```diff
@@            Coverage Diff             @@
##           master    #2844      +/-   ##
==========================================
+ Coverage   80.61%   80.67%   +0.06%
==========================================
  Files         456      456
  Lines       64748    64756       +8
==========================================
+ Hits        52194    52245      +51
+ Misses      12554    12511      -43
```
| File | Coverage | Δ |
| --- | --- | --- |
| experimental/LieAlgebras/src/LieAlgebraHom.jl | 86.36% <100.00%> | (ø) |
| experimental/LieAlgebras/src/LieAlgebraIdeal.jl | 67.00% <100.00%> | (+2.00%) ⬆️ |
| ...xperimental/LieAlgebras/src/LieAlgebraModuleHom.jl | 83.01% <100.00%> | (ø) |
| experimental/LieAlgebras/src/LieAlgebras.jl | 100.00% <ø> | (ø) |
| experimental/LieAlgebras/src/LieSubalgebra.jl | 66.66% <100.00%> | (ø) |
| experimental/LieAlgebras/src/LinearLieAlgebra.jl | 98.09% <100.00%> | (+1.94%) ⬆️ |
| experimental/LieAlgebras/src/iso_oscar_gap.jl | 100.00% <100.00%> | (ø) |
| experimental/LieAlgebras/test/LieAlgebra-test.jl | 96.00% <100.00%> | (+0.22%) ⬆️ |
| ...rimental/LieAlgebras/test/LieAlgebraModule-test.jl | 95.18% <100.00%> | (+0.24%) ⬆️ |
| ...rimental/LieAlgebras/test/LinearLieAlgebra-test.jl | 100.00% <100.00%> | (ø) |
| ... and 3 more | | |

... and 7 files with indirect coverage changes

@lgoettgens
Member Author

After benchmarking basic operations on "row vectors" (namely a plain `Vector{ZZRingElem}`, a dense 1×n `ZZMatrix`, and a sparse `SRow{ZZRingElem}`), I came to the conclusion that, at least for the coefficient vectors, the current implementation using 1×n Nemo matrices is superior to the other two. For the matrices in the homs and module actions, that may be different; I'll check that in a future benchmark.

@thofma Can we talk about this sometime tomorrow?

Just copying some of the benchmark stuff below here to save it for later:

```julia
julia> a = ZZRingElem[iseven(i) ? i : 0 for i in 1:1000]; # not that sparse
julia> julia_a = a;
julia> dense_a = matrix(ZZ, 1, 1000, a);
julia> sparse_a = sparse_row(dense_a);

julia> b = ZZRingElem[i == 3 ? i : 0 for i in 1:1000]; # very sparse
julia> julia_b = b;
julia> dense_b = matrix(ZZ, 1, 1000, b);
julia> sparse_b = sparse_row(dense_b);

# current implementation
julia> @btime dense_a + dense_a; # 3.685 μs (3 allocations: 7.90 KiB)
julia> @btime dense_a + dense_b; # 4.163 μs (3 allocations: 7.90 KiB)
julia> @btime dense_b + dense_a; # 3.860 μs (3 allocations: 7.90 KiB)
julia> @btime dense_b + dense_b; # 4.501 μs (3 allocations: 7.90 KiB)
julia> @btime 3 * dense_a; # 3.902 μs (3 allocations: 7.90 KiB)
julia> @btime 3 * dense_b; # 3.790 μs (3 allocations: 7.90 KiB)

# slower by a factor of 3x-6x, linear number of allocations instead of constant
julia> @btime julia_a + julia_a; # 19.442 μs (1001 allocations: 23.56 KiB)
julia> @btime julia_a + julia_b; # 18.646 μs (1001 allocations: 23.56 KiB)
julia> @btime julia_b + julia_a; # 18.852 μs (1001 allocations: 23.56 KiB)
julia> @btime julia_b + julia_b; # 18.765 μs (1001 allocations: 23.56 KiB)
julia> @btime 3 * julia_a; # 17.757 μs (1001 allocations: 23.56 KiB)
julia> @btime 3 * julia_b; # 18.575 μs (1001 allocations: 23.56 KiB)

# speed heavily dependent on the sparsity
julia> @btime sparse_a + sparse_a; # 13.663 μs (8 allocations: 15.97 KiB)
julia> @btime sparse_a + sparse_b; # 9.902 μs (8 allocations: 8.22 KiB)
julia> @btime sparse_b + sparse_a; # 8.053 μs (8 allocations: 8.22 KiB)
julia> @btime sparse_b + sparse_b; # 203.501 ns (8 allocations: 384 bytes)
julia> @btime 3 * sparse_a; # 26.903 μs (1013 allocations: 30.80 KiB)
julia> @btime 3 * sparse_b; # 195.782 ns (9 allocations: 400 bytes)
```
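The follow-up benchmark for the hom and action matrices is still to come. In the meantime, the structure that makes those matrices so favorable for sparsity (at most one non-zero entry per row) can be sketched in plain Julia, independent of Oscar; all names below are illustrative, not the PR's data structures:

```julia
# Sketch: applying an action matrix that has at most one non-zero
# entry per row. Storing just (col, val) per row makes v * M cost
# O(rows) instead of O(rows * cols). Hypothetical representation,
# not the PR's actual code.
struct OneNonzeroPerRow{T}
    ncols::Int
    entries::Vector{Union{Nothing,Tuple{Int,T}}}  # per row: nothing or (col, val)
end

# Row vector v (length = number of rows) times the matrix:
# w[col] accumulates v[row] * val for each stored entry.
function apply(M::OneNonzeroPerRow{T}, v::Vector{T}) where {T}
    @assert length(v) == length(M.entries)
    w = [zero(T) for _ in 1:M.ncols]
    for (row, e) in enumerate(M.entries)
        e === nothing && continue
        col, val = e
        w[col] += v[row] * val
    end
    return w
end
```

Whether this beats flint-backed dense multiplication in practice is exactly what the pending benchmark should decide.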
