
Add tensor_view function and show usage in 2D-1 benchmark example. #4

Merged (1 commit, Jul 22, 2024)

Conversation

@jpthiele (Contributor):

As discussed today I have written a generic function that takes vector and index as well as tensor rank and dimension to construct a matching view.

@pjaap FYI

[Review thread on src/helper_functions.jl, marked outdated and resolved]
@jpthiele force-pushed the tensorview branch 4 times, most recently from cd91468 to 754ce4e on July 12, 2024.
@chmerdon (Member):

Ok, nice. Some other examples where this might be useful are Examples 226, 270, 282 and 290.

@chmerdon (Member):

Unfortunately, I see lots of allocations in the kernel of Example245 again when using the tensor views. My guess is that the compiler has difficulties inferring the return type of tensor_view. I get no allocations when I define a separate function for each rank case, e.g. scalar_view, vector_view, matrix_view.
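A minimal sketch of the per-rank helpers mentioned above (the names `scalar_view`, `vector_view`, `matrix_view` come from the comment; the signatures are assumptions, not the PR's actual code). Because each method always returns one concrete view type, the compiler can infer the return type and no allocations are needed:

```julia
# Hedged sketch, not the merged implementation: one helper per tensor rank,
# each returning a view into a flat vector starting at index i.
scalar_view(input, i) = view(input, i:i)
vector_view(input, i, dim) = view(input, i:i+dim-1)
matrix_view(input, i, dim) = reshape(view(input, i:i+dim*dim-1), (dim, dim))

input = collect(1.0:10.0)
v = vector_view(input, 2, 3)   # view of elements 2:4
M = matrix_view(input, 1, 2)   # 2×2 view of elements 1:4 (column-major)
```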

```julia
# Examples
"""
function tmul!(y, A, x, α = 1.0, β = 0.0)
```
@pjaap (Member), Jul 17, 2024:

I think you are looking for LinearAlgebra.BLAS.gemv!('T', alpha, A, x, beta, y)? It does not do any allocations in my tests.
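For reference, a small self-contained example of the suggested call (the matrix and vectors here are illustrative, not from the PR). `gemv!('T', α, A, x, β, y)` overwrites `y` in place with `α * transpose(A) * x + β * y`, so no temporary arrays are allocated:

```julia
using LinearAlgebra

A = [1.0 2.0; 3.0 4.0]
x = [1.0, 1.0]
y = zeros(2)

# In-place transposed matrix-vector product: y = 1.0 * Aᵀx + 0.0 * y
LinearAlgebra.BLAS.gemv!('T', 1.0, A, x, 0.0, y)
# y now equals transpose(A) * x == [4.0, 6.0]
```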

@jpthiele (Contributor, Author):

> Unfortunately, I see lots of allocations in the kernel of Example245 again when using the tensorviews. My guess is, that the compiler has difficulties to infer the return type of the tensorview? I get no allocations when I define a separate function for each rank case, e.g. scalar_view, vector_view, matrix_view.

Julia seems to have a problem with either the rank-3 or the generic rank-n version. If I comment those out, everything works fine and without allocations.

I will try to figure this out further. One idea we had was to use ::Val{1} to ::Val{3} and then ::Val{rank} for the generic case. The main caveat is that the function call then becomes tensor_view(input, i, Val(2), dim), and the documentation for value types discourages writing those as internal functions wrapped in a generic function that calls Val(rank).
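A hedged sketch of the Val-based variant discussed here; the signatures are assumptions, not the merged API. Dispatching on `Val{rank}` gives each method a concrete, inferable return type, while the generic integer wrapper illustrates the caveat from the Julia documentation: calling `Val(rank)` with a runtime `rank` is itself type-unstable, so it only pays off when the rank is a compile-time constant.

```julia
# Hedged sketch, not the PR's actual code: rank-specific methods via Val.
tensor_view(input, i, ::Val{1}, dim) = view(input, i:i+dim-1)
tensor_view(input, i, ::Val{2}, dim) = reshape(view(input, i:i+dim^2-1), (dim, dim))

# Generic wrapper so callers can pass the rank as a plain integer.
# Note: only type-stable when `rank` is a compile-time constant.
tensor_view(input, i, rank::Integer, dim) = tensor_view(input, i, Val(rank), dim)

input = collect(1.0:8.0)
T = tensor_view(input, 1, Val(2), 2)   # 2×2 view of elements 1:4
```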

@jpthiele (Contributor, Author):

Another thing we should discuss is where the documentation for the helper functions fits best. We should probably explain the usage both in at least one example and in the documentation for BilinearOperator and NonlinearOperator.

@chmerdon (Member):

Instead of Val{k} one could also define abstract types like TScalar, TVector, TMatrix, TRank3, TRank4 (all subtypes of <: TensorType) etc. and use them for dispatch. One could even put the dimensions into the type, i.e., use TVector{3} or TMatrix{2,2} and then use, e.g., tensorview(input, 3, TMatrix{2,2})...
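A sketch of how this type-based dispatch could look; the names mirror the comment (TensorType, TVector, TMatrix), but the definitions and signatures are assumptions, not the code that was eventually merged. Encoding the dimensions in the type parameters lets the compiler infer the view shape at compile time:

```julia
# Hedged sketch of the suggested TensorType dispatch, not the PR's final API.
abstract type TensorType end
struct TVector{dim} <: TensorType end
struct TMatrix{m,n} <: TensorType end

tensorview(input, i, ::Type{TVector{d}}) where {d} = view(input, i:i+d-1)
tensorview(input, i, ::Type{TMatrix{m,n}}) where {m,n} =
    reshape(view(input, i:i+m*n-1), (m, n))

input = collect(1.0:6.0)
M = tensorview(input, 3, TMatrix{2,2})   # 2×2 view of elements 3:6
```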

Concerning the documentation, I agree. The NonlinearOperator already has an example at the end of the page for which one could add the new possibility. Another example for the BilinearOperator would be good, too. We will think about that when the rest is sorted out.

@j-fu (Member), Jul 18, 2024:

Had similar situations where I worked with wrapper structs:

- Enabled Sparspak for dual numbers by implementing tailored BLAS routines
- Local kernel callback routines in VoronoiFVM

@jpthiele (Contributor, Author):

Added TensorDescription, updated tensor_view and added documentation for everything new.
@Da-Be-Ru FYI

@chmerdon (Member):

Looks good to me now, are we ready to merge?

@jpthiele (Contributor, Author):

From my point of view, yes.

@chmerdon chmerdon merged commit 36a3f4b into WIAS-PDELib:master Jul 22, 2024
7 checks passed