
Improving PETSc SNES support #45

Open
1 of 3 tasks
amartinhuertas opened this issue Oct 30, 2021 · 4 comments

@amartinhuertas
Member

amartinhuertas commented Oct 30, 2021

The current implementation of PETScNonlinearSolver is not fully optimized. These are the main problems:

  • Residuals and Jacobians are copied back and forth between PArrays and PETSc objects. I guess this can be overcome by assembling directly into PETSc data structures (as we did in GridapDistributed.jl 0.1.0). I would consider this in a future PR if and only if we determine that it becomes a bottleneck/hot spot.
  • The Jacobian copy overhead is even more severe because the default local storage format of PSparseMatrix is SparseMatrixCSC rather than SparseMatrixCSR{0,PetscInt,PetscReal}, so we need a conversion at each Jacobian evaluation. I guess this latter issue can be easily overcome. Using CSR format in PoissonTests.jl #66
  • The PETScNonlinearSolverCache does not currently store a reference to the KSP/PC objects used in the previous nonlinear solve. I guess it would be convenient to have this in the future. (We can open a separate issue for it.)
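To illustrate the second point, here is a hypothetical sketch (not GridapPETSc code; the function name and approach are my own) of the kind of CSC-to-0-based-CSR conversion that currently has to happen at each Jacobian evaluation:

```julia
using SparseArrays

# Hypothetical sketch: convert a CSC matrix to 0-based CSR arrays of the
# kind PETSc expects. The row pointers / column indices of A in CSR are
# exactly the column pointers / row indices of transpose(A) in CSC, so we
# materialize the transpose and shift to PETSc's 0-based convention.
function csc_to_csr0(A::SparseMatrixCSC{Tv,Ti}) where {Tv,Ti}
  At = sparse(transpose(A))       # CSC of A' stores A's rows contiguously
  rowptr = At.colptr .- one(Ti)   # shift to 0-based
  colval = At.rowval .- one(Ti)
  nzval  = copy(At.nzval)
  rowptr, colval, nzval
end
```

Doing this allocation and copy on every Jacobian callback is exactly the overhead the bullet describes; assembling directly into CSR storage would avoid it.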

Besides,

  • Partitioned simulations in sequential mode currently trigger a @NotImplemented macro. I did not manage to fully implement this case.
@fverdugo
Member

fverdugo commented Nov 10, 2021

Hi @amartinhuertas !

I have been looking at the implementation of PETScNonlinearSolver and I believe that it is not completely in line with the spirit of the NonlinearSolver interface of Gridap.

In particular, specializations of NonlinearSolver should contain only information about the method used to solve the nonlinear problem; they should have no relation with the Jacobian matrix or residual vector. Info about the matrix and vector is usually stored in the solver cache. This is done so that the same solver object can be used to solve two completely independent nonlinear problems without any conflict (even problems with different sizes). The key is that the solver object generates an independent cache for each of the problems.
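A minimal sketch of this pattern (the type and function names are hypothetical, not the actual Gridap API): the solver holds only method parameters, while all problem-sized state lives in a cache created per problem.

```julia
# Hypothetical sketch of the solver/cache split described above.
struct MyNewtonSolver
  tol::Float64     # method parameters only; nothing problem-sized
  maxiter::Int
end

mutable struct MyNewtonCache
  A::Matrix{Float64}   # per-problem workspace: Jacobian storage
  b::Vector{Float64}   # per-problem workspace: residual storage
end

# Each problem gets its own cache, sized for that problem, so one solver
# object can serve two independent problems without conflict.
create_cache(::MyNewtonSolver, n::Int) = MyNewtonCache(zeros(n, n), zeros(n))
```

With this split, `create_cache(s, 2)` and `create_cache(s, 5)` give the same solver `s` two independent workspaces of different sizes.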

The same idea applies to linear solvers. See for instance PETScLinearSolver

struct PETScLinearSolver{F} <: LinearSolver
  setup::F
  comm::MPI.Comm
end

that only contains minimal information about the method to be used (encoded in the setup function). In the future we can even remove comm. See issue #51

We need to implement PETScNonlinearSolver using an analogous pattern. In particular, move the snes and initialized fields to the solver cache.

@amartinhuertas
Member Author

We need to implement PETScNonlinearSolver using an analogous pattern. In particular, move the snes and initialized fields to the solver cache.

Ok, that makes sense. I will do it.

@fverdugo
Member

Another minor fix. Add ,::Nothing as the last argument in

function Algebra.solve!(x::AbstractVector,nls::PETScNonlinearSolver,op::NonlinearOperator)

The nonlinear solvers can be used either as

cache = solve!(x,nls,op)
cache = solve!(x,nls,op,cache)

or

cache = nothing
cache = solve!(x,nls,op,cache)
cache = solve!(x,nls,op,cache)
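The requested ::Nothing method can be sketched as follows (the helper functions are hypothetical stand-ins for the real solver internals): forwarding the nothing case to the cache-less entry point makes both calling patterns above behave identically.

```julia
# Hypothetical stand-ins for the real solver internals:
first_solve(x, nls, op) = "cache"        # would set up SNES and solve
re_solve(x, nls, op, cache) = cache      # would reuse the stored SNES

# The ::Nothing method forwards to the cache-less entry point, so
# solve!(x, nls, op, nothing) and solve!(x, nls, op) behave identically.
solve!(x, nls, op) = first_solve(x, nls, op)
solve!(x, nls, op, ::Nothing) = solve!(x, nls, op)
solve!(x, nls, op, cache) = re_solve(x, nls, op, cache)
```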

@amartinhuertas
Member Author

Another minor fix. Add ,::Nothing as the last argument in

Done.
