Truly global FE spaces, including multifield FE spaces #26

Closed
amartinhuertas opened this issue Sep 17, 2020 · 0 comments

amartinhuertas commented Sep 17, 2020

Related to issue #24

I have a first beta release of the distributed Stokes tests using truly global distributed multifield FE spaces. You can see, e.g., the differences between the previous and the new version at lines 52-57 and lines 53-57, respectively.

Recall that the motivation for this work lies in the following two points (quoting directly from issue #24):

  • We are currently using a software design pattern in which we have a DistributedFESpace composed of local FE spaces. The polymorphism is in the local FE spaces (e.g., SingleFieldFESpace versus MultiFieldFESpace). DistributedFESpace is unique: it glues the parts together by building a global DoF numbering across processors. While this pattern has quite a lot of expressivity, I do not see how it can be used to cleanly build a global zero-mean FE space. We need some sort of global layer of code in which the processors agree on the single DoF to be fixed, and in which the zero-mean constraint is imposed globally by computing the pressure mean/domain volume from assembled (reduce-sum) local contributions of each subdomain (see the reduction sketch after this list). Is it possible to accommodate these operations in the interplay of DistributedFESpace + local FE spaces? Do you foresee any other similar scenario (e.g., FE spaces with multi-point linear constraints)?

  • Just guessing, the problem in the point above could be solved if we had at least two types of SingleField DistributedFESpaces (the standard one and the one that imposes the constraint) and a MultiFieldFEspace defined as the composition of global FE spaces. However, this clearly breaks our originally intended design pattern of having a single DistributedFEspace and, besides, it needs extra communication (see issue Misc tasks (in the lack of a better name) #3).
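
To make the zero-mean constraint of the first point concrete, here is a minimal sketch of the kind of global reduce-sum it needs. This is not the GridapDistributed.jl code: `global_pressure_mean`, `local_pressure_integral`, and `local_volume` are hypothetical names standing in for locally assembled quantities, and the example assumes MPI.jl is available.

```julia
# Minimal sketch of the global reduce-sum needed by a zero-mean pressure space.
# NOT the GridapDistributed.jl API: the function and argument names are
# hypothetical placeholders for locally assembled quantities.
using MPI

function global_pressure_mean(local_pressure_integral::Float64,
                              local_volume::Float64,
                              comm::MPI.Comm)
  # Reduce-sum the subdomain contributions across all processors
  int_p = MPI.Allreduce(local_pressure_integral, +, comm)
  vol   = MPI.Allreduce(local_volume, +, comm)
  return int_p / vol  # mean to subtract from the pressure DoFs on every processor
end

MPI.Init()
comm = MPI.COMM_WORLD
# Pretend each rank integrated p over its own subdomain
mean_p = global_pressure_mean(0.5, 1.0, comm)
MPI.Finalize()
```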

There are some points that I would like to discuss in a meeting; some of them are issues to be resolved, others not necessarily:

  • We now have a hierarchy of global distributed FE spaces rooted at the abstract type DistributedFESpace. Currently, this data type only has three subtypes. I would like to discuss with you the current status of the hierarchy, how to design it so that it naturally accommodates growth, the API of the different types, etc.
    I would also like to discuss the differences and commonalities between the hierarchies of FE spaces in Gridap.jl
    and GridapDistributed.jl. What can be expressed in Gridap.jl but not with the current machinery in GridapDistributed.jl, and thus still has to be implemented there? E.g., what happens with global spaces with linear multi-point constraints?

  • [UPDATE: We decided the trait is not necessary at the DistributedFESpace level; the one of the sequential version is to be re-used. The second issue has been solved in fbac993] Issue: I guess that MultiFieldDistributedFESpace should have a trait-like type parameter, in the spirit of its sequential counterpart, to control the generation of the global DoF identifiers of the multifield FE space, i.e., the so-called MultiFieldStyle type parameter. At present, we are generating the global DoF identifiers of MultiFieldDistributedFESpace in a hard-coded way that follows the strategy here to assign DoFs to processor ownership. Essentially, within each processor's local portion, we first have the DoFs
    of the first field, then those of the second, and so on. In other words, the global vector is ordered and partitioned among processors as [F1_P0 F2_P0 F3_P0 ... FN_P0 | F1_P1 F2_P1 ... FN_P1 | ... | F1_Pp F2_Pp ... FN_Pp] (see the DoF-numbering sketch after this list). I wonder how flexible this should be in order to support all possible solvers in PETSc, e.g., GAMG for multifield PDEs.

  • [UPDATE: solved in 392108e] Issue: At several points (in particular, here, here, here, and here) we are assuming in a hard-coded way that the MultifieldStyle is ConsecutiveFieldStyle. Clearly, we should be able to write code in GridapDistributed.jl which is independent of the MultifieldStyle of the local FE spaces, but I am afraid that, in order to achieve such a goal, we should improve the API and abstractions around MultifieldFESpace in Gridap.jl.

  • [UPDATE: We decided the trait is not necessary at the DistributedFESpace level; the one of the sequential version is to be re-used, thus most of this point does not apply anymore.] Issue: in multifield FE spaces there is an operation to be abstracted towards single-field FE space code re-use, namely restricting the DoF values of a multifield FE function to a single field. Following Gridap.jl, I have named this operation restrict_to_field. See here and here (and the restrict_to_field sketch after this list).
    I would like to discuss the current interfaces of these functions. I guess that we should also dispatch
    on MultifieldStyle; what else? Do you foresee limitations in the current interfaces?
    I do not like having communicator-dependent code in MultiFieldDistributedFESpace; how can we avoid that?
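
As a concrete illustration of the hard-coded DoF ordering mentioned in the second bullet, here is a rough sketch of how a global DoF identifier follows from per-processor, per-field counts under the [F1_P0 ... FN_P0 | F1_P1 ... FN_P1 | ...] layout. All names below are hypothetical; this is not the GridapDistributed.jl implementation.

```julia
# Sketch of the consecutive-per-processor global DoF numbering assumed above.
# `ndofs[p][f]` is the number of owned DoFs of field `f` on processor `p`
# (1-based); names are hypothetical illustrations, not GridapDistributed.jl API.
function global_dof_id(ndofs::Vector{Vector{Int}}, p::Int, f::Int, ldof::Int)
  gid = 0
  # Skip all DoFs owned by processors 1, ..., p-1 (all of their fields)
  for q in 1:p-1
    gid += sum(ndofs[q])
  end
  # Within processor p, skip the DoFs of fields 1, ..., f-1
  gid += sum(ndofs[p][1:f-1])
  # Finally add the local DoF index within field f
  return gid + ldof
end

# Example: 3 processors, 2 fields with (3,2), (4,2), (3,1) owned DoFs
ndofs = [[3, 2], [4, 2], [3, 1]]
@assert global_dof_id(ndofs, 1, 1, 1) == 1   # first DoF of field 1 on processor 1
@assert global_dof_id(ndofs, 2, 2, 1) == 10  # first DoF of field 2 on processor 2
```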
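
Similarly, here is a rough sketch of what restrict_to_field amounts to for that consecutive layout: extracting the sub-vector of free DoF values that belongs to a given field. The trait type and signature below only loosely mimic Gridap.jl's MultiFieldStyle dispatch and are hypothetical, not the actual Gridap.jl/GridapDistributed.jl interface.

```julia
# Hypothetical illustration of restrict_to_field for a consecutive block layout;
# the trait and signature loosely mimic Gridap.jl's MultiFieldStyle dispatch,
# but this is not the actual Gridap.jl/GridapDistributed.jl interface.
abstract type MultiFieldStyle end
struct ConsecutiveFieldStyle <: MultiFieldStyle end

# `ndofs_per_field[f]` is the number of free DoFs of field `f` in `values`
function restrict_to_field(::ConsecutiveFieldStyle,
                           values::AbstractVector,
                           ndofs_per_field::Vector{Int},
                           field::Int)
  offset = sum(ndofs_per_field[1:field-1])          # DoFs of the preceding fields
  return view(values, offset+1:offset+ndofs_per_field[field])
end

# Example: velocity (4 DoFs) followed by pressure (2 DoFs) in one vector
free_values = collect(1.0:6.0)
p_values = restrict_to_field(ConsecutiveFieldStyle(), free_values, [4, 2], 2)
@assert p_values == [5.0, 6.0]
```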
