Merge pull request #34 from gridap/documentation
Documentation fixes
pmartorell authored Jul 4, 2024
2 parents 37d3644 + 271bb28 commit 47751c1
Showing 4 changed files with 12 additions and 10 deletions.
9 changes: 5 additions & 4 deletions docs/src/distributed.md
@@ -2,9 +2,9 @@

## Introduction

- When dealing with large-scale problems, this package can be accelerated through two types of parallelization. The first one is multi-threading, which uses [Julia Threads](https://docs.julialang.org/en/v1/base/multi-threading/) for shared memory parallelization (e.g., `julia -t 4`). This method, adds some speed-up. However, it is only efficient for a reduced number of threads.
+ When dealing with large-scale problems, this package can be accelerated through two types of parallelization. The first one is multi-threading, which uses [Julia Threads](https://docs.julialang.org/en/v1/base/multi-threading/) for shared memory parallelization (e.g., `julia -t 4`). This method adds some speed-up. However, it is only efficient for a small number of threads.
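For example, once Julia is launched with `julia -t 4`, the thread count can be checked directly (a quick sanity check, not part of the package API):

```julia
# Confirm how many threads this Julia session was started with
using Base.Threads
@show nthreads()  # expected to print 4 for `julia -t 4`
```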

- The second one is a distributed memory computing. For such parallelization, we use MPI (`mpiexec -np 4 julia input.jl`) through [`PartitionedArrays`](https://www.francescverdugo.com/PartitionedArrays.jl/stable) and [`GridapDistributed`](https://gridap.github.io/GridapDistributed.jl/dev/). With MPI expect to efficiently compute large-scale problems, up to thousands of cores.
+ The second one is distributed memory computing. For such parallelization, we use MPI (`mpiexec -np 4 julia input.jl`) through [`PartitionedArrays`](https://www.francescverdugo.com/PartitionedArrays.jl/stable) and [`GridapDistributed`](https://gridap.github.io/GridapDistributed.jl/dev/). With MPI we can solve large-scale problems efficiently, scaling up to thousands of cores.
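For reference, a minimal sketch of the `PartitionedArrays` MPI driver pattern used throughout these docs (file and variable names are illustrative; run with `mpiexec -np 4 julia hello.jl`):

```julia
using PartitionedArrays

nparts = 4  # must match the -np argument of mpiexec
with_mpi() do distribute
  # Distribute the linear rank indices over the MPI processes
  ranks = distribute(LinearIndices((nparts,)))
  map(ranks) do rank
    println("Hello from rank $rank of $nparts")
  end
end
```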

Additionally, the distributed memory implementation is built on top of [GridapEmbedded](https://github.com/gridap/GridapEmbedded.jl). GridapEmbedded provides parallelization tools since [v0.9.2](https://github.com/gridap/GridapEmbedded.jl/releases/tag/v0.9.2).

@@ -37,6 +37,7 @@ Where `poisson.jl` is the following code.

```julia
using STLCutters
+ using Gridap
using GridapEmbedded
using GridapDistributed
using PartitionedArrays
@@ -47,7 +48,7 @@ cells = (10,10,10)
filename = "stl_file_path.stl"

with_mpi() do distribute
- ranks = distribute(LinearIndices((prod(parts))))
+ ranks = distribute(LinearIndices((prod(parts),)))
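# Note on the fix above: the trailing comma matters in Julia —
# `(prod(parts),)` is a one-element tuple, while `(prod(parts))` is just
# the scalar `prod(parts)`, so the old line did not pass a tuple of
# dimensions to `LinearIndices`.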
# Domain and discretization
geo = STLGeometry(filename)
pmin,pmax = get_bounding_box(geo)
@@ -64,7 +65,7 @@ with_mpi() do distribute
dΓ = Measure(Γ,2)
# FE spaces
Vstd = TestFESpace(Ω_act,ReferenceFE(lagrangian,Float64,1))
- V = AgFEMSpace(Vstd)
+ V = AgFEMSpace(model,Vstd,aggregates)
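# Note: the distributed `AgFEMSpace` also takes the background model; the
# `aggregates` are computed earlier, typically via
# `aggregate(AggregateAllCutCells(),cutgeo)` in GridapEmbedded.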
U = TrialFESpace(V)
# Weak form
γ = 10.0
7 changes: 4 additions & 3 deletions docs/src/usage.md
@@ -27,7 +27,7 @@ filename = download_thingi10k(293137)

## Discretization

- Since STLCutters is an extension of [GridapEmbedded](https://github/gridap/GridapEmbedded.jl) it utilizes the same workflow to solve PDEs on embedded domains. In particular, STLCutters extends the [`cut`](@ref STLCutters.cut) function from GridapEmbedded.
+ Since STLCutters is an extension of [GridapEmbedded](https://github.com/gridap/GridapEmbedded.jl), it utilizes the same workflow to solve PDEs on embedded domains. In particular, STLCutters extends the [`cut`](@ref STLCutters.cut) function from GridapEmbedded.
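A minimal sketch of that workflow, assembled from the calls shown elsewhere on this page (the STL path is a placeholder):

```julia
using STLCutters
using Gridap
using GridapEmbedded

geo = STLGeometry("stl_file_path.stl")               # load the STL surface
pmin,pmax = get_bounding_box(geo)                    # bounding box of the STL
model = CartesianDiscreteModel(pmin,pmax,(10,10,10)) # background mesh
cutgeo = cut(model,geo)                              # STLCutters' cut
```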

We load the STL file with an [`STLGeometry`](@ref) object. E.g.,

@@ -54,7 +54,7 @@ cutgeo = cut(model,geo)

## Usage with Gridap

- Once, the geometry is discretized one can generate the embedded triangulations to solve PDEs with [Gridap](https://github.com/gridap/Gridap.jl), see also [Gridap Tutorials](https://gridap.github.io/Tutorials/stable).
+ Once the geometry is discretized, one can generate the embedded triangulations to solve PDEs with [Gridap](https://github.com/gridap/Gridap.jl); see also [Gridap Tutorials](https://gridap.github.io/Tutorials/stable).

Like in GridapEmbedded, we extract the embedded triangulations as follows.
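The extraction code is collapsed in this diff; it follows the standard GridapEmbedded pattern, roughly:

```julia
Ω_act = Triangulation(cutgeo,ACTIVE)  # active part of the background mesh
Ω = Triangulation(cutgeo,PHYSICAL)    # physical embedded domain
Γ = EmbeddedBoundary(cutgeo)          # embedded boundary
```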

@@ -71,6 +71,7 @@ Now, we provide an example of the solution of a Poisson problem on the embedded

```julia
using STLCutters
+ using Gridap
using GridapEmbedded
cells = (10,10,10)
filename = "stl_file_path.stl"
@@ -90,7 +91,7 @@ dΩ = Measure(Ω,2)
dΓ = Measure(Γ,2)
# FE spaces
Vstd = TestFESpace(Ω_act,ReferenceFE(lagrangian,Float64,1))
- V = AgFEMSpace(Vstd)
+ V = AgFEMSpace(Vstd,aggregates)
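# Note: `aggregates` are obtained beforehand, typically via
# `aggregate(AggregateAllCutCells(),cutgeo)`; passing them lets the
# aggregated space constrain badly cut cells and improve conditioning.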
U = TrialFESpace(V)
# Weak form
γ = 10.0
2 changes: 1 addition & 1 deletion src/Distributed.jl
@@ -344,7 +344,7 @@ end
The number of interfaces coincides with the number of neighbors given by [`compute_face_neighbors`](@ref).
!!! note
- If the subdomain is not cut, the neighbors are considered undefined.
+     If the subdomain is not cut, the neighbors are considered undefined.
"""
function compute_face_neighbor_to_inoutcut(
model::DiscreteModel,
4 changes: 2 additions & 2 deletions src/STLs.jl
@@ -1276,8 +1276,8 @@ end
each cell in `a::UnstructuredGrid`. The output is a vector of vectors.
!!! note
- This function uses a `CartesianGrid` internally. It is not optimized
- for higly irregular grids.
+     This function uses a `CartesianGrid` internally. It is not optimized
+     for highly irregular grids.
!!! note
This function allows **false positives**.
