[BUG]: gmshio causes kernel crash if entities are tagged several times #3332
@pierricmora I don’t think we can support the behavior that you are suggesting without adding some very expensive checks to the gmshio readers. One would have to create the intersection of the Physical Surfaces tagged by Gmsh. You can do this by modifying https://github.com/FEniCS/dolfinx/blob/main/python/dolfinx/io/gmshio.py#L278 so that it uses the unique set of rows in `cells`. (See dolfinx/cpp/dolfinx/mesh/MeshTags.h, line 35, at commit 8e4122b.)
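A minimal sketch of the kind of row deduplication this would require (the variable names mirror those used in gmshio, but this is an illustration, not the actual implementation):

```python
import numpy as np

# Toy stand-ins for gmshio's internal arrays: cell connectivity and one tag per cell.
# The last row duplicates the first one, as happens when a cell carries two tags.
cells = np.array([[0, 1, 2], [1, 2, 3], [0, 1, 2]], dtype=np.int64)
cell_values = np.array([1, 1, 2], dtype=np.int32)

# Unique rows of `cells`; return_index gives, for each unique row, an index into
# the original array, so exactly one tag per cell is kept.
unique_cells, idx = np.unique(cells, axis=0, return_index=True)
unique_values = cell_values[idx]

print(unique_cells)
print(unique_values)
```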
I think that the problem comes from the fact that the cells are read based on the physical groups, rather than by directly asking Gmsh to return them. I mean: in `extract_topology_and_markers`, could the for loop be reversed, by first reading all cells regardless of tags with `model.mesh.getElements`?
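For concreteness, a minimal sketch of what "reading all cells regardless of tags" could look like with the Gmsh Python API (the rectangle is only a placeholder geometry, not the model from this issue):

```python
import gmsh

gmsh.initialize()
gmsh.model.add("demo")
gmsh.model.occ.addRectangle(0, 0, 0, 1, 1)
gmsh.model.occ.synchronize()
gmsh.model.mesh.generate(2)

# dim=2, tag=-1 returns every generated 2D element,
# whether or not it belongs to a physical group.
element_types, element_tags, node_tags = gmsh.model.mesh.getElements(dim=2, tag=-1)
print(len(element_tags[0]), "cells read without consulting physical groups")

gmsh.finalize()
```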
You can write your own wrapper for this if you would like to; since gmshio uses the GMSH Python API, you can create your own custom reader. Adding all cells regardless of tags is not necessarily a good idea, as GMSH usually only generates cells/facets/edges for Physical entities, to avoid duplicated cells (as in your case). From section 1.2.3 of https://gmsh.info/doc/texinfo/gmsh.html:
As I previously stated, the approach I suggested above would be preferable from my point of view.
Ok, yes, that makes sense given the Gmsh doc. In my case `model.mesh.getElements` does produce the same output regardless of the way I have defined - or not - physical groups, so I thought it could be the stable way to go. But indeed, given what's stated in the doc, maybe not, so I think I'll follow your advice. This is a pity, because I find it so much more convenient to work with overlapping or incomplete tags.
@jorgensd Then, what about adding these simple checks and error messages to avoid crashes and give the user a hint on what is expected? The following seems to work on my side. In `model_to_mesh`:

```python
# Sort elements by ascending dimension
perm_sort = np.argsort(cell_dimensions)

# Check that all cells are tagged once
_d = model.getDimension()
assert _d in topologies.keys(), "All cells are expected to be tagged once; none found"
_elementTypes, _elementTags, _nodeTags = model.mesh.getElements(dim=_d, tag=-1)
# assert len(_elementTypes) == 1  # only one type of elements; already checked in extract_topology_and_markers
nbcells = len(_elementTags[0])
nbcells_tagged = len(topologies[_d]["topology"])
nbcells_tagged_once = len(np.unique(topologies[_d]["topology"], axis=0))
assert nbcells == nbcells_tagged, f"All cells are expected to be tagged once; found: {nbcells_tagged}, expected: {nbcells}"
assert nbcells_tagged == nbcells_tagged_once, "All cells are expected to be tagged once; found duplicates"

# Broadcast cell type data and geometric dimension
```
The only thing that worries me here is that the call to `np.unique(..., axis=0)` can be expensive:

```python
import numpy as np
from time import perf_counter

N = 1_000_000
M = 8
a = np.arange(N * M).reshape(N, M).astype(np.int64)
b = np.vstack([a, a, a]).astype(np.int64)

start = perf_counter()
unique_b = np.unique(b, axis=0)
end = perf_counter()

print(end - start)
print(f"{unique_b.shape=}, {a.shape=}, {b.shape=}")
```

yielding

```
1.6669321240005956
unique_b.shape=(1000000, 8), a.shape=(1000000, 8), b.shape=(3000000, 8)
```

It would be interesting to see how this modification changes the runtime of "standard" problems. Feel free to make a pull request, and test some standard meshes.
By playing with N on your snippet I get a linear behaviour for the time spent in `np.unique`. Then, I extended my example by adding a `mute_assert` keyword argument to `model_to_mesh`, so that the same model can be converted with and without the proposed checks. Below is the code I run:

```python
mesh_size_max = 0.01

# Create model
start = perf_counter()
model = gmsh_rect_ellipse(mode=3, mesh_size_max=mesh_size_max, show=False)
end = perf_counter()
gen_model_time = end - start

# model_to_mesh, original
start = perf_counter()
mesh, ct, ft = model_to_mesh(model, MPI.COMM_WORLD, rank=0, gdim=2, mute_assert=True)
end = perf_counter()
model_to_mesh_woutAssert_time = end - start

# model_to_mesh with checks
start = perf_counter()
mesh, ct, ft = model_to_mesh(model, MPI.COMM_WORLD, rank=0, gdim=2, mute_assert=False)
end = perf_counter()
model_to_mesh_withAssert_time = end - start

# say hello
print(f'mesh_size_max: {mesh_size_max},\n' + \
      f'number of cells: {len(model.mesh.getElements(dim=2, tag=-1)[1][0])},\n' + \
      f'model generation time (Gmsh): {gen_model_time:.3f}s,\n' + \
      f'model_to_mesh time, original version: {model_to_mesh_woutAssert_time:.3f}s,\n' + \
      f'model_to_mesh time, with checks: {model_to_mesh_withAssert_time:.3f}s,\n' + \
      f'overcost due to checks: {(model_to_mesh_withAssert_time - model_to_mesh_woutAssert_time) / model_to_mesh_woutAssert_time * 100:.1f}%')
```

Outputs were obtained for mesh_size_max = 0.005, 0.0025, 0.00125 and 0.000625; the detailed timings are not reproduced here, but the overhead due to the checks came out around 10%.
I would be fine with this. 10% overhead isn’t too bad, as one should preferably only use this wrapper once, and not on HPC systems, since Gmsh only writes serial files and `gmsh.model` doesn’t do MPI.
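For reference, a rough sketch of that one-shot workflow (untested here; the file name, tag names and the 0.8-style `XDMFFile` calls are assumptions): convert the Gmsh model once on a single rank, write the mesh and tags to XDMF, and read the XDMF file in parallel afterwards instead of calling Gmsh again.

```python
from mpi4py import MPI
from dolfinx.io import XDMFFile, gmshio

# `model` is assumed to be an already-built gmsh model with physical groups.
# One-off conversion, with the model living on rank 0 (Gmsh itself is serial).
mesh, ct, ft = gmshio.model_to_mesh(model, MPI.COMM_WORLD, rank=0, gdim=2)
ct.name = "cell_tags"
ft.name = "facet_tags"

with XDMFFile(mesh.comm, "mesh.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh)
    mesh.topology.create_connectivity(mesh.topology.dim - 1, mesh.topology.dim)
    xdmf.write_meshtags(ct, mesh.geometry)
    xdmf.write_meshtags(ft, mesh.geometry)

# Later runs (possibly on many MPI ranks) read the XDMF file instead of Gmsh.
with XDMFFile(MPI.COMM_WORLD, "mesh.xdmf", "r") as xdmf:
    mesh = xdmf.read_mesh(name="mesh")
    ct = xdmf.read_meshtags(mesh, name="cell_tags")
    mesh.topology.create_connectivity(mesh.topology.dim - 1, mesh.topology.dim)
    ft = xdmf.read_meshtags(mesh, name="facet_tags")
```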
Summarize the issue
It seems to me that gmshio requires that all facets are assigned to a single physical group. When facets get assigned several tags, I get a kernel crash in Jupyter Lab.
I tracked the line causing the kernel crash down to create_mesh in model_to_mesh.
How to reproduce the bug
The following 2D example represents an elliptic inclusion inside a rectangular matrix. Another, smaller rectangle represents a region where I would like to assign a source term, without it being linked to different physical properties. The source region overlaps the inclusion; therefore the natural way to define tags is with overlapping groups (matrix, inclusion, source), as sketched below:
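(Illustrative sketch only; the exact dimensions and tag numbers of the original model are not shown in this thread, so the values below are placeholders.)

```python
import gmsh

gmsh.initialize()
gmsh.model.add("rect_ellipse")

# Placeholder geometry: rectangular matrix, elliptic inclusion, small source rectangle
matrix = gmsh.model.occ.addRectangle(0.0, 0.0, 0.0, 2.0, 1.0)
inclusion = gmsh.model.occ.addDisk(1.0, 0.5, 0.0, 0.3, 0.15)
source = gmsh.model.occ.addRectangle(0.9, 0.4, 0.0, 0.4, 0.3)

# Fragment so that overlapping regions become separate surfaces;
# out_map[k] lists the fragments produced from the k-th input entity.
_, out_map = gmsh.model.occ.fragment([(2, matrix)], [(2, inclusion), (2, source)])
gmsh.model.occ.synchronize()

# "Natural" (overlapping) tags: every surface belongs to the matrix group,
# and surfaces inside the inclusion/source also belong to their own groups,
# so some cells end up tagged more than once.
gmsh.model.addPhysicalGroup(2, [s[1] for s in out_map[0]], tag=1)  # matrix (everything)
gmsh.model.addPhysicalGroup(2, [s[1] for s in out_map[1]], tag=2)  # inclusion
gmsh.model.addPhysicalGroup(2, [s[1] for s in out_map[2]], tag=3)  # source (overlaps the inclusion)

gmsh.model.mesh.generate(2)
```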
For now, I can work around the bug by using more cumbersome, non-overlapping tags (a sketch continuing the one above):
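(Again a hedged sketch, reusing the `out_map` from the fragment call above: the groups are made disjoint so that every cell carries exactly one tag, here by letting the inclusion win in the overlap region.)

```python
matrix_parts = {s[1] for s in out_map[0]}
inclusion_parts = {s[1] for s in out_map[1]}
source_parts = {s[1] for s in out_map[2]}

# Disjoint groups: each surface appears in exactly one physical group.
gmsh.model.addPhysicalGroup(2, sorted(matrix_parts - inclusion_parts - source_parts), tag=1)
gmsh.model.addPhysicalGroup(2, sorted(inclusion_parts), tag=2)
gmsh.model.addPhysicalGroup(2, sorted(source_parts - inclusion_parts), tag=3)
```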
Minimal Example (Python)
Output (Python)
No response
Version
0.8.0
DOLFINx git commit
5a20e2b
Installation
Docker image on an Ubuntu computer
Additional information
No response