
Add remeshing #1

Closed
ktbolt opened this issue Jan 11, 2023 · 15 comments

ktbolt commented Jan 11, 2023

Add remeshing for fluid-solid simulations.

For now I will just convert the current Fortran remeshing implementation to C++.

ktbolt added the enhancement label Jan 11, 2023
ktbolt self-assigned this Jan 11, 2023

ktbolt commented Jan 12, 2023

I found a couple of bugs in the skewness() and calc_elem_ar() functions used to compute the mesh properties (e.g. the element Jacobian) that determine if remeshing is needed. After fixing those bugs the computed mesh properties match the Fortran results.
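
For context, a minimal sketch of one such property, the Jacobian (proportional to the signed volume) of a linear tetrahedron; the function name and interface here are illustrative, not the actual svFSIplus API:

    // Illustrative sketch only: Jacobian (6 * signed volume) of a linear
    // tetrahedron computed from its four nodal coordinates. A small or negative
    // value indicates a distorted element, the kind of quantity the mesh
    // quality checks compare against a threshold to decide if remeshing is needed.
    #include <array>

    double tet_jacobian(const std::array<std::array<double,3>,4>& x)
    {
      // Edge vectors from node 0 to nodes 1, 2 and 3.
      double e1[3], e2[3], e3[3];
      for (int i = 0; i < 3; i++) {
        e1[i] = x[1][i] - x[0][i];
        e2[i] = x[2][i] - x[0][i];
        e3[i] = x[3][i] - x[0][i];
      }

      // Scalar triple product e1 . (e2 x e3).
      return e1[0]*(e2[1]*e3[2] - e2[2]*e3[1])
           - e1[1]*(e2[0]*e3[2] - e2[2]*e3[0])
           + e1[2]*(e2[0]*e3[1] - e2[1]*e3[0]);
    }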

I added a while loop to the C++ main simulation code to reproduce the Fortran GOTO 101 that restarts the simulation when remeshing is needed.
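
A minimal, runnable sketch of that control flow, using hypothetical stand-in functions (setup_simulation, run_time_stepping, remesh) rather than the actual svFSIplus code:

    // Illustrative control-flow sketch: the Fortran "GOTO 101" restart becomes
    // a while loop that re-enters the setup/solve phase whenever remeshing is
    // requested.
    #include <iostream>

    // Hypothetical stand-ins for the real setup, time-stepping and remeshing code.
    static void setup_simulation()  { std::cout << "read input, distribute mesh\n"; }
    static bool run_time_stepping() { std::cout << "time-step loop\n"; return false; }
    static void remesh()            { std::cout << "call the TetGen remesher\n"; }

    int main()
    {
      bool reset_sim = true;   // plays the role of the Fortran resetSim flag

      while (reset_sim) {
        setup_simulation();               // (re)read the input, partition the mesh
        reset_sim = run_time_stepping();  // returns true if remeshing was triggered
        if (reset_sim) {
          remesh();                       // build the new mesh before restarting
        }
      }
      return 0;
    }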


ktbolt commented Jan 13, 2023

I've added a Remesher XML element used to set the parameters needed for remeshing:

   <Remesher type="Tetgen" >
     <Max_edge_size name="lumen" value="0.7"> </Max_edge_size>
     <Max_edge_size name="wall"  value="0.5"> </Max_edge_size>
     <Min_dihedral_angle> 10.0 </Min_dihedral_angle>
     <Max_radius_ratio> 1.1 </Max_radius_ratio>
     <Remesh_frequency> 1000 </Remesh_frequency>
     <Frequency_for_copying_data> 1 </Frequency_for_copying_data>
   </Remesher>


ktbolt commented Feb 6, 2023

The initial part of the TetGen meshing seems to be working, creating the same number of elements and nodes.


ktbolt commented Mar 10, 2023

I've converted an svFSI test provided by @fgerosa that triggers remeshing.

svFSI and svFSIplus were giving different intermediate results; I tracked this down to the mat_inv_ge() function, which sometimes produces NaNs and incorrect results.

I rewrote mat_inv_ge() and svFSI and svFSIplus now produce the same intermediate results.
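
For reference, a minimal sketch of Gauss-Jordan inversion with partial pivoting, the standard way to avoid the zero or tiny pivots that produce NaNs; this illustrates the technique and is not the actual mat_inv_ge() implementation:

    #include <cmath>
    #include <stdexcept>
    #include <utility>
    #include <vector>

    using Matrix = std::vector<std::vector<double>>;

    // Invert a square matrix A by Gauss-Jordan elimination with partial pivoting.
    Matrix invert(Matrix A)
    {
      const int n = static_cast<int>(A.size());

      // Start from the identity matrix; it is transformed into the inverse.
      Matrix Inv(n, std::vector<double>(n, 0.0));
      for (int i = 0; i < n; i++) Inv[i][i] = 1.0;

      for (int col = 0; col < n; col++) {
        // Partial pivoting: pick the row with the largest |entry| in this column.
        int pivot = col;
        for (int row = col + 1; row < n; row++) {
          if (std::fabs(A[row][col]) > std::fabs(A[pivot][col])) pivot = row;
        }
        if (std::fabs(A[pivot][col]) < 1.0e-14) {
          throw std::runtime_error("matrix is singular or nearly singular");
        }
        std::swap(A[col], A[pivot]);
        std::swap(Inv[col], Inv[pivot]);

        // Normalize the pivot row.
        const double p = A[col][col];
        for (int j = 0; j < n; j++) { A[col][j] /= p; Inv[col][j] /= p; }

        // Eliminate this column from every other row.
        for (int row = 0; row < n; row++) {
          if (row == col) continue;
          const double f = A[row][col];
          for (int j = 0; j < n; j++) {
            A[row][j] -= f * A[col][j];
            Inv[row][j] -= f * Inv[col][j];
          }
        }
      }
      return Inv;
    }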


ktbolt commented Mar 17, 2023

The remeshing code seems to work for the above test.

The challenge now is to restart the simulation with the new mesh. The Fortran code essentially starts a new simulation, reading the .inp file again, distributing the mesh, etc., checking the global resetSim variable to modify the flow of control.


ktbolt commented Mar 23, 2023

Remeshing is sort of working; I can run through the entire simulation with five remeshing steps.

The Fortran and C++ meshes match after the first remesh step but the next one produces a mesh that has a single node that does not match (shown as red and green spheres in the following image).

[Screenshot: Screen Shot 2023-03-23 at 10 11 08 AM]


ktbolt commented Mar 25, 2023

I've fixed a couple of indexing bugs and some misinterpreted logic.

I found another inconsistency between the codes in checking if a point is in an element, which uses a matrix inverse to compute the local element coordinates of a point. The Fortran code uses tol=1e-14 to check if the local element coordinates are in [0.0, 1.0], but this is too small for the C++ code; changing it to 1e-8 makes the codes produce the same results.
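
A minimal sketch of that containment test, assuming the local element coordinates xi have already been computed; the names and interface are illustrative, not the actual svFSIplus code:

    #include <vector>

    // Illustrative sketch: a point is considered inside the element if every
    // local coordinate lies in [0, 1] to within a small tolerance. tol = 1e-14
    // worked in Fortran but 1e-8 was needed for the C++ code.
    bool point_in_element(const std::vector<double>& xi, double tol = 1.0e-8)
    {
      for (double c : xi) {
        if (c < -tol || c > 1.0 + tol) {
          return false;   // outside [0, 1] beyond the tolerance
        }
      }
      return true;
    }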

The Fortran and C++ codes now produce about the same results.

I will next try remeshing for a parallel simulation.


ktbolt commented Apr 24, 2023

Parallel remeshing is now working for the test I am using, after fixing a couple of indexing problems and a bug in reading/writing mesh partitions.

It took a long time to find an unintentional use of the (i) Array operator in the statement

tempX(i,a) = x(i,Ac) + rmsh.D0(i+nsd+1);

which should have been

tempX(i,a) = x(i,Ac) + rmsh.D0(i+nsd+1,Ac);

This operator was added for certain element-by-element array operations in the linear solver. I will review and maybe remove it.
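
A minimal sketch of how such a bug slips through, using a simplified stand-in for the real Array class; the single-argument operator() indexes the flat data buffer, so dropping the second index still compiles but reads the wrong entry:

    // Illustrative sketch (simplified stand-in for the real Array class): a
    // single-argument operator() that indexes the flat storage compiles cleanly
    // where the two-argument (row, column) form was intended, silently accessing
    // the wrong entry.
    #include <iostream>
    #include <vector>

    class Array2 {
     public:
      Array2(int nrows, int ncols)
        : nrows_(nrows), data_(static_cast<size_t>(nrows) * ncols, 0.0) {}

      // Intended accessor: column-major (i, j) indexing.
      double& operator()(int i, int j) { return data_[j * nrows_ + i]; }

      // Flat accessor added for element-by-element operations; easy to call
      // by mistake when the second index is forgotten.
      double& operator()(int i) { return data_[i]; }

     private:
      int nrows_;
      std::vector<double> data_;
    };

    int main()
    {
      Array2 D0(3, 4);
      D0(2, 1) = 42.0;                // set entry (i=2, j=1)

      std::cout << D0(2, 1) << "\n";  // 42: correct two-index access
      std::cout << D0(2) << "\n";     // 0: flat access reads element 2 of the buffer, not (2,1)
      return 0;
    }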


ktbolt commented Apr 27, 2023

Enabling remeshing for an FSI simulation with a solid and fluid mesh (svFSIplus-Tests/07-fsi/ale/03-pipe_3D) causes svFSIplus to fail when running in parallel. The crash is in the code that checks for mesh distortion and not in the actual remeshing code.


ktbolt commented Apr 27, 2023

Printing out the number of elements (lM.nEl) of the mesh passed to the calc_elem_jac() code, I see

[calc_elem_jac:0] lM.nEl: 0
[calc_elem_jac:0] lM.nEl: 85220

[calc_elem_jac:1] lM.nEl: 62539
[calc_elem_jac:1] lM.nEl: 0

[calc_elem_jac:2] lM.nEl: 71066
[calc_elem_jac:2] lM.nEl: 0

[calc_elem_jac:3] lM.nEl: 71100
[calc_elem_jac:3] lM.nEl: 0

The calc_elem_jac() code allocates a vector Jac as such: Vector<double> Jac(lM.nEl);. The curse of the 0-size array has returned!

I've modified calc_elem_jac(), calc_elem_skew() and calc_elem_ar() to check for 0-size arrays and deal with them in a dignified manner. This fixes the code failures there but memory is still being corrupted somewhere, causing memory faults in MPI and other places.
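
A minimal sketch of the kind of guard added, assuming the function receives the number of elements the current rank owns (illustrative, not the exact svFSIplus code):

    #include <vector>

    // Illustrative sketch of the 0-size guard: on a rank that owns no elements
    // of a given mesh, return before allocating a zero-length Jacobian array
    // and working with it.
    void calc_elem_jac_sketch(int nEl)
    {
      if (nEl == 0) {
        return;                        // nothing to check on this rank
      }

      std::vector<double> Jac(nEl);    // safe: nEl > 0
      // ... compute the per-element Jacobians and their min/max ...
    }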


ktbolt commented Apr 28, 2023

I've fixed a bug in calc_mesh_props() that was causing MPI to fail: it was using mpint for the bool rmsh_flag array. I changed

MPI_Allgather(rmsh_flag, nMesh, cm_mod::mpint, gFlag.data(), nMesh, cm_mod::mpint, com_mod.cm.com());

to

MPI_Allgather(rmsh_flag, nMesh, cm_mod::mplog, gFlag.data(), nMesh, cm_mod::mplog, com_mod.cm.com());


ktbolt commented Apr 29, 2023

I've coded around more 0-size array issues and fixed an indexing bug.

The 4-process parallel remeshing now works for the most excellent FSI problem provided to me by @fgerosa.

[Screenshot: Screen Shot 2023-04-28 at 9 10 42 PM]

However, the Fortran and C++ remeshing does not produce identical meshes; they differ by a few tens of nodes. I think that is OK but I will have a look to see what might be causing it.


ktbolt commented May 1, 2023

The remeshing does produce the same meshes for this simulation.

ktbolt mentioned this issue May 8, 2023

ktbolt commented May 8, 2023

I have made a pull request for the new code.

Note that this code will not match the Fortran results until SimVascular/svFSI#105 is resolved.


ktbolt commented May 19, 2023

Merged into main.

ktbolt closed this as completed May 19, 2023
yuecheng-yu mentioned this issue Mar 13, 2024