Replaced MPI_Init with MPI_Init_thread to avoid deadlocks from Scotch
Issue/feature addressed
When a mesh in HDF5 format was read in parallel, the IncNavierStokesSolver executable hung during the partitioning step performed by Scotch.
Identified backtrace:
SCOTCH_CALL(SCOTCH_dgraphPart, (&scGraph, nparts, &strat, &part[0]));
library/SpatialDomains/MeshPartitionPTScotch.cpp:104
library/SpatialDomains/MeshPartition.cpp:682
Test-case:
solvers/IncNavierStokesSolver/Tests/ChannelFlow_3D.xml
Environment:
- HDF5 1.12.0
- Scotch 6.0.6
Cause:
Scotch was built with multi-threading support, while the MPI communication was initialized with MPI_Init. The Scotch documentation recommends initializing MPI with MPI_Init_thread() instead of MPI_Init() when Scotch is compiled with threading enabled, in order to avoid deadlocks caused by insufficient thread support.
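For reference, a minimal standalone diagnostic (a sketch, not part of the solver) that reports the thread-support level the MPI library actually provides; when MPI is started with plain MPI_Init, the provided level is implementation-defined and is often too low for a multi-threaded Scotch:

```cpp
#include <mpi.h>
#include <cstdio>

// Hypothetical diagnostic: print the thread-support level provided by the
// MPI library after a plain MPI_Init, for comparison with MPI_THREAD_MULTIPLE.
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int provided = 0;
    MPI_Query_thread(&provided);
    std::printf("Provided thread level: %d (MPI_THREAD_MULTIPLE = %d)\n",
                provided, MPI_THREAD_MULTIPLE);

    MPI_Finalize();
    return 0;
}
```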
Proposed solution
Change the initialization of the MPI communication to MPI_Init_thread, following the guidelines in https://github.com/poulson/scotch/blob/master/INSTALL.txt
Implementation
Replaced MPI_Init(&narg, &arg) with MPI_Init_thread(&narg, &arg, MPI_THREAD_MULTIPLE, &thread_support) when initializing the MPI communication.
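A minimal sketch of the change in isolation; the surrounding function is a placeholder, not the actual Nektar++ initialization code, and thread_support mirrors the variable named above:

```cpp
#include <mpi.h>

// Placeholder wrapper illustrating the replacement described above.
void InitialiseMpi(int &narg, char **&arg)
{
    // Before: MPI_Init(&narg, &arg);
    // After: request full thread support, as recommended by the Scotch
    // install notes, and record the level actually granted.
    int thread_support = 0;
    MPI_Init_thread(&narg, &arg, MPI_THREAD_MULTIPLE, &thread_support);

    // Optional sanity check: a threaded Scotch may still deadlock if the
    // MPI library cannot provide MPI_THREAD_MULTIPLE.
    if (thread_support < MPI_THREAD_MULTIPLE)
    {
        // Report or handle the reduced thread-support level here.
    }
}
```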
Tests
Suggested reviewers
Notes
Similar issue reported in a different solver: https://develop.openfoam.com/Development/openfoam/-/issues/2791
Checklist
- Functions and classes, or changes to them, are documented.
- User guide/documentation is updated.
- Changelog is updated.
- Suitable tests added for new functionality.
- Contributed code is correctly formatted. (See the contributing guidelines).
- License added to any new files.
- No extraneous files have been added (e.g. compiler output or test data files).