waLBerla 7.2
walberla::mpi::MPIManager Class Reference

Detailed Description

Encapsulates MPI Rank/Communicator information.

Every process has two ranks/communicators:

World: This communicator/rank is valid after calling activateMPI, usually at the beginning of the program. This communicator never changes.

Custom: Can be adapted to the block structure. During block structure setup, either a Cartesian setup has to be chosen using createCartesianComm() or the world communicator has to be used: useWorldComm().

#include <MPIManager.h>
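
A minimal usage sketch (it assumes the singleton accessor MPIManager::instance() and the header path core/mpi/MPIManager.h; error handling is omitted):

#include "core/mpi/MPIManager.h"

int main(int argc, char** argv)
{
   auto manager = walberla::mpi::MPIManager::instance();

   // The world communicator/rank become valid here; rank() and comm() do not.
   manager->initializeMPI(&argc, &argv);

   // Choose the custom communicator: either reuse MPI_COMM_WORLD ...
   manager->useWorldComm();
   // ... or create a Cartesian communicator instead, e.g.
   // manager->createCartesianComm(2, 2, 1);

   int r = manager->rank(); // valid only after one of the two calls above
   (void) r;

   manager->finalizeMPI();
   return 0;
}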

Inheritance diagram for walberla::mpi::MPIManager (not shown)

Public Member Functions

 ~MPIManager ()
 
void initializeMPI (int *argc, char ***argv, bool abortOnException=true)
 Configures the class and initializes numProcesses and worldRank; the rank and comm variables remain invalid until a custom communicator is set up.
 
void finalizeMPI ()
 
void resetMPI ()
 
void abort ()
 
Cartesian Communicator
void createCartesianComm (const std::array< int, 3 > &, const std::array< int, 3 > &)
 
void createCartesianComm (const uint_t xProcesses, const uint_t yProcesses, const uint_t zProcesses, const bool xPeriodic=false, const bool yPeriodic=false, const bool zPeriodic=false)
 
void cartesianCoord (std::array< int, 3 > &coordOut) const
 Cartesian coordinates of own rank.
 
void cartesianCoord (int rank, std::array< int, 3 > &coordOut) const
 Cartesian coordinates of given rank.
 
int cartesianRank (std::array< int, 3 > &coords) const
 Translates Cartesian coordinates to a rank.
 
int cartesianRank (const uint_t x, const uint_t y, const uint_t z) const
 Translates Cartesian coordinates to a rank.
 
World Communicator
void useWorldComm ()
 
Getter Function
int worldRank () const
 
int numProcesses () const
 
int rank () const
 
MPI_Comm comm () const
 
uint_t bitsNeededToRepresentRank () const
 
bool isMPIInitialized () const
 
bool hasCartesianSetup () const
 
bool rankValid () const
 Rank is valid after calling createCartesianComm() or useWorldComm().
 
bool hasWorldCommSetup () const
 
bool isCommMPIIOValid () const
 Indicates whether MPI-IO can be used with the current MPI communicator; certain versions of Open MPI produce segmentation faults when using MPI-IO with a 3D Cartesian MPI communicator (see waLBerla issue #73).
 
template<typename CType >
MPI_Datatype getCustomType () const
 Returns the custom MPI_Datatype stored in 'customMPITypes_', which was defined by the user and passed to 'commitCustomType'.
 
template<typename CType >
MPI_Op getCustomOperation (mpi::Operation op) const
 Returns the custom MPI_Op stored in 'customMPIOperations_', which was defined by the user and passed to 'commitCustomOperation'.
 

Public Attributes

 WALBERLA_BEFRIEND_SINGLETON
 

Private Members

int worldRank_ {0}
 Rank in MPI_COMM_WORLD.
 
int rank_ {-1}
 Rank in the custom communicator.
 
int numProcesses_ {1}
 Total number of processes.
 
MPI_Comm comm_
 Use this communicator for all MPI calls. It is in general not equal to MPI_COMM_WORLD and may change during domain setup, when a custom communicator adapted to the domain is created.
 
bool isMPIInitialized_ {false}
 Indicates whether initializeMPI has been called. If true, MPI_Finalize is called upon destruction.
 
bool cartesianSetup_ {false}
 Indicates whether a Cartesian communicator has been created.
 
bool currentlyAborting_ {false}
 
bool finalizeOnDestruction_ {false}
 
std::map< std::type_index, walberla::mpi::Datatype > customMPITypes_ {}
 It is possible to commit custom datatypes to MPI that are not part of the standard.
 
std::map< walberla::mpi::Operation, walberla::mpi::MPIOperation > customMPIOperations_ {}
 
template<typename CType , class ConstructorArgumentType >
void commitCustomType (ConstructorArgumentType &argument)
 Initializes a custom MPI_Datatype and logs it in the customMPITypes_ map.
 
template<typename CType >
void commitCustomOperation (mpi::Operation op, MPI_User_function *fct)
 Initializes a custom MPI_Op and logs it in the customMPIOperations_ map.
 
static std::string getMPIErrorString (int errorCode)
 
static std::string getMPICommName (MPI_Comm comm)
 
 MPIManager ()
 

Constructor & Destructor Documentation

◆ ~MPIManager()

walberla::mpi::MPIManager::~MPIManager ( )

◆ MPIManager()

walberla::mpi::MPIManager::MPIManager ( )
inline private

Member Function Documentation

◆ abort()

void walberla::mpi::MPIManager::abort ( )

◆ bitsNeededToRepresentRank()

uint_t walberla::mpi::MPIManager::bitsNeededToRepresentRank ( ) const
inline

◆ cartesianCoord() [1/2]

void walberla::mpi::MPIManager::cartesianCoord ( int rank,
std::array< int, 3 > & coordOut ) const

Cartesian coordinates of given rank.

◆ cartesianCoord() [2/2]

void walberla::mpi::MPIManager::cartesianCoord ( std::array< int, 3 > & coordOut) const

Cartesian coordinates of own rank.

◆ cartesianRank() [1/2]

int walberla::mpi::MPIManager::cartesianRank ( const uint_t x,
const uint_t y,
const uint_t z ) const

Translates Cartesian coordinates to a rank.

◆ cartesianRank() [2/2]

int walberla::mpi::MPIManager::cartesianRank ( std::array< int, 3 > & coords) const

Translates Cartesian coordinates to a rank.
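
A round-trip sketch for the two translations (it assumes a Cartesian communicator was created beforehand with createCartesianComm(), and that MPIManager::instance() is the singleton accessor):

#include <array>
#include <cassert>

auto manager = walberla::mpi::MPIManager::instance();

std::array< int, 3 > coord;
manager->cartesianCoord(coord);                  // coordinates of the own rank
const int self = manager->cartesianRank(coord);  // ... and back to the rank
assert(self == manager->rank());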

◆ comm()

MPI_Comm walberla::mpi::MPIManager::comm ( ) const
inline

◆ commitCustomOperation()

template<typename CType >
void walberla::mpi::MPIManager::commitCustomOperation ( mpi::Operation op,
MPI_User_function * fct )
inline

Initializes a custom MPI_Op and logs it in the customMPIOperations_ map.

Parameters
op	An operator, e.g. SUM, MIN.
fct	The definition of the MPI_User_function used for this operator.
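
A sketch of committing a user-defined reduction (the function body is purely illustrative; MPI_User_function has the standard MPI signature, and the Operation values SUM and MIN are those named above):

// User-defined reduction with the standard MPI_User_function signature;
// here it simply sums doubles, purely to demonstrate the call.
void mySum(void* in, void* inout, int* len, MPI_Datatype* /*type*/)
{
   const double* a = static_cast< const double* >(in);
   double*       b = static_cast< double* >(inout);
   for (int i = 0; i < *len; ++i) b[i] += a[i];
}

auto manager = walberla::mpi::MPIManager::instance();
manager->commitCustomOperation< double >(walberla::mpi::Operation::SUM, &mySum);

// Later, anywhere in the program:
MPI_Op op = manager->getCustomOperation< double >(walberla::mpi::Operation::SUM);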

◆ commitCustomType()

template<typename CType , class ConstructorArgumentType >
void walberla::mpi::MPIManager::commitCustomType ( ConstructorArgumentType & argument)
inline

Initializes a custom MPI_Datatype and logs it in the customMPITypes_ map.

Parameters
argument	The argument that is expected by the constructor of mpi::Datatype. At the time of writing (26.01.2024) this is either MPI_Datatype or const int.
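
A sketch following the const int constructor variant mentioned above (the float16 struct and its two-byte size are illustrative stand-ins, not part of the class):

#include <cstdint>

// Hypothetical 16-bit floating point type used as the map key.
struct float16 { std::uint16_t bits; };

const int numBytes = 2;   // size handed to the mpi::Datatype constructor
auto manager = walberla::mpi::MPIManager::instance();
manager->commitCustomType< float16 >(numBytes);

// Later, anywhere in the program:
MPI_Datatype type = manager->getCustomType< float16 >();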

◆ createCartesianComm() [1/2]

void walberla::mpi::MPIManager::createCartesianComm ( const std::array< int, 3 > & dims,
const std::array< int, 3 > & periodicity )

◆ createCartesianComm() [2/2]

void walberla::mpi::MPIManager::createCartesianComm ( const uint_t xProcesses,
const uint_t yProcesses,
const uint_t zProcesses,
const bool xPeriodic = false,
const bool yPeriodic = false,
const bool zPeriodic = false )
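
For example, four processes arranged as a 2 x 2 x 1 grid that is periodic in x only (a sketch; the dimensions are illustrative):

auto manager = walberla::mpi::MPIManager::instance();
manager->initializeMPI(&argc, &argv);

// 2 x 2 x 1 process grid, periodic in x only
manager->createCartesianComm(2, 2, 1, /*xPeriodic*/ true, false, false);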

◆ finalizeMPI()

void walberla::mpi::MPIManager::finalizeMPI ( )

Frees the custom types and operators.

◆ getCustomOperation()

template<typename CType >
MPI_Op walberla::mpi::MPIManager::getCustomOperation ( mpi::Operation op) const
inline

Returns the custom MPI_Op stored in 'customMPIOperations_', which was defined by the user and passed to 'commitCustomOperation'.

◆ getCustomType()

template<typename CType >
MPI_Datatype walberla::mpi::MPIManager::getCustomType ( ) const
inline

Returns the custom MPI_Datatype stored in 'customMPITypes_', which was defined by the user and passed to 'commitCustomType'.

◆ getMPICommName()

std::string walberla::mpi::MPIManager::getMPICommName ( MPI_Comm comm)
static

◆ getMPIErrorString()

std::string walberla::mpi::MPIManager::getMPIErrorString ( int errorCode)
static

◆ hasCartesianSetup()

bool walberla::mpi::MPIManager::hasCartesianSetup ( ) const
inline

◆ hasWorldCommSetup()

bool walberla::mpi::MPIManager::hasWorldCommSetup ( ) const
inline

◆ initializeMPI()

void walberla::mpi::MPIManager::initializeMPI ( int * argc,
char *** argv,
bool abortOnException = true )

Configures the class and initializes numProcesses and worldRank; the rank and comm variables remain invalid until a custom communicator is set up.

Parameters
abortOnException	If true, MPI_Abort is called in case of an uncaught exception.

◆ isCommMPIIOValid()

bool walberla::mpi::MPIManager::isCommMPIIOValid ( ) const

Indicates whether MPI-IO can be used with the current MPI communicator; certain versions of Open MPI produce segmentation faults when using MPI-IO with a 3D Cartesian MPI communicator (see waLBerla issue #73).
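
A typical guard before using MPI-IO (a sketch; the fallback branch is application-specific):

auto manager = walberla::mpi::MPIManager::instance();
if (manager->isCommMPIIOValid())
{
   // Safe to use MPI-IO (e.g. MPI_File_open) on manager->comm().
}
else
{
   // Fall back to rank-local or serialized file output.
}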

◆ isMPIInitialized()

bool walberla::mpi::MPIManager::isMPIInitialized ( ) const
inline

◆ numProcesses()

int walberla::mpi::MPIManager::numProcesses ( ) const
inline

◆ rank()

int walberla::mpi::MPIManager::rank ( ) const
inline

◆ rankValid()

bool walberla::mpi::MPIManager::rankValid ( ) const
inline

Rank is valid after calling createCartesianComm() or useWorldComm().

◆ resetMPI()

void walberla::mpi::MPIManager::resetMPI ( )

◆ useWorldComm()

void walberla::mpi::MPIManager::useWorldComm ( )
inline

◆ worldRank()

int walberla::mpi::MPIManager::worldRank ( ) const
inline

Member Data Documentation

◆ cartesianSetup_

bool walberla::mpi::MPIManager::cartesianSetup_ {false}
private

Indicates whether a Cartesian communicator has been created.

◆ comm_

MPI_Comm walberla::mpi::MPIManager::comm_
private

Use this communicator for all MPI calls. It is in general not equal to MPI_COMM_WORLD and may change during domain setup, when a custom communicator adapted to the domain is created.

◆ currentlyAborting_

bool walberla::mpi::MPIManager::currentlyAborting_ {false}
private

◆ customMPIOperations_

std::map< walberla::mpi::Operation, walberla::mpi::MPIOperation > walberla::mpi::MPIManager::customMPIOperations_ {}
private

◆ customMPITypes_

std::map< std::type_index, walberla::mpi::Datatype > walberla::mpi::MPIManager::customMPITypes_ {}
private

It is possible to commit custom datatypes to MPI that are not part of the standard.

One example would be float16. With these maps it is possible to track self-defined MPI_Datatypes and MPI_Ops so that they can be accessed at any time and place in the program; they are also freed automatically once MPIManager::finalizeMPI is called. To initialize types or operations and add them to the maps, the functions 'commitCustomType' and 'commitCustomOperation' should be used. This can, for example, be done in the specialization of the MPITrait of the newly defined type; for an example see MPIWrapper.cpp, or the sketch below.
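
A hypothetical sketch of such an MPITrait specialization that registers its type on first use (the trait interface shown here is assumed, and float16 is the illustrative type from above; the actual implementation lives in MPIWrapper.cpp):

// Hypothetical trait: hands out the MPI datatype for float16,
// committing it to the MPIManager on first use.
template<>
struct MPITrait< float16 >
{
   static MPI_Datatype type()
   {
      auto manager = walberla::mpi::MPIManager::instance();
      static bool committed = false;   // commit only once
      if (!committed)
      {
         const int numBytes = 2;       // illustrative size of float16
         manager->commitCustomType< float16 >(numBytes);
         committed = true;
      }
      return manager->getCustomType< float16 >();
   }
};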

◆ finalizeOnDestruction_

bool walberla::mpi::MPIManager::finalizeOnDestruction_ {false}
private

◆ isMPIInitialized_

bool walberla::mpi::MPIManager::isMPIInitialized_ {false}
private

Indicates whether initializeMPI has been called. If true, MPI_Finalize is called upon destruction.

◆ numProcesses_

int walberla::mpi::MPIManager::numProcesses_ {1}
private

Total number of processes.

◆ rank_

int walberla::mpi::MPIManager::rank_ {-1}
private

Rank in the custom communicator.

◆ WALBERLA_BEFRIEND_SINGLETON

walberla::mpi::MPIManager::WALBERLA_BEFRIEND_SINGLETON

◆ worldRank_

int walberla::mpi::MPIManager::worldRank_ {0}
private

Rank in MPI_COMM_WORLD.


The documentation for this class was generated from the following files: