Python mpi4py on Intel® True Scale and Omni-Path Clusters

Python users of the mpi4py package who leverage distributed computing on supercomputers with Intel® True Scale or Intel® Omni-Path interconnects might run into issues with the default configuration of mpi4py.

By default, the mpi4py package uses matching probes (MPI_Mprobe) in its receiving function recv() instead of regular MPI_Recv operations. These matching probes, introduced in the MPI 3.0 standard, are not supported by all fabrics, which may lead to a hang in the receiving function.
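For illustration, a minimal point-to-point exchange of this kind might look like the following sketch (the script name and message contents are placeholders). It is the recv() call on the receiving rank that may hang on affected fabrics:

import mpi4py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # send() transmits a pickled Python object to rank 1
    comm.send({"payload": 42}, dest=1, tag=0)
elif rank == 1:
    # recv() internally relies on matching probes by default,
    # which may hang on fabrics without matching-probe support
    data = comm.recv(source=0, tag=0)
    print("received:", data)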

Therefore, users are recommended to use the OFI fabric instead of TMI on Omni-Path systems. For Intel® MPI, the configuration could look like the following environment variable setting:

I_MPI_FABRICS=ofi

Users utilizing True Scale or Omni-Path systems via the TMI fabric might alternatively switch off the use of matching probe operations within the mpi4py recv() function.

This can be established via

mpi4py.rc.recv_mprobe = False

right after importing the top-level mpi4py package and before importing the MPI submodule.
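Putting this together, a minimal sketch of the required import order could look as follows; the send/receive part is only illustrative and mirrors the earlier example:

import mpi4py
# Disable matching probes before the MPI submodule is imported,
# so that recv() falls back to regular MPI_Recv-based receives.
mpi4py.rc.recv_mprobe = False

from mpi4py import MPI

comm = MPI.COMM_WORLD
if comm.Get_rank() == 0:
    comm.send("hello", dest=1, tag=0)
elif comm.Get_rank() == 1:
    print(comm.recv(source=0, tag=0))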
