Overview
Intel® MPI Library is a multi-fabric message passing library based on ANL* MPICH3* and OSU* MVAPICH2*.
Intel® MPI Library implements the Message Passing Interface, version 3.1 (MPI-3.1) specification. The library is thread-safe and provides MPI standard-compliant multi-threading support.
To receive technical support and updates, you need to register your product copy. See Technical Support below.
Product Contents
- The Intel® MPI Library Runtime Environment (RTO) contains the tools you need to run programs, including the scalable Hydra process management system, supporting utilities, and dynamic libraries.
- The Intel® MPI Library Development Kit (SDK) includes all of the Runtime Environment components and compilation tools: compiler wrapper scripts (`mpicc`, `mpiicc`, etc.), include files and modules, static libraries, debug libraries, and test codes.
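As a quick illustration of the SDK wrapper scripts, a typical build-and-run session might look like the following sketch (the source file and executable names are hypothetical, and the exact options accepted depend on the underlying compiler):

```shell
:: Compile an MPI program with the Intel compiler wrapper (test.c is hypothetical)
mpiicc -o test_mpi.exe test.c

:: Or use the generic wrapper, which picks up the default C compiler
mpicc -o test_mpi.exe test.c

:: Launch four ranks through the Hydra process manager
mpiexec -n 4 test_mpi.exe
```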
What's New
Intel® MPI Library 2018 Beta
- Documentation has been removed from the product and is now available online.
Intel® MPI Library 2017 Update 2
- Added the `I_MPI_HARD_FINALIZE` environment variable.
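The note above does not spell out the variable's accepted values; as a hedged sketch, assuming the usual Boolean enable/disable convention for Intel MPI environment variables:

```shell
:: Enable hard (immediate) finalization; 0 would disable it (assumed semantics)
set I_MPI_HARD_FINALIZE=1
mpiexec -n 2 test_mpi.exe
```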
Intel® MPI Library 2017 Update 1
- Support for topology-aware collective communication algorithms (`I_MPI_ADJUST` family).
- Deprecated support for cross-OS launches.
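The `I_MPI_ADJUST` family selects a collective algorithm by number for each MPI operation. A sketch using `I_MPI_ADJUST_BCAST` as a representative variable; algorithm numbering is release-specific, so treat the value as illustrative:

```shell
:: Force a specific MPI_Bcast algorithm (the number 1 is illustrative)
set I_MPI_ADJUST_BCAST=1
mpiexec -n 8 test_mpi.exe
```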
Intel® MPI Library 2017
- Support for the MPI-3.1 standard.
- Removed the SMPD process manager.
- Removed the SSHM support.
- Deprecated support for the Intel® microarchitectures older than the generation codenamed Sandy Bridge.
- Bug fixes and performance improvements.
- Documentation improvements.
Key Features
- MPI-1, MPI-2.2 and MPI-3.1 specification conformance.
- MPICH ABI compatibility.
- Support for any combination of the following network fabrics:
- RDMA-capable network fabrics through DAPL*, such as InfiniBand* and Myrinet*.
- Sockets, for example, TCP/IP over Ethernet*, Gigabit Ethernet*, and other interconnects.
- (SDK only) Support for Intel® 64 architecture clusters using:
- Intel® C++/Fortran Compiler 14.0 and newer.
- Microsoft* Visual C++* Compilers.
- (SDK only) C, C++, Fortran 77, and Fortran 90 language bindings.
- (SDK only) Dynamic linking.
System Requirements
Hardware Requirements
- Systems based on the Intel® 64 architecture, in particular:
- Intel® Core™ processor family
- Intel® Xeon® E5 v4 processor family recommended
- Intel® Xeon® E7 v3 processor family recommended
- 1 GB of RAM per core (2 GB recommended)
- 1 GB of free hard disk space
Software Requirements
- Operating systems:
- Microsoft* Windows Server* 2008, 2008 R2, 2012, 2012 R2, 2016
- Microsoft* Windows* 7, 8.x, 10
- (SDK only) Compilers:
- Intel® C++/Fortran Compiler 15.0 or newer
- Microsoft* Visual Studio* Compilers 2013, 2015, 2017
- Batch systems:
- Microsoft* Job Scheduler
- Altair* PBS Pro* 9.2 or newer
- Recommended InfiniBand* software:
- Windows* OpenFabrics* (WinOF*) 2.0 or newer
- Windows* OpenFabrics* Enterprise Distribution (winOFED*) 3.2 RC1 or newer for Microsoft* Network Direct support
- Mellanox* WinOF* Rev 4.40 or newer
- Additional software:
- The memory placement functionality for NUMA nodes requires the `libnuma.so` library and the `numactl` utility installed. `numactl` should include `numactl`, `numactl-devel`, and `numactl-libs`.
Known Issues and Limitations
- Cross-OS runs using `ssh` from a Windows* host fail. Two workarounds exist:
  - Create a symlink on the Linux* host that looks identical to the Windows* path to `pmi_proxy`.
  - Start `hydra_persist` on the Linux* host in the background (`hydra_persist &`) and use `-bootstrap service` from the Windows* host. This requires that the Hydra service also be installed and started on the Windows* host.
- Support for Fortran 2008 is not implemented in Intel® MPI Library for Windows*.
- Enabling statistics gathering may result in increased time in `MPI_Finalize`.
- To run a mixed-OS job (Linux* and Windows*), all binaries must link to the same single- or multithreaded MPI library. The single- and multithreaded libraries are incompatible with each other and should not be mixed. Note that the pre-compiled binaries for the Intel® MPI Benchmarks are inconsistent (the Linux* version links to the multithreaded library, the Windows* version to the single-threaded one), so at least one must be rebuilt to match the other.
- If communication between two existing MPI applications is established using the process attachment mechanism, the library does not check whether the same fabric has been selected for each application. This situation may cause unexpected application behavior. Set the `I_MPI_FABRICS` variable to the same values for each application to avoid this issue.
- If your product redistributes the `mpitune` utility, provide the `msvcr71.dll` library to the end user.
- The Hydra process manager has some known limitations, such as:
  - `stdin` redirection is not supported for the `-bootstrap service` option.
  - Signal handling support is restricted; incorrect MPI job termination can leave hanging processes in memory.
  - Cleaning up the environment after an abnormal MPI job termination by means of the `mpicleanup` utility is not supported.
- ILP64 is not supported by MPI modules for Fortran 2008.
- When using the `-mapall` option, if some of the network drives require a password and it differs from the user password, the application launch may fail.
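For the process attachment issue noted above, the fabric can be pinned explicitly so that both applications agree. A sketch with hypothetical application names, applied in each application's environment before launch:

```shell
:: Pin the first application to shared memory within a node and DAPL between nodes
set I_MPI_FABRICS=shm:dapl
mpiexec -n 4 app_one.exe

:: ...and set the identical value in the second application's environment
set I_MPI_FABRICS=shm:dapl
mpiexec -n 4 app_two.exe
```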
Technical Support
Every purchase of an Intel® Software Development Product includes a year of support services, which provide priority customer support at our Online Support Service Center web site, http://www.intel.com/supporttickets.
To receive support, register your product in the Intel® Registration Center. If your product is not registered, you will not receive priority support.