Getting Started with Intel® MPI Library 2019 for Linux* OS (Beta)

Intel® MPI Library is a multi-fabric message passing library that implements the Message Passing Interface, version 3.1 (MPI-3.1) specification. Use the library to develop applications that can run on multiple cluster interconnects.

The Intel® MPI Library has the following features:

  • High scalability
  • Low overhead, which enables analyzing large amounts of data
  • MPI tuning utility for accelerating your applications
  • Interconnect independence and flexible runtime fabric selection

Intel® MPI Library is available as a standalone product and as part of the Intel® Parallel Studio XE Cluster Edition.

Product Contents

The product comprises the following main components:

  • Runtime Environment (RTO) includes the tools you need to run programs, including the Hydra process manager, supporting utilities, shared (.so) libraries, and documentation.

  • Software Development Kit (SDK) includes all of the Runtime Environment components plus compilation tools, including compiler wrappers such as mpiicc, include files and modules, static (.a) libraries, debug libraries, and test codes.

Besides the SDK and RTO components, Intel® MPI Library also includes Intel® MPI Benchmarks, which enable you to measure MPI operations on various cluster architectures and MPI implementations. For details, see the Intel® MPI Benchmarks User's Guide.
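
Once your environment is set up (see Prerequisites below), the benchmarks can be launched like any other MPI program. As a minimal sketch, assuming the IMB-MPI1 executable is available on your PATH after sourcing the mpivars script, the following runs the PingPong benchmark with two processes on the local node:

$ mpirun -n 2 IMB-MPI1 PingPong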

Prerequisites

Before you start using Intel® MPI Library, make sure to complete the following steps:

  1. Source the mpivars.[c]sh script to establish the proper environment settings for the Intel® MPI Library. It is located in the <installdir_MPI>/intel64/bin directory, where <installdir_MPI> refers to the Intel MPI Library installation directory (for example, /opt/intel/compilers_and_libraries_<version>.<update>.<package#>/linux/mpi).

  2. Create a hostfile text file that lists the nodes in the cluster using one host name per line. For example:
    clusternode1
    clusternode2
  3. Make sure passwordless SSH is established among all nodes of the cluster. This ensures proper communication of MPI processes among the nodes. To establish the connection, you can use the sshconnectivity.exp script located at <installdir>/parallel_studio_xe_<version>.<update>.<package>/bin.

After completing these steps, you are ready to use Intel® MPI Library.
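
As a combined illustration of these steps, a minimal shell session might look like the following (the installation path and host names are placeholders; adjust them to your cluster):

$ source <installdir_MPI>/intel64/bin/mpivars.sh
$ printf "clusternode1\nclusternode2\n" > ./hostfile
$ ssh clusternode1 hostname   # should return the host name without prompting for a password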

For detailed system requirements, see the System Requirements section in the Release Notes.

Building and Running MPI Programs

Compiling an MPI program

If you have the SDK component installed, you can build your MPI programs with Intel® MPI Library. Do the following:

  1. Make sure you have a compiler in your PATH. To check this, run the which command on the desired compiler. For example:

    $ which icc
    /opt/intel/compilers_and_libraries_2018.<update>.<package#>/linux/bin/intel64/icc
  2. Compile a test program using the appropriate compiler wrapper. For example, for a C program:

    $ mpiicc -o myprog <installdir_MPI>/test/test.c
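
If you do not have the bundled test sources at hand, a minimal MPI program in the same spirit can be compiled instead (this is an illustrative sketch, not the exact test.c shipped with the library):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                 /* initialize the MPI environment */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
        MPI_Get_processor_name(name, &len);     /* name of the node running this rank */
        printf("Hello world: rank %d of %d running on %s\n", rank, size, name);
        MPI_Finalize();
        return 0;
    }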

Running an MPI program

Use the mpirun command to run your program. Pass the previously created hostfile with the -f option to launch the program on the specified nodes:

$ mpirun -n <# of processes> -ppn <# of processes per node> -f ./hostfile ./myprog
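
For example, with the two-node hostfile created earlier, the following sketch launches two processes, one per node (the executable name myprog matches the compilation example above):

$ mpirun -n 2 -ppn 1 -f ./hostfile ./myprog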

The test program above produces output in the following format:

Hello world: rank 0 of 2 running on clusternode1
Hello world: rank 1 of 2 running on clusternode2

This output indicates that you properly configured your environment and Intel® MPI Library successfully ran the test MPI program on the cluster.

Key Features

Intel® MPI Library has the following major features:

  • MPI-1, MPI-2.2 and MPI-3.1 specification conformance
  • Support for any combination of the following interconnection fabrics (runtime fabric selection is illustrated after this list):
    • Shared memory
    • Network fabrics with tag matching capabilities through Tag Matching Interface (TMI), such as Intel® True Scale Fabric, InfiniBand*, Myrinet*, and other interconnects
    • Native InfiniBand* interface through OFED* verbs provided by Open Fabrics Alliance* (OFA*)
    • OpenFabrics Interface* (OFI*)
    • RDMA-capable network fabrics through DAPL*, such as InfiniBand* and Myrinet*
    • Sockets, for example, TCP/IP over Ethernet*, Gigabit Ethernet*, and other interconnects
  • Support for the 2nd Generation Intel® Xeon Phi™ processor.
  • (SDK only) Support for Intel® 64 architecture and Intel® MIC Architecture clusters using:
    • Intel® C++ Compiler version 15.0 and higher
    • Intel® Fortran Compiler version 15.0 and higher
    • GNU* C, C++ and Fortran 95 compilers
  • (SDK only) C, C++, Fortran 77, Fortran 90, and Fortran 2008 language bindings
  • (SDK only) Dynamic linking
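
Regarding runtime fabric selection: as a hedged illustration (the exact values depend on the library version and the fabrics available on your cluster), the fabric is typically chosen through the I_MPI_FABRICS environment variable before launching the application:

$ export I_MPI_FABRICS=shm:ofi
$ mpirun -n 2 -ppn 1 -f ./hostfile ./myprog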
