Channel: Intel Developer Zone Articles

Getting Started with Intel® Machine Learning Scaling Library


Introduction

Intel® Machine Learning Scaling Library (Intel® MLSL) is a library that provides an efficient implementation of the communication patterns used in deep learning. It is intended for deep learning framework developers who want to add multi-node scalability to their projects.

Some of the Intel MLSL features include:

  • Built on top of MPI; allows the use of other communication libraries
  • Optimized to drive scalability of communication patterns
  • Works across various interconnects: Intel® Omni-Path Architecture, InfiniBand*, and Ethernet
  • Common API to support deep learning frameworks (Caffe*, Theano*, Torch*, etc.)

Installation

Downloading Intel® MLSL Package

  1. Go to https://github.com/01org/MLSL/releases and download:

    • intel-mlsl-devel-64-<version>.<update>-<package>.x86_64.rpm for root installation, or
    • l_mlsl_p_<version>.<update>-<package>.tgz for user installation.
  2. From the same page, download the source code archive (.zip or .tar.gz). The archive contains the LICENSE.txt and PUBLIC_KEY.PUB files. PUBLIC_KEY.PUB is required for root installation.

System Requirements

Operating Systems

  • Red Hat* Enterprise Linux* 6 or 7
  • SuSE* Linux* Enterprise Server 12

Compilers

  • GNU* C/C++ 4.4.0 or newer
  • Intel® C/C++ Compiler 16.0 or newer

Installing Intel® MLSL

The Intel® MLSL package comprises the Intel MLSL Software Development Kit (SDK) and the Intel® MPI Library runtime components. Follow the steps below to install the package.

Root installation

  1. Log in as root.

  2. Install the package:

    rpm --import PUBLIC_KEY.PUB
    rpm -i intel-mlsl-devel-64-<version>.<update>-<package>.x86_64.rpm

    In the package name, <version>.<update>-<package> is a string, such as 2017.1-009.

Intel MLSL will be installed at /opt/intel/mlsl_<version>.<update>-<package>.

User installation

  1. Extract the package to the desired folder:

    tar -xvzf l_mlsl_p_<version>.<update>-<package>.tgz -C /tmp/
  2. Run the install.sh script, and follow the instructions:

    ./install.sh

Intel MLSL will be installed at $HOME/intel/mlsl_<version>.<update>-<package>.

Getting Started

After you have successfully installed the product, you are ready to use all of its functionality.

To get an idea of how Intel MLSL works and how to use the library API, we recommend building and launching the sample application supplied with Intel MLSL. The sample application emulates the operation of a deep learning framework while making heavy use of the Intel MLSL API for parallelization.

Follow these steps to build and launch the application:

  1. Set up the Intel MLSL environment:

    source <install_dir>/intel64/bin/mlslvars.sh
  2. Build mlsl_test.cpp:

    cd <install_dir>/test
    make
  3. Launch the mlsl_test binary with mpirun on the desired number of nodes (N). mlsl_test takes two arguments:

    • num_groups – defines the type of parallelism, based on the following logic:

      • num_groups == 1 – data parallelism
      • num_groups == N – model parallelism
      • num_groups > 1 and num_groups < N – hybrid parallelism
    • dist_update – enables distributed weight update
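
The arithmetic behind num_groups can be sketched with a small shell script. Note this is only an illustration of how N ranks could split into model-parallel groups and data-parallel replicas; the actual rank-to-group assignment is internal to Intel MLSL, and the variable names here are hypothetical:

```shell
#!/bin/sh
# Illustrative sketch: splitting N=8 ranks under hybrid parallelism
# (num_groups=2). Each group of 2 ranks holds one model partition;
# the 4 groups form data-parallel replicas of the model.
N=8           # total number of ranks (as in: mpirun -n 8)
NUM_GROUPS=2  # ranks per model-parallel group
REPLICAS=$((N / NUM_GROUPS))  # number of data-parallel replicas

rank=0
while [ "$rank" -lt "$N" ]; do
  group=$((rank % NUM_GROUPS))    # which model partition this rank holds
  replica=$((rank / NUM_GROUPS))  # which data-parallel replica it joins
  echo "rank $rank: model part $group, data replica $replica"
  rank=$((rank + 1))
done
```

With num_groups == 1 every rank holds the full model (pure data parallelism); with num_groups == N a single model is split across all ranks (pure model parallelism).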

Launch command examples:

# use data parallelism
mpirun -n 8 -ppn 1 ./mlsl_test 1
# use model parallelism
mpirun -n 8 -ppn 1 ./mlsl_test 8
# use hybrid parallelism, enable distributed weight update
mpirun -n 8 -ppn 1 ./mlsl_test 2 1

The application implements the standard Intel MLSL usage workflow. A detailed description of the sample, along with the generic step-by-step workflow and the API reference, is available in the Developer Guide and Reference supplied with Intel MLSL.

