
Deep Neural Network Technical Preview for Intel® Math Kernel Library (Intel® MKL)


To continue accelerating Deep Neural Network (DNN) applications on Intel® architecture, this article presents the Deep Neural Network Technical Preview for Intel® Math Kernel Library (DNN Preview for Intel® MKL), a technical preview of DNN functionality to be released in a future version of Intel MKL. This technical preview includes the performance primitives powering the software described in the articles Single Node Caffe* Scoring and Training on Intel® Xeon® E5-Series Processor and Caffe* Training on Multi-node Distributed-memory Systems Based on Intel® Xeon® Processor E5 Family, which demonstrate a tenfold performance increase of the Caffe framework on the AlexNet topology.

The DNN Preview for Intel MKL demonstrates the programming model and features a collection of performance primitives for DNN applications optimized for Intel architecture. It includes a limited set of primitives optimized for Intel® Advanced Vector Extensions 2 (Intel® AVX2) enabled processors and provides all building blocks essential for implementing AlexNet topology training and classification:

  • Convolution: direct batched convolution
  • Pooling: maximum pooling
  • Normalization: local response normalization across channels
  • Activation: rectified linear neuron activation (ReLU)
  • Multi-dimensional transposition (conversion)

DNN Preview for Intel MKL Overview

DNN Preview for Intel MKL primitives implement a C API that can be used in existing C/C++ DNN frameworks as well as in custom DNN applications. In addition to the input and output arrays of DNN applications, the DNN Preview for Intel MKL primitives work with special opaque data types that represent the following:

  • DNN operations, which specify the operation type (such as convolution forward propagation or convolution backward filter propagation) and its parameters (such as the filter size for convolution or alpha and beta for normalization)
  • Data layouts, which specify the relative location in memory of the elements of processed arrays

Input and output arrays of DNN operations are called resources. Each DNN operation requires resources to have certain data layouts. An application can query DNN operations about the required data layouts and check whether the layouts of the resources meet the layout requirements.

Given a DNN topology, at the setup stage the application creates all DNN operations necessary to implement scoring, training, or other application-specific computation. At this stage, some applications create intermediate conversions and allocate temporary arrays to pass data from one DNN operation to the next one if the appropriate output and input data layouts do not match.

The execution stage consists of calls to DNN Preview for Intel MKL primitives that apply DNN operations, including necessary conversions, to the input, output, and temporary arrays.

The DNN Preview for Intel MKL package contains code samples of scoring and training implemented with the DNN Preview for Intel MKL primitives that show the typical programming model and package usage.

Getting Started

Until the new functionality outlined in this article is incorporated into Intel MKL and Intel® Data Analytics Acceleration Library (Intel® DAAL), you can use the attached technology preview package in popular DNN frameworks or in your own DNN implementation. Note that the DNN Preview for Intel MKL is optimized for the AlexNet topology.

The DNN Preview for Intel MKL package contains a static library with DNN primitives, the Intel® OpenMP* library needed for DNN Preview for Intel MKL primitives, C header files, code samples of scoring and training implemented with the DNN Preview for Intel MKL primitives, and a Developer Reference.

Package structure:

<MKLDNNROOT>/
    lib/
        libmkl_dnn.a          - Library with DNN Preview for Intel MKL primitives
        libiomp5.so           - Intel OpenMP library needed for DNN Preview for Intel MKL
    include/                  - Header files with type and function definitions
        mkl_dnn.h
        mkl_dnn_F32.h
        mkl_dnn_F64.h
        mkl_dnn_types.h
    samples/                  - Code samples of scoring and training implemented
        s_score_sample.c        with DNN Preview for Intel MKL primitives
        d_score_sample.c
        s_train_sample.c
        d_train_sample.c
    doc/
        intel_mkl_dnn.pdf     - Developer reference for DNN Preview for Intel MKL

System requirements and limitations:

  • Intel OpenMP library
  • Red Hat Enterprise Linux* 6.5 (64-bit) or later

Support

Please direct questions and comments on this package to intel.mkl@intel.com.

 

