
Using the Caffe* Framework with the Intel® Computer Vision SDK



Caffe* is one of the framework options available when using a layered model with the Model Optimizer. You need the steps in this document only if both of the following are true:

  • Your model topology contains layers that are not implemented in the Model Optimizer, and
  • You decide not to register these unknown layers as custom operations.

To use this option, you must install the Caffe framework with all relevant dependencies, and then compile and link it. You must also make a shared library named libcaffe.so available in the CAFFE_HOME/build/lib directory.
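Once Caffe is built (see the steps below), one quick way to confirm that this library is in place is to check for the file directly; this is a simple sanity check, not part of the official procedure:

    ls $CAFFE_HOME/build/lib/libcaffe.so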

Topologies Supported by the Model Optimizer with the Caffe Framework

  • Classification models:
    • AlexNet
    • VGG-16, VGG-19
    • SqueezeNet v1.0, SqueezeNet v1.1
    • ResNet-50, ResNet-101, ResNet-152
    • Inception v1, Inception v2, Inception v3, Inception v4
    • CaffeNet
    • MobileNet
  • Object detection models:
    • SSD300-VGG16, SSD500-VGG16
    • Faster-RCNN
    • Yolo v2, Yolo Tiny
  • Face detection models:
    • VGG Face
  • Semantic segmentation models:
    • FCN8

Install the Caffe Framework

In this guide, the Caffe* installation directory is referred to as CAFFE_HOME, and the Model Optimizer installation directory is referred to as MO_DIR.

The installation path to the Model Optimizer depends on whether you installed the Intel® CV SDK or the standalone Deep Learning Deployment Toolkit, and on whether you ran the installer with sudo, which installs to a system-wide default directory. Substitute your actual installation path wherever MO_DIR appears below.

To install Caffe:

  1. Set these environment variables:
    export MO_DIR=PATH_TO_MO_INSTALL_DIR
    export CAFFE_HOME=PATH_TO_YOUR_CAFFE_DIR
  2. Go to the Model Optimizer directory:
    cd $MO_DIR/model_optimizer_caffe/
  3. Install the Caffe dependencies, such as Git*, CMake*, and GCC*:
    cd install_prerequisites/
    ./install_Caffe_dependencies.sh
  4. Optional: By default, the installation installs BVLC Caffe from the master branch of the official repository. If you want to install a different version of Caffe, edit the clone_patch_build_Caffe.sh script by changing these lines (see the example after this list):
    CAFFE_REPO="https://github.com/BVLC/caffe.git" # link to the repository with the Caffe* distribution
    CAFFE_BRANCH=master # branch to take from the repository
    CAFFE_FOLDER=`pwd`/caffe # where to clone the repository on your local machine
    CAFFE_BUILD_SUBFOLDER=build # name of the directory used for building Caffe*
  5. Apply the Model Optimizer adapters and install Caffe* by running the script:
    ./clone_patch_build_Caffe.sh
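For example, to build from the SSD fork of Caffe instead of BVLC master (shown only to illustrate the edit; the weiliu89/caffe repository hosts the original SSD implementation on its ssd branch), you would change the first two variables:

    CAFFE_REPO="https://github.com/weiliu89/caffe.git" # the SSD fork of Caffe
    CAFFE_BRANCH=ssd # branch containing the SSD layers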

NOTE: If you experience problems with the hdf5 library while building Caffe on Ubuntu* 16.04, TBD.

The Caffe framework is installed. Continue with the next section to build the Caffe framework.

Build the Caffe Framework

  1. Build Caffe with Python 3.5:
    export CAFFE_HOME=PATH_TO_YOUR_CAFFE_DIR # if not already set in the previous section
    cd $CAFFE_HOME
    rm -rf ./build
    mkdir ./build
    cd ./build
    cmake -DCPU_ONLY=ON -DOpenCV_DIR=PATH_TO_YOUR_OPENCV_INSTALL_DIR -DPYTHON_EXECUTABLE=/usr/bin/python3.5 ..
    make all # also builds pycaffe
    make install
    make runtest # optional
  2. Add the Caffe Python directory to PYTHONPATH so it can be imported from a Python program:
    export PYTHONPATH=$CAFFE_HOME/python:$PYTHONPATH
  3. Confirm the installation and build worked correctly by importing pycaffe in a Python 3 shell:
    python3
    >>> import caffe
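If the import returns without errors, pycaffe is working. An equivalent non-interactive check (assuming the PYTHONPATH export above is in effect in the current shell):

    python3 -c "import caffe; print('pycaffe imported successfully')"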

The Caffe framework is installed and built. Continue to the next section to set up the protobuf library.

Enable the protobuf Library

The Model Optimizer uses the protobuf library to load the trained Caffe* model. By default, protobuf uses its pure Python* implementation, which is slow. These steps enable the faster C++ implementation of the protobuf library on Linux or Windows.

Enabling the protobuf Library on Linux*

On Linux, enable the C++ implementation by setting an environment variable:

export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
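
To confirm that the C++ implementation is actually in use, you can query protobuf's api_implementation module (an internal API, used here only as a sanity check); the command should print cpp:

    python3 -c "from google.protobuf.internal import api_implementation; print(api_implementation.Type())"

The same check works on Windows after the set command in the next section.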

Enabling the protobuf Library on Windows*


  1. Clone protobuf:
    git clone https://github.com/google/protobuf.git
    cd protobuf
  2. Create a Visual Studio solution file:
    C:\Path\to\protobuf\cmake\build>mkdir solution
    C:\Path\to\protobuf\cmake\build>cd solution
    C:\Path\to\protobuf\cmake\build\solution>cmake -G "Visual Studio 12 2013 Win64" ../..
  3. Change the runtime library option for the libprotobuf and libprotobuf-lite projects:
    • Open the project's Property Pages dialog box.
    • Expand the C/C++ tab.
    • Select the Code Generation property page.
    • Set the Runtime Library property to Multi-threaded DLL (/MD).
    • Build the libprotoc, protoc, libprotobuf, and libprotobuf-lite projects for the Release configuration.
  4. Add the build output directory to the PATH environment variable:
    set PATH=%PATH%;C:\Path\to\protobuf\cmake\build\solution\Release
  5. Go to the python directory:
    cd C:\Path\to\protobuf\python
  6. Use a text editor to open setup.py and change these options:
    • Change libraries = ['protobuf']
      to libraries = ['libprotobuf', 'libprotobuf-lite']
    • Change extra_objects = ['../src/.libs/libprotobuf.a', '../src/.libs/libprotobuf-lite.a']
      to extra_objects = ['../cmake/build/solution/Release/libprotobuf.lib', '../cmake/build/solution/Release/libprotobuf-lite.lib']
  7. Build the Python package with the C++ implementation:
    python setup.py build --cpp_implementation
  8. Install the Python package with the C++ implementation:
    python -m easy_install dist/protobuf-3.5.1-py3.5-win-amd64.egg
  9. Set an environment variable to boost the protobuf performance:
    set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
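
To confirm the package installed correctly, a quick sanity check is to print the installed version, which should match the egg you built (3.5.1 in the example above):

    python -c "import google.protobuf; print(google.protobuf.__version__)"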

You are ready to use Caffe with your trained models. For the next steps, see the Intel® Computer Vision SDK Model Optimizer Guide, linked below.

 

Helpful Links

Note: Links open in a new window.

Intel® Computer Vision SDK Model Optimizer Guide: https://software.intel.com/en-us/articles/CVSDK-ModelOptimizer

Intel® Computer Vision SDK Inference Engine Guide: https://software.intel.com/en-us/articles/CVSDK-InferEngine

Intel® Computer Vision SDK Overview: https://software.intel.com/en-us/articles/CVSDK-Overview

Intel® CV SDK Home Page: https://software.intel.com/en-us/computer-vision-sdk

Intel® CV SDK Documentation: https://software.intel.com/en-us/computer-vision-sdk/documentation/view-all

 

Legal Information

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.

The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at http://www.intel.com/ or from the OEM or retailer.

No computer system can be absolutely secure.

Intel, Arria, Core, Movidius, Pentium, Xeon, and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.

*Other names and brands may be claimed as the property of others.

Copyright © 2018, Intel Corporation. All rights reserved.

