
Intel® Computer Vision SDK Model Optimizer Guide


Prerequisites

The Model Optimizer requires:

  • Python* 3 or newer
  • In some cases, you must install a framework, such as Caffe*, TensorFlow*, or MXNet*.
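
For example, you can confirm that a suitable Python 3 interpreter and pip are available before continuing (a quick sanity check; the versions reported depend on your system):

python3 --version
pip3 --version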

How to Configure the Model Optimizer for a Framework

If you are not using a layered model, you do not need to use or configure a framework. In that case, you can disregard these steps.

These instructions assume you have installed the Caffe, TensorFlow, or MXNet framework, and that you have a basic knowledge of how the Model Optimizer works.

Before you can use the Model Optimizer to convert your trained network model to the Intermediate Representation file format required by the Inference Engine, you must configure the Model Optimizer for the framework that was used to train the model. This section tells you how to configure the Model Optimizer either through scripts or using a manual process.

Using Configuration Scripts

You can either configure all three frameworks at the same time or configure an individual framework. The scripts install all required dependencies.

To configure all three frameworks: Go to the <INSTALL_DIR>/model_optimizer/install_prerequisites folder and run:

  • For Linux*:
    install_prerequisites.sh
  • For Windows*:
    install_prerequisites.bat

To configure a specific framework: Go to the <INSTALL_DIR>/model_optimizer/install_prerequisites folder and run:

  • For Caffe on Linux:
    install_prerequisites_caffe.sh
  • For Caffe on Windows:
    install_prerequisites_caffe.bat
  • For TensorFlow on Linux:
    install_prerequisites_tf.sh
  • For TensorFlow on Windows:
    install_prerequisites_tf.bat
  • For MXNet on Linux:
    install_prerequisites_mxnet.sh
  • For MXNet on Windows:
    install_prerequisites_mxnet.bat
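
For example, to configure only TensorFlow on Linux, the whole session is a short sketch like this (assuming <INSTALL_DIR> is the directory where the Intel CV SDK is installed):

cd <INSTALL_DIR>/model_optimizer/install_prerequisites
./install_prerequisites_tf.sh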

Configuring Manually

If you prefer, you can configure the Model Optimizer for your selected framework manually. Note that, unlike the configuration scripts, this option does not install all of the required dependencies for you.

  1. Go to the Model Optimizer folder:
    • For Caffe:
      cd <INSTALL_DIR>/model_optimizer/model_optimizer_caffe
    • For TensorFlow:
      cd <INSTALL_DIR>/model_optimizer/model_optimizer_tensorflow
    • For MXNet:
      cd <INSTALL_DIR>/model_optimizer/model_optimizer_mxnet
  2. Recommended for all global Model Optimizer dependency installations: Create and activate a virtual environment. While not required, this option is strongly recommended: the virtual environment creates a Python* sandbox, so dependencies installed for the Model Optimizer do not influence the global Python configuration, installed libraries, or other components. The --system-site-packages flag makes the system-wide Python libraries available inside the sandbox:
    • Create a virtual environment:
      virtualenv -p /usr/bin/python3.5 .env3 --system-site-packages
    • Activate the virtual environment:
      source .env3/bin/activate
  3. Install all dependencies or only the dependencies for a specific framework:
    • To install dependencies for all frameworks:
      pip3 install -r requirements.txt 
    • To install dependencies only for Caffe:
      pip3 install -r requirements_caffe.txt
    • To install dependencies only for TensorFlow:
      pip3 install -r requirements_tensorflow.txt
    • To install dependencies only for MXNet:
      pip3 install -r requirements_mxnet.txt
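
Taken together, a complete manual configuration for TensorFlow on Linux might look like the following sketch. The Python version and virtual environment name are assumptions; adjust them for your system:

cd <INSTALL_DIR>/model_optimizer/model_optimizer_tensorflow
virtualenv -p /usr/bin/python3.5 .env3 --system-site-packages
source .env3/bin/activate
pip3 install -r requirements_tensorflow.txt
python3 -c "import tensorflow"   # verify that the framework dependency imports cleanly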

Using an Incompatible Caffe Distribution

These steps apply to situations in which your model has custom layers, but you do not have a compatible Caffe distribution installed. In addition to this section, you should also read the information about model layers in the Intel® CV SDK Overview.

This section includes terms that might be unfamiliar:

  • Proto file: A file that contains data and service definitions and is compiled with protoc. The proto file is written in the format defined by the associated protocol buffer.
  • Protobuf: A library for protocol buffers.
  • Protoc: A compiler that is used to generate code from proto files.
  • Protocol buffer: A mechanism for serializing structured data. Data structures are saved in and communicated through protocol buffers. Their primary use is in network communication, because they are simple and fast.
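
As a minimal, hypothetical illustration of how these terms fit together, the following writes a tiny proto file and compiles it with protoc into a Python parser (the file and message names are invented for this example):

cat > shape.proto <<'EOF'
syntax = "proto2";
message Shape {            // a protocol buffer message describing a data structure
  repeated int64 dim = 1;  // each field has a unique numeric ID
}
EOF
protoc --python_out=. shape.proto   # protoc generates shape_pb2.py, a parser for Shape messages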

Many distributions of Caffe are available besides the recommended Intel distribution, and each may use a different proto version. If you installed one of these non-Intel distributions, the Model Optimizer uses the Berkeley Vision and Learning Center* (BVLC) Caffe proto, which is distributed with the Model Optimizer.


Intermission: About the BVLC Caffe

The Model Optimizer contains a proto parser file called caffe_pb2.py, generated by protoc from the Berkeley Vision and Learning Center* (BVLC) Caffe proto using the protobuf library. The proto parser loads Caffe models into memory and parses them according to the rules described in the proto file.


If your model was trained in a distribution of Caffe other than the Intel version, your proto is probably different from the default BVLC proto, which prevents the Model Optimizer from loading your model. As a possible solution, you can generate a parser specifically for your distribution's caffe.proto, assuming your distribution of Caffe includes this file. This does not guarantee that the Model Optimizer will succeed with your Caffe distribution, since it is not possible to account for every possible distribution.

Note: The script that follows replaces a file named caffe_pb2.py in the folder MODEL_OPTIMIZER_ROOT/mo/front/caffe/proto/. You might want to back up the existing file before running the script.

  1. Use this script to generate a parser specifically for your caffe.proto file:
    cd MODEL_OPTIMIZER_ROOT
    cd mo/front/caffe/proto/
    python3 generate_caffe_pb2.py --input_proto ${PATH_TO_CUSTOM_CAFFE}/src/caffe/proto/caffe.proto
  2. Check the date and time stamp on the caffe_pb2.py file in the MODEL_OPTIMIZER_ROOT/mo/front/caffe/proto/ folder. If the new file is not there, you might have run the script from a different location and you need to copy the new file to this folder.

When you run the Model Optimizer, the new parser loads the model.
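
To confirm that the Model Optimizer will pick up the regenerated parser, you can try importing it directly (a quick check that assumes the protobuf library is installed):

cd MODEL_OPTIMIZER_ROOT/mo/front/caffe/proto/
python3 -c "import caffe_pb2; print(caffe_pb2.__file__)"   # should print the path to the new file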


How to Register a Custom Layer in the Model Optimizer

If you do not know what a custom layer is, or why you might need to register a custom layer, read about model layers in the Intel® CV SDK Overview.

This example uses the .prototxt file of a well-known topology. The .prototxt file looks like this:

name: "my_net"
input: "data"
input_shape {
  dim: 1
  dim: 3
  dim: 227
  dim: 227
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 64
    kernel_size: 3
    stride: 2
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "reshape_conv1"
  type: "Reshape"
  bottom: "conv1"
  top: "reshape_conv1"
  reshape_param {
    shape {
      dim: 3
      dim: 2
      dim: 0
      dim: 1
    }
    # omitting other params
  }
}
  1. To customize your layer, edit the .prototxt file to replace some of the information with your layer information:
    layer {
      name: "reshape_conv1"
      type: "CustomReshape"
      bottom: "conv1"
      top: "reshape_conv1"
      custom_reshape_param {
        shape {
          dim: 3
          dim: 2
          dim: 0
          dim: 1
        }
      }
    }
  2. Run the Model Optimizer against your trained model and watch for error messages that indicate the Model Optimizer was not able to load the model. You might see one of these messages:
    [ ERROR ]: Current caffe.proto does not contain field "custom_reshape_param".
    [ ERROR ]: Unable to create ports for node with id

What Went Wrong

[ ERROR ]: Current caffe.proto does not contain field "custom_reshape_param"

This error message means CustomReshape is not registered as a layer for Caffe. This example error message uses custom_reshape_param; your error might refer to a different field.

The underlying cause is that the Model Optimizer uses a protobuf library to parse and load Caffe models, and this library needs a grammar file and a generated parser. As a fallback, the Model Optimizer uses a parser generated from a Caffe-specific .proto file. If you have Caffe installed with the Python interface available, make sure the version of the .proto file in the src/caffe/proto folder matches the version of Caffe that created the model.
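
As a rough sketch of what this parsing step does (assuming the protobuf library and a generated caffe_pb2 module are available; NetParameter is the standard top-level message in caffe.proto, and the model path is a placeholder):

python3 - <<'EOF'
# Load a Caffe model the way a protobuf-generated parser does.
import caffe_pb2

net = caffe_pb2.NetParameter()
with open('/user/models/model.caffemodel', 'rb') as f:
    net.ParseFromString(f.read())   # fails if the proto grammar does not match the model
print(net.name, 'loaded with', len(net.layer), 'layers')
EOF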

For the full correction, build the version of Caffe that was used to create the model. As a temporary measure, you can use a Python extension to work with your custom layers without building Caffe. To apply the temporary correction, add the layer description to the caffe.proto file and generate a parser for it.

For example, to add the description of the CustomReshape layer, which is an artificial layer and therefore not in any caffe.proto file:

  1. Add these lines to the end of the caffe.proto file:
    // Add this line to the end of the LayerParameter message.
    // The ID (546 here) can be any number not already present in caffe.proto.
      optional CustomReshapeParameter custom_reshape_param = 546;
    }
    // Add these lines to the end of the file to describe the contents of this parameter.
    message CustomReshapeParameter {
      optional BlobShape shape = 1; // Use the same parameter type as some other Caffe layers
    }
  2. Generate a new parser:
    cd MODEL_OPTIMIZER_ROOT/mo/front/caffe/proto
    python3 generate_caffe_pb2.py --input_proto PATH_TO_PROTO/caffe.proto

The Model Optimizer can now load the model into memory and work with any extensions you have.
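
Before running the Model Optimizer, you can verify that the regenerated parser knows the new field, using the standard protobuf descriptor API (a quick sanity check, not part of the official procedure):

python3 - <<'EOF'
# Confirm custom_reshape_param is now a field of LayerParameter.
import caffe_pb2

fields = {f.name for f in caffe_pb2.LayerParameter.DESCRIPTOR.fields}
print('custom_reshape_param' in fields)   # expect: True
EOF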


[ ERROR ]: Unable to create ports for node with id

This message means the Model Optimizer does not know how to infer the shape of the layer.

To localize the problem, compile the list of custom layers that are (a rough way to extract this list is sketched after the bullets):

  • In the topology
  • Not in the list of supported layers for the target framework
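
One rough way to compile that list on Linux is to extract every layer type from the topology file and compare the result against the supported layers list for your framework (the file name is a placeholder; the pattern assumes the Caffe .prototxt format):

grep -o 'type: "[^"]*"' my_net.prototxt | sort -u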


How to Register a Custom Layer as a Model Optimizer Extension


How to Convert a Model to Intermediate Representation

The Inference Engine requires the network in the Intermediate Representation (IR) format produced by the Model Optimizer. The network can be trained with Caffe*, TensorFlow*, or MXNet*.

To run the Model Optimizer and convert a model, use the mo.py script from the <INSTALL_DIR>/model_optimizer directory, specifying a path to the input model file:

python3 mo.py --input_model INPUT_MODEL

The mo.py script provides the universal entry point and can deduce the framework that produced the input model from the standard extension of the model file:

  • .pb — TensorFlow models
  • .params — MXNet models
  • .caffemodel — Caffe models

If the model files do not have standard extensions, you can use the --framework {tf,caffe,mxnet} option to specify the framework type explicitly.

For example, the following two commands are equivalent:

python3 mo.py --input_model /user/models/model.pb
python3 mo.py --framework tf --input_model /user/models/model.pb

To adjust the conversion process, the Model Optimizer also provides a wide range of conversion parameters: general parameters (such as the path to the model file, the model name, and the output directory) and framework-specific parameters.

For the full list of additional options, run:

python3 mo.py -h
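
For example, a typical invocation that combines several general parameters might look like this (the paths are placeholders; --model_name, --output_dir, and --data_type should appear in the mo.py -h output):

python3 mo.py --input_model /user/models/model.caffemodel --model_name my_model --output_dir /user/models/ir --data_type FP16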

Converting a Model Using General Conversion Parameters

Converting a Caffe* Model

Converting a TensorFlow* Model

Converting an MXNet* Model


Frequently Asked Questions


Helpful Links

Note: Links open in a new window.

Intel® CV SDK Home Page: https://software.intel.com/en-us/computer-vision-sdk

Intel® CV SDK Documentation: https://software.intel.com/en-us/computer-vision-sdk/documentation/view-all


Legal Information

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.

The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at http://www.intel.com/ or from the OEM or retailer.

No computer system can be absolutely secure.

Intel, Arria, Core, Movidius, Pentium, Xeon, and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.

*Other names and brands may be claimed as the property of others.

Copyright © 2018, Intel Corporation. All rights reserved.

