
Build and Install TensorFlow* Serving on Intel® Architecture


Introduction

The first tutorial in this series, Build and Install TensorFlow* on Intel® Architecture, demonstrated how to build and install TensorFlow optimized for Intel architecture from sources available in the GitHub* repository. This paper describes how to build and install TensorFlow* Serving, a high-performance serving system for machine learning models designed for production environments.

Installation Overview

The installation guidelines presented in this document are distilled from information available on the TensorFlow Serving GitHub website. The steps below provide a quick overview of the installation process; however, since third-party information is subject to change over time, it is recommended that you also review the information provided on the TensorFlow Serving website.

Important: The step-by-step guidelines provided below assume the reader has already completed the tutorial Build and Install TensorFlow on Intel® Architecture, which includes the steps to install the Bazel* build tool and some of the other required dependencies not covered here.

Installing gRPC*

Begin by installing gRPC*, Google's open-source framework for implementing remote procedure call (RPC) services:

sudo pip install grpcio
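To confirm the package installed correctly, a quick import check can be used (a minimal sanity check, not part of the official instructions; it prints one of the two messages either way):

```shell
# Sanity check: attempt to import the grpc Python module provided by grpcio.
python -c "import grpc" 2>/dev/null && echo "gRPC OK" || echo "gRPC not installed"
```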

Installing Dependencies

Next, ensure the other TensorFlow Serving dependencies are installed by issuing the following command:

sudo apt-get update && sudo apt-get install -y \
build-essential \
curl \
libcurl3-dev \
git \
libfreetype6-dev \
libpng12-dev \
libzmq3-dev \
pkg-config \
python-dev \
python-numpy \
python-pip \
software-properties-common \
swig \
zip \
zlib1g-dev
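After the command completes, dpkg can be used to spot-check that the packages are registered (an optional sanity check; the short list below is illustrative, not the full dependency list):

```shell
# Spot-check a few of the dependencies installed above via dpkg's status database.
for pkg in build-essential curl git swig zlib1g-dev; do
  dpkg -s "$pkg" >/dev/null 2>&1 && echo "$pkg installed" || echo "$pkg missing"
done
```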

Installing TensorFlow* Serving

Clone TensorFlow Serving from the GitHub repository by issuing the following command:

git clone --recurse-submodules https://github.com/tensorflow/serving
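If the --recurse-submodules flag was omitted, the nested TensorFlow source tree will be empty and the build will fail. A quick check (assuming the clone lives in ./serving, as above) and the usual fix:

```shell
# Verify that the tensorflow submodule was populated during the clone.
if [ -d serving/tensorflow/tensorflow ]; then
  echo "submodules present"
else
  echo "submodules missing; run: git -C serving submodule update --init --recursive"
fi
```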

The serving/tensorflow directory created during the cloning process contains a script named "configure" that must be executed to specify the Python path, dependencies, and other build configuration options. For TensorFlow optimized for Intel architecture, this script also allows you to set up Intel® Math Kernel Library (Intel® MKL) related environment settings. Issue the following commands:

cd serving/tensorflow
./configure

Important: Select ‘Y’ to build TensorFlow with Intel MKL support, and ‘Y’ to download the MKL libraries from the web. Accept the default settings for the other configuration parameters.

Next, return to the serving directory and build TensorFlow Serving with the Intel MKL optimizations enabled:

cd ..
bazel build --config=mkl --copt="-DEIGEN_USE_VML" tensorflow_serving/...
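On success, Bazel writes the standard model server binary under bazel-bin/. Checking for it is a quick way to confirm the build completed (the path below is the one used in the TensorFlow Serving documentation):

```shell
# Look for the model server binary produced by the build above.
SERVER=bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server
[ -x "$SERVER" ] && echo "build OK: $SERVER" || echo "server binary not found"
```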

Testing the Installation

Test the TensorFlow Serving installation by issuing the following command:

bazel test tensorflow_serving/...

If everything worked correctly, you should see results similar to those shown in Figure 1.

[Screenshot of a command prompt window showing the results of a successful installation]

Figure 1. TensorFlow Serving installation test results.

Coming Up

The next article in this series describes how to train and save a TensorFlow model, host the model in TensorFlow Serving, and use the model for inference in a client-side application.

