For an overview and details about the Intel® Distribution of OpenVINO™ toolkit, see the OpenVINO™ Overview.
We recommend reading the Overview before starting this tutorial.
NOTE: When using OpenVINO™ from the command line, you must set up your environment whenever you change users or launch a new terminal.
source /opt/intel/openvino/bin/setupvars.sh
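If you would rather not run this in every new terminal, one option is to append it to your shell startup file:
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc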
To use the Model Optimizer from the command line, only OpenVINO is required. To use the Deep Learning Workbench, Docker is also required, and the Workbench needs a one-time initial configuration.
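As a rough sketch of that initial Workbench configuration (assuming the openvino/workbench image on Docker Hub; check the DL Workbench documentation for the exact command for your version):
# Pull the DL Workbench image and start it, publishing the web UI on local port 5665
docker pull openvino/workbench:latest
docker run -p 127.0.0.1:5665:5665 -it openvino/workbench:latest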
The following exercises are an introduction to using OpenVINO to run samples and demos, and a quick introduction to tools such as the Model Optimizer and DL Streamer. The exercises are intended to become more difficult as you progress. The early exercises provide step-by-step instructions, while later exercises require more effort. The goal is to teach developers the concepts and skills needed to use all of the resources available for OpenVINO.
It might be a good idea to keep the Overview page, linked above, open in a tab for reference during the exercises.
This step shows some of the features of OpenVINO, downloads files, and creates directories that we will use later.
cd /opt/intel/openvino/deployment_tools/demo/
Run the SqueezeNet classification demo.
./demo_squeezenet_download_convert_run.sh
Run the security barrier (vehicle detection, make and license plate recognition) object detection demo.
./demo_security_barrier_camera.sh
If the demos and samples have already been built manually, skip this section. Building takes about 5-10 minutes, depending on your system.
Build OpenVINO Demos
cd /opt/intel/openvino/inference_engine/demos
./build_demos.sh
Build OpenVINO Samples
cd /opt/intel/openvino/inference_engine/samples/cpp
./build_samples.sh
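When both builds finish, the binaries should land in ~/omz_demos_build/intel64/Release and ~/inference_engine_samples_build/intel64/Release (the paths used later in this tutorial); you can confirm the builds succeeded with:
ls ~/omz_demos_build/intel64/Release
ls ~/inference_engine_samples_build/intel64/Release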
In this step, you run your trained models through the Model Optimizer to convert them to the Intermediate Representation (IR) format. This is required before the Inference Engine can use a model.
Models in the IR format always include an .xml and a .bin file, and may also include other files, such as .json or .mapping. Make sure these files are in the same directory for the Inference Engine to find them.
- REQUIRED: model_name.xml
- REQUIRED: model_name.bin
- OPTIONAL: model_name.json, model_name.mapping, etc.
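For example, once the squeezenet1.1 model used below has been converted into ~/ir, a quick check that the required pair is present:
ls ~/ir/squeezenet1.1.xml ~/ir/squeezenet1.1.bin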
This guide uses the public SqueezeNet 1.1 Caffe* model to run the Image Classification Sample. See the Download Models section to learn how to download this model.
The squeezenet1.1 model is downloaded in the Caffe* format, so you must use the Model Optimizer to convert it to the IR. The vehicle-license-plate-detection-barrier-0106, vehicle-attributes-recognition-barrier-0039, and license-plate-recognition-barrier-0001 models are downloaded in the Intermediate Representation format, so you do not need to use the Model Optimizer to convert them.
Create an <ir_dir> directory to contain the model's Intermediate Representation (IR).
mkdir ~/ir
The Inference Engine can perform inference on different precision formats, such as FP32, FP16, and INT8. To prepare an IR with a specific precision, run the Model Optimizer with the appropriate --data_type option.
Run the Model Optimizer script:
cd /opt/intel/openvino/deployment_tools/model_optimizer
python3 ./mo.py --input_model <model_dir>/<model_file> --data_type <model_precision> --output_dir <ir_dir>
The produced IR files are in the <ir_dir> directory.
The actual command:
python3 ./mo.py --input_model ~/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel --data_type FP32 --output_dir ~/ir
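To produce an FP16 IR instead (often used with GPU and VPU targets), only the --data_type changes; here ~/ir_fp16 is just an illustrative output directory so the FP32 files are not overwritten:
python3 ./mo.py --input_model ~/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel --data_type FP16 --output_dir ~/ir_fp16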
The following series of exercises guide you through using samples of increasing complexity. As you move through each exercise you will get a sense of how to use OpenVINO™ in more sophisticated use cases.
In these exercises, you will:
- Convert and optimize a neural network model to work on Intel® hardware.
- Run computer vision applications using optimized models and appropriate media.
- During optimization with the DL Workbench™, a subset of ImageNet* and VOC* images is used.
- When running samples, we'll use either an image or video file located on this system.
NOTE: Before starting these sample exercises, change into the directory containing the built demos:
cd ~/omz_demos_build/intel64/Release
NOTE: During these exercises you will move through multiple directories and occasionally copy files so that you don't have to specify full paths in commands. You are welcome to set up environment variables to make these tasks easier, but we leave that to you; one possible setup is sketched below.
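For instance, one hypothetical setup (the variable names here are only examples) might be:
export IR_DIR=~/ir
export DEMOS=~/omz_demos_build/intel64/Release
export SAMPLES=~/inference_engine_samples_build/intel64/Release
After that, a command such as $DEMOS/human_pose_estimation_demo -h becomes shorter to type.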
REMEMBER: When using OpenVINO™ from the command line, you must set up your environment whenever you change users or launch a new terminal.
source /opt/intel/openvino/bin/setupvars.sh
Exercise 1: Run a Sample Application
Convert a model using the Model Optimizer then use a sample application to load the model and run inference.
In this section, you will convert a model to an FP32 IR, a precision suitable for running on a CPU.
Prepare the Software Environment
- Set up the environment variables when logging in, changing users, or launching a new terminal. (Details above.)
- Make a destination directory for the FP32 SqueezeNet* model:
mkdir ~/squeezenet1.1_FP32
cd ~/squeezenet1.1_FP32
Convert and Optimize a Neural Network Model from Caffe*
Use the Model Optimizer to convert an FP32 SqueezeNet* Caffe* model into an optimized Intermediate Representation (IR):
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model ~/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel --data_type FP32 --output_dir .
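A quick check that the conversion produced the expected IR pair in the current directory:
ls squeezenet1.1.xml squeezenet1.1.bin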
Prepare the Data (Media) or Dataset
NOTE: In this case, it's a single image.
- Copy the labels file to the same location as the IR model:
cp /opt/intel/openvino/deployment_tools/demo/squeezenet1.1.labels .
- Tip: The labels file contains the classes used by this SqueezeNet* model. If it is in the same directory as the model, the inference results will show text labels in addition to confidence percentages. (You can peek at it, as shown after these steps.)
- Copy a sample image to the current directory. You will use this with your optimized model:
sudo cp /opt/intel/openvino/deployment_tools/demo/car.png .
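As a quick sanity check, you can view the first few classes in the labels file you copied:
head -5 squeezenet1.1.labels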
Run the Sample Application
- Once your setup is complete, you're ready to run a sample application:
~/inference_engine_samples_build/intel64/Release/classification_sample_async -i car.png -m ~/squeezenet1.1_FP32/squeezenet1.1.xml -d CPU
- Note: You can usually see an application's help information (parameters, etc.) by using the -h option:
~/inference_engine_samples_build/intel64/Release/classification_sample_async -h
If desired, you can look at the original image using the Eye of GNOME application (installed by default on Ubuntu systems):
eog car.png
Exercise 2: Human Pose Estimation
This demo detects people and draws a stick figure to show limb positions. This model has already been converted for use with the Intel® Distribution of OpenVINO™ toolkit.
- Requires downloading the human-pose-estimation-0001 (ICV) Model.
- Requires video or camera input.
Example Syntax:
- human_pose_estimation_demo -i path/to/video -m path/to/model/human-pose-estimation-0001.xml -d CPU
Steps to Run the Human Pose Demo:
- Set up the environment variables:
source /opt/intel/openvino/bin/setupvars.sh
- Move to the model downloader directory:
cd /opt/intel/openvino/deployment_tools/tools/model_downloader/
- Find a suitable model:
python3 info_dumper.py --print_all | grep pose
Note: info_dumper.py is a script that lists details about every model available in the Intel® Model Zoo. Models can also be downloaded manually from the Open Model Zoo GitHub page.
- Download the model:
sudo ./downloader.py --name "human-pose*"
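Tip: if you only need one precision, downloader.py also accepts a --precisions filter, for example:
sudo ./downloader.py --name "human-pose*" --precisions FP32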
- Move the model to a more convenient location:
mkdir -p ~/ir
cp /opt/intel/openvino/deployment_tools/tools/model_downloader/intel/human-pose-estimation-0001/FP32/human-pose-estimation-0001* ~/ir/
- Download an appropriate video:
Open a web browser to the following URL and download the video: https://www.pexels.com/video/couple-dancing-on-sunset-background-2035509/
Rename the video for convenience:
mv ~/Downloads/Pexels\ Videos\ 2035509.mp4 ~/Videos/humpose.mp4
- Run the sample:
cd ~/omz_demos_build/intel64/Release/
./human_pose_estimation_demo -i ~/Videos/humpose.mp4 -m ~/ir/human-pose-estimation-0001.xml -d CPU
Exercise 3: Interactive Face Detection
The face detection demo draws bounding boxes around faces, and optionally feeds the output of the primary model to additional models. This model has already been converted for use with OpenVINO™.
The Face Detection Demo supports face detection, plus optional functions:
- Age-gender recognition
- Emotion recognition
- Head pose
- Facial landmark display
Example Syntax:
- interactive_face_detection_demo -i path/to/video -m path/to/face/model -d CPU
Steps:
- Find and download an appropriate face detection model. There are several available in the Intel® Model Zoo.
- You can access the Pretrained Models page from the OpenVINO™ documentation to review model options.
- You may need to try out different models to find one that works, or that works best for your scenario.
- Find and download a video that features faces.
- Run the demo with just the face detection model.
- OPTIONAL: Run the demo using additional models (age-gender, emotion recognition, head pose, etc.). Note that when you use multiple models, there is always a primary model, followed by a number of optional models that use the output of the primary model. (See the example syntax after this list.)
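As a sketch of what a multi-model invocation might look like, run from the demos build directory, assuming you have downloaded a face detection model such as face-detection-retail-0004 plus age-gender-recognition-retail-0013 to ~/ir and saved a face video as ~/Videos/faces.mp4 (these names are illustrative, and flag names such as -m_ag vary between versions, so confirm them with -h):
./interactive_face_detection_demo -i ~/Videos/faces.mp4 -m ~/ir/face-detection-retail-0004.xml -m_ag ~/ir/age-gender-recognition-retail-0013.xml -d CPU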
Exercise 4: DL Streamer
DL Streamer is a command-line tool and API for integrating OpenVINO into media analytics pipelines. It is built on GStreamer and works with Mosquitto, Kafka, and a variety of other technologies.
Follow the link below, read through the documentation, then do the tutorial.
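To give a flavor of what a DL Streamer pipeline looks like, here is a minimal sketch using the gvadetect and gvawatermark GStreamer elements (the <...> paths are placeholders, and exact element properties may differ between DL Streamer versions):
gst-launch-1.0 filesrc location=<path_to_video> ! decodebin ! gvadetect model=<path_to_detection_model>.xml device=CPU ! gvawatermark ! videoconvert ! autovideosink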
Use these resources to learn more about the OpenVINO™ toolkit: