Back | Next | Contents
Object Detection

Locating Objects with DetectNet

The previous image recognition examples output class probabilities representing the entire input image. The second deep learning capability we're highlighting in this tutorial is object detection: finding where in the frame various objects are located by extracting their bounding boxes. Unlike image recognition, object detection networks are capable of detecting multiple independent objects per frame.

The detectNet object accepts an image as input, and outputs a list of coordinates of the detected bounding boxes along with their confidence values. detectNet is available to use from Python and C++. See below for various pre-trained detection models available for download.
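
For reference, here's a minimal sketch of what that looks like from Python, assuming the jetson.inference and jetson.utils bindings built by this project are installed (the network name and image path below are placeholders matching the examples on this page):

```python
# minimal sketch of the detectNet Python API (assumes the jetson.inference bindings are installed)
import jetson.inference
import jetson.utils

# load a pre-trained detection network (see the table of models below for other choices)
net = jetson.inference.detectNet("pednet", threshold=0.5)

# load an image into shared CPU/GPU memory (placeholder path)
img, width, height = jetson.utils.loadImageRGBA("peds-004.jpg")

# run detection -- returns a list of bounding boxes with confidence values
detections = net.Detect(img, width, height)

for d in detections:
    print(net.GetClassDesc(d.ClassID), d.Confidence, (d.Left, d.Top, d.Right, d.Bottom))
```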

As examples of using detectNet, we provide command-line interface programs for both C++ and Python.

Later in the tutorial, we'll also cover object detection on live camera streams from C++ and Python.

Detecting Objects from the Command Line

The detectnet-console program can be used to locate objects in static images. It accepts 3 command-line parameters:

  • the path to an input image (jpg, png, tga, bmp)
  • optional path to output image (jpg, png, tga, bmp)
  • optional --network flag which changes the detection model being used (the default network is PedNet).

Note that there are additional command-line parameters available for loading custom models. Launch the application with the --help flag to receive more info about using them, or see the Code Examples readme.

Here's an example of locating humans in an image with the default PedNet model:

C++

$ ./detectnet-console peds-004.jpg output.jpg

Python

$ ./detectnet-console.py peds-004.jpg output.jpg

Pre-trained Detection Models Available

Below is a table of the pre-trained object detection networks available for download, and the associated --network argument to detectnet-console used for loading the pre-trained models:

| Model                    | CLI argument       | NetworkType enum   | Object classes        |
|--------------------------|--------------------|--------------------|-----------------------|
| SSD-Mobilenet-v1         | ssd-mobilenet-v1   | SSD_MOBILENET_V1   | 91 (COCO classes)     |
| SSD-Mobilenet-v2         | ssd-mobilenet-v2   | SSD_MOBILENET_V2   | 91 (COCO classes)     |
| SSD-Inception-v2         | ssd-inception-v2   | SSD_INCEPTION_V2   | 91 (COCO classes)     |
| DetectNet-COCO-Dog       | coco-dog           | COCO_DOG           | dogs                  |
| DetectNet-COCO-Bottle    | coco-bottle        | COCO_BOTTLE        | bottles               |
| DetectNet-COCO-Chair     | coco-chair         | COCO_CHAIR         | chairs                |
| DetectNet-COCO-Airplane  | coco-airplane      | COCO_AIRPLANE      | airplanes             |
| ped-100                  | pednet             | PEDNET             | pedestrians           |
| multiped-500             | multiped           | PEDNET_MULTI       | pedestrians, luggage  |
| facenet-120              | facenet            | FACENET            | faces                 |

note: to download additional networks, run the Model Downloader tool
             $ cd jetson-inference/tools
             $ ./download-models.sh

Running Different Detection Models

You can specify which model to load by setting the --network flag on the command line to one of the corresponding CLI arguments from the table above. By default, PedNet is loaded (pedestrian detection) if the optional --network flag isn't specified.

Let's try running some of the other COCO models:

# C++
$ ./detectnet-console --network=coco-dog dog_1.jpg output_1.jpg

# Python
$ ./detectnet-console.py --network=coco-dog dog_1.jpg output_1.jpg


# C++
$ ./detectnet-console --network=coco-bottle bottle_0.jpg output_2.jpg

# Python
$ ./detectnet-console.py --network=coco-bottle bottle_0.jpg output_2.jpg


# C++
$ ./detectnet-console --network=coco-airplane airplane_0.jpg output_3.jpg 

# Python
$ ./detectnet-console.py --network=coco-airplane airplane_0.jpg output_3.jpg


Multi-class Object Detection Models

Some models support the detection of multiple types of objects. For example, when using the multiped model on images containing luggage or baggage in addition to pedestrians, the second object class is rendered with a green overlay:

# C++
$ ./detectnet-console --network=multiped peds-003.jpg output_4.jpg

# Python
$ ./detectnet-console.py --network=multiped peds-003.jpg output_4.jpg
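
As a rough illustration (using the same assumed Python bindings and placeholder image path as the sketch earlier on this page), grouping the detections by class is one way to see that multiped reports pedestrians and luggage as separate classes:

```python
# sketch: tally multiped detections per class (assumes jetson.inference bindings and a local test image)
import collections
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("multiped", threshold=0.5)
img, width, height = jetson.utils.loadImageRGBA("peds-003.jpg")

counts = collections.Counter()
for d in net.Detect(img, width, height):
    counts[net.GetClassDesc(d.ClassID)] += 1    # separate tallies, e.g. pedestrians vs. luggage

for name, num in counts.items():
    print(name, num)
```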

Next, we'll run object detection on a live camera stream.

Next | Running the Live Camera Detection Demo
Back | Running the Live Camera Recognition Demo

© 2016-2019 NVIDIA | Table of Contents