Convert detectors to factory pattern, ability to set different model for each detector (#4635)

* refactor detectors

* move create_detector and DetectorTypeEnum

* fixed code formatting

* add detector model config models

* fix detector unit tests

* adjust SharedMemory size to largest detector model shape

* fix detector model config defaults

* enable auto-discovery of detectors

* simplify config

* simplify config changes further

* update detectors docs; detect detector configs dynamic

* add suggested changes

* remove custom detector doc

* fix grammar, adjust device defaults
dennispg authored Dec 15, 2022
1 parent 43c2761 commit 420bcd7
Showing 15 changed files with 402 additions and 227 deletions.
89 changes: 49 additions & 40 deletions docs/docs/configuration/detectors.md
@@ -3,11 +3,38 @@
id: detectors
title: Detectors
---

Frigate provides the following builtin detector types: `cpu`, `edgetpu`, and `openvino`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors, they run in dedicated processes but pull from a common queue of detection requests across all cameras.

**Note**: There is not yet support for Nvidia GPUs to perform object detection with TensorFlow. They can be used for ffmpeg decoding, but not for object detection.
## CPU Detector (not recommended)
The CPU detector type runs a TensorFlow Lite model utilizing the CPU without hardware acceleration. It is recommended to use a hardware accelerated detector type instead for better performance. To configure a CPU based detector, set the `"type"` attribute to `"cpu"`.

The number of threads used by the interpreter can be specified using the `"num_threads"` attribute, and defaults to `3`.

A TensorFlow Lite model is provided in the container at `/cpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.

```yaml
detectors:
  cpu1:
    type: cpu
    num_threads: 3
    model:
      path: "/custom_model.tflite"
  cpu2:
    type: cpu
    num_threads: 3
```
When using CPU detectors, you can add one CPU detector per camera. Adding more detectors than the number of cameras should not improve performance.
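As a point of reference, the sketch below shows roughly what a CPU detector does under the hood: load a TensorFlow Lite model with a configurable thread count and run inference on a single input tensor. It assumes the `tflite_runtime` package and the bundled `/cpu_model.tflite`; Frigate's actual implementation lives in `frigate/detectors` and differs in detail.

```python
# Hedged sketch of a tflite CPU detector using num_threads; the package and
# paths here are assumptions, not Frigate's exact code.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="/cpu_model.tflite", num_threads=3)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# One uint8 frame shaped to the model input, e.g. [1, 320, 320, 3].
frame = np.zeros(input_details[0]["shape"], dtype=np.uint8)
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

# First output tensor; its layout depends on the model (e.g. detection boxes).
boxes = interpreter.get_tensor(output_details[0]["index"])
```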
## Edge-TPU Detector

The EdgeTPU detector type runs a TensorFlow Lite model utilizing the Google Coral delegate for hardware acceleration. To configure an EdgeTPU detector, set the `"type"` attribute to `"edgetpu"`.

The EdgeTPU device can be specified using the `"device"` attribute according to the [Documentation for the TensorFlow Lite Python API](https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api). If not set, the delegate will use the first device it finds.

A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.

### Single USB Coral

@@ -16,6 +43,8 @@
```yaml
detectors:
  coral:
    type: edgetpu
    device: usb
    model:
      path: "/custom_model.tflite"
```

### Multiple USB Corals
@@ -64,38 +93,33 @@ detectors:
    device: pci
```

## OpenVINO Detector

The OpenVINO detector type runs an OpenVINO IR model on Intel CPU, GPU and VPU hardware. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.

The OpenVINO device to be used is specified using the `"device"` attribute according to the naming conventions in the [Device Documentation](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Working_with_devices.html). Supported device values include `AUTO`, `CPU`, `GPU`, and `MYRIAD`. If not specified, the default OpenVINO device will be selected by the `AUTO` plugin.

OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. A supported Intel platform is required to use the `GPU` device with OpenVINO. The `MYRIAD` device may be run on any platform, including Arm devices. For detailed system requirements, see [OpenVINO System Requirements](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html).

An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model. Use the model configuration shown below when using the OpenVINO detector.

```yaml
detectors:
  ov:
    type: openvino
    device: AUTO
    model:
      path: /openvino-model/ssdlite_mobilenet_v2.xml

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt
```
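For orientation, here is a hedged sketch of running the bundled IR model through the standard OpenVINO Python API with the `AUTO` device plugin; the exact calls Frigate makes may differ.

```python
# Hedged sketch of OpenVINO inference with the AUTO device plugin; not
# Frigate's exact implementation.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("/openvino-model/ssdlite_mobilenet_v2.xml")
compiled = core.compile_model(model, "AUTO")  # AUTO picks CPU/GPU/MYRIAD

# One NHWC, BGR, 300x300 frame; the exact input precision depends on how
# the IR was converted.
frame = np.zeros((1, 300, 300, 3), dtype=np.float32)
request = compiled.create_infer_request()
results = request.infer([frame])  # dict keyed by output node
```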

### Intel NCS2 VPU and Myriad X Setup

Intel produces a neural net inference acceleration chip called Myriad X. This chip was sold in their Neural Compute Stick 2 (NCS2), which has been discontinued. If you intend to use the MYRIAD device for acceleration, additional setup is required to pass through the USB device. The host needs a udev rule installed to handle the NCS2 device.

@@ -123,18 +147,3 @@ device_cgroup_rules:
volumes:
  - /dev/bus/usb:/dev/bus/usb
```

14 changes: 6 additions & 8 deletions docs/docs/configuration/index.md
@@ -74,15 +74,13 @@ mqtt:
# Optional: Detectors configuration. Defaults to a single CPU detector
detectors:
  # Required: name of the detector
  detector_name:
    # Required: type of the detector
    # Frigate provided types include 'cpu', 'edgetpu', and 'openvino' (default: shown below)
    # Additional detector types can also be plugged in.
    # Detectors may require additional configuration.
    # Refer to the Detectors configuration page for more information.
    type: cpu

# Optional: Database configuration
database:
Expand Down
15 changes: 9 additions & 6 deletions frigate/app.py
@@ -186,10 +186,16 @@ def start_detectors(self) -> None:
        self.detection_out_events[name] = mp.Event()

        try:
            largest_frame = max(
                [
                    det.model.height * det.model.width * 3
                    for (name, det) in self.config.detectors.items()
                ]
            )
            shm_in = mp.shared_memory.SharedMemory(
                name=name,
                create=True,
                size=largest_frame,
            )
        except FileExistsError:
            shm_in = mp.shared_memory.SharedMemory(name=name)
@@ -204,15 +210,12 @@ def start_detectors(self) -> None:
        self.detection_shms.append(shm_in)
        self.detection_shms.append(shm_out)

        for name, detector_config in self.config.detectors.items():
            self.detectors[name] = ObjectDetectProcess(
                name,
                self.detection_queue,
                self.detection_out_events,
                detector_config,
            )

    def start_detected_frames_processor(self) -> None:
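To illustrate the sizing change in `start_detectors` above in isolation: the input buffer is allocated once, large enough for the biggest detector model, so detectors with different input shapes can share it. The sketch below is not Frigate code; the detector names and model shapes are hypothetical.

```python
# Hedged illustration of the SharedMemory sizing above; detector names and
# model shapes are hypothetical.
from multiprocessing import shared_memory

import numpy as np

model_shapes = {"coral": (320, 320), "ov": (300, 300)}
largest_frame = max(h * w * 3 for (h, w) in model_shapes.values())  # 307200

shm = shared_memory.SharedMemory(name="detector_demo", create=True, size=largest_frame)
try:
    for name, (h, w) in model_shapes.items():
        # Each detector views just the prefix of the buffer it needs.
        frame = np.ndarray((h, w, 3), dtype=np.uint8, buffer=shm.buf)
        frame.fill(0)
        del frame  # release the view so the buffer can be closed
finally:
    shm.close()
    shm.unlink()
```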
101 changes: 36 additions & 65 deletions frigate/config.py
@@ -9,7 +9,7 @@
import matplotlib.pyplot as plt
import numpy as np
import yaml
from pydantic import BaseModel, Extra, Field, validator, parse_obj_as
from pydantic.fields import PrivateAttr

from frigate.const import (
@@ -32,8 +32,15 @@
    parse_preset_output_record,
    parse_preset_output_rtmp,
)
from frigate.detectors import (
    PixelFormatEnum,
    InputTensorEnum,
    ModelConfig,
    DetectorConfig,
)
from frigate.version import VERSION


logger = logging.getLogger(__name__)

# TODO: Identify what the default format to display timestamps is
@@ -52,18 +59,6 @@ class Config:
        extra = Extra.forbid


class DetectorTypeEnum(str, Enum):
    edgetpu = "edgetpu"
    openvino = "openvino"
    cpu = "cpu"


class DetectorConfig(FrigateBaseModel):
    type: DetectorTypeEnum = Field(default=DetectorTypeEnum.cpu, title="Detector Type")
    device: str = Field(default="usb", title="Device Type")
    num_threads: int = Field(default=3, title="Number of detection threads")


class UIConfig(FrigateBaseModel):
    use_experimental: bool = Field(default=False, title="Experimental UI")

@@ -725,57 +720,6 @@ class DatabaseConfig(FrigateBaseModel):
    )


class PixelFormatEnum(str, Enum):
    rgb = "rgb"
    bgr = "bgr"
    yuv = "yuv"


class InputTensorEnum(str, Enum):
    nchw = "nchw"
    nhwc = "nhwc"


class ModelConfig(FrigateBaseModel):
    path: Optional[str] = Field(title="Custom Object detection model path.")
    labelmap_path: Optional[str] = Field(title="Label map for custom object detector.")
    width: int = Field(default=320, title="Object detection model input width.")
    height: int = Field(default=320, title="Object detection model input height.")
    labelmap: Dict[int, str] = Field(
        default_factory=dict, title="Labelmap customization."
    )
    input_tensor: InputTensorEnum = Field(
        default=InputTensorEnum.nhwc, title="Model Input Tensor Shape"
    )
    input_pixel_format: PixelFormatEnum = Field(
        default=PixelFormatEnum.rgb, title="Model Input Pixel Color Format"
    )
    _merged_labelmap: Optional[Dict[int, str]] = PrivateAttr()
    _colormap: Dict[int, Tuple[int, int, int]] = PrivateAttr()

    @property
    def merged_labelmap(self) -> Dict[int, str]:
        return self._merged_labelmap

    @property
    def colormap(self) -> Dict[int, Tuple[int, int, int]]:
        return self._colormap

    def __init__(self, **config):
        super().__init__(**config)

        self._merged_labelmap = {
            **load_labels(config.get("labelmap_path", "/labelmap.txt")),
            **config.get("labelmap", {}),
        }

        cmap = plt.cm.get_cmap("tab10", len(self._merged_labelmap.keys()))

        self._colormap = {}
        for key, val in self._merged_labelmap.items():
            self._colormap[val] = tuple(int(round(255 * c)) for c in cmap(key)[:3])


class LogLevelEnum(str, Enum):
    debug = "debug"
    info = "info"
@@ -890,7 +834,7 @@ class FrigateConfig(FrigateBaseModel):
        default_factory=ModelConfig, title="Detection model configuration."
    )
    detectors: Dict[str, DetectorConfig] = Field(
        default=DEFAULT_DETECTORS,
        title="Detector hardware configuration.",
    )
    logger: LoggerConfig = Field(
@@ -1032,6 +976,33 @@ def runtime_config(self) -> FrigateConfig:
            # generate the ffmpeg commands
            camera_config.create_ffmpeg_cmds()
            config.cameras[name] = camera_config

        for key, detector in config.detectors.items():
            detector_config: DetectorConfig = parse_obj_as(DetectorConfig, detector)
            if detector_config.model is None:
                detector_config.model = config.model
            else:
                model = detector_config.model
                schema = ModelConfig.schema()["properties"]
                if (
                    model.width != schema["width"]["default"]
                    or model.height != schema["height"]["default"]
                    or model.labelmap_path is not None
                    or model.labelmap != {}
                    or model.input_tensor != schema["input_tensor"]["default"]
                    or model.input_pixel_format
                    != schema["input_pixel_format"]["default"]
                ):
                    logger.warning(
                        "Customizing more than a detector model path is unsupported."
                    )
            merged_model = deep_merge(
                detector_config.model.dict(exclude_unset=True),
                config.model.dict(exclude_unset=True),
            )
            detector_config.model = ModelConfig.parse_obj(merged_model)
            config.detectors[key] = detector_config

        return config

    @validator("cameras")
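To make the merge precedence in `runtime_config` concrete, here is a small self-contained sketch. The `deep_merge` below is a stand-in written for illustration, assuming `frigate.util.deep_merge` gives priority to keys from its first argument; the dicts mimic the output of `model.dict(exclude_unset=True)`.

```python
# Stand-in for frigate.util.deep_merge, assuming first-argument priority.
def deep_merge(dct1: dict, dct2: dict) -> dict:
    merged = dict(dct2)
    for key, val in dct1.items():
        if isinstance(val, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(val, merged[key])
        else:
            merged[key] = val
    return merged

# Per-detector model config overrides only the path; the global model
# config supplies everything else that was explicitly set.
detector_model = {"path": "/custom_model.tflite"}
global_model = {"width": 320, "height": 320}

print(deep_merge(detector_model, global_model))
# -> {'width': 320, 'height': 320, 'path': '/custom_model.tflite'}
```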
24 changes: 24 additions & 0 deletions frigate/detectors/__init__.py
@@ -0,0 +1,24 @@
import logging

from .detection_api import DetectionApi
from .detector_config import (
    PixelFormatEnum,
    InputTensorEnum,
    ModelConfig,
)
from .detector_types import DetectorTypeEnum, api_types, DetectorConfig


logger = logging.getLogger(__name__)


def create_detector(detector_config):
    if detector_config.type == DetectorTypeEnum.cpu:
        logger.warning(
            "CPU detectors are not recommended and should only be used for testing or for trial purposes."
        )

    api = api_types.get(detector_config.type)
    if not api:
        raise ValueError(detector_config.type)
    return api(detector_config)
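Because detectors are now auto-discovered, a new detector type plugs in by subclassing `DetectionApi`. The sketch below is hypothetical: it assumes `detector_types` builds `api_types` from `DetectionApi` subclasses keyed by their `type_key`, and that the remaining abstract method is `detect_raw`, returning a `(20, 6)` array of `[class_id, score, y_min, x_min, y_max, x_max]` rows as the built-in detectors do.

```python
# Hypothetical plugin detector; the class name, config fields, and the
# detect_raw contract are assumptions based on the built-in detectors.
import logging

import numpy as np

from frigate.detectors.detection_api import DetectionApi

logger = logging.getLogger(__name__)


class DummyDetector(DetectionApi):
    type_key = "dummy"  # matched against detectors.<name>.type in the config

    def __init__(self, detector_config):
        self.height = detector_config.model.height
        self.width = detector_config.model.width
        logger.info("dummy detector loaded (%dx%d)", self.width, self.height)

    def detect_raw(self, tensor_input):
        # Return no detections: 20 rows of [class_id, score, y_min, x_min, y_max, x_max].
        return np.zeros((20, 6), np.float32)
```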
6 changes: 3 additions & 3 deletions frigate/detectors/detection_api.py
@@ -1,15 +1,15 @@
import logging

from abc import ABC, abstractmethod


logger = logging.getLogger(__name__)


class DetectionApi(ABC):
    type_key: str

    @abstractmethod
    def __init__(self, detector_config):
        pass

    @abstractmethod
_(Diffs for the remaining 9 changed files are not shown.)_