Real-time video analytics with NVIDIA Jetson devices.
- Tested with NVIDIA Jetson Nano (JetPack 4.6 [L4T 32.6.1])
Set `nvidia` as the default Docker runtime:

```shell
sudo vim /etc/docker/daemon.json
```

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```
```shell
sudo service docker restart

# Check
sudo docker info | grep Default

# Expected output
Default Runtime: nvidia
WARNING: No blkio weight support
WARNING: No blkio weight_device support
```
```shell
# Build docker image
./scripts/build.sh

# Start docker container
./scripts/start.sh
```
- Pretrained models will be downloaded to `/jetson-inference/data/networks`.
- In `scripts/start.sh`, the models directory is mounted to a local volume (`/media/data/models/`), so the models do not need to be re-downloaded each time the container starts.
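The mount in `scripts/start.sh` is presumably a Docker volume flag along these lines (the image name and extra flags here are illustrative assumptions, not copied from the script):

```shell
# Persist downloaded models on the host so they survive container restarts
sudo docker run --runtime nvidia -it --rm \
    -v /media/data/models:/jetson-inference/data/networks \
    dustynv/jetson-inference:r32.6.1
```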
```shell
cd /jetson-inference/tools
./download-models.sh
```
Hello-world examples for using jetson-inference.
```shell
cd basics

# Object detection on a camera stream
python3 detect.py /dev/video0

# Semantic segmentation on a camera stream
python3 segment.py /dev/video0
```
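A script like `detect.py` can be as small as the following sketch of the jetson-inference Python API (the model name and output sink are illustrative choices, not necessarily what `detect.py` uses; this only runs on a Jetson device):

```python
from jetson_inference import detectNet
from jetson_utils import videoSource, videoOutput

# Load a pretrained SSD-Mobilenet-v2 detector (fetched by download-models.sh)
net = detectNet("ssd-mobilenet-v2", threshold=0.5)

camera = videoSource("/dev/video0")   # V4L2 camera
display = videoOutput("display://0")  # window on the desktop

while display.IsStreaming():
    img = camera.Capture()
    if img is None:  # capture timeout, try again
        continue
    detections = net.Detect(img)  # runs inference and overlays boxes on img
    display.Render(img)
    display.SetStatus(f"{len(detections)} objects | {net.GetNetworkFPS():.0f} FPS")
```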
Using jetson-inference together with Supervision to do video analytics.
```shell
cd analytics/
```
Count the number of people inside a defined polygon zone:

```shell
python3 counting.py /dev/video0
```
Count objects crossing a line zone in each direction (in/out):

```shell
python3 flow.py /dev/video0
```
Detect people and redact their whole bodies; useful for processing video with privacy concerns:

```shell
python3 redaction.py /dev/video0
```
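The redaction step itself reduces to blacking out each detected bounding box. A minimal NumPy sketch (the `redact` helper and the box values are hypothetical, not taken from `redaction.py`):

```python
import numpy as np

def redact(frame: np.ndarray, boxes: np.ndarray) -> np.ndarray:
    """Return a copy of the frame with each xyxy box filled with black."""
    out = frame.copy()
    for x1, y1, x2, y2 in boxes.astype(int):
        out[y1:y2, x1:x2] = 0  # zero out the region covered by the detection
    return out

# White 640x480 test frame with one hypothetical person box
frame = np.full((480, 640, 3), 255, dtype=np.uint8)
boxes = np.array([[100, 50, 200, 300]])  # xyxy
redacted = redact(frame, boxes)
```

A Gaussian blur over the same region is a common alternative when full blackout is too aggressive.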