pancake is an application for panorama camera car tracking. Its simple, modular program design facilitates the implementation and application of different techniques for panorama stitching, object detection and object tracking.
The following features are included:
- Straightforward implementation and application of state-of-the-art panorama stitching, object detection and object tracking technologies
- Inclusion of an arbitrary number of image streams of various source types
- Several options for result visualization
- Optional database logging of vehicle tracks with SQLite3
- Modular structure for the extension of new functionalities and approaches
The most recent documentation can be found here.
- pancake 🥞 - Panorama Camera Car Tracking - Documentation
> Poetry is arguably Python's most sophisticated dependency management option available today. Poetry goes far beyond dependencies, with features like generating .lock files, generating project scaffolding, and a ton of configuration options, all of which are handled via a simple CLI. If you're unsure how to cleanly and effectively structure and manage your Python projects, do yourself a favor and use Poetry. — Source
- Make sure Poetry and Python3.8 are installed:

```bash
poetry --version
which python3.8
```

How to install: Poetry, Python3.8
- Set a target directory for this repo:

```bash
# this sets a temporary environment variable, e.g. TARGET_DIR=~/DCAITI
export TARGET_DIR=*target directory*
```
- Clone our repo into the desired location:

```bash
cd $TARGET_DIR
# either via HTTPS
git clone https://github.com/mauricesvp/pancake.git
# or via SSH
git clone [email protected]:mauricesvp/pancake.git
```
- Afterwards, navigate to the pancake location and install the dependencies:

```bash
cd $TARGET_DIR/pancake
poetry install
```
- Finally, activate the virtual environment and run the main script:

```bash
poetry shell
python main.py   # or: poetry run main
```
For more information on basic Poetry usage refer to: https://python-poetry.org/docs/basic-usage/
Troubleshoot

- When trying to install the dependencies, you receive an error like:

`The current project's Python requirement (X.X.XX) is not compatible with some of the required packages Python requirement`

- Navigate to the pancake directory and delete the `poetry.lock`:

```bash
cd $TARGET_DIR/pancake
sudo rm poetry.lock
```

- Then, let Poetry know we want to use Python3.8 (find out the location via `which python3.8`):

```bash
poetry env use *path to python3.8*
```

- Now, try to install the dependencies again:

```bash
poetry install
```
We definitely recommend using Poetry as the Python package manager. Still, in case you want to use Virtualenv or Pipenv, we provide a `requirements.txt` and a `dev-requirements.txt`.
- Clone our repo into a desired location:

```bash
cd $TARGET_DIR
# either via HTTPS
git clone https://github.com/mauricesvp/pancake.git
# or via SSH
git clone [email protected]:mauricesvp/pancake.git
```
- Create a Pipenv or Virtualenv with Python3.8
- Now, activate your python environment and install the dependencies:

```bash
source *path to env*/bin/activate   # Virtualenv
# or
workon *venv name*                  # virtualenvwrapper
pip install -r requirements.txt     # base packages
pip install -r dev-requirements.txt # development packages
```
- Have fun cooking up some pancakes:

```bash
python run.py
```
Troubleshoot
A high processing throughput is essential for live tracking with our app. To fully leverage local computing capabilities, it is important to use the GPU for computation; our experiments have shown that live application is virtually impossible without it. Thus, installing the software mentioned below may be crucial.
CUDA

Our application was tested on CUDA versions >=10.1. We recommend this tutorial for installation.

OpenCV

Our application was tested on OpenCV versions >=4.5. We recommend this tutorial for installation.
Note:
- After compilation, validate that OpenCV is able to access your CUDA device:
  - Activate the project-specific python environment:

```bash
cd $TARGET_DIR/pancake
poetry shell
python
```

  - Now the python shell will open and you can check if your CUDA device is available via:

```python
import cv2
print(cv2.cuda.getCudaEnabledDeviceCount())
```

- Proceed with removing `opencv-python` from the python environment. Otherwise, python will fall back to the CPU version of OpenCV.
After you have followed the steps from the installation, simply start the main script with:

```bash
cd $TARGET_DIR/pancake
poetry shell     # activate the venv
python main.py   # or: poetry run main
```
All of the pancake ingredients can simply be specified in the designated `pancake.yaml`. Below, you will find a detailed description of the underlying parameters:
Device

Select a processing device the app should leverage.

Possible values:
- `DEVICE`: "CPU", "GPU", "0", "1", ...

Note: "GPU" is the same device as "0".
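As a rough illustration of how the `DEVICE` string might be resolved into a framework device identifier — a hypothetical helper, not pancake's actual code:

```python
# Hypothetical helper: map the DEVICE string from pancake.yaml onto a
# torch-style device string. "GPU" is treated as an alias for device "0",
# matching the note above.
def resolve_device(device: str) -> str:
    device = str(device).upper()
    if device == "CPU":
        return "cpu"
    if device == "GPU":
        device = "0"
    return f"cuda:{int(device)}"
```

Here, `resolve_device("GPU")` and `resolve_device("0")` both yield `"cuda:0"`.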
Logging

Select a verbosity level for program output.

Possible values:
- `LEVEL`: "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"
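These values correspond to Python's standard logging levels; a minimal sketch of the mapping (the logger name and setup are assumptions, not pancake's actual logging code):

```python
import logging

def get_logger(level: str) -> logging.Logger:
    # getattr resolves "DEBUG" -> logging.DEBUG, "INFO" -> logging.INFO, etc.
    logger = logging.getLogger("pancake")  # logger name is an assumption
    logger.setLevel(getattr(logging, level.upper()))
    return logger
```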
Database

Specify if the vehicle tracks should be logged in an external database.

Possible values:
- `STORE`: "True", "False"
- `SCHEME PATH`: path to a yaml file containing a custom db schema
- `FILENAME`: name of the stored database file

Note:
- When using a `SCHEME PATH` different from the default, it is necessary to adapt pancake/pancake/db.py. Critical parts of the code are marked as such!
- If you use the same database file for multiple runs, the database will contain data from each respective execution.
- The default database design is displayed below:
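To illustrate the kind of SQLite3 logging involved, here is a minimal sketch — the table and column names are invented for illustration and are not pancake's actual schema (see pancake/pancake/db.py for that):

```python
import sqlite3

# Invented table layout for illustration only; the real schema lives in
# pancake/pancake/db.py (or in the yaml file given via SCHEME PATH).
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE tracks (track_id INTEGER, ts REAL, x INTEGER, y INTEGER)"
)
# Log one hypothetical vehicle position with its timestamp
con.execute(
    "INSERT INTO tracks VALUES (?, ?, ?, ?)", (1, 1621796022.9767, 50, 80)
)
con.commit()
```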
Data

Specify one or more data sources to retrieve images from, as well as a region of interest to be applied to them.

SOURCE

There are several types of sources one can whip into the pancake dough. Essentially, the quantity of provided sources determines the number of frames to be assembled into a panorama.
Source type | Description | Example | Note |
---|---|---|---|
Image | Path to a single image | "../samples/r45/1c/1621796022.9767.jpg" | (None) |
Video | Path to a single video | "../samples/output.avi" | (None) |
Sequence of Images | Path to a directory holding several images | "../samples/r45/1c" | (None) |
Directories with Image Sequences/Videos | (yaml) List of (multiple) directories | | The directories are only allowed to contain the same type of source (either images or videos) |
Live Streams | Path to a .txt file containing stream addresses | "../samples/streams.txt" | Stream addresses could be from an IP camera, YouTube, Twitch and more. Example content |
Note: For database logging with correct timestamps, it is required that the images are named after their respective timestamps. Livestreams, on the other hand, are timed by the exact moment the frame was polled. For prerecorded videos, there is currently no corresponding timestamp strategy available.
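For instance, a timestamp can be recovered from a sample filename such as "../samples/r45/1c/1621796022.9767.jpg" along these lines (an illustrative snippet, not pancake's actual parsing code):

```python
from datetime import datetime, timezone
from pathlib import Path

def timestamp_from_filename(path: str) -> datetime:
    # The file stem is expected to be a Unix timestamp, e.g. "1621796022.9767"
    stem = Path(path).stem
    return datetime.fromtimestamp(float(stem), tz=timezone.utc)

ts = timestamp_from_filename("../samples/r45/1c/1621796022.9767.jpg")
```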
ROI

Regions of interest can be specified by providing, in the yaml file, a dictionary containing the upper-left and bottom-right x, y coordinates of the region for each separate frame.

Example
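A sketch of what such a dictionary could look like — all key names and values here are illustrative assumptions, not pancake's actual schema:

```yaml
# One entry per frame: upper-left (x0, y0) and bottom-right (x1, y1) coordinates
0: {x0: 0, y0: 300, x1: 1280, y1: 720}
1: {x0: 100, y0: 250, x1: 1280, y1: 700}
```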
Backend

Specify the backend related configurations.

Possible values:
- `NAME`: name of the backend strategy according to the registry
- `DEI`:
  - `SIMPLE`: True, False (enables a simpler version of DEI)

Note: For more information on the backend registry and which strategies are currently implemented, refer to Backend.
Detector

Specify the detector related configurations.

Possible values:
- `NAME`: name of the detector technology according to the registry

Note: For more information on the detector registry and which detector technologies are currently implemented, refer to Detection.
Tracker

Specify the tracker related configurations.

Possible values:
- `NAME`: name of the tracking algorithm according to the registry

Note: For more information on the tracker registry and which tracking algorithms are currently implemented, refer to Tracking.
Result Processing

General

Parameters | Possible Values | Description |
---|---|---|
`VIEW_RES` | "True", "False" | Visualize the most recent (enriched) frame |
`SAVE_RES` | "True", "False" | Save results (more detailed configurations under Saving) |
`ASYNC_PROC` | "True", "False" | Asynchronous result processing (a designated slave process is spawned to postprocess the frames) |
`DEBUG` | "True", "False" | Allows manual frame stepping |
Note:
- `VIEW_RES` can't be true when `ASYNC_PROC` is turned on (`cv2.imshow` is not callable from within a subprocess)
- Enabling `ASYNC_PROC` yields a significant speedup
- `DEBUG` is only available when the processed frame is shown
Draw Options

The parameters below make up the main visualization controllers (they apply when `VIEW_RES` or `SAVE_RES` is true).

Parameters | Possible Values | Description |
---|---|---|
`DRAW_DET` | "True", "False" | Draw the detection bounding boxes |
`DRAW_TRACKS` | "True", "False" | Draw the tracked bounding boxes |
`DRAW_TRACK_HIST` | "True", "False" | Draw the tracks corresponding to the bounding boxes (a line representing the tracked route of the vehicle) |
`MAX_TRACK_HIST_LEN` | Integer | Max track history length (max number of track matrices saved/considered for the track history visualization) |
Draw Details

The parameters below give you more detailed options for visualization (they apply when `VIEW_RES` or `SAVE_RES` is true).

Parameters | Possible Values | Description |
---|---|---|
`HIDE_LABELS` | "True", "False" | Hide detected class labels and track ids |
`HIDE_CONF` | "True", "False" | Hide the detection confidences |
`LINE_THICKNESS` | Integer | General line and annotation thickness |
Asynchronous Queue

These configurations concern the queue that is used to store the stitched images, detection matrix and tracks matrix sent from the main process to the designated results-processing subprocess (they apply when `ASYNC_PROC` is true).

Parameters | Possible Values | Description |
---|---|---|
`Q_SIZE` | Integer | Queue size |
`PUT_BLOCKED` | "True", "False" | When true, the main loop is stopped for up to `PUT_TIMEOUT` seconds until a slot is freed; otherwise an exception is raised immediately |
`PUT_TIMEOUT` | Float | Max waiting time (in s) for feeding recent data into the queue; an exception is thrown when the time runs out |
Note:
- The queue fills up when result processing is slower than the actual detection and tracking.
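The put semantics described above can be sketched with Python's standard multiprocessing queue (illustrative example values, not pancake's actual code):

```python
from multiprocessing import Queue
from queue import Full

Q_SIZE, PUT_BLOCKED, PUT_TIMEOUT = 2, True, 0.5  # example values

q = Queue(maxsize=Q_SIZE)
q.put("frame-1")
q.put("frame-2")  # the queue is now full

overflowed = False
try:
    # Blocks for up to PUT_TIMEOUT seconds waiting for a free slot,
    # then raises queue.Full -- the exception mentioned in the table above.
    q.put("frame-3", block=PUT_BLOCKED, timeout=PUT_TIMEOUT)
except Full:
    overflowed = True
```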
Saving

The parameters below represent granular saving options (they apply when `SAVE_RES` is true).

Parameters | Possible Values | Description |
---|---|---|
`MODE` | "image" or "video" | Save the resulting frames either as images or as a video |
`PATH` | String | Relative save directory |
`SUBDIR` | String | Target subdirectory under `PATH`; will be incremented automatically after each run |
`VID_FPS` | Integer | FPS of the resulting video (when `MODE` = "video") |
`EXIST_OK` | "True", "False" | Do not increment automatically (keep saving in `PATH`/`SUBDIR`) |
Note:
- The images and videos are named after the timestamp at which the respective frame is saved.
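The `SUBDIR` increment behaviour might be sketched like this — a hypothetical helper illustrating the semantics above, not pancake's actual code:

```python
from pathlib import Path

def resolve_save_dir(path: str, subdir: str, exist_ok: bool) -> Path:
    # Keep PATH/SUBDIR when EXIST_OK is true or it does not exist yet;
    # otherwise increment to SUBDIR2, SUBDIR3, ...
    target = Path(path) / subdir
    if exist_ok or not target.exists():
        return target
    n = 2
    while (Path(path) / f"{subdir}{n}").exists():
        n += 1
    return Path(path) / f"{subdir}{n}"
```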
The pancake framework can be thought of as a data pipeline. The incoming data is preprocessed, the backend generates detections using a detector, the tracker generates tracks, and the results are stored in a database (this happens for every frame).
Pancake has been designed with modularity in mind, that is to say the Backend, Detector and Tracker can easily be changed, which also means new ones can be implemented and integrated easily.
For more details, and instructions on how to write your own Backend, Detector or Tracker, see below.
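To give a flavour of the registry pattern the modules use, here is a generic sketch — the names and decorator API are invented for illustration; see the linked module documentation for the real interfaces:

```python
# Invented minimal registry; pancake's actual registries may differ.
DETECTOR_REGISTRY = {}

def register_detector(name):
    """Class decorator that files a detector class under a registry key."""
    def wrap(cls):
        DETECTOR_REGISTRY[name] = cls
        return cls
    return wrap

@register_detector("my_detector")
class MyDetector:
    def detect(self, frame):
        """Return a detection matrix for the given frame (empty here)."""
        return []

# A NAME entry in pancake.yaml would then select the class by its key:
detector = DETECTOR_REGISTRY["my_detector"]()
```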
Module | Details | API |
---|---|---|
Data (+ Preprocessing) | Data | API |
Backend | Backend | API |
Detector | Detector | API |
Tracker | Tracker | API |
Result Processing | Result Processing | API |
Storage | Storage | API |
Analysis | Analysis | - |
Google Colab for training Yolov5 models on custom data
Google Colab for executing pancake
Google Drive with various sources
For comments and docstrings, we chose the Google docstring style.
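For reference, a function documented in the Google docstring style looks like this:

```python
def add(a: int, b: int) -> int:
    """Adds two integers.

    Args:
        a: First summand.
        b: Second summand.

    Returns:
        The sum of a and b.
    """
    return a + b
```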
- Yolov5, Ultralytics: https://github.com/ultralytics/yolov5
- DeepSORT: https://github.com/nwojke/deep_sort
- Centroid Tracker: https://gist.github.com/adioshun/779738c3e28151ffbb9dc7d2b13c2c0a