Merge pull request #82 from PUTvision/cars_detection_yolo_tutorial
Cars detection yolo tutorial
przemyslaw-aszkowski authored Mar 10, 2023
2 parents df37d65 + 00f65e7 commit 6d1d86d
Showing 11 changed files with 554 additions and 17 deletions.
4 changes: 2 additions & 2 deletions docs/source/conf.py
@@ -16,11 +16,11 @@
import os
import re
import sys
sys.path.append(os.path.abspath('../../plugin/'))
sys.path.append(os.path.abspath('../../src/'))

# -- Project information -----------------------------------------------------

metadata_file_path = os.path.join('..', '..', 'plugin', 'deepness', 'metadata.txt')
metadata_file_path = os.path.join('..', '..', 'src', 'deepness', 'metadata.txt')
metadata_file_path = os.path.abspath(metadata_file_path)
with open(metadata_file_path, 'rt') as file:
file_content = file.read()
24 changes: 24 additions & 0 deletions docs/source/creators/creators_tutorial.rst
@@ -0,0 +1,24 @@
Model creation tutorial
=======================


=========
Detection
=========

For one of the models in our zoo - specifically for car detection on aerial images - a complete tutorial is provided as a Jupyter notebook:

.. code-block::

    ./tutorials/detection/cars_yolov7/car_detection__prepare_and_train.ipynb

The notebook covers:

* downloading the yolov7 repository
* downloading the training dataset
* preparing training data and labels in the yolov7 format
* running the training and testing
* conversion to an ONNX model
* adding default parameters for the Deepness plugin (a minimal sketch of this step is shown below)

Example model inference can be found in the :code:`Examples` section.
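
For reference, a minimal sketch of the last step - embedding default parameters in the exported ONNX file with the :code:`onnx` package - is shown below. The file names and metadata keys are illustrative placeholders; the keys actually read by Deepness are documented on the :code:`creators_add_metadata_to_model` page.

.. code-block:: python

    # Sketch only: file names and metadata keys below are illustrative placeholders,
    # not the notebook's code. Check the "add metadata to model" page for the keys
    # that Deepness actually reads.
    import json

    import onnx

    model = onnx.load('yolov7_cars.onnx')

    defaults = {
        'model_type': 'Detector',               # assumed key/value naming
        'class_names': json.dumps({0: 'car'}),  # class id -> name mapping
        'resolution': 10,                       # cm/px the model was trained at
    }
    for key, value in defaults.items():
        entry = model.metadata_props.add()
        entry.key = key
        entry.value = str(value)

    onnx.save(model, 'yolov7_cars_with_metadata.onnx')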
46 changes: 46 additions & 0 deletions docs/source/example/example_detection_cars_yolov7.rst
@@ -0,0 +1,46 @@
YOLOv7 cars detection
===================================

The following example shows how to use the YOLOv7 model to detect cars (and other vehicles) in aerial or satellite images.

=======
Dataset
=======

The example is based on the `ITCVD cars detection dataset <https://arxiv.org/pdf/1801.07339.pdf>`_, which provides aerial images at 10 cm/px resolution together with bounding-box annotations for the cars.
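
For YOLOv7 training, the dataset's bounding boxes (assumed here to be in pixel coordinates) have to be converted into normalized :code:`class cx cy w h` label files; this conversion is handled in the tutorial notebook. Below is a minimal sketch of the idea only, with purely illustrative image size and box coordinates, not values taken from the ITCVD files.

.. code-block:: python

    # Sketch only: image size and boxes are illustrative, not read from ITCVD annotations.
    from pathlib import Path

    IMG_W, IMG_H = 4000, 3000  # assumed image size in pixels
    CAR_CLASS_ID = 0

    def to_yolo_line(x_min, y_min, x_max, y_max):
        """Convert a pixel-space box to a normalized 'class cx cy w h' label line."""
        cx = (x_min + x_max) / 2 / IMG_W
        cy = (y_min + y_max) / 2 / IMG_H
        w = (x_max - x_min) / IMG_W
        h = (y_max - y_min) / IMG_H
        return f"{CAR_CLASS_ID} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

    boxes = [(120, 340, 180, 395), (900, 1210, 962, 1270)]  # example boxes in pixels
    Path('00001.txt').write_text('\n'.join(to_yolo_line(*b) for b in boxes) + '\n')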

=========================
Training tutorial
=========================

The entire training process is covered in a Jupyter tutorial notebook:


.. code-block::

    ./tutorials/detection/cars_yolov7/car_detection__prepare_and_train.ipynb

==================
Example inference
==================

Run QGIS, then add the "Poznan 2022 aerial" map using the :code:`QuickMapServices` plugin.

Alternatively, you can use any other aerial or satellite map with a resolution of at least 10 cm/pixel.

.. image:: ../images/cars_near_poznan_university_of_technology_on_ortophoto__zoom_in.webp

Then run our plugin and set the parameters as in the screenshot below. The pre-trained ONNX model can be found at :code:`https://chmura.put.poznan.pl/s/vgOeUN4H4tGsrGm`. Push the Run button to start processing.

.. image:: ../images/cars_near_poznan_university_of_technology_on_ortophoto.webp


Another inference, on a random street in Poznan:

.. image:: ../images/cars_on_ransom_street_in_poznan.webp


And the detection output for the Grunwald district in Poznan:

.. image:: ../images/ecars_in_poznan_grunwald_district.webp
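
If you want to sanity-check the downloaded model outside of QGIS, it can be run directly with :code:`onnxruntime` on a single image tile. The snippet below is a rough sketch, assuming a 640x640 RGB input scaled to [0, 1]; the file names are placeholders and the exact output layout depends on how the model was exported.

.. code-block:: python

    # Rough sketch; file names are placeholders and the output format depends on the export.
    import numpy as np
    import onnxruntime as ort
    from PIL import Image

    session = ort.InferenceSession('aerial_cars_yolov7.onnx',
                                   providers=['CPUExecutionProvider'])
    input_name = session.get_inputs()[0].name

    tile = Image.open('tile.png').convert('RGB').resize((640, 640))
    x = np.asarray(tile, dtype=np.float32) / 255.0   # HWC, scaled to [0, 1]
    x = np.transpose(x, (2, 0, 1))[np.newaxis, ...]  # NCHW batch of 1

    outputs = session.run(None, {input_name: x})
    print([o.shape for o in outputs])  # candidate detections, before thresholding/NMS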
4 binary files not shown.
2 changes: 2 additions & 0 deletions docs/source/index.rst
@@ -38,6 +38,7 @@ Home
:maxdepth: 1
:caption: Examples

example/example_detection_cars_yolov7
example/example_segmentation_landcover
example/example_detection_planes_yolov7
example/example_detection_oils_yolov5
@@ -48,6 +49,7 @@
:caption: For Model Creators

creators/creators_description_classes
creators/creators_tutorial
creators/creators_export_training_data_tool
creators/creators_example_onnx_model
creators/creators_add_metadata_to_model
27 changes: 14 additions & 13 deletions docs/source/main/model_zoo/MODEL_ZOO.md
@@ -4,25 +4,26 @@ The [Model ZOO](https://chmura.put.poznan.pl/s/2pJk4izRurzQwu3) is a collection

## Segmentation models

| Model name | Input size | CM/PX | Description | Example image |
|------------------------------------------------------------------------------------|---|---|---|---------------------------------------------------------|
| [Corn Field Damage Segmentation](https://chmura.put.poznan.pl/s/abWFTVYSDIcncWs) | 512 | 3 | [PUT Vision](https://putvision.github.io/) model for Corn Field Damage Segmentation created on own dataset labeled by experts. We used the classical UNet++ model. It generates 3 outputs: healthy crop, damaged crop, and out-of-field area. | [Image](https://chmura.put.poznan.pl/s/i5WVmcfqPNdBTAQ) |
| [Land Cover Segmentation](https://chmura.put.poznan.pl/s/PnAFJw27uneROkV) | 512 | 40 | The model is trained on the [LandCover.ai dataset](https://landcover.ai.linuxpolska.com/). It provides satellite images with 25 cm/px and 50 cm/px resolution. Annotation masks for the following classes are provided for the images: building (1), woodland (2), water(3), road(4). We use `DeepLabV3+` model with `tu-semnasnet_100` backend and `FocalDice` as a loss function. | [Image](https://chmura.put.poznan.pl/s/Xa29vnieNQTvSt5) |
| [Roads Segmentation](https://chmura.put.poznan.pl/s/y6S3CmodPy1fYYz) | 512 | 21 | The model segments the Google Earth satellite images into 'road' and 'not-road' classes. Model works best on wide car roads, crossroads and roundabouts. | [Image](https://chmura.put.poznan.pl/s/rln6mpbjpsXWpKg) |
| Model | Input size | CM/PX | Description | Example image |
|----------------------------------------------------------------------------------|------------|-------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------|
| [Corn Field Damage Segmentation](https://chmura.put.poznan.pl/s/abWFTVYSDIcncWs) | 512 | 3 | [PUT Vision](https://putvision.github.io/) model for Corn Field Damage Segmentation created on own dataset labeled by experts. We used the classical UNet++ model. It generates 3 outputs: healthy crop, damaged crop, and out-of-field area. | [Image](https://chmura.put.poznan.pl/s/i5WVmcfqPNdBTAQ) |
| [Land Cover Segmentation](https://chmura.put.poznan.pl/s/PnAFJw27uneROkV) | 512 | 40 | The model is trained on the [LandCover.ai dataset](https://landcover.ai.linuxpolska.com/). It provides satellite images with 25 cm/px and 50 cm/px resolution. Annotation masks for the following classes are provided for the images: building (1), woodland (2), water(3), road(4). We use `DeepLabV3+` model with `tu-semnasnet_100` backend and `FocalDice` as a loss function. | [Image](https://chmura.put.poznan.pl/s/Xa29vnieNQTvSt5) |
| [Roads Segmentation](https://chmura.put.poznan.pl/s/y6S3CmodPy1fYYz) | 512 | 21 | The model segments the Google Earth satellite images into 'road' and 'not-road' classes. Model works best on wide car roads, crossroads and roundabouts. | [Image](https://chmura.put.poznan.pl/s/rln6mpbjpsXWpKg) |

## Regression models

| Model name | Input size | CM/PX | Description | Example image |
|---|---|---|---|---|
| | | | | |
| | | | | |
| Model | Input size | CM/PX | Description | Example image |
|---------|---|---|---|---|
| | | | | |
| | | | | |

## Object detection models

| Model name | Input size | CM/PX | Description | Example image |
|---|---|---|---|---|
| [Airbus Planes Detection](https://chmura.put.poznan.pl/s/bBIJ5FDPgyQvJ49) | 256 | 70 | YOLOv7 tiny model for object detection on satellite images. Based on the [Airbus Aircraft Detection dataset](https://www.kaggle.com/datasets/airbusgeo/airbus-aircrafts-sample-dataset). | [Image](https://chmura.put.poznan.pl/s/VfLmcWhvWf0UJfI) |
| [Airbus Oil Storage Detection](https://chmura.put.poznan.pl/s/gMundpKsYUC7sNb) | 512 | 150 | YOLOv5-m model for object detection on satellite images. Based on the [Airbus Oil Storage Detection dataset](https://www.kaggle.com/datasets/airbusgeo/airbus-oil-storage-detection-dataset). | [Image](https://chmura.put.poznan.pl/s/T3pwaKlbFDBB2C3) |
| Model | Input size | CM/PX | Description | Example image |
|--------------------------------------------------------------------------------|------------|-------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------|
| [Airbus Planes Detection](https://chmura.put.poznan.pl/s/bBIJ5FDPgyQvJ49) | 256 | 70 | YOLOv7 tiny model for object detection on satellite images. Based on the [Airbus Aircraft Detection dataset](https://www.kaggle.com/datasets/airbusgeo/airbus-aircrafts-sample-dataset). | [Image](https://chmura.put.poznan.pl/s/VfLmcWhvWf0UJfI) |
| [Airbus Oil Storage Detection](https://chmura.put.poznan.pl/s/gMundpKsYUC7sNb) | 512 | 150 | YOLOv5-m model for object detection on satellite images. Based on the [Airbus Oil Storage Detection dataset](https://www.kaggle.com/datasets/airbusgeo/airbus-oil-storage-detection-dataset). | [Image](https://chmura.put.poznan.pl/s/T3pwaKlbFDBB2C3) |
| [Aerial Cars Detection](https://chmura.put.poznan.pl/s/vgOeUN4H4tGsrGm) | 640 | 10 | YOLOv7-m model for cars detection on aerial images. Based on the [ITCVD](https://arxiv.org/pdf/1801.07339.pdf). | [Image](https://chmura.put.poznan.pl/s/cPzw1mkXlprSUIJ) |

## Contributing

4 changes: 2 additions & 2 deletions src/deepness/metadata.txt
@@ -6,7 +6,7 @@
name=Deepness: Deep Neural Remote Sensing
qgisMinimumVersion=3.22
description=Inference of deep neural network models (ONNX) for segmentation, detection and regression
version=0.4.1
version=0.5.0
author=PUT Vision
[email protected]

@@ -17,7 +17,7 @@ about=
- limiting processing range to predefined area (visible part or area defined by vector layer polygons)
- common types of models are supported: segmentation, regression, detection
- integration with layers (both for input data and model output layers). Once an output layer is created, it can be saved as a file manually
- model ZOO under development (planes detection on Bing Aerial, Corn field damage, Oil Storage tanks detection, ...)
- model ZOO under development (planes detection on Bing Aerial, Corn field damage, Oil Storage tanks detection, cars detection, ...)
- training data Export Tool - exporting raster and mask as small tiles
- parametrization of the processing for advanced users (spatial resolution, overlap, postprocessing)
Plugin requires external python packages to be installed. After the first plugin startup, a Dialog will show, to assist in this process. Please visit the plugin documentation for details.