
Commit

modified deploy
alexhang212 committed Aug 10, 2024
1 parent 1826255 commit 975532a
Showing 75 changed files with 6,662 additions and 1 deletion.
2 changes: 1 addition & 1 deletion .github/workflows/deploy.yml
@@ -16,7 +16,7 @@ jobs:
pip install sphinx sphinx_rtd_theme myst_parser
- name: Sphinx build
run: |
- sphinx-build doc _build
+ sphinx-build source _build
- name: Deploy to GitHub Pages
uses: peaceiris/actions-gh-pages@v3
if: ${{ github.event_name == 'push' && github.ref == 'refs/heads/main' }}
4 changes: 4 additions & 0 deletions build/.buildinfo
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 0359ef56ae7d4f6b0e7cee9ad83b737c
tags: 645f666f9bcd5a90fca523b33c5a78b7
Binary file added build/.doctrees/annotation.doctree
Binary file not shown.
Binary file added build/.doctrees/environment.pickle
Binary file not shown.
Binary file added build/.doctrees/human.doctree
Binary file not shown.
Binary file added build/.doctrees/index.doctree
Binary file not shown.
Binary file added build/.doctrees/inference.doctree
Binary file not shown.
Binary file added build/.doctrees/intro.doctree
Binary file not shown.
Binary file added build/.doctrees/training.doctree
Binary file not shown.
Binary file added build/.doctrees/validation.doctree
Binary file not shown.
Binary file added build/_images/BBoxTemplate.png
Binary file added build/_images/Export.png
Binary file added build/_images/JayOutput.png
Binary file added build/_images/LabelStudioJSON.png
Binary file added build/_images/LabelStudioTemplate.png
Binary file added build/_images/MediaLocation.png
Binary file added build/_images/SampleArgs.png
Binary file added build/_images/YOLOAfterTraining.png
Binary file added build/_images/YOLOConfig.png
Binary file added build/_images/YOLODuringTraining.png
Binary file added build/_images/csvSample.png
Binary file added build/_images/labellingscene.png
123 changes: 123 additions & 0 deletions build/_sources/annotation.rst.txt
@@ -0,0 +1,123 @@
.. _annotation:

Image annotation
================

Here are some guidelines for image annotation and drawing bounding boxes as training data. I will take you through how to use an open-source tool called `Label Studio <https://labelstud.io/>`_, but for training the YOLO-Behaviour framework you can use any annotation tool of your choice, as long as your annotations follow the YOLO format.

But be aware, a lot of these online labelling tools actually **own your data** after you upload them, so make sure you check the terms and conditions before you use them!

I've always used label studio because it's fully open source and only saves the data locally, so this page will guide you step by step through using label studio to label your data.

If you use your own annotation method, I also wrote a guide `here <https://colab.research.google.com/drive/1Zbgx6gKKtF6Pu5YkI-n78baJ6PmysGgU?usp=sharing>`_ on how to format the annotations correctly and what is required.
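
For reference, YOLO-format annotations are plain text files, one per image, with one line per bounding box: a class index followed by the box centre, width and height, all normalised by the image size. Below is a minimal sketch (an illustration, not a script from this repository) of how such a line maps back to pixel coordinates:

.. code-block:: python

   # Minimal sketch: parse one line of a YOLO-format label file.
   # Each line is: <class_id> <x_center> <y_center> <width> <height>,
   # with the four box values normalised between 0 and 1.
   line = "0 0.512 0.430 0.120 0.250"   # hypothetical annotation for class 0 ("eat")
   img_w, img_h = 1920, 1080            # assumed image size in pixels

   class_id, xc, yc, w, h = line.split()
   xc, w = float(xc) * img_w, float(w) * img_w
   yc, h = float(yc) * img_h, float(h) * img_h
   x1, y1 = xc - w / 2, yc - h / 2      # top left corner
   x2, y2 = xc + w / 2, yc + h / 2      # bottom right corner
   print(class_id, x1, y1, x2, y2)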

|start-h1| Installing and launching label studio |end-h1|
Installing label studio is straightforward: you can follow the `official website <https://labelstud.io/guide/install.html>`_ for installation instructions, but in general, if you have python + pip installed, you can just run:

.. code-block:: bash

   pip install label-studio

This document was written for label studio version 1.7.3, in case label studio updates in the future and changes everything. If things on this page don't work, this could be the reason, so you can install the same older version of label studio using ``pip install label-studio==1.7.3`` to properly follow this tutorial.

After installing, you can launch the tool by simply running:

.. code-block:: bash

   label-studio

This should launch the labelling tool in your default browser. If this is your first time, you will need to make an account with an email and password to protect your data.

|start-h1| Starting a labelling project |end-h1|

Click on "Create" on the top right to create a new project


Give the project a name of your choice; there are now two things you will have to do. The first is to upload images, and the second is to set up the labelling configuration.

1. Uploading your images. We provide a utility script to randomly sample frames from a given input video under ``./Utils/SampleRandomFrames.py``.

For example, to sample frames from the sample jay video, you can run this:

.. code-block:: bash

   ### This will sample 10 random frames from Jay_Sample.mp4
   python Utils/SampleRandomFrames.py --Input "Data/JaySampleData/Jay_Sample.mp4" --Output "Data/JaySampleData/SampledFrames/" --Frames 10

After sampling images, you can then drag all the images into label studio under the "import" tab. If label studio throws an error saying the import is too big, you can work around it by relaunching label studio with a larger upload limit: run ``set DATA_UPLOAD_MAX_MEMORY_SIZE=100000000`` in your terminal before you launch label studio.


2. Define the labelling set-up within label studio. The easiest is probably to choose the "Object Detection with Bounding Boxes" template, then modify the classes to your behaviours of interest. Here, since I am using the Jay dataset as a sample, I will define only one class, "eat".

.. image:: /images/LabelStudioTemplate.png
.. image:: /images/BBoxTemplate.png


If you are struggling with using the interface to define the task, or if something doesn't work, you can also use the "code" option, where you can customize your labelling interface. Here is a sample for one "Eat" class. To use this, you can just paste it under the "Code" section.

.. code-block:: HTML

   <View>
     <Image name="image" value="$image"/>
     <RectangleLabels name="label" toName="image">
       <Label value="Eat" background="#FFA39E"/></RectangleLabels>
   </View>


After setting up the labelling interface, you can then start labelling by clicking the "Label all tasks" tab! If you want a random sample of images, you can also choose "Random sample" under the "settings" tab, to maximize variance in the dataset.

.. image:: /images/labellingscene.png

Happy labelling!

|start-h1| Exporting your dataset |end-h1|
After labelling, you have to export the annotations from label studio, then convert them to YOLO format to prepare for model training.

To export your annotations, you need 1\) the annotations, and 2\) the images.

To get the annotations, go to your project page, press export, then click on "JSON-MIN".

You might also notice there is an option to export in "YOLO" format directly, which can also work if you do not wish to split your dataset into train/val/test sets. Here, we still go for the JSON-MIN method to give us more control over the annotations.

.. image:: /images/Export.png

This should save a json file in your downloads folder. Copy this annotation file somewhere that makes sense; you can even rename it.

Then finally, you need to retrieve the images. Label studio annoyingly renames the images, so we need to look for where label studio stores the files. To find that, you have to scroll up in the console output from when you launched label studio:

.. image:: /images/MediaLocation.png

If you go to this location on your computer, you can then access the images you want and copy them to another folder (your own dataset folder).

The image location might differ between computers/systems. One way to check that you have found the correct images is to open the exported json file from above and check the path of the first annotation.

.. image:: /images/LabelStudioJSON.png

For example, on my laptop the images were stored under ``data/upload/1/``, so I would copy the ``1/`` folder into the dataset folder, making sure to embed it within empty ``data/upload`` folders so that the images can be read correctly via a relative path.
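
If you prefer to check this programmatically, here is a minimal sketch (assuming the exported file is called ``JayAnnotations.json`` and that each entry carries an ``image`` field, matching the ``value="$image"`` set in the labelling config) that prints the image path of each annotation:

.. code-block:: python

   ### Minimal sketch: list the image paths referenced in a JSON-MIN export
   import json

   with open("Data/LabelStudio/JayAnnotations.json", "r") as f:  # assumed location
       annotations = json.load(f)

   for task in annotations[:5]:   # the first few entries are enough to spot the folder
       print(task["image"])       # e.g. /data/upload/1/xxxx-frame_0001.jpg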

Finally, we need to convert this into a format that the package we use can read. I provide a script that converts label studio annotations into YOLO format and splits the dataset into a train, validation and test set.

To do this you need to run the ``/Code/1_LabelStudio2YOLO.py`` script. Here are the parameters for the script:

* \-\-Dataset: Path to the dataset folder, location where the images are stored
* \-\-JSON: Path to the label studio JSON file
* \-\-Output: Output folder for the YOLO dataset

Here is an example of running this in the command line with the Jay sample dataset, which I stored under the ``LabelStudio`` folder in the sample data.

.. code-block:: bash

   python Code/1_LabelStudio2YOLO.py --Dataset "./Data/LabelsStudio" --JSON "./Data/LabelStudio/JayAnnotations.json" --Output "./Data/YOLO_Datasets/Jay2"

.. |start-h1| raw:: html

<h1>

.. |end-h1| raw:: html

</h1>
4 changes: 4 additions & 0 deletions build/_sources/human.rst.txt
@@ -0,0 +1,4 @@
Human in the loop methods
=========================

Welcome to the introduction of your project documentation.
32 changes: 32 additions & 0 deletions build/_sources/index.rst.txt
@@ -0,0 +1,32 @@
.. YOLO-Behaviour documentation master file, created by
   sphinx-quickstart on Tue Jul 30 10:08:46 2024.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to YOLO-Behaviour's documentation!
==========================================
Here is detailed documentation on how to implement the YOLO-Behaviour framework on your own data; throughout the examples below we will use the Siberian Jays as a sample video and case study. Before you start, please make sure all packages are installed and the sample datasets are downloaded (see the Intro and installation page).

For any questions or bugs in the scripts, feel free to raise a github issue or contact me! (hoi-hang.chan[at]uni-konstanz.de)




The Pipeline
============
.. toctree::
   :maxdepth: 2

   intro
   annotation
   training
   inference

Advanced
==========
.. toctree::
   :maxdepth: 2

   validation
   human

92 changes: 92 additions & 0 deletions build/_sources/inference.rst.txt
@@ -0,0 +1,92 @@
Visualization and inference
===========================

Now that you have a trained model, it's time to see how good it is. I have prepared two scripts for this: the first is just for visualization, and the second is for inference, where the results will be saved as a pickle or a csv.


|start-h1| Visualization |end-h1|

The script to visualize results is ``Code/3_VisualizeResults.py``. This script requires the trained model and a sample video. Here are the arguments:

* \-\-Video: Path to the sample video
* \-\-Weight: Path to YOLO weight file (See :ref:`training` for more details)
* \-\-Start: Start frame
* \-\-Frames: Total number of frames, -1 means all frames

To run this on the Jay sample video, you can run this in the terminal:

.. code-block:: bash

   python Code/3_VisualizeResults.py --Video "./Data/JaySampleData/Jay_Sample.mp4" --Weight "./Data/Weights/JayBest.pt" --Start 0 --Frames -1

This should then launch a window where the video will be playing, with detected bounding boxes drawn on top. It will also save the results as a video in the current directory, called ``YOLO_Sample.mp4``.

.. image:: /images/JayOutput.png


|start-h1| Inference |end-h1|
If you are happy with the results, you can then proceed to run inference on a whole video. The script for this is ``Code/4_RunInference.py``, which takes in a video and outputs results as a pickle or csv. The sample script only does this for one video, so I highly encourage you to extend it to process multiple videos (see the batch-processing sketch at the end of this section)! Here are the arguments:

* \-\-Video: Path to the sample video
* \-\-Weight: Path to YOLO weight file (See :ref:`training` for more details)
* \-\-Output: Output type, either "csv" or "pickle"

To run this on the Jay sample video, you can run this in the terminal:

.. code-block:: bash

   python Code/4_RunInference.py --Video "./Data/JaySampleData/Jay_Sample.mp4" --Weight "./Data/Weights/JayBest.pt" --Output csv

This will run inference and save the results as a csv, with the same name as the video, in the video's directory.
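
As encouraged above, extending this to multiple videos can be as simple as looping over a folder and calling the script once per file. Here is a minimal sketch (assuming all videos are .mp4 files in one folder and using the same arguments shown above):

.. code-block:: python

   ### Minimal sketch: run 4_RunInference.py on every .mp4 in a folder
   import subprocess
   from pathlib import Path

   VideoDir = Path("./Data/JaySampleData/")   # assumed folder containing the videos
   Weight = "./Data/Weights/JayBest.pt"

   for Video in sorted(VideoDir.glob("*.mp4")):
       subprocess.run(["python", "Code/4_RunInference.py",
                       "--Video", str(Video),
                       "--Weight", Weight,
                       "--Output", "csv"], check=True)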


|start-h1| Data Formats |end-h1|

If you chose to save it as a pickle, the data is actually saved as a big python dictionary. You can load it back using the pickle library within python, and access the data like this:

.. code-block:: python

   ###This is within python!!! Not the command line
   import pickle

   with open("Data/JaySampleData/Jay_Sample_YOLO.pkl", "rb") as f:
       data = pickle.load(f)

The dictionary is structured as follows:

.. code-block:: python

   {frame_number: {
       "Class": [list of classes detected],
       "conf": [list of confidence scores],
       "bbox": [list of bounding boxes]}
   }

Within the dictionary, each frame number is a key, which can be used to return detections from that frame, e.g. ``data[0]`` will return the detections from the first frame.

Within each frame, there is another dictionary with the keys "Class", "conf" and "bbox". These hold the classes detected, the confidence scores and the bounding boxes respectively. The bounding boxes are in the format [x1, y1, x2, y2], where x1, y1 is the top left corner and x2, y2 is the bottom right corner. If multiple bounding boxes are detected in a given frame, the length of each list will be larger than 1. If nothing was detected in a frame, all the lists will be empty.
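
For example, here is a minimal sketch that loops over the loaded dictionary and prints each detection with its box size (continuing from the ``data`` object loaded above):

.. code-block:: python

   ###This is within python!!! Not the command line
   # Minimal sketch: loop over all frames and print each detection
   for frame_number, detections in data.items():
       for cls, conf, (x1, y1, x2, y2) in zip(detections["Class"],
                                              detections["conf"],
                                              detections["bbox"]):
           w, h = x2 - x1, y2 - y1   # box width and height in pixels
           print(frame_number, cls, round(conf, 2), w, h)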

If you decided to output as a csv, this is what the data looks like:

.. image:: /images/csvSample.png


Here are the columns (a short example of loading the csv in python follows the list):

* Frame: Frame number
* Behaviour: The type of behaviour detected
* Confidence: The confidence score of the detection
* BBox_xmin: The x coordinate of the top left corner
* BBox_ymin: The y coordinate of the top left corner
* BBox_xmax: The x coordinate of the bottom right corner
* BBox_ymax: The y coordinate of the bottom right corner
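
Here is a minimal sketch for loading the csv and keeping only high-confidence detections (assuming you have ``pandas`` installed; the file name follows the video name as described above):

.. code-block:: python

   ###This is within python!!! Not the command line
   import pandas as pd

   df = pd.read_csv("Data/JaySampleData/Jay_Sample.csv")    # assumed output file name
   confident = df[df["Confidence"] > 0.5]                   # keep detections above 0.5
   print(confident.groupby("Behaviour")["Frame"].count())   # detections per behaviour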

In the next section, I will go through model validation and optimization using grid search; for that I always use the pickle format, to reduce the need for converting between data structures. So if you would like to follow along with the further steps, I would go for the pickle format. But the csv format is much easier to deal with in whatever programming language you use when deploying the framework.


.. |start-h1| raw:: html

<h1>

.. |end-h1| raw:: html

</h1>
80 changes: 80 additions & 0 deletions build/_sources/intro.rst.txt
@@ -0,0 +1,80 @@
Introduction and installation
=============================

Here you will install everything you need to start using the YOLO-Behaviour framework. It is mostly quite straightforward, but there are still a few things you will need to install.

Before starting, there are **3 main steps** to follow:

1. Make sure you have cloned the `YOLO_Behaviour_Repo Repository <https://github.com/alexhang212/YOLO_Behaviour_Repo>`_ to your local computer. You also need to **change your current working directory** to the ``YOLO_Behaviour_Repo/`` folder, since all the code will be run relative to there.

To change directory, you just need to use ``cd Path/To/YOLO_Behaviour_Repo/`` in your terminal.

2. You need to clone/download the `SORT repository <https://github.com/abewley/sort>`_ and put the folder inside the ``Repositories/`` folder.

3. If you would like to work alongside the examples, also consider downloading the `Example Data <>`_ and putting it under the ``Data/`` folder.


|start-h1| Installation |end-h1|

Before starting, I recommend downloading `Anaconda <https://www.anaconda.com/download/success>`_ to create a virtual environment for the project. After downloading anaconda, you will need to launch anaconda prompt (if you are on windows) or just your terminal (for mac and linux). This will be where you run all of the code required in the rest of the demo.

There are two ways of installing everything; the first is to use the requirements file. Copy, paste and run each line in your terminal:

.. code-block:: bash

   conda create -n YOLO python=3.8
   conda activate YOLO
   pip install -r requirements.txt

If the above doesn't work, you can install the packages one by one:

.. code-block:: bash

   conda create -n YOLO python=3.8
   conda activate YOLO
   pip install ultralytics==8.0.143
   pip install scikit-image==0.21.0
   pip install filterpy==1.4.5
   pip install scikit-learn==1.3.2
   pip install natsort==8.4.0

Here, we made a virtual environment called "YOLO" and installed the required packages. From now on, every time you run the code, you need to activate the virtual environment by running ``conda activate YOLO``.

After installing, you are now ready to proceed!

|start-h1| Summary of scripts in the repository |end-h1|

We provide a number of scripts to run the whole YOLO-Behaviour pipeline; everything is under the ``Code/`` directory and numbered.

* **1_LabelStudio2YOLO.py**: This script converts the LabelStudio annotations to YOLO format for training, refer to :ref:`annotation` for details on doing annotations
* **2_TrainYOLO.py**: This script trains the YOLO model using the annotations from the previous step
* **3_VisualizeResults.py**: This script is a quick script to visualize results for a given video and trained YOLO model
* **4_RunInference.py**: This script runs the YOLO model on a given video and saves the results, either as a csv or as a pickle
* **5_SampleValidation.py**: This script is a sample validation script for Siberian Jay eating detection, note that this script will need to be customized depending on the type of annotations you have!
* **6_SampleGridSearch.py**: This script is a sample script for the grid search algorithm, to find the best hyperparameters for the YOLO model
* **7_HumanInLoopSample.py**: This script provides an example to implement human in the loop, to first extract events using YOLO then manually validate.

Each script can be run with ``python Code/ScriptName.py`` in your terminal. The scripts can take arguments from the command line, or you can modify the script to change the arguments. The sample scripts are also written to run one video at a time, so I highly encourage you to adapt them to go through multiple videos!

If you would like to run scripts using terminal arguments, you can use ``-h`` to see the arguments available:

.. code-block:: bash

   python Code/1_LabelStudio2YOLO.py -h

Alternatively, you can modify the script to change your own arguments/paths. Here is an example from the ``1_LabelStudio2YOLO.py`` script:

.. image:: ./images/SampleArgs.png
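
For reference, the arguments in these scripts follow the standard ``argparse`` pattern, so changing a default path looks roughly like the sketch below (a hypothetical illustration, not the exact code from the repository):

.. code-block:: python

   # Hypothetical argparse set-up; edit the default= values so you do not have to
   # type the arguments in the terminal every time.
   import argparse

   parser = argparse.ArgumentParser()
   parser.add_argument("--Dataset", type=str, default="./Data/LabelStudio")
   parser.add_argument("--JSON", type=str, default="./Data/LabelStudio/JayAnnotations.json")
   parser.add_argument("--Output", type=str, default="./Data/YOLO_Datasets/Jay")
   args = parser.parse_args()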



.. |start-h1| raw:: html

<h1>

.. |end-h1| raw:: html

</h1>