Adaptive Self-Training for Object Detection

This repository contains the implementation of the paper "Adaptive Self-Training for Object Detection", presented at the ICCV 2023 Workshops (ICCVW 2023).

This work presents a semi-supervised learning method for object detection. This repository can be used to reproduce the main results of the paper.
[paper]

BibTeX:

@inproceedings{Vandeghen2023Adaptive,
    author    = {Vandeghen, Renaud and Louppe, Gilles and Van Droogenbroeck, Marc},
    title     = {Adaptive Self-Training for Object Detection},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2023},
    pages     = {914-923}
}

Usage

Installation

conda create -n astod python=3.10
conda activate astod
pip install torch torchvision wandb tabulate
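
Optionally, you can quickly check that PyTorch, torchvision, and a GPU are visible from the new environment (this snippet is not part of the repository):

import torch, torchvision
print(torch.__version__, torchvision.__version__, torch.cuda.is_available())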

Dataset

Organize the dataset as follows:

├── coco
│   ├── annotations
│   │   ├── instances_train2017.json
│   │   ├── instances_val2017.json
│   ├── train2017
│   ├── val2017

You can follow the preprocessing done in STAC to generate the different splits.
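
For illustration only, here is a minimal sketch of how a labeled/unlabeled split can be derived from instances_train2017.json by sampling a percentage of the images with a fixed seed; the actual splits should be generated with the STAC preprocessing, and the file names expected by the scripts may differ:

import json, random

def make_split(ann_file, split=10, seed=1):
    """Return (labeled, unlabeled) COCO-style dicts for a `split`% labeled subset."""
    with open(ann_file) as f:
        coco = json.load(f)
    image_ids = sorted(img["id"] for img in coco["images"])
    random.Random(seed).shuffle(image_ids)
    labeled_ids = set(image_ids[: len(image_ids) * split // 100])
    labeled = {
        "images": [i for i in coco["images"] if i["id"] in labeled_ids],
        "annotations": [a for a in coco["annotations"] if a["image_id"] in labeled_ids],
        "categories": coco["categories"],
    }
    unlabeled = {
        "images": [i for i in coco["images"] if i["id"] not in labeled_ids],
        "annotations": [],  # unlabeled images keep no ground-truth boxes
        "categories": coco["categories"],
    }
    return labeled, unlabeled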

Training

Teacher

The first step is to train the teacher model on the labeled data. This can be done by running the following command on a SLURM-managed server:

seed=1
split=10

sbatch --output teacher.log teacher.sh $seed $split
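
teacher.sh wraps the actual training. As an illustration only, here is a minimal sketch of supervised training of a torchvision detector on the labeled split; the model choice, hyper-parameters, and checkpoint name are assumptions, not the repository's settings:

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

def train_teacher(labeled_loader, num_classes=91, epochs=12, device="cuda"):
    model = fasterrcnn_resnet50_fpn(weights=None, num_classes=num_classes).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.02, momentum=0.9, weight_decay=1e-4)
    model.train()
    for _ in range(epochs):
        for images, targets in labeled_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss = sum(model(images, targets).values())  # detection losses returned as a dict
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    torch.save(model.state_dict(), "teacher.pth")  # assumed checkpoint name
    return model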

Labeling

Generate the candidate labels using the teacher model:

seed=1
split=10
sbatch --output labeling.log label.sh $seed $split
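
label.sh runs the teacher on the unlabeled images and stores every prediction together with its score, so that the next step can filter them. A minimal sketch of that idea (the output file name and format are assumptions):

import json
import torch

@torch.no_grad()
def generate_candidate_labels(model, unlabeled_loader, out_file="candidate_labels.json", device="cuda"):
    model.eval().to(device)
    candidates = []
    for images, image_ids in unlabeled_loader:
        outputs = model([img.to(device) for img in images])  # list of dicts with boxes/labels/scores
        for img_id, out in zip(image_ids, outputs):
            for box, label, score in zip(out["boxes"].cpu(), out["labels"].cpu(), out["scores"].cpu()):
                x1, y1, x2, y2 = box.tolist()
                candidates.append({
                    "image_id": int(img_id),
                    "category_id": int(label),
                    "bbox": [x1, y1, x2 - x1, y2 - y1],  # COCO-style xywh
                    "score": float(score),
                })
    with open(out_file, "w") as f:
        json.dump(candidates, f)
    return candidates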

Process the candidate labels

Process the candidate labels based on the score histogram:

seed=1
split=10
bins=21
sbatch --output process.log process.sh $seed $split $bins

This configuration reproduces the results presented in the paper.
You can change the arguments in process.sh to modify the configuration (fixed threshold, global or per-class threshold, etc.).
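
As an illustration of the idea behind this step, here is a minimal sketch of selecting a threshold from the histogram of candidate scores and keeping only the candidates above it. The exact selection rule (and the fixed, global, or per-class variants) is what process.sh configures, so treat this as a sketch rather than the reference implementation:

import json
import numpy as np

def histogram_threshold(scores, bins=21):
    counts, edges = np.histogram(scores, bins=bins, range=(0.0, 1.0))
    return float(edges[int(np.argmin(counts))])  # one possible rule: the least populated bin

with open("candidate_labels.json") as f:          # assumed file name (see previous step)
    candidates = json.load(f)

tau = histogram_threshold([c["score"] for c in candidates], bins=21)
pseudo_labels = [c for c in candidates if c["score"] >= tau]
print(f"threshold={tau:.3f}, kept {len(pseudo_labels)}/{len(candidates)} candidates")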

Student

Train the student model using the pseudo-labels:

seed=1
split=10
sbatch --output student.log student.sh $seed $split
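
Conceptually, the student is trained on the union of the ground-truth annotations from the labeled split and the retained pseudo-labels. A minimal sketch of merging the two into one COCO-style annotation file (all file names are assumptions; student.sh follows its own conventions):

import json

with open("instances_train2017_labeled.json") as f:  # labeled split (assumed name)
    labeled = json.load(f)
with open("pseudo_labels.json") as f:                # retained pseudo-labels (assumed name)
    pseudo = json.load(f)

next_id = max(a["id"] for a in labeled["annotations"]) + 1
for ann in pseudo:                                   # give pseudo-labels valid COCO fields
    ann["id"] = next_id
    ann["iscrowd"] = 0
    ann["area"] = ann["bbox"][2] * ann["bbox"][3]
    next_id += 1

merged = dict(labeled, annotations=labeled["annotations"] + pseudo)
with open("instances_train2017_student.json", "w") as f:
    json.dump(merged, f)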

Refinement

Refine the student model with the labeled data only:

seed=1
split=10
sbatch --output refine.log refine.sh $seed $split
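
For illustration only, refinement can be seen as continuing the training of the student checkpoint on the labeled data alone; the checkpoint names and hyper-parameters below are assumptions, not the values used by refine.sh:

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

def refine(labeled_loader, ckpt="student.pth", epochs=3, lr=0.002, device="cuda"):
    model = fasterrcnn_resnet50_fpn(weights=None, num_classes=91).to(device)
    model.load_state_dict(torch.load(ckpt, map_location=device))  # assumed student checkpoint
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=1e-4)
    model.train()
    for _ in range(epochs):
        for images, targets in labeled_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss = sum(model(images, targets).values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    torch.save(model.state_dict(), "refined.pth")
    return model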

Iterative process

You can repeat the different steps by replacing the teacher model with the student model in the Labeling step.

Acknowledgments

This repository is based on torchvision.
The authors also thank all the great open-source SSOD works.

Authors

  • Renaud Vandeghen, University of Liège (ULiège).
