
CORe50

License: CC BY 4.0 | built with Python 3.x | built with Caffe | built with Sacred

A new Dataset and Benchmark for Continual Learning and Object Recognition, Detection and Segmentation


  • CORe50 core code-base
  • CORe50 benchmark configuration files
  • Easy-to-access results data and batches configurations
  • Easy-setup, getting started and Python data loader
  • Experiments ported to Python 3.x
  • New release and additional baselines within Avalanche

On this page we provide the code and all the materials related to the CORe50 benchmark. If you plan to use this dataset or the other resources you find here, please cite our latest papers "CORe50: a New Dataset and Benchmark for Continuous Object Recognition" and "Fine-Grained Continual Learning":

@InProceedings{lomonaco2017core50,
   title = {CORe50: a New Dataset and Benchmark for Continuous Object Recognition},
   author = {Vincenzo Lomonaco and Davide Maltoni},
   booktitle = {Proceedings of the 1st Annual Conference on Robot Learning},
   pages = {17--26},
   year = {2017},
   volume = {78}
}

@article{lomonaco2019nicv2,
   title = {Fine-Grained Continual Learning},
   author = {Vincenzo Lomonaco and Davide Maltoni and Lorenzo Pellegrini},
   journal = {arXiv preprint arXiv:1907.03799},
   year = {2019}
}

You can find more information about the dataset/benchmark as well as additional data to download at: vlomonaco.github.io/core50.


Dependencies

In order to execute the code in this repository, you'll need to install the following dependencies in a Python 3.x environment:

  • Numpy: Matrix operations and the like
pip install numpy
  • Sacred: Experiments manager
pip install sacred
  • Caffe: Current DL back-end (easily interchangeable)

Follow the step-by-step guide for installing Caffe here.
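
Since getting the Python bindings on the path is the usual stumbling block, a quick sanity check (a minimal sketch, assuming pycaffe has been built and is on your PYTHONPATH) is:

# Quick sanity check for the Caffe Python bindings.
import caffe

caffe.set_mode_gpu()  # use caffe.set_mode_cpu() on CPU-only machines
print("Caffe imported successfully")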


Project Structure

The project is currently structured as follows:

  • confs/: In this folder you can find all the experiments configurations and the caffe definition files. sI, sII and sIII stand for the NI, NC and NIC scenarios, respectively.
  • core/: The actual code of the benchmark.
  • data/: After the setup it will be created and filled with data needed for the experiments. It will also be used for storing partial computations.
  • extras/: Results and configuration files you can download without delving into the code.
  • scripts/: Useful scripts that help you download the necessary materials, set up the environment and load the data for your experiments in Python (see the sketch after this list).
  • LICENSE: Standard Creative Commons Attribution 4.0 International License.
  • README.md: This instructions file.
  • run_sI_exps.sh: Simple bash script for running the "New Instances (NI)" experiments with the different architectures and strategies.
  • run_sII_exps.sh: Simple bash script for running the "New Classes (NC)" experiments with the different architectures and strategies.
  • run_sIII_exps.sh: Simple bash script for running the "New Instances and Classes (NIC)" experiments with the different architectures and strategies.
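
As a preview of the Python data loader mentioned above, here is a minimal sketch of how loading the incremental batches might look. It assumes scripts/python/data_loader.py exposes a CORE50 class along these lines; check the script itself for the exact signature and parameters:

from data_loader import CORE50

# 'ni' = New Instances; 'nc' and 'nic' select the other scenarios.
# The root below is a placeholder: point it at the directory filled
# by the setup script.
dataset = CORE50(root='data/', scenario='ni')

# Iterating over the object yields one incremental training batch at a time.
for batch_id, (train_x, train_y) in enumerate(dataset):
    print("batch {}: {} images".format(batch_id, train_x.shape[0]))

# The test set is fixed across the whole run.
test_x, test_y = dataset.get_test_set()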

Getting Started

First of all, let's clone the repository:

git clone https://github.com/vlomonaco/core50.git

Then, in order to run the experiments and reproduce the benchmark, we need to download the pre-trained models and the CORe50 dataset. This can be done automatically using the provided script:

cd core50
./scripts/bash/fetch_data_and_setup.sh

All the data will be downloaded into the data/ directory. After this initial step you can directly run the experiments with the bash scripts run_sI_exps.sh, run_sII_exps.sh and run_sIII_exps.sh for the NI, NC and NIC scenarios, respectively.

For example, reproducing the first scenario experiments can be as easy as running:

./run_sI_exps.sh

Since these experiments can take a while (even more than 24 hours, depending on the scenario), you can disable some of them simply by commenting them out in the bash script.


Troubleshooting

  • If you find slightly different results from our benchmark (by a few percentage points), that is to be expected! First, because we use the cuDNN engine, which is not fully deterministic for convolutions. Second, because error can accumulate during the incremental learning process. If you want full reproducibility (which means roughly a 2x increase in the time needed), just set the engine param of the convolutions to 1 (see the sketch after this list).

  • If you run into trouble with the freezeweights strategy, you probably need to reset the learning rate multipliers in the prototxt (sorry, my bad... I'm currently working on a new version of the code that creates the prototxt files instead of modifying them on the fly).

  • Hey! If you run into any other trouble, don't get frustrated, just ask: we'll answer within a few hours! :-)
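
For the determinism fix above, the engine of every convolution can be switched from cuDNN to the plain Caffe implementation (enum value 1) by editing the prototxt, either by hand or programmatically. Here is a minimal sketch using the protobuf text format (file names are placeholders, and it assumes the prototxts use the newer layer format):

from caffe.proto import caffe_pb2
from google.protobuf import text_format

# Parse an existing network definition (placeholder path).
net = caffe_pb2.NetParameter()
with open('confs/net.prototxt') as f:
    text_format.Merge(f.read(), net)

# Force the deterministic CAFFE engine (enum value 1) for all convolutions.
for layer in net.layer:
    if layer.type == 'Convolution':
        layer.convolution_param.engine = caffe_pb2.ConvolutionParameter.CAFFE

# Write the modified definition back out (placeholder path).
with open('confs/net_deterministic.prototxt', 'w') as f:
    f.write(text_format.MessageToString(net))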


License

This work is licensed under a Creative Commons Attribution 4.0 International License.


Author