To get started, download the following datasets and store them under the `$DATA` directory. We use `data/` as the default; if you want to use a different path, just make sure you pass it in the arguments of the Python scripts that store or load data there.
The file structure looks like:
data/
|–– caltech-101/
|–– eurosat/
|–– cct20/
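If it helps, the top-level layout above can be created up front. A minimal sketch, assuming the default `data/` root (change `DATA` if you use a different path):

```python
# Create the expected top-level layout under the data root.
# "data" matches the default used by the scripts in this repo.
from pathlib import Path

DATA = Path("data")  # change this if you pass a different path to the scripts
for name in ["caltech-101", "eurosat", "cct20"]:
    (DATA / name).mkdir(parents=True, exist_ok=True)

print(sorted(p.name for p in DATA.iterdir()))
```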
Datasets list:
- Caltech101
- OxfordPets
- StanfordCars
- Flowers102
- Food101
- FGVCAircraft
- SUN397
- DTD
- EuroSAT
- UCF101
- FMoW
- OCT
- CCT20
- ICCT
- Serengeti
- MMCT
The instructions to prepare each dataset are detailed below. To ensure reproducibility and fair comparison for future work, we provide fixed train/val/test splits. The fixed splits are either taken from the original datasets (when available) or created by the authors of CoOp and ourselves (the last six datasets in the list).
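The split files can be inspected with a few lines of Python. This sketch assumes the CoOp-style schema, where each of the `train`/`val`/`test` keys maps to a list of `[image_path, class_label, class_name]` entries — `load_split` is a hypothetical helper, and the exact schema should be checked against an actual split file before relying on it:

```python
# Hedged sketch: read a fixed split file, assuming the CoOp-style schema
# {"train"/"val"/"test": [[image_path, class_label, class_name], ...]}.
import json
from pathlib import Path

def load_split(split_file):
    """Return the train/val/test item lists stored in a split .json file."""
    with open(split_file) as f:
        split = json.load(f)
    return split.get("train", []), split.get("val", []), split.get("test", [])

# Usage with a tiny stand-in file (real files live under $DATA/<dataset>/):
demo = {"train": [["images/cat_1.jpg", 0, "cat"]], "val": [], "test": []}
Path("demo_split.json").write_text(json.dumps(demo))
train, val, test = load_split("demo_split.json")
print(len(train), len(val), len(test))  # -> 1 0 0
```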
- Create a folder named `caltech-101/` under `$DATA`.
- Download `101_ObjectCategories.tar.gz` from http://www.vision.caltech.edu/Image_Datasets/Caltech101/101_ObjectCategories.tar.gz and extract the file under `$DATA/caltech-101`.
- Download `split_zhou_Caltech101.json` from this link and put it under `$DATA/caltech-101`.
The directory structure should look like
caltech-101/
|–– 101_ObjectCategories/
|–– split_zhou_Caltech101.json
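The download-and-extract step above can be scripted. A sketch using only the standard library, assuming network access (the Caltech mirror has moved in the past, so the URL may need updating; the split file is fetched separately from its repo link):

```python
# Download and extract the Caltech-101 archive under $DATA/caltech-101.
import tarfile
import urllib.request
from pathlib import Path

target = Path("data") / "caltech-101"
target.mkdir(parents=True, exist_ok=True)

url = ("http://www.vision.caltech.edu/Image_Datasets/Caltech101/"
       "101_ObjectCategories.tar.gz")
archive = target / "101_ObjectCategories.tar.gz"
try:
    urllib.request.urlretrieve(url, archive)
    with tarfile.open(archive) as tar:
        tar.extractall(target)  # -> data/caltech-101/101_ObjectCategories/
except OSError as err:  # e.g. offline, or the mirror has moved
    print(f"download skipped: {err}")
```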
- Create a folder named `oxford_pets/` under `$DATA`.
- Download the images from https://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz.
- Download the annotations from https://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz.
- Download `split_zhou_OxfordPets.json` from this link.
The directory structure should look like
oxford_pets/
|–– images/
|–– annotations/
|–– split_zhou_OxfordPets.json
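Both pet tarballs extract directly into the `images/` and `annotations/` folders shown above. A minimal sketch, assuming network access:

```python
# Fetch and extract the two Oxford-IIIT Pet tarballs under $DATA/oxford_pets.
import tarfile
import urllib.request
from pathlib import Path

target = Path("data/oxford_pets")
target.mkdir(parents=True, exist_ok=True)

base = "https://www.robots.ox.ac.uk/~vgg/data/pets/data/"
for name in ["images.tar.gz", "annotations.tar.gz"]:
    archive = target / name
    try:
        urllib.request.urlretrieve(base + name, archive)
        with tarfile.open(archive) as tar:
            tar.extractall(target)  # creates images/ and annotations/
    except OSError as err:  # e.g. offline
        print(f"{name}: download skipped ({err})")
```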
- Create a folder named `stanford_cars/` under `$DATA`.
- Download the train images from http://ai.stanford.edu/~jkrause/car196/cars_train.tgz.
- Download the test images from http://ai.stanford.edu/~jkrause/car196/cars_test.tgz.
- Download the train labels from https://ai.stanford.edu/~jkrause/cars/car_devkit.tgz.
- Download the test labels from http://ai.stanford.edu/~jkrause/car196/cars_test_annos_withlabels.mat.
- Download `split_zhou_StanfordCars.json` from this link.
The directory structure should look like
stanford_cars/
|–– cars_test/
|–– cars_test_annos_withlabels.mat
|–– cars_train/
|–– devkit/
|–– split_zhou_StanfordCars.json
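The four downloads above mix `.tgz` archives (which need extracting) with a bare `.mat` annotation file (which is just saved in place). A sketch, assuming network access:

```python
# Fetch the Stanford Cars files; extract the .tgz archives, keep the .mat as-is.
import tarfile
import urllib.request
from pathlib import Path

target = Path("data/stanford_cars")
target.mkdir(parents=True, exist_ok=True)

urls = [
    "http://ai.stanford.edu/~jkrause/car196/cars_train.tgz",
    "http://ai.stanford.edu/~jkrause/car196/cars_test.tgz",
    "https://ai.stanford.edu/~jkrause/cars/car_devkit.tgz",
    "http://ai.stanford.edu/~jkrause/car196/cars_test_annos_withlabels.mat",
]
for url in urls:
    dest = target / url.rsplit("/", 1)[1]
    try:
        urllib.request.urlretrieve(url, dest)
        if dest.suffix == ".tgz":
            with tarfile.open(dest) as tar:
                tar.extractall(target)
    except OSError as err:  # e.g. offline
        print(f"{dest.name}: download skipped ({err})")
```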
- Create a folder named `oxford_flowers/` under `$DATA`.
- Download the images and labels from https://www.robots.ox.ac.uk/~vgg/data/flowers/102/102flowers.tgz and https://www.robots.ox.ac.uk/~vgg/data/flowers/102/imagelabels.mat respectively.
- Download `cat_to_name.json` from here.
- Download `split_zhou_OxfordFlowers.json` from here.
The directory structure should look like
oxford_flowers/
|–– cat_to_name.json
|–– imagelabels.mat
|–– jpg/
|–– split_zhou_OxfordFlowers.json
- Download the dataset from https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/ and extract the file `food-101.tar.gz` under `$DATA`, resulting in a folder named `$DATA/food-101/`.
- Download `split_zhou_Food101.json` from here.
The directory structure should look like
food-101/
|–– images/
|–– license_agreement.txt
|–– meta/
|–– README.txt
|–– split_zhou_Food101.json
- Download the data from https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/archives/fgvc-aircraft-2013b.tar.gz.
- Extract `fgvc-aircraft-2013b.tar.gz` and keep only `data/`.
- Move `data/` to `$DATA` and rename the folder to `fgvc_aircraft/`.
The directory structure should look like
fgvc_aircraft/
|–– images/
|–– ... # a bunch of .txt files
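The "keep only `data/` and rename it" step is the easy one to get wrong. A sketch, assuming the archive has already been downloaded to the current directory:

```python
# Extract the FGVC-Aircraft archive, keep only its data/ folder,
# and rename it to data/fgvc_aircraft.
import shutil
import tarfile
from pathlib import Path

archive = Path("fgvc-aircraft-2013b.tar.gz")
if archive.exists():
    with tarfile.open(archive) as tar:
        tar.extractall(".")  # creates fgvc-aircraft-2013b/data/
    Path("data").mkdir(exist_ok=True)
    shutil.move("fgvc-aircraft-2013b/data", "data/fgvc_aircraft")
    shutil.rmtree("fgvc-aircraft-2013b")  # discard everything else
else:
    print("archive not found; download it first")
```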
- Create a folder named `sun397/` under `$DATA`.
- Download the images from http://vision.princeton.edu/projects/2010/SUN/SUN397.tar.gz.
- Download the partitions from https://vision.princeton.edu/projects/2010/SUN/download/Partitions.zip.
- Extract these files under `$DATA/sun397/`.
- Download `split_zhou_SUN397.json` from this link.
The directory structure should look like
sun397/
|–– SUN397/
|–– split_zhou_SUN397.json
|–– ... # a bunch of .txt files
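Note that the images come as a `.tar.gz` while the partitions come as a `.zip`, so the two extractions differ. A sketch of the extraction step, assuming both archives have already been downloaded into `data/sun397/`:

```python
# Extract the SUN397 images (.tar.gz) and partitions (.zip) in place.
import tarfile
import zipfile
from pathlib import Path

target = Path("data/sun397")
target.mkdir(parents=True, exist_ok=True)

if (target / "SUN397.tar.gz").exists():
    with tarfile.open(target / "SUN397.tar.gz") as tar:
        tar.extractall(target)  # -> data/sun397/SUN397/
if (target / "Partitions.zip").exists():
    with zipfile.ZipFile(target / "Partitions.zip") as zf:
        zf.extractall(target)  # -> the partition .txt files
```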
- Download the dataset from https://www.robots.ox.ac.uk/~vgg/data/dtd/download/dtd-r1.0.1.tar.gz and extract it to `$DATA`. This should lead to `$DATA/dtd/`.
- Download `split_zhou_DescribableTextures.json` from this link.
The directory structure should look like
dtd/
|–– images/
|–– imdb/
|–– labels/
|–– split_zhou_DescribableTextures.json
- Create a folder named `eurosat/` under `$DATA`.
- Download the dataset from http://madm.dfki.de/files/sentinel/EuroSAT.zip and extract it to `$DATA/eurosat/`.
- Download `split_zhou_EuroSAT.json` from here.
The directory structure should look like
eurosat/
|–– 2750/
|–– split_zhou_EuroSAT.json
- Create a folder named `ucf101/` under `$DATA`.
- Download the zip file `UCF-101-midframes.zip` from here and extract it to `$DATA/ucf101/`. This zip file contains the extracted middle frame of each video.
- Download `split_zhou_UCF101.json` from this link.
The directory structure should look like
ucf101/
|–– UCF-101-midframes/
|–– split_zhou_UCF101.json
The following datasets make up the challenging datasets used in the SVL-Adapter paper. We provide the splits we used in a .json file for the sake of benchmarking and comparison. When available, the train/test splits follow those provided by the original curators of each dataset.
The Functional Map of the World (FMoW) dataset presented in this paper contains thousands of satellite images which are labeled based on the functional purpose of the building or land they contain. We use the fMoW-rgb version of the dataset and keep a subset of the data (defined in split_FMOW.json) for efficiency.
- Create a folder named `fmow/` under `$DATA`.
- Download the images along with the bounding box annotations from https://github.com/fMoW/dataset for both the train/val and test subsets.
- Extract these files under `$DATA/fmow/`.
- Download `split_FMOW.json` from this link.
The directory structure should look like:
fmow/
|–– train/
|–– test/
|–– split_FMOW.json
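Since the FMoW download is large and comes in several pieces, it may be worth sanity-checking the final layout. A small sketch (`check_layout` is a hypothetical helper; the paths are the ones this README expects, so adjust them if your `$DATA` differs):

```python
# Report which expected entries are missing under a dataset root.
from pathlib import Path

def check_layout(root, entries):
    """Return the names in `entries` that do not exist under `root`."""
    root = Path(root)
    return [name for name in entries if not (root / name).exists()]

missing = check_layout("data/fmow", ["train", "test", "split_FMOW.json"])
print("missing:", missing)
```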
This dataset contains thousands of validated Optical Coherence Tomography (OCT) images, described and analyzed in this paper. The images are split into a training set and a testing set of independent patients, with each image labeled with one of the following four categories: CNV, DME, DRUSEN, and NORMAL.
- Create a folder named `oct/` under `$DATA`.
- Download the images from https://data.mendeley.com/datasets/rscbjbr9sj/3 (the image labels are indicated by the name of the folder they are in).
- Extract these files under `$DATA/oct/`.
- Download `split_OCT.json` from this link.
The directory structure should look like:
oct/
|–– train/
|–– test/
|–– split_OCT.json
A large repository of camera trap data can be found at lila.science, including the Caltech Camera Traps (CCT20), Island Conservation Camera Traps (ICCT), and Snapshot Serengeti datasets, which were used to evaluate SVL-Adapter across challenging tasks. For each camera trap dataset examined, we extract the bounding boxes around the object of interest when they are available. Note: if bounding box annotations are not available for a camera trap dataset, regions around animals can be extracted fairly accurately using the MegaDetector.
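As an illustration of the box-extraction step, the sketch below computes a padded, image-bound-clamped crop from a detection box. It assumes boxes in `[x, y, width, height]` pixel form, which is common for camera-trap annotations, but the actual format (and whether coordinates are normalized, as in MegaDetector output) should be checked against the annotations you use; `crop_box` and its padding default are hypothetical:

```python
# Compute padded crop coordinates around a detection box, clamped to the image.
def crop_box(box, img_w, img_h, pad=0.1):
    """Return (left, top, right, bottom) pixel coords for a padded crop.

    `box` is assumed to be [x, y, width, height] in pixels; `pad` expands
    the box by a fraction of its size on each side.
    """
    x, y, w, h = box
    px, py = w * pad, h * pad
    left = max(0, int(x - px))
    top = max(0, int(y - py))
    right = min(img_w, int(x + w + px))
    bottom = min(img_h, int(y + h + py))
    return left, top, right, bottom

print(crop_box([100, 50, 200, 100], img_w=640, img_h=480))  # -> (80, 40, 320, 160)
```

The resulting tuple can be passed directly to an image library's crop call.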
- Create a folder named `cct20/` under `$DATA`.
- Download the images along with the bounding box annotations from https://beerys.github.io/CaltechCameraTraps/.
- Extract these files under `$DATA/cct20/`.
- Download `split_CCT20.json` from this link.
The directory structure should look like:
cct20/
|–– train_images/
|–– cis_val_images/
|–– cis_test_images/
|–– trans_val_images/
|–– trans_test_images/
|–– split_CCT20.json
- Create a folder named `icct/` under `$DATA`.
- Download the images along with the bounding box annotations from https://lila.science/datasets/island-conservation-camera-traps/.
- Extract these files under `$DATA/icct/`.
- Download `split_ICCT.json` from this link.
The directory structure should look like:
icct/
|–– train_images/
|–– cis_val_images/
|–– cis_test_images/
|–– trans_val_images/
|–– trans_test_images/
|–– split_ICCT.json
Note: We use a subset of this dataset (defined in split_SERENGETI.json).
- Create a folder named `serengeti/` under `$DATA`.
- Download the images that have bounding box annotations available from https://lila.science/datasets/snapshot-serengeti.
- Extract these files under `$DATA/serengeti/`.
- Download `split_SERENGETI.json` from this link.
The directory structure should look like:
serengeti/
|–– train/
|–– test/
|–– split_SERENGETI.json
- This dataset was collected in the Maasai Mara region of Kenya for the Biome Health Project, which is funded by WWF UK. The dataset is not public yet; we will add a link to the data and the splits used as soon as it becomes available.