We provide a PyTorch implementation of DN4 for few-shot learning. If you use this code, please cite:
Revisiting Local Descriptor based Image-to-Class Measure for Few-shot Learning.
Wenbin Li, Lei Wang, Jinglin Xu, Jing Huo, Yang Gao and Jiebo Luo. In CVPR 2019.
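DN4's core idea is an image-to-class measure over deep local descriptors: each local descriptor of a query image is matched to its k nearest local descriptors in a class's support set, and the cosine similarities are summed. The snippet below is a minimal sketch of that measure; the shapes and the `image_to_class_score` helper are illustrative, not the repo's actual API.

```python
import torch
import torch.nn.functional as F

def image_to_class_score(query_feat, support_feats, k=3):
    """Minimal sketch of DN4's image-to-class measure (illustrative, not the repo's code).

    query_feat:    (C, H, W) feature map of one query image.
    support_feats: (S, C, H, W) feature maps of the S support images of ONE class.
    Returns a scalar image-to-class similarity.
    """
    C = query_feat.shape[0]
    # Treat every spatial position as a local descriptor and L2-normalise
    # so that dot products become cosine similarities.
    q = F.normalize(query_feat.reshape(C, -1), dim=0)                         # (C, H*W)
    s = F.normalize(support_feats.permute(1, 0, 2, 3).reshape(C, -1), dim=0)  # (C, S*H*W)
    sim = q.t() @ s                                                           # (H*W, S*H*W)
    # For each query descriptor, keep its k nearest support descriptors and sum.
    return sim.topk(k, dim=1).values.sum()

# A query is classified by scoring it against every class's support features
# and taking the argmax, e.g.:
# scores = torch.stack([image_to_class_score(q_feat, class_feats[c]) for c in range(way_num)])
# prediction = scores.argmax()
```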
- Linux
- Python 3.8
- PyTorch 1.7.0
- NVIDIA GPU + CUDA and cuDNN
- pillow, torchvision, scipy, numpy
- Clone this repo:
git clone https://github.com/WenbinLee/DN4.git
cd DN4
- Install PyTorch 1.7.0 and other dependencies.
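For example, the dependencies can be installed with pip (the torchvision pin and the choice of CUDA wheel below are assumptions; adjust them to your setup):
pip install torch==1.7.0 torchvision==0.8.1 pillow scipy numpy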
The Caltech-UCSD Birds-200-2011, Stanford Cars, Stanford Dogs, miniImageNet, and tieredImageNet datasets are available on Google Drive and Baidu Netdisk (access code: yr1w).
- Train a 5-way 1-shot model based on Conv64:
python Train_DN4.py --dataset_dir ./path/to/miniImageNet --data_name miniImageNet --encoder_model Conv64F_Local --way_num 5 --shot_num 1
- Train a 5-way 1-shot model based on ResNet12:
python Train_DN4.py --dataset_dir ./path/to/miniImageNet --data_name miniImageNet --encoder_model ResNet12 --way_num 5 --shot_num 1
- Test the model (specify the dataset_dir, encoder_model, and data_name first):
python Test_DN4.py --resume ./results/SGD_Cosine_Lr0.05_DN4_Conv64F_Local_Epoch_30_miniImageNet_84_84_5Way_1Shot/ --encoder_model Conv64F_Local
Test accuracies (%) on the miniImageNet dataset, compared to the results originally reported in the paper (* denotes that ResNet256F is used as the backbone):

| Method | Backbone | 5-way 1-shot (2019 version) | 5-way 1-shot (2023 version) | 5-way 5-shot (2019 version) | 5-way 5-shot (2023 version) |
|--------|----------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|
| DN4    | Conv64F_Local | 51.24  | 51.97 | 71.02  | 73.19 |
| DN4    | ResNet12      | 54.37* | 61.23 | 74.44* | 75.66 |
If you use this code for your research, please cite our paper.
@inproceedings{DN4_CVPR_2019,
author = {Wenbin Li and
Lei Wang and
Jinglin Xu and
Jing Huo and
Yang Gao and
Jiebo Luo},
title = {Revisiting Local Descriptor Based Image-To-Class Measure for Few-Shot Learning},
booktitle = {{IEEE} Conference on Computer Vision and Pattern Recognition (CVPR)},
pages = {7260--7268},
year = {2019}
}