The official code of the CVPR 2023 paper *Extracting Class Activation Maps from Non-Discriminative Features as well*. [arXiv]
- Python 3.6, PyTorch 1.9, and other dependencies listed in `environment.yml`.
- You can create the environment from the `environment.yml` file:

```
conda env create -f environment.yml
```
- Download the PASCAL VOC 2012 devkit from the official website.
- You need to specify the path (`voc12_root`) of your downloaded devkit in the following steps.
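Before launching the pipeline, it can help to sanity-check that `voc12_root` points at a standard VOC 2012 devkit layout. A minimal sketch, assuming the usual devkit sub-directories; the helper name `check_voc_root` is ours, not part of this repo:

```python
from pathlib import Path

# Sub-directories a standard VOC 2012 devkit is expected to contain.
EXPECTED_DIRS = ["JPEGImages", "Annotations", "ImageSets/Segmentation", "SegmentationClass"]

def check_voc_root(voc12_root):
    """Return the list of expected devkit sub-directories missing under voc12_root."""
    root = Path(voc12_root)
    return [d for d in EXPECTED_DIRS if not (root / d).is_dir()]

missing = check_voc_root("./VOCdevkit/VOC2012/")
if missing:
    print("Missing under voc12_root:", ", ".join(missing))
```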
- Please specify a workspace to save the model and logs.
```
CUDA_VISIBLE_DEVICES=0 python run_sample.py --voc12_root ./VOCdevkit/VOC2012/ --work_space YOUR_WORK_SPACE --train_cam_pass True --make_cam_pass True --make_lpcam_pass True --eval_cam_pass True

CUDA_VISIBLE_DEVICES=0 python run_sample.py --voc12_root ./VOCdevkit/VOC2012/ --work_space YOUR_WORK_SPACE --cam_to_ir_label_pass True --train_irn_pass True --make_sem_seg_pass True --eval_sem_seg_pass True
```
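Note that the `--*_pass` switches are passed as the literal strings `True`/`False`. With a plain `bool` argparse type, any non-empty string (including `"False"`) would parse as truthy, so a common pattern for this kind of flag is an explicit string-to-bool converter. This is a sketch of that pattern, not an excerpt from `run_sample.py`:

```python
import argparse

def str2bool(v):
    """Parse common true/false spellings; note that bool("False") would be True."""
    if isinstance(v, bool):
        return v
    if v.lower() in ("yes", "true", "t", "1"):
        return True
    if v.lower() in ("no", "false", "f", "0"):
        return False
    raise argparse.ArgumentTypeError(f"Boolean value expected, got {v!r}")

parser = argparse.ArgumentParser()
parser.add_argument("--train_cam_pass", type=str2bool, default=False)
args = parser.parse_args(["--train_cam_pass", "True"])
print(args.train_cam_pass)  # True
```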
You can download the pseudo labels from this link.
To train DeepLab-v2, we refer to deeplab-pytorch. We use the ImageNet pre-trained model for DeepLab-v2 provided by AdvCAM. Please replace the ground-truth masks with the generated pseudo masks.
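Replacing the ground-truth masks amounts to copying the generated pseudo-mask PNGs over the mask directory your DeepLab config reads from. A minimal sketch; the helper and both example paths below are hypothetical, not names used by this repo or deeplab-pytorch:

```python
import shutil
from pathlib import Path

def replace_masks(pseudo_dir, gt_dir):
    """Copy every pseudo-mask PNG over the corresponding ground-truth mask."""
    pseudo_dir, gt_dir = Path(pseudo_dir), Path(gt_dir)
    gt_dir.mkdir(parents=True, exist_ok=True)
    copied = 0
    for png in sorted(pseudo_dir.glob("*.png")):
        # File names are assumed to match the ground-truth mask names (image IDs).
        shutil.copy2(png, gt_dir / png.name)
        copied += 1
    return copied

# Example (hypothetical paths):
# replace_masks("./pseudo_masks", "./VOCdevkit/VOC2012/SegmentationClass")
```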
- Download MS COCO images from the official COCO website.
- Generate masks from the annotations (`annToMask.py` in `./mscoco/`).
- Download MS COCO image-level labels provided by ReCAM from here and put them in `./mscoco/`.
- Please specify a workspace to save the model and logs.
```
CUDA_VISIBLE_DEVICES=0 python run_sample_coco.py --mscoco_root ../MSCOCO/ --work_space YOUR_WORK_SPACE --train_cam_pass True --make_cam_pass True --make_lpcam_pass True --eval_cam_pass True

CUDA_VISIBLE_DEVICES=0 python run_sample_coco.py --mscoco_root ../MSCOCO/ --work_space YOUR_WORK_SPACE --cam_to_ir_label_pass True --train_irn_pass True --make_sem_seg_pass True --eval_sem_seg_pass True
```
You can download the pseudo labels from this link.
- The procedure is the same as for PASCAL VOC.