Official PyTorch implementation of the CVPR 2023 paper "MaLP: Manipulation Localization Using a Proactive Scheme".
Vishal Asnani, Xi Yin, Tal Hassner, Xiaoming Liu
- PyTorch 1.5.0
- NumPy 1.14.2
- scikit-learn 0.22.2
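The dependencies above can be installed in one step with pip (the pinned versions mirror the list above; newer versions may also work):
```
pip install torch==1.5.0 numpy==1.14.2 scikit-learn==0.22.2
```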
- Each GM is used with the dataset it was trained on. The GM-dataset mapping is given in Tab. 2 of the supplementary material. Please refer to the test images released by the Proactive detection work.
- For the new datasets used in this work, please go here.
- CelebA is used as the training data.
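For reference, a minimal loading sketch with torchvision (the directory layout, image size, and transform here are illustrative assumptions, not the repository's actual pipeline):
```python
import torch
from torchvision import datasets, transforms

# Assumed layout: "YOUR DATA PATH/<subdir>/<image>.jpg"; the image
# size and transform are illustrative, not the scripts' settings.
transform = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("YOUR DATA PATH", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)
images, _ = next(iter(loader))  # (16, 3, 128, 128)
```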
The pre-trained models trained on STGAN can be downloaded using the links below:
| Model | Link |
|---|---|
| Localization only | Model |
| Localization + Detection | Model |
- Install the ViT package following the instructions at https://github.com/lucidrains/vit-pytorch (a quick sanity check is sketched after the training command below).
- Download the STGAN repository files and pre-trained model from https://github.com/csmliu/STGAN, and place the train_loc_det.py file in that folder.
- Run the code as shown below:
```
python train_loc_det.py --data_train "YOUR DATA PATH" --resume --model_path "MODEL PATH"
```
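To verify the vit-pytorch installation, the package's documented `ViT` class can be run on a dummy tensor (the hyperparameters below are placeholders for a sanity check, not the values MaLP uses):
```python
import torch
from vit_pytorch import ViT

# Placeholder hyperparameters for an installation check only;
# these are not the values used by MaLP.
v = ViT(
    image_size=256,
    patch_size=32,
    num_classes=2,
    dim=1024,
    depth=6,
    heads=16,
    mlp_dim=2048,
)
img = torch.randn(1, 3, 256, 256)
preds = v(img)
print(preds.shape)  # torch.Size([1, 2])
```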
To train only the localization module, run:
```
python train_loc.py --data_train "YOUR DATA PATH" --resume --model_path "MODEL PATH"
```
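The `--resume` and `--model_path` flags point to checkpoint-based resumption; the generic PyTorch pattern looks like the sketch below (the dummy model and the `"model"`/`"optimizer"`/`"epoch"` checkpoint keys are assumptions, not the scripts' actual format):
```python
import torch
import torch.nn as nn

# Dummy stand-ins; the real scripts build their own networks internally.
model = nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters())

# "MODEL PATH" is a placeholder, and the checkpoint keys below are
# hypothetical -- the scripts' checkpoint format may differ.
ckpt = torch.load("MODEL PATH", map_location="cpu")
model.load_state_dict(ckpt["model"])
optimizer.load_state_dict(ckpt["optimizer"])
start_epoch = ckpt.get("epoch", 0) + 1
```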
- Download the pre-trained model using the above links.
- Provide the model path in the code.
- Run the code as shown below:
```
python evaluation_loc_det.py --data_train "YOUR DATA PATH" --resume --model_path "MODEL PATH"
```
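Pixel-level localization is typically scored by comparing the predicted fakeness map against the ground-truth manipulation mask; below is a minimal sketch with NumPy and scikit-learn (the map shapes and random inputs are illustrative only, not the evaluation script's exact protocol):
```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Random stand-ins for a predicted fakeness map in [0, 1] and a
# binary ground-truth manipulation mask of the same spatial size.
pred_map = np.random.rand(128, 128)
gt_mask = (np.random.rand(128, 128) > 0.5).astype(int)

# Pixel-level AUC over the flattened maps.
auc = roc_auc_score(gt_mask.ravel(), pred_map.ravel())

# Cosine similarity between the flattened maps.
a = pred_map.ravel()
b = gt_mask.ravel().astype(float)
cos = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
print(f"AUC={auc:.3f}  cosine={cos:.3f}")
```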
If you would like to use our work, please cite:
```
@inproceedings{asnani2023pro_loc,
  title={MaLP: Manipulation Localization Using a Proactive Scheme},
  author={Asnani, Vishal and Yin, Xi and Hassner, Tal and Liu, Xiaoming},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}
```