This is the official implementation of the paper Towards Modern Image Manipulation Localization: A Large-Scale Dataset and Novel Methods.
- We propose to harness Constrained Image Manipulation Localization (CIML) models to automatically annotate the numerous unlabelled, manually forged images from the web (e.g., those from PhotoshopBattles), thereby addressing the severe scarcity of non-synthetic data for image manipulation localization.
- We propose CAAA, a novel and effective paradigm for constrained image manipulation localization, which significantly improves the accuracy of the automatic annotations. We believe this is currently the best paradigm for CIML-based auto-annotation.
- We propose a novel metric, QES, to automatically filter out likely bad annotations, which is crucial for ensuring the quality of an automatically annotated dataset. This metric is effective at reflecting the quality of the predictions during the construction of the dataset, where no ground truth is available (a minimal sketch of this filtering step is given after this list).
- Based on the above techniques, we construct a large-scale dataset, termed MIML, with 123,150 manually forged images and pixel-level annotations of the forged regions. The MIML dataset can significantly improve the generalization of different forgery localization models, especially on modern-style images (such as those in the IMD20 dataset).
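As a rough illustration of the auto-annotation pipeline described above, the sketch below shows how a CIML model's predicted mask might be kept or discarded based on an estimated quality score. Here `ciml_model` and `quality_score` are hypothetical stand-ins, not the actual CAAA model or the QES metric defined in the paper.

```python
import torch

@torch.no_grad()
def auto_annotate(ciml_model, forged, authentic, qes_threshold=0.8):
    """Predict a forgery mask for a (forged, authentic) image pair and keep it
    only if its estimated quality passes a threshold (stand-in for QES filtering)."""
    logits = ciml_model(forged, authentic)            # per-pixel forgery logits, e.g. shape (1, 1, H, W)
    mask = (logits.sigmoid() > 0.5).float()           # binary forgery mask
    score = quality_score(logits, mask)               # stand-in for the QES metric
    return mask if score >= qes_threshold else None   # discard likely-bad annotations

def quality_score(logits, mask):
    # Hypothetical proxy (mean prediction confidence); the real QES is defined in the paper.
    probs = logits.sigmoid()
    return (probs * mask + (1.0 - probs) * (1.0 - mask)).mean().item()
```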
We are convinced that large-scale non-synthetic data is vital for deep image manipulation localization models. We sincerely hope that our methods and our dataset can benefit the community and promote the real-world application of deep image forensics models.
This work is an initial attempt at automatic annotation for IML, and further improvements can be made. We are glad to witness the development of this field together with the community.
The Modern Image Manipulation Localization (MIML) dataset is now publicly available on Kaggle and Baidu Drive.
Researchers are welcome 😃 to apply for this dataset by sending an email to [email protected] (from an institutional email address) explaining:
- Who you are and your institution.
- Who your supervisor/mentor is.
- The purpose of requesting this dataset.
- Python 3.9
- torch==1.13.1+cu117
- mmcv==1.6.0
- mmcv-full==1.6.0
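As an optional sanity check (not part of the released code), the snippet below verifies that the installed versions match the ones listed above:

```python
# Optional check that the environment matches the listed versions.
import torch
import mmcv

print("torch:", torch.__version__)             # expected: 1.13.1+cu117
print("mmcv:", mmcv.__version__)               # expected: 1.6.0
print("CUDA available:", torch.cuda.is_available())
```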
For any questions about this work, please contact [email protected].
@inproceedings{qu2024towards,
title={Towards Modern Image Manipulation Localization: A Large-Scale Dataset and Novel Methods},
author={Qu, Chenfan and Zhong, Yiwu and Liu, Chongyu and Xu, Guitao and Peng, Dezhi and Guo, Fengjun and Jin, Lianwen},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={10781--10790},
year={2024}
}