An open source project from the Data to AI Lab at MIT.
Robust Video Watermarking with Attention
- Free software: MIT license
- Documentation: https://DAI-Lab.github.io/RivaGAN
- Homepage: https://github.com/DAI-Lab/RivaGAN
The goal of video watermarking is to embed a message within a video file so that it minimally impacts the viewing experience but can still be recovered even if the video is redistributed and modified, allowing media producers to assert ownership over their content.
RivaGAN implements a novel architecture for robust video watermarking which features a custom attention-based mechanism for embedding arbitrary data as well as two independent adversarial networks which critique the video quality and optimize for robustness.
Using this technique, we are able to achieve state-of-the-art results in deep learning-based video watermarking and produce watermarked videos which have minimal visual distortion and are robust against common video processing operations.
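At a high level, the training setup pits the attention-based encoder/decoder pair against the two adversarial networks. The sketch below is a simplified illustration of one training step under that setup; the module names, signatures, and loss combination are assumptions for exposition and do not reflect the actual rivagan internals:

import torch

def training_step(encoder, decoder, critic, adversary, frames, data, optimizer):
    # Embed the binary payload into the frames with the attention-based encoder.
    watermarked = encoder(frames, data)

    # One adversarial network tries to distort the video so the watermark is lost.
    attacked = adversary(watermarked)

    # The decoder must recover the payload from both the clean and attacked copies
    # (assuming the decoder outputs per-bit probabilities).
    recovery_loss = (
        torch.nn.functional.binary_cross_entropy(decoder(watermarked), data)
        + torch.nn.functional.binary_cross_entropy(decoder(attacked), data)
    )

    # The other adversarial network acts as a critic of visual quality, pushing
    # watermarked frames to stay close to the originals.
    quality_loss = critic(watermarked, frames)

    loss = recovery_loss + quality_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()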
RivaGAN has been developed and tested on Python 3.4, 3.5, 3.6 and 3.7.
Also, although it is not strictly required, using a virtualenv is highly recommended to avoid interfering with other software installed on the system where RivaGAN is run.
These are the minimum commands needed to create a virtualenv using python3.6 for RivaGAN:
pip install virtualenv
virtualenv -p $(which python3.6) RivaGAN-venv
Afterwards, you have to execute this command to activate the virtualenv:
source RivaGAN-venv/bin/activate
Remember to execute it every time you start a new console to work on RivaGAN!
With your virtualenv activated, you can clone the repository and install it from source by running make install on the stable branch:
git clone [email protected]:DAI-Lab/RivaGAN.git
cd RivaGAN
git checkout stable
make install
If you want to contribute to the project, a few more steps are required to make the project ready for development.
Please head to the Contributing Guide for more details about this process.
In this short tutorial we will guide you through a series of steps that will help you get started training your own instance of RivaGAN.
Start by running the following commands to automatically download the Hollywood2 and Moments in Time datasets. Depending on the speed of your internet connection, this may take up to an hour.
cd data
bash download.sh
Now you're ready to train a model.
Make sure you have activated your virtualenv and installed the project, then execute the following Python commands:
from rivagan import RivaGAN
model = RivaGAN()
model.fit("data/hollywood2", epochs=300)
model.save("/path/to/model.pt")
Make sure to replace the /path/to/model.pt string with an appropriate save path.
You can now load the trained model and use it as follows:
data = tuple([0] * 32)
model = RivaGAN.load("/path/to/model.pt")
model.encode("/path/to/video.avi", data, "/path/to/output.avi")
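The data argument is a tuple of 32 bits (all zeros in the example above). If you want to embed a small numeric identifier instead, you can pack it into that format with a helper along these lines; int_to_bits is a hypothetical convenience function, not part of the RivaGAN API:

def int_to_bits(value, length=32):
    """Convert a non-negative integer into a tuple of `length` bits, MSB first."""
    return tuple((value >> (length - 1 - i)) & 1 for i in range(length))

data = int_to_bits(42)  # embed the identifier 42 as a 32-bit payload
model.encode("/path/to/video.avi", data, "/path/to/output.avi")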
After the data is encoded in the video, it can be recovered as follows:
recovered_data = model.decode("/path/to/output.avi")
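To check how well the payload survived, you can compare the recovered values against the bits that were embedded. The sketch below assumes recovered_data is a flat sequence of 32 values in [0, 1]; the exact return type of decode may differ, so adapt accordingly:

# Threshold the recovered values and measure bit accuracy against the original payload.
recovered_bits = [1 if value > 0.5 else 0 for value in recovered_data]
accuracy = sum(int(r == d) for r, d in zip(recovered_bits, data)) / len(data)
print(f"Bit accuracy: {accuracy:.2%}")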
If you use RivaGAN for your research, please consider citing the following work:
Zhang, Kevin Alex and Xu, Lei and Cuesta-Infante, Alfredo and Veeramachaneni, Kalyan. Robust Invisible Video Watermarking with Attention. MIT EECS, September 2019. arXiv:1909.01285.
@article{zhang2019robust,
author={Kevin Alex Zhang and Lei Xu and Alfredo Cuesta-Infante and Kalyan Veeramachaneni},
title={Robust Invisible Video Watermarking with Attention},
year={2019},
eprint={1909.01285},
archivePrefix={arXiv},
primaryClass={cs.MM}
}
For more details about RivaGAN and all its possibilities and features, please check the documentation site.