pytorch-deep-image-matting

This repository is an unofficial PyTorch implementation of Deep Image Matting.

Performance

| model | SAD ↓ | MSE ↓ | Grad ↓ | Conn ↓ | link |
|---|---|---|---|---|---|
| stage0-paper | 59.6 | 0.019 | 40.5 | 59.3 | |
| stage1-paper | 54.6 | 0.017 | 36.7 | 55.3 | |
| stage0-our | 56.01 | 0.0173 | 33.71 | 57.57 | |
| stage1-our | 54.42 | 0.0175 | 35.01 | 54.85 | download |
| stage1-our-skip | 52.99 | 0.0171 | 31.56 | 53.24 | download |
  • Lower is better for all four metrics.
  • The stage1-our-skip model was trained with batch=1 on 43,100 images for 12 epochs, which takes about 1 day.
  • Test maxSize=1600: the longer side of each test image is capped at 1600 pixels (see the sketch after this list).
  • Requires >= 10 GB of GPU memory.
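
For reference, here is a minimal sketch of how such a test-time size cap can be applied; the helper name and the exact resizing policy are assumptions for illustration, not the repo's code:

```python
import cv2

def resize_to_max_size(img, max_size=1600):
    """Downscale img so its longer side is at most max_size pixels (hypothetical helper)."""
    h, w = img.shape[:2]
    scale = float(max_size) / max(h, w)
    if scale >= 1.0:
        return img  # already within the limit
    new_w, new_h = int(w * scale), int(h * scale)
    return cv2.resize(img, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
```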

Updates

  • 2020.09.09: ran demo.py with the latest model (stage1-skip-sad-52.9.pth) and updated the visualization results.
  • 2020.09.09: adopted a VGG-16 backbone with skip connections, which gives better performance at a lower training cost (12 epochs). Stage1-Skip-SAD=52.99.
  • 2019.10.29: conducted the stage0 experiment with the current code. Stage0-SAD=56.0.
  • 2019.09.09: changed the conv6 kernel size from 1x1 to 3x3, making stage1 performance as good as the paper's. Stage1-SAD=54.4. When using a model released before this date, set kernel_size=1 and padding=0 for conv6 in core/net.py.
  • 2019.08.24: fixed cv2.dilate and cv2.erode (iterations had silently defaulted to 1) and generated training trimaps with the same dilate/erode settings as the 1k test trimaps (k_size: 2-5, iterations: 5-15); see the trimap sketch after this list. Stage1-SAD=57.1.
  • 2019.07.05: trained the refinement stage with the encoder-decoder frozen. Stage2-SAD=57.7.
  • 2019.06.23: trained with both the alpha loss and the compositional loss. Stage1-SAD=58.7.
  • 2019.06.17: generated training trimaps by eroding as well as dilating, to balance the 0 and 1 values. Stage0-SAD=62.0.
  • 2019.04.22: normalized input images with mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225], and fixed a crop error. Stage0-SAD=69.1.
  • 2018.12.14: initial version. Stage0-SAD=72.9.
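
As mentioned in the 2019.06.17 and 2019.08.24 updates, training trimaps are generated by eroding and dilating the ground-truth alpha. A minimal sketch of that idea follows; parameter names and defaults are illustrative, and the repo's tools are the actual implementation:

```python
import cv2
import numpy as np

def gen_trimap(alpha, k_size=3, iterations=10):
    """Generate a trimap from a ground-truth alpha matte (uint8, 0-255).

    Pixels that survive erosion of the definite foreground stay 255,
    pixels outside the dilated foreground stay 0, and the band in
    between becomes the unknown region (128).
    """
    kernel = np.ones((k_size, k_size), np.uint8)
    fg = (alpha >= 255).astype(np.uint8)      # definite foreground
    nonbg = (alpha > 0).astype(np.uint8)      # anything not pure background
    eroded = cv2.erode(fg, kernel, iterations=iterations)
    dilated = cv2.dilate(nonbg, kernel, iterations=iterations)
    trimap = np.full(alpha.shape, 128, dtype=np.uint8)  # unknown by default
    trimap[eroded == 1] = 255
    trimap[dilated == 0] = 0
    return trimap
```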

Installation

  • Python 2.7.12 or 3.6.5
  • PyTorch 0.4.0 or 1.0.0
  • OpenCV 3.4.3

Demo

Download our model to the ./model folder and run the following command. The predicted alpha mattes will be written to ./result/example/pred.

python core/demo.py
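
For context, the network takes the RGB image concatenated with the trimap as a 4-channel input, normalized with the mean/std noted in the 2019.04.22 update. A hedged sketch of that preprocessing, with illustrative helper names (core/demo.py is the authoritative implementation):

```python
import cv2
import numpy as np
import torch

MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def make_input(image_bgr, trimap):
    """Build the 4-channel (RGB + trimap) tensor fed to the network (illustrative)."""
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    rgb = (rgb - MEAN) / STD                      # per-channel normalization
    tri = trimap.astype(np.float32)[..., None] / 255.0
    x = np.concatenate([rgb, tri], axis=2)        # H x W x 4
    return torch.from_numpy(x.transpose(2, 0, 1)).unsqueeze(0)  # 1 x 4 x H x W
```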

Training

Adobe-Deep-Image-Matting-Dataset

Please contact the paper's authors for access to the dataset.

MSCOCO-2017-Train-Dataset

Download

PASCAL-VOC-2012

Download

Composite-Dataset

Run the following command; the composite training and test datasets will be placed in Combined_Dataset/Training_set/comp and Combined_Dataset/Test_set/comp, where Combined_Dataset is the extracted folder of the Adobe Deep Image Matting dataset.

python tools/composite.py
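
For reference, compositing follows the standard matting equation I = αF + (1 − α)B, blending each foreground onto MSCOCO/VOC backgrounds with the ground-truth alpha. A minimal sketch of the core step (helper name is illustrative, and the background is assumed already resized to the foreground's shape; tools/composite.py is the actual implementation):

```python
import numpy as np

def composite(fg, bg, alpha):
    """Composite a foreground onto a background: I = alpha * F + (1 - alpha) * B.

    fg, bg: uint8 H x W x 3 images of the same shape; alpha: uint8 H x W matte.
    """
    a = alpha.astype(np.float32)[..., None] / 255.0
    comp = a * fg.astype(np.float32) + (1.0 - a) * bg.astype(np.float32)
    return comp.clip(0, 255).astype(np.uint8)
```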

Pretrained-Model

Run the following command; the pretrained model will be saved to ./model/vgg_state_dict.pth.

python tools/chg_model.py
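
For context, this step adapts ImageNet-pretrained VGG-16 weights to the matting encoder, whose first convolution takes 4 input channels (RGB + trimap). A hedged sketch of the idea; the key names and exact mapping in the repo's script may differ:

```python
import torch
import torchvision

# Load ImageNet-pretrained VGG-16 convolutional weights.
vgg = torchvision.models.vgg16(pretrained=True)
src = vgg.features.state_dict()

dst = {}
for key, value in src.items():
    if key == "0.weight":
        # First conv: copy the RGB filters and zero-init the extra trimap channel.
        w = torch.zeros(64, 4, 3, 3)
        w[:, :3, :, :] = value
        dst[key] = w
    else:
        dst[key] = value

torch.save(dst, "model/vgg_state_dict.pth")
```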

Start Training

Run the following command to start training.

bash train.sh
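
For context, a minimal sketch of the stage-1 objective mentioned in the 2019.06.23 update: an alpha-prediction loss plus a compositional loss, each computed over the unknown trimap region, following the Deep Image Matting paper. Function names and the epsilon constant are illustrative, not the repo's exact code:

```python
import torch

EPS = 1e-6  # small smoothing constant, as in the paper's loss formulation

def alpha_loss(pred_alpha, gt_alpha, unknown_mask):
    """Charbonnier-style absolute alpha difference over the unknown region."""
    diff = (pred_alpha - gt_alpha) * unknown_mask
    return torch.sqrt(diff * diff + EPS).sum() / (unknown_mask.sum() + EPS)

def composition_loss(pred_alpha, fg, bg, gt_image, unknown_mask):
    """Difference between the image re-composited with the predicted alpha
    and the ground-truth composite, over the unknown region (sketch only)."""
    comp = pred_alpha * fg + (1.0 - pred_alpha) * bg
    diff = (comp - gt_image) * unknown_mask
    return torch.sqrt(diff * diff + EPS).sum() / (unknown_mask.sum() + EPS)

# Overall stage-1 loss, weighted 0.5 / 0.5 as in the paper:
# loss = 0.5 * alpha_loss(...) + 0.5 * composition_loss(...)
```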

Test

Run the following command to test on the Adobe 1k composite dataset.

bash deploy.sh

Evaluation

Please evaluate with the official MATLAB code to obtain the SAD, MSE, Grad, and Conn metrics.
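
If MATLAB is unavailable, the two simpler metrics can be approximated in Python as a sanity check. A hedged sketch, computed over the unknown trimap region as is common for this benchmark (the official MATLAB code remains the reference; Grad and Conn are omitted here):

```python
import numpy as np

def sad(pred, gt, unknown_mask):
    """Sum of absolute differences over the unknown region, reported in
    thousands; pred and gt are alpha mattes scaled to [0, 1]."""
    return (np.abs(pred - gt) * unknown_mask).sum() / 1000.0

def mse(pred, gt, unknown_mask):
    """Mean squared error over the unknown region, alphas in [0, 1]."""
    diff = (pred - gt) * unknown_mask
    return (diff * diff).sum() / (unknown_mask.sum() + 1e-8)
```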

Visualization

The results below are from the Stage1-Skip-SAD=52.9 model; click each image to view it at full size.

[Visualization grid: Image | Trimap | Pred-Alpha | GT-Alpha, five example rows]

Disclaimer

As covered by the ADOBE IMAGE DATASET LICENSE AGREEMENT, the pre-trained models included in this repository can only be used and distributed for non-commercial purposes.
