Paper link: https://arxiv.org/abs/1612.02649
TensorFlow implementation of the paper, adapting semantic segmentation (A) from the Synthia dataset to the Cityscapes dataset and (B) from the Cityscapes dataset to our dataset.
- Use TensorFlow 1.1.0 with Python 2
- Build ccnn

```
cd fcns-wild
mkdir build
cd build
cmake ..
make -j8
```
- Download Cityscapes Dataset
- Download Synthia Dataset
  - Download the subset "SYNTHIA-RAND-CITYSCAPES"
- Download NMD Dataset
  - Contains four subsets (Taipei, Tokyo, Roma, Rio) used as target domains; only the testing data has annotations
- Change the data paths in the files under the "./data" folder
- Download and test the trained model

```
cd fcns-wild
sh scripts/download_demo.sh
sh scripts/infer_city2NMD.sh  # this script uses Taipei as the NMD target
```
The demo model is Cityscapes-to-Taipei, and the results will be saved in the `./train_results/` folder. It also reports the evaluated performance (the evaluation code is provided by the Cityscapes dataset).
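The Cityscapes evaluation reports per-class intersection-over-union (IoU); a minimal NumPy sketch of that metric, for illustration only (this is not the repo's or Cityscapes' actual evaluation code):

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Per-class IoU: |pred AND gt| / |pred OR gt| for each label id."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        # float() guards against integer division under Python 2
        ious.append(float(inter) / union if union > 0 else float("nan"))
    return ious
```

The reported mIoU is the mean over classes that appear, e.g. `np.nanmean(per_class_iou(pred, gt, 19))` for the 19 Cityscapes training classes.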
- Download the pretrained weights (model trained on the source domain)

```
sh scripts/download_src.sh
```
- Train the Cityscapes-to-Ours{subset} model

```
python ./src/train_adv.py \
    --weight_path ./pretrained/train_cscape.npy \
    --city {city_name} \
    --src_data_path ./data/Cityscapes.txt \
    --tgt_data_path ./data/{city_name}.txt \
    --method GACA
```
The training scripts for adapting (A) from the Synthia dataset to the Cityscapes dataset and (B) from the Cityscapes dataset to our dataset are provided in "scripts/run_train_syn2city.sh" and "scripts/run_train_city2ours.sh", respectively.
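To run all four NMD target subsets in sequence, the training command above can be scripted; a sketch assuming `train_adv.py` accepts exactly the flags shown in the command above (the `build_cmd`/`train_all` helpers are hypothetical names, not part of the repo):

```python
import subprocess

# The four NMD target subsets named earlier in this README
NMD_CITIES = ["Taipei", "Tokyo", "Roma", "Rio"]

def build_cmd(city):
    """Assemble the train_adv.py invocation for one target city."""
    return [
        "python", "./src/train_adv.py",
        "--weight_path", "./pretrained/train_cscape.npy",
        "--city", city,
        "--src_data_path", "./data/Cityscapes.txt",
        "--tgt_data_path", "./data/{}.txt".format(city),
        "--method", "GACA",
    ]

def train_all(dry_run=True):
    """Print (or run) one adaptation job per NMD subset."""
    for city in NMD_CITIES:
        cmd = build_cmd(city)
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.check_call(cmd)
```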