This is the source code of our Fashion Clothing Parsing project (EMCOM Lab, SeoulTech, Korea). It includes:
- A TensorFlow implementation of Fully Convolutional Networks for Semantic Segmentation (FCN)
- A TensorFlow implementation of U-Net
- Improved networks based on U-Net

The implementation is largely based on the reference code provided by the authors of the paper.
├── parseDemo20180417
│   └── clothparsing.py
├── tests
│   ├── __init__.py
│   ├── gt.png
│   ├── inference.py
│   ├── inp.png
│   ├── output.png
│   ├── pred.png
│   ├── test_crf.py
│   └── test_labels.py
├── .gitignore
├── __init__.py
├── BatchDatasetReader.py
├── bfscore.py
├── CalculateUtil.py
├── denseCRF.py
├── EvalMetrics.py
├── FCN.py
├── function_definitions.py
├── LICENSE
├── read_10k_data.py
├── read_CFPD_data.py
├── read_LIP_data.py
├── README.md
├── requirements.txt
├── TensorflowUtils.py
├── test_human.py
├── UNet.py
├── UNetAttention.py
├── UNetMSc.py
├── UNetPlus.py
└── UNetPlusMSc.py
- To install the required packages, run `pip install -r requirements.txt`
- To install pydensecrf on Windows with conda, run `conda install -c conda-forge pydensecrf`
- On Linux, install it with pip instead: `pip install pydensecrf`
- Check the dataset directory in the `read_dataset` function of the corresponding data-reading script and modify it as necessary. For example, for the LIP dataset, check the paths in `read_LIP_data.py`.
- Three datasets are currently supported; set your directory path in the corresponding dataset reader script:
    - CFPD (for preparing the CFPD dataset, see https://github.com/minar09/dataset-CFPD-windows)
    - LIP
    - 10k (Fashion)
- If you want to use your own dataset, write your own dataset reader (see `read_CFPD_data.py` for an example of how the directories are set up). A minimal reader sketch follows below.
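The sketch below shows one way such a reader could look. The `DATA_DIR` value, folder names, file extensions and record fields are illustrative assumptions, not the project's exact layout; match them to what the existing readers (e.g. `read_CFPD_data.py`) actually expect.

```python
# Minimal sketch of a custom dataset reader (illustrative only).
# DATA_DIR, the split/sub-folder names and the record fields are assumptions;
# check read_CFPD_data.py / read_LIP_data.py for the structure the models expect.
import glob
import os

DATA_DIR = "Data_zoo/my_dataset"  # <-- point this at your dataset root


def read_dataset(data_dir=DATA_DIR):
    def collect(split):
        records = []
        for image_path in sorted(glob.glob(os.path.join(data_dir, split, "images", "*.jpg"))):
            name = os.path.splitext(os.path.basename(image_path))[0]
            records.append({
                "image": image_path,                                                   # input photo
                "annotation": os.path.join(data_dir, split, "labels", name + ".png"),  # label map
                "filename": name,
            })
        return records

    # one list of records per split
    return collect("train"), collect("val")
```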
- To train a model, simply run `python FCN.py` or `python UNet.py`
- You can also pass the training flag explicitly: `python FCN.py --mode=train`
- The `debug` flag can be set during training to log additional information about activations, gradients, variables, etc.
- Set your hyper-parameters in the corresponding model script
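As a rough guide, the mode and debug options are ordinary TF1 command-line flags. The snippet below is a sketch with assumed flag names and defaults; the authoritative definitions are in `FCN.py` / `UNet.py`.

```python
# Sketch of TF1-style command-line flags (names and defaults are illustrative;
# see FCN.py / UNet.py for the flags the scripts actually define).
import tensorflow as tf

FLAGS = tf.flags.FLAGS
tf.flags.DEFINE_string("mode", "train", "train / test / visualize")
tf.flags.DEFINE_boolean("debug", False, "log activations, gradients and variable summaries")
tf.flags.DEFINE_integer("batch_size", 2, "batch size for training")
tf.flags.DEFINE_float("learning_rate", 1e-4, "learning rate for Adam")


def main(argv=None):
    if FLAGS.mode == "train":
        pass  # build the network and run the training loop
    elif FLAGS.mode == "test":
        pass  # evaluate on the test split
    elif FLAGS.mode == "visualize":
        pass  # save predictions for a random batch


if __name__ == "__main__":
    tf.app.run()
```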
- To test and evaluate the results, use the `--mode=test` flag
- After testing and evaluation are complete, the final results are printed in the console, and the corresponding files are saved in the "logs" directory.
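The reported numbers come from `EvalMetrics.py`; the snippet below is only a reminder of how the standard metrics (pixel accuracy and mean IoU) are defined from a confusion matrix, not the project's exact evaluation code.

```python
# Standard segmentation metrics from a confusion matrix (illustrative;
# the project's actual evaluation code lives in EvalMetrics.py).
import numpy as np


def confusion_matrix(pred, gt, n_classes):
    """Accumulate an n_classes x n_classes histogram from integer label maps.

    Assumes pred values are already in [0, n_classes).
    """
    mask = (gt >= 0) & (gt < n_classes)
    return np.bincount(n_classes * gt[mask].astype(int) + pred[mask],
                       minlength=n_classes ** 2).reshape(n_classes, n_classes)


def pixel_accuracy(hist):
    return np.diag(hist).sum() / hist.sum()


def mean_iou(hist):
    diag = np.diag(hist)
    denom = hist.sum(axis=1) + hist.sum(axis=0) - diag
    with np.errstate(divide="ignore", invalid="ignore"):
        iou = diag / denom  # NaN for classes absent from both maps
    return float(np.nanmean(iou))
```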
- Set your hyper-parameters in the corresponding model script
- To visualize results for a random batch of images, use the `--mode=visualize` flag (a sketch of the kind of output written is shown below)
- Set your hyper-parameters in the corresponding model script
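For reference, the visualize mode writes the input image, the ground-truth label map and the predicted label map to disk (compare `inp.png`, `gt.png` and `pred.png` under `tests/`). The helper below is a made-up sketch of that kind of dump, not the project's own saving code.

```python
# Rough sketch of dumping one visualization example to disk (the helper and file
# names are illustrative; the project writes its outputs into the "logs" directory).
import numpy as np
import matplotlib.pyplot as plt


def save_visualization(image, gt, pred, index, out_dir="logs"):
    """image: HxWx3 RGB array; gt/pred: HxW integer label maps."""
    plt.imsave(f"{out_dir}/inp_{index}.png", image.astype(np.uint8))
    plt.imsave(f"{out_dir}/gt_{index}.png", gt, cmap="tab20")
    plt.imsave(f"{out_dir}/pred_{index}.png", pred, cmap="tab20")
```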
- Running testing applies CRF post-processing by default.
- If you want to run the CRF step standalone, run `python denseCRF.py` after setting your paths.
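For reference, dense-CRF refinement with pydensecrf looks roughly like the sketch below; the pairwise parameters are common illustrative defaults, not necessarily the values used in `denseCRF.py`.

```python
# Dense-CRF refinement with pydensecrf (parameter values are illustrative; see
# denseCRF.py for the settings this project actually uses).
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax


def crf_refine(image, probs, n_iters=5):
    """image: HxWx3 uint8 RGB array; probs: CxHxW softmax output of the network."""
    n_classes, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, n_classes)
    d.setUnaryEnergy(unary_from_softmax(probs))    # unary term = negative log-probabilities
    d.addPairwiseGaussian(sxy=3, compat=3)         # spatial smoothness term
    d.addPairwiseBilateral(sxy=80, srgb=13,        # appearance (color-dependent) term
                           rgbim=np.ascontiguousarray(image), compat=10)
    q = d.inference(n_iters)
    return np.argmax(q, axis=0).reshape(h, w)      # refined label map
```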
- To compute BF (boundary F1) scores, run `python bfscore.py` after setting your paths.
- For more details, visit https://github.com/minar09/bfscore_python
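As background, the BF score is a boundary F1 measure: precision and recall over boundary pixels matched within a small pixel tolerance. The single-class sketch below is illustrative only; `bfscore.py` handles all classes and its own tolerance settings.

```python
# Single-class boundary F1 sketch (illustrative; bfscore.py implements the full
# multi-class version, see the linked repository).
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt


def boundary(mask):
    """1-pixel-wide boundary of a boolean mask."""
    return mask & ~binary_erosion(mask)


def bf_score(pred_mask, gt_mask, tolerance=2):
    """pred_mask, gt_mask: boolean HxW arrays for a single class."""
    pb, gb = boundary(pred_mask), boundary(gt_mask)
    if pb.sum() == 0 or gb.sum() == 0:
        return 0.0
    # distance of every pixel to the nearest boundary pixel of the *other* map
    dist_to_gt = distance_transform_edt(~gb)
    dist_to_pred = distance_transform_edt(~pb)
    precision = (dist_to_gt[pb] <= tolerance).mean()
    recall = (dist_to_pred[gb] <= tolerance).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```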