Stochastic CSLR

This is the PyTorch implementation for the ECCV 2020 paper: Stochastic Fine-grained Labeling of Multi-state Sign Glosses for Continuous Sign Language Recognition.

Quick Start

1. Installation

pip install git+https://github.com/zheniu/stochastic-cslr

Also, you need to install sclite for evaluation. Take a look at step 2 for instructions.
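After installation, you can confirm that the package is importable. A minimal sketch, assuming the package installs under the module name stochastic_cslr (the script name is hypothetical):

# check_install.py -- hypothetical helper, not part of the repository
import stochastic_cslr

# print where the package was installed, as a basic sanity check
print("stochastic_cslr installed at:", stochastic_cslr.__file__)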

2. Prepare the dataset

  • Download the RWTH-PHOENIX-2014 dataset here.
  • Unzip it and obtain the path to phoenix-2014-multisigner/ folder for later use.
  • Install sclite for evaluation. See phoenix-2014-multisigner/evaluation/NIST-sclite_sctk-2.4.0-20091110-0958.tar.bz2 for details.
  • After installing sclite, put it in your PATH. A quick sanity check of this setup is sketched after this list.
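Once the dataset is unpacked and sclite is on your PATH, a quick check can save a failed run later. A minimal sketch, assuming the directory layout described above (the script name and placeholder path are hypothetical):

# check_setup.py -- hypothetical helper, not part of the repository
import shutil
from pathlib import Path

data_root = Path("your_path_to/phoenix-2014-multisigner")

# the sclite archive referenced above lives under this folder
assert (data_root / "evaluation").is_dir(), "dataset path does not look right"

# sclite must be discoverable on PATH for evaluation
assert shutil.which("sclite") is not None, "sclite is not on PATH"

print("dataset and sclite look ready")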

3. Run a quick test

You can use the script quick_test.py for a quick test.

python3 quick_test.py --data-root your_path_to/phoenix-2014-multisigner

By specifying the model type (--model sfl/dfl), the data split (--split dev/test), and whether to use a language model (--use-lm), you can get the following results:

| Model    | WER (dev) | sub/del/ins (dev) | WER (test) | sub/del/ins (test) |
|----------|-----------|-------------------|------------|--------------------|
| DFL      | 27.1      | 12.7/7.4/7.0      | 27.7       | 13.8/7.3/6.6       |
| SFL      | 26.2      | 12.7/6.9/6.7      | 26.6       | 13.7/6.5/6.4       |
| DFL + LM | 25.6      | 11.5/9.2/4.9      | 26.4       | 12.4/9.3/4.7       |
| SFL + LM | 24.3      | 11.4/8.5/4.4      | 25.3       | 12.4/8.5/4.3       |

Note that these results differ slightly from the paper because a different random seed was used.
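For reference, the WER in the table is the word error rate over gloss sequences: the minimum number of substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the reference length, and the sub/del/ins columns report that breakdown. A minimal sketch of the computation in plain Python (illustrative only; the actual scoring in this repository is done by sclite):

# wer_sketch.py -- illustrative only, not the repository's evaluation code
def wer_breakdown(ref, hyp):
    """Return (wer, substitutions, deletions, insertions) for two gloss lists."""
    n, m = len(ref), len(hyp)
    # dp[i][j] = (cost, subs, dels, ins) for aligning ref[:i] with hyp[:j]
    dp = [[(0, 0, 0, 0)] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = (i, 0, i, 0)            # delete all reference glosses
    for j in range(1, m + 1):
        dp[0][j] = (j, 0, 0, j)            # insert all hypothesis glosses
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if ref[i - 1] == hyp[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]  # match, no edit needed
            else:
                c_sub, s1, d1, k1 = dp[i - 1][j - 1]
                c_del, s2, d2, k2 = dp[i - 1][j]
                c_ins, s3, d3, k3 = dp[i][j - 1]
                best = min(c_sub, c_del, c_ins)
                if best == c_sub:
                    dp[i][j] = (c_sub + 1, s1 + 1, d1, k1)
                elif best == c_del:
                    dp[i][j] = (c_del + 1, s2, d2 + 1, k2)
                else:
                    dp[i][j] = (c_ins + 1, s3, d3, k3 + 1)
    cost, subs, dels, ins = dp[n][m]
    return cost / max(n, 1), subs, dels, ins

# made-up example: one substitution and one deletion against a 4-gloss reference
print(wer_breakdown(["MORGEN", "REGEN", "NORD", "WIND"], ["MORGEN", "SONNE", "NORD"]))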

You may also take a look at quick_test.py as it shows how to use the pretrained models.

4. Train your own model

The configuration files for deterministic and stochastic fine-grained labeling are put under config/. The training script is based on a PyTorch experiment runner torchzq, which automatically reads the hyperparameters in the YAML file and passes them to stochastic_cslr/runner.py.

Before running, change data_root in the YAML configurations to your path to phoenix-2014-multisigner/.
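If you prefer to edit the configuration programmatically rather than by hand, here is a minimal sketch using PyYAML; it assumes data_root is a top-level key in the YAML file, and the placeholder path and script name are hypothetical:

# set_data_root.py -- hypothetical helper, not part of the repository
import yaml

config_path = "config/dfl-fp16.yml"

with open(config_path) as f:
    config = yaml.safe_load(f)

# point the runner at your local copy of the dataset
config["data_root"] = "your_path_to/phoenix-2014-multisigner"

with open(config_path, "w") as f:
    yaml.safe_dump(config, f)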

Train (for instance, dfl):

tzq config/dfl-fp16.yml train

Test the trained model:

tzq config/dfl-fp16.yml test

Citation

You may cite this work as:

@inproceedings{niu2020stochastic,
  title={Stochastic Fine-Grained Labeling of Multi-state Sign Glosses for Continuous Sign Language Recognition},
  author={Niu, Zhe and Mak, Brian},
  booktitle={European Conference on Computer Vision},
  pages={172--186},
  year={2020},
  organization={Springer}
}
