nayanjha16/kaldi-accentrecognizer

Accent Embeddings - Accent Classifier

This repo is part of a three-part series:

  1. Accent Embeddings - HMM & Baseline
  2. Accent Embeddings - Accent Classifier (this repo)
  3. Accent Embeddings - Multitask

Together, these form the complete work described in the paper submitted to Interspeech 2018. Paper.

Prerequisites

  1. You have worked with the Kaldi toolkit and are comfortable with it; in particular, you know how to train a DNN acoustic model and what that requires.
  2. We use the Mozilla CommonVoice dataset for all experiments. A detailed split can be found at - Accents Unearthed

What are we doing?

We train a DNN with MFCCs as input and accent class as the per-frame target. The network contains a bottleneck layer, which is expected to learn a compact feature representation of the accents. We extract these bottleneck features for further use by Accent Embeddings - HMM & Baseline and Accent Embeddings - Multitask. The script for this is my_run.sh.
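A network of this shape might be expressed as the following nnet3 xconfig sketch. This is an illustration only: the layer names, dimensions, splicing context, and number of accent classes are assumptions, not the repo's actual configuration.

```
input dim=40 name=input
relu-renorm-layer name=tdnn1 dim=512 input=Append(-2,-1,0,1,2)
relu-renorm-layer name=tdnn2 dim=512
relu-renorm-layer name=bottleneck dim=40
relu-renorm-layer name=tdnn3 dim=512
output-layer name=output dim=16 max-change=1.5
```

After training, the bottleneck features are the activations of the `bottleneck` layer, which can be dumped for each utterance and reused downstream.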

Data Prep

  1. We use Train7 to train the network.

Note: in the scripts, Train7 is referred to as cv_train_nz.

Steps

  1. MFCCs - The script creates both standard and hires MFCCs for the training data, given the correct data path.
  2. Alignments (accent alignments) - The script first creates triphone state alignments for the training data (this is done just to get the number of frames per utterance, though there is an easier way to do it as well). It then uses the text format of the alignments to create accent alignments, which contain the accent class for every frame of each utterance. This is done by temp.cc, which takes as input a file containing a table of utterance IDs and their corresponding accent classes.
  3. Once these are done, the xconfig definition and training are standard.
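For step 1, MFCC extraction with the stock Kaldi scripts typically looks like the sketch below. The data path, job count, and config file name are assumptions; the repo's my_run.sh may differ in its exact invocation.

```shell
data=data/cv_train_nz
utils/copy_data_dir.sh $data ${data}_hires

# Standard 13-dim MFCCs (used for the GMM alignments).
steps/make_mfcc.sh --nj 20 --cmd "$train_cmd" $data
steps/compute_cmvn_stats.sh $data

# High-resolution MFCCs (used as DNN input).
steps/make_mfcc.sh --nj 20 --cmd "$train_cmd" \
  --mfcc-config conf/mfcc_hires.conf ${data}_hires
steps/compute_cmvn_stats.sh ${data}_hires
```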
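The core of step 2 (what temp.cc does) can be sketched in plain shell/awk: repeat each utterance's accent class once per frame. The file names and toy accent classes below are assumptions for illustration, not the repo's actual inputs.

```shell
# Toy table of utterance id -> accent class (the input temp.cc expects).
printf 'utt1 3\nutt2 0\n' > utt2accent
# Frames per utterance (in Kaldi this comes from the triphone alignments,
# or directly from feat-to-len).
printf 'utt1 2\nutt2 3\n' > utt2num_frames

# First pass loads the accent table; second pass emits the utterance's
# accent label once per frame.
awk 'NR==FNR { accent[$1] = $2; next }
     { printf "%s", $1
       for (i = 0; i < $2; i++) printf " %s", accent[$1]
       printf "\n" }' utt2accent utt2num_frames > accent_ali.txt

cat accent_ali.txt
# utt1 3 3
# utt2 0 0 0
```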

