This repo is a part of a larger project; together with Accent Embeddings - HMM & Baseline and Accent Embeddings - Multitask, it forms the complete work described in the paper submitted to Interspeech 2018. Paper.
Prerequisites -
- You have worked with the Kaldi toolkit and are reasonably familiar with it, i.e., you know how to train a DNN acoustic model and what data and alignments it requires.
- We use the Mozilla CommonVoice dataset for all the experiments. A detailed split can be found at - Accents Unearthed
We train a DNN with MFCCs as input and accent class as the target. The network contains a bottleneck layer, which is expected to learn a feature representation of the accents. We extract these bottleneck features for further use in Accent Embeddings - HMM & Baseline and Accent Embeddings - Multitask. The top-level script is my_run.sh.
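For reference, the bottleneck features of a trained nnet3 model can be dumped with Kaldi's standard script. This is a minimal sketch, not the exact call in my_run.sh; the model directory exp/accent_bnf and the node name bn.renorm are assumptions:

```bash
# Dump bottleneck features for the training set.
# "bn.renorm" must match the bottleneck node in the trained network;
# exp/accent_bnf and the output dirs are illustrative names.
steps/nnet3/make_bottleneck_features.sh --nj 8 \
  bn.renorm data/cv_train_nz data/cv_train_nz_bnf \
  exp/accent_bnf exp/make_bnf bnf
```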
- We use the Train7 split for training the network.
Note : In the scripts, Train7 is referred to as cv_train_nz.
- MFCCs - The script creates both standard and hires MFCCs for the training data, provided the correct data path.
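  A minimal sketch of this step with standard Kaldi scripts (data paths and job counts are illustrative):

  ```bash
  # Standard MFCCs (conf/mfcc.conf), used for the GMM alignments.
  steps/make_mfcc.sh --nj 8 --mfcc-config conf/mfcc.conf \
    data/cv_train_nz exp/make_mfcc/cv_train_nz mfcc
  steps/compute_cmvn_stats.sh data/cv_train_nz exp/make_mfcc/cv_train_nz mfcc

  # Hires MFCCs (conf/mfcc_hires.conf), used as network input.
  utils/copy_data_dir.sh data/cv_train_nz data/cv_train_nz_hires
  steps/make_mfcc.sh --nj 8 --mfcc-config conf/mfcc_hires.conf \
    data/cv_train_nz_hires exp/make_mfcc/cv_train_nz_hires mfcc_hires
  steps/compute_cmvn_stats.sh data/cv_train_nz_hires \
    exp/make_mfcc/cv_train_nz_hires mfcc_hires
  ```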
- Alignments (accent alignments) - The script first creates triphone state alignments for the training data (these are needed only to get the number of frames per utterance; there are simpler ways to obtain frame counts). The text form of these alignments is then converted into accent alignments, which contain the accent class for every frame of each utterance. This is done using temp.cc, which takes as input a file containing a table of utterance IDs and their corresponding accent classes.
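  A hedged sketch of this step; the awk one-liner below stands in for temp.cc, and utt2accent (utterance-id / accent-class table) and exp/tri3 are assumed names:

  ```bash
  # 1. Triphone alignments, used only for per-utterance frame counts.
  #    exp/tri3 is an assumed pre-existing triphone system.
  steps/align_si.sh --nj 8 data/cv_train_nz data/lang \
    exp/tri3 exp/tri3_ali_cv_train_nz

  # 2. Text-format alignments: "utt-id id id ... id", one integer per frame.
  copy-int-vector "ark:gunzip -c exp/tri3_ali_cv_train_nz/ali.*.gz |" \
    ark,t:- > ali.txt

  # 3. Replace every per-frame label with the utterance's accent class
  #    (the role temp.cc plays in this repo).
  awk 'NR==FNR {acc[$1]=$2; next}
       {printf "%s", $1; for (i=2; i<=NF; i++) printf " %s", acc[$1]; print ""}' \
    utt2accent ali.txt > accent_ali.txt
  ```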
- Once all these are done, the xconfig definition and training are standard; a sketch follows this list.
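As an illustration, an xconfig for a network of this shape might look like the following. The layer sizes, bottleneck dimension, splicing context, and num_accents are assumptions, not values taken from my_run.sh:

```bash
# Write an xconfig with a bottleneck layer and compile it to configs.
num_accents=5   # illustrative; set to the actual number of accent classes
mkdir -p exp/accent_bnf/configs
cat > exp/accent_bnf/configs/network.xconfig <<EOF
input dim=40 name=input
relu-renorm-layer name=dnn1 dim=512 input=Append(-2,-1,0,1,2)
relu-renorm-layer name=dnn2 dim=512
# bn is the bottleneck layer whose output (node bn.renorm) is extracted later
relu-renorm-layer name=bn dim=40
relu-renorm-layer name=dnn3 dim=512
output-layer name=output dim=$num_accents max-change=1.5
EOF
steps/nnet3/xconfig_to_configs.py \
  --xconfig-file exp/accent_bnf/configs/network.xconfig \
  --config-dir exp/accent_bnf/configs
```

Training then proceeds with a standard nnet3 training script (e.g., steps/nnet3/train_raw_dnn.py), using the per-frame accent alignments as targets.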