From ae31c082eb72201fcf3364283924727a3fab38eb Mon Sep 17 00:00:00 2001 From: nithinraok Date: Thu, 10 Sep 2020 10:42:44 -0700 Subject: [PATCH] Added tutorial notebook Signed-off-by: nithinraok --- .../Speaker_Recogniton_Verification.ipynb | 1180 +++++++++++++++++ 1 file changed, 1180 insertions(+) create mode 100644 tutorials/speaker_recognition/Speaker_Recogniton_Verification.ipynb diff --git a/tutorials/speaker_recognition/Speaker_Recogniton_Verification.ipynb b/tutorials/speaker_recognition/Speaker_Recogniton_Verification.ipynb new file mode 100644 index 000000000000..30cf27963120 --- /dev/null +++ b/tutorials/speaker_recognition/Speaker_Recogniton_Verification.ipynb @@ -0,0 +1,1180 @@ +{ + "nbformat": 4, + "nbformat_minor": 0, + "metadata": { + "colab": { + "name": "Speaker_Recogniton_Verification.ipynb", + "provenance": [], + "collapsed_sections": [], + "toc_visible": true + }, + "kernelspec": { + "name": "python3", + "display_name": "Python 3" + }, + "accelerator": "GPU" + }, + "cells": [ + { + "cell_type": "code", + "metadata": { + "id": "iyLoWDsb9rEs", + "colab_type": "code", + "colab": {} + }, + "source": [ + "\"\"\"\n", + "You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.\n", + "\n", + "Instructions for setting up Colab are as follows:\n", + "1. Open a new Python 3 notebook.\n", + "2. Import this notebook from GitHub (File -> Upload Notebook -> \"GITHUB\" tab -> copy/paste GitHub URL)\n", + "3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select \"GPU\" for hardware accelerator)\n", + "4. 
Run this cell to set up dependencies.\n",
+        "\"\"\"\n",
+        "# If you're using Google Colab and not running locally, run this cell.\n",
+        "\n",
+        "## Install dependencies\n",
+        "!pip install wget\n",
+        "!apt-get install sox libsndfile1 ffmpeg\n",
+        "!pip install unidecode\n",
+        "\n",
+        "## Install NeMo\n",
+        "!python -m pip install --upgrade git+https://github.com/NVIDIA/NeMo.git@aee39984ba672fce9a81e9d81a0b5a843257045d#egg=nemo_toolkit[asr]\n",
+        "\n",
+        "## Install TorchAudio\n",
+        "!pip install torchaudio>=0.6.0 -f https://download.pytorch.org/whl/torch_stable.html\n"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "oDzak_FIB9LS",
+        "colab_type": "text"
+      },
+      "source": [
+        "# **SPEAKER RECOGNITION**\n",
+        "Speaker Recognition (SR) is a broad research area that solves two major tasks: speaker identification (who is speaking?) and\n",
+        "speaker verification (is the speaker who they claim to be?). In this work, we focus on text-independent speaker recognition, where the identity of the speaker is based on how the speech is spoken,\n",
+        "not on what is being said. Typically, such SR systems operate on unconstrained speech utterances,\n",
+        "which are converted into fixed-length vectors, called speaker embeddings. Speaker embeddings are also used in\n",
+        "automatic speech recognition (ASR) and speech synthesis."
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "ydqmdcDxCeXb",
+        "colab_type": "text"
+      },
+      "source": [
+        "In this tutorial, we shall first train these embeddings on speaker-related datasets, and then get speaker embeddings from a pretrained network for a new dataset. Since Google Colab has very slow read-write speeds, we will demonstrate this tutorial using the [an4](http://www.speech.cs.cmu.edu/databases/an4/) dataset.\n",
+        "\n",
+        "If you'd instead like to try a bigger dataset like [hi-mia](https://arxiv.org/abs/1912.01231), use the [get_hi-mia-data.py](https://github.com/NVIDIA/NeMo/blob/master/scripts/get_hi-mia_data.py) script to download the necessary files, extract them, and re-sample any audio that is not at 16 kHz."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "vqUBayc_Ctcr",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "import os\n",
+        "NEMO_ROOT = os.getcwd()\n",
+        "print(NEMO_ROOT)\n",
+        "import glob\n",
+        "import subprocess\n",
+        "import tarfile\n",
+        "import wget\n",
+        "\n",
+        "data_dir = os.path.join(NEMO_ROOT, 'data')\n",
+        "os.makedirs(data_dir, exist_ok=True)\n",
+        "\n",
+        "# Download the dataset. This will take a few moments...\n",
+        "print(\"******\")\n",
+        "if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):\n",
+        "    an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'\n",
+        "    an4_path = wget.download(an4_url, data_dir)\n",
+        "    print(f\"Dataset downloaded at: {an4_path}\")\n",
+        "else:\n",
+        "    print(\"Tarfile already exists.\")\n",
+        "    an4_path = data_dir + '/an4_sphere.tar.gz'\n",
+        "\n",
+        "# Untar and convert .sph to .wav (using sox)\n",
+        "tar = tarfile.open(an4_path)\n",
+        "tar.extractall(path=data_dir)\n",
+        "\n",
+        "print(\"Converting .sph to .wav...\")\n",
+        "sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)\n",
+        "for sph_path in sph_list:\n",
+        "    wav_path = sph_path[:-4] + '.wav'\n",
+        "    cmd = [\"sox\", sph_path, wav_path]\n",
+        "    subprocess.run(cmd)\n",
+        "print(\"Finished conversion.\\n******\")"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "t5PrWzkiDbHy",
+        "colab_type": "text"
+      },
+      "source": [
+        "Since an4 is not designed for speaker recognition, it gives us the opportunity to demonstrate how to generate the manifest files that are necessary for training. These methods can be applied to any dataset to get similar training manifest files.\n",
+        "\n",
+        "First, get an scp file that lists all the wav files with absolute paths, one for each of the train, dev, and test sets. This can easily be done with the `find` bash command."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "vnrUh3vuDSRN",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "!find {data_dir}/an4/wav/an4_clstk -iname \"*.wav\" > {data_dir}/an4/wav/an4_clstk/train_all.scp"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "BhWVg2QoDhL3",
+        "colab_type": "text"
+      },
+      "source": [
+        "Let's look at the first 3 lines of the train scp file."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "BfnMK302Du20",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "!head -n 3 {data_dir}/an4/wav/an4_clstk/train_all.scp"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "Y9L9Tl0XDw5Z",
+        "colab_type": "text"
+      },
+      "source": [
+        "Now that we have the scp file for train, we use `scp_to_manifest.py` to convert it to a manifest file, and then optionally split the files into train & dev sets (for evaluating the model during training) with the `--split` flag. We won't need the `--split` option for the test folder.\n",
+        "Also pass the `--id` number: the index of the field, when the path is split by `/`, that should be treated as the speaker label."
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "_LYwHAr1G8hp",
+        "colab_type": "text"
+      },
+      "source": [
+        "After the download and conversion, your `data` folder should contain directories with manifest files as:\n",
+        "\n",
+        "* `data/<path>/train.json`\n",
+        "* `data/<path>/dev.json`\n",
+        "* `data/<path>/train_all.json`\n",
+        "\n",
+        "Each line in a manifest file describes a training sample - `audio_filepath` contains the path to the wav file, `duration` is its duration in seconds, and `label` is the speaker class label:\n",
+        "\n",
+        "`{\"audio_filepath\": \"data/an4/wav/an4test_clstk/menk/cen4-menk-b.wav\", \"duration\": 3.9, \"label\": \"menk\"}`"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "mpAv77JoD98c",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "if not os.path.exists('scripts'):\n",
+        "    print(\"Downloading necessary scripts\")\n",
+        "    !mkdir scripts\n",
+        "    !wget -P scripts https://raw.githubusercontent.com/NVIDIA/NeMo/speaker_tutorials/scripts/scp_to_manifest.py\n",
+        "!python {NEMO_ROOT}/scripts/scp_to_manifest.py --scp {data_dir}/an4/wav/an4_clstk/train_all.scp --id -2 --out {data_dir}/an4/wav/an4_clstk/all_manifest.json --split"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "5kPCmx5DHvY5",
+        "colab_type": "text"
+      },
+      "source": [
+        "Generate the scp for the test folder and then convert it to a manifest."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "nMd24GVaFBwr",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "!find {data_dir}/an4/wav/an4test_clstk -iname \"*.wav\" > {data_dir}/an4/wav/an4test_clstk/test_all.scp\n",
+        "!python {NEMO_ROOT}/scripts/scp_to_manifest.py --scp {data_dir}/an4/wav/an4test_clstk/test_all.scp --id -2 --out {data_dir}/an4/wav/an4test_clstk/test.json"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "H5FPmxUkGakD",
+        "colab_type": "text"
+      },
+      "source": [
+        "## Path to manifest files\n"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "vo-VnYPtJO_v",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "train_manifest = os.path.join(data_dir, 'an4/wav/an4_clstk/train.json')\n",
+        "validation_manifest = os.path.join(data_dir, 'an4/wav/an4_clstk/dev.json')"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "KyDVdtjAL2__",
+        "colab_type": "text"
+      },
+      "source": [
+        "\n",
+        "As the goal of most speaker-related systems is to get good speaker-level embeddings that help distinguish one speaker from\n",
+        "another, we shall first train these embeddings in an end-to-end\n",
+        "manner, optimizing the [QuartzNet](https://arxiv.org/abs/1910.10261)-based encoder model on cross-entropy loss.\n",
+        "We modify the decoder to get these fixed-size embeddings irrespective of the length of the input audio. We employ a mean and variance\n",
+        "based statistics pooling method to obtain these embeddings."
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "OJtU_GEdMUUo",
+        "colab_type": "text"
+      },
+      "source": [
+        "# Training\n",
+        "Import the necessary packages."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "o1ojB0cZMSmv",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "import nemo\n",
+        "# NeMo's ASR collection - this collection contains complete ASR models and\n",
+        "# building blocks (modules) for ASR\n",
+        "import nemo.collections.asr as nemo_asr\n",
+        "from omegaconf import OmegaConf"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "m5Zho11LNAFJ",
+        "colab_type": "text"
+      },
+      "source": [
+        "## Model Configuration\n",
+        "The SpeakerNet model is defined in a config file which declares multiple important sections.\n",
+        "\n",
+        "They are:\n",
+        "\n",
+        "1) model: All arguments related to the model - preprocessor, encoder, decoder, optimizer and schedulers, datasets, and any other related information\n",
+        "\n",
+        "2) trainer: Any argument to be passed to PyTorch Lightning"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "6HQtZfKnMhpI",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "# Download the sample SpeakerNet config and print it in its entirety\n",
+        "!mkdir conf\n",
+        "!wget -P conf https://raw.githubusercontent.com/NVIDIA/NeMo/speaker_tutorials/examples/speaker_recognition/conf/SpeakerNet_recognition_3x2x512.yaml\n",
+        "MODEL_CONFIG = os.path.join(NEMO_ROOT, 'conf/SpeakerNet_recognition_3x2x512.yaml')\n",
+        "config = OmegaConf.load(MODEL_CONFIG)\n",
+        "print(OmegaConf.to_yaml(config))"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "HtbXN-cFOwxi",
+        "colab_type": "text"
+      },
+      "source": [
+        "## Setting up the datasets within the config\n",
+        "Notice that there are a few config dictionaries called train_ds, validation_ds and test_ds. These configure the Dataset and DataLoader for the corresponding data split."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "NPBIf1jmNgjn",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "print(OmegaConf.to_yaml(config.model.train_ds))\n",
+        "print(OmegaConf.to_yaml(config.model.validation_ds))"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "PLIjKOMUP0YE",
+        "colab_type": "text"
+      },
+      "source": [
+        "You will often notice that some configs have ??? in place of paths. This is used as a placeholder so that the user can change the value at a later time.\n",
+        "\n",
+        "Let's add the paths to the manifests to the config above.\n",
+        "Also, since the an4 dataset doesn't have a test set with the same speakers used in training, we will ignore the test manifest for demonstration purposes."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "TSotpjL_O2BN",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "config.model.train_ds.manifest_filepath = train_manifest\n",
+        "config.model.validation_ds.manifest_filepath = validation_manifest"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "xy6_Lf6fW9aJ",
+        "colab_type": "text"
+      },
+      "source": [
+        "Also, as we are training on the an4 dataset, there are 74 speaker labels in training, and we need to set this in the decoder config."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "-B96tFTnW8Yh",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "config.model.decoder.params.num_classes = 74"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "83pHBRDpQTF0",
+        "colab_type": "text"
+      },
+      "source": [
+        "## Building the PyTorch Lightning Trainer\n",
+        "NeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem!\n",
+        "\n",
+        "Let's 
first instantiate a Trainer object!"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "GWzGJoHMQQnG",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "import torch\n",
+        "import pytorch_lightning as pl"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "WIYf4-KFQYHl",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "print(\"Trainer config - \\n\")\n",
+        "print(OmegaConf.to_yaml(config.trainer))"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "aXuSMYMNQeW7",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "# Let's modify some trainer configs for this demo\n",
+        "# Check if we have a GPU available and use it\n",
+        "cuda = 1 if torch.cuda.is_available() else 0\n",
+        "config.trainer.gpus = cuda\n",
+        "\n",
+        "# Reduce the maximum number of epochs to 5 for a quick demonstration\n",
+        "config.trainer.max_epochs = 5\n",
+        "\n",
+        "# Remove distributed training flags\n",
+        "config.trainer.distributed_backend = None"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "pBq3eCLwQhCy",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "trainer = pl.Trainer(**config.trainer)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "-xHq_rcmQiry",
+        "colab_type": "text"
+      },
+      "source": [
+        "## Setting up a NeMo Experiment\n",
+        "NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it!"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "DMm8MPYfQsCS",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "from nemo.utils.exp_manager import exp_manager\n",
+        "log_dir = exp_manager(trainer, config.get(\"exp_manager\", None))\n",
+        "# The log_dir provides a path to the current logging directory for easy access\n",
+        "print(log_dir)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "nQQMlXmLQ7h1",
+        "colab_type": "text"
+      },
+      "source": [
+        "## Building the SpeakerNet Model\n",
+        "SpeakerNet is an ASR model with a classification task - it generates one label for the entire provided audio stream. Therefore we encapsulate it inside the EncDecSpeakerLabelModel as follows."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "E_KY_s5LROYf",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "speaker_model = nemo_asr.models.EncDecSpeakerLabelModel(cfg=config.model, trainer=trainer)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "_AphpMhkSVdU",
+        "colab_type": "text"
+      },
+      "source": [
+        "Before we begin training, let's first create a TensorBoard visualization to monitor progress."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "BUnDpe_5SbDR",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "# Load the TensorBoard notebook extension\n",
+        "%load_ext tensorboard\n",
+        "%tensorboard --logdir {log_dir}"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "Or8g1cksSf8C",
+        "colab_type": "text"
+      },
+      "source": [
+        "As any NeMo model is inherently a PyTorch Lightning module, it can easily be trained in a single line - trainer.fit(model)!\n",
+        "We see below that the model begins to get modest scores on the validation set after just 5 epochs of training."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "HvYhsOWuSpL_",
"colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "trainer.fit(speaker_model)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "lSRACGt3UAYn",
+        "colab_type": "text"
+      },
+      "source": [
+        "This config was not designed for an4, so you may observe an unstable val_loss."
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "jvtVKO8FZsoe",
+        "colab_type": "text"
+      },
+      "source": [
+        "If you have a test manifest file, we can easily compute test accuracy by running\n",
+        "<pre>trainer.test(speaker_model, ckpt_path=None)\n",
+        "</pre>\n"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "FlBwMsRdZfqg",
+        "colab_type": "text"
+      },
+      "source": [
+        "## For Faster Training\n",
+        "We can dramatically improve the time taken to train this model by using multi-GPU training along with mixed precision.\n",
+        "\n",
+        "For multi-GPU training, take a look at the [PyTorch Lightning Multi-GPU training section](https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html)\n",
+        "\n",
+        "For mixed-precision training, take a look at the [PyTorch Lightning Mixed-Precision training section](https://pytorch-lightning.readthedocs.io/en/latest/apex.html)\n",
+        "\n",
+        "### Mixed precision:\n",
+        "<pre>trainer = Trainer(amp_level='O1', precision=16)\n",
+        "</pre>\n",
+        "\n",
+        "### Trainer with a distributed backend:\n",
+        "<pre>trainer = Trainer(gpus=2, num_nodes=2, distributed_backend='ddp')\n",
+        "</pre>\n",
+        "\n",
+        "Of course, you can combine these flags as well."
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "XcnWub9-0TW2",
+        "colab_type": "text"
+      },
+      "source": [
+        "## Saving/Restoring a checkpoint\n",
+        "There are multiple ways to save and load models in NeMo. Since all NeMo models are inherently Lightning modules, we can use the standard way that PyTorch Lightning saves and restores models.\n",
+        "\n",
+        "NeMo also provides a more advanced model save/restore format, which encapsulates all the parts of the model that are required to restore that model for immediate use.\n",
+        "\n",
+        "In this example, we will explore both ways of saving and restoring models, but we will focus on the PyTorch Lightning method.\n",
+        "\n",
+        "## Saving and Restoring via PyTorch Lightning Checkpoints\n",
+        "When using NeMo for training, it is advisable to utilize the exp_manager framework. It is tasked with handling checkpointing and logging (TensorBoard as well as WandB optionally!), as well as dealing with multi-node and multi-GPU logging.\n",
+        "\n",
+        "Since we utilized the exp_manager framework above, we have access to the directory where the checkpoints exist.\n",
+        "\n",
+        "exp_manager with the default settings will save multiple checkpoints for us -\n",
+        "\n",
+        "1) A few checkpoints from certain steps of training. They will have --val_loss= tags\n",
+        "\n",
+        "2) A checkpoint at the last epoch of training, denoted by --last.\n",
+        "\n",
+        "3) If the model finishes training, it will also have a --end checkpoint."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "QSLjq-edaPt_",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "# Let's list all the checkpoints we have\n",
+        "checkpoint_dir = os.path.join(log_dir, 'checkpoints')\n",
+        "checkpoint_paths = list(glob.glob(os.path.join(checkpoint_dir, \"*.ckpt\")))\n",
+        "checkpoint_paths"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "BwltdVWXaroa",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "final_checkpoint = list(filter(lambda x: \"--end.ckpt\" in x, checkpoint_paths))[0]\n",
+        "print(final_checkpoint)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "1tGKKojs0fEh",
+        "colab_type": "text"
+      },
+      "source": [
+        "\n",
+        "## Restoring from a PyTorch Lightning checkpoint\n",
+        "To restore a model, use the LightningModule.load_from_checkpoint() class method."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "EgyP9cYVbFc8",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "restored_model = nemo_asr.models.EncDecSpeakerLabelModel.load_from_checkpoint(final_checkpoint)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "AnZVMKZpbI_M",
+        "colab_type": "text"
+      },
+      "source": [
+        "# Finetuning\n",
+        "Since we don't have a new manifest file to finetune on, we will demonstrate finetuning using the test manifest file we created earlier.\n",
+        "The an4 test dataset has a different set of speakers from the train set (10 in total). And as we didn't split this dataset for validation, we will use the same manifest for validation."
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "kV9gInFwQ2F5",
+        "colab_type": "text"
+      },
+      "source": [
+        "So to finetune, all we need to do is update our model config with these manifest paths and change the number of decoder classes, which creates a new decoder with the updated number of classes."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "HtXUWmYLQ0PJ",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "test_manifest = os.path.join(data_dir, 'an4/wav/an4test_clstk/test.json')\n",
+        "config.model.train_ds.manifest_filepath = test_manifest\n",
+        "config.model.validation_ds.manifest_filepath = test_manifest\n",
+        "config.model.decoder.params.num_classes = 10"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "xpSQ_sk8Rf6z",
+        "colab_type": "text"
+      },
+      "source": [
+        "Once you have set up the necessary model config parameters, all we need to do is call the setup_finetune_model method."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "Jt3yy4EVS-S6",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "restored_model.setup_finetune_model(config.model)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "IHy1zE1cTDZn",
+        "colab_type": "text"
+      },
+      "source": [
+        "Now that we have set up the data and changed the decoder as required for finetuning, we just need to create a trainer and start training with a smaller learning rate for fewer epochs."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "nBmF6tQITSRl",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "# Setup the new trainer object\n",
+        "# Let's modify some trainer configs for this demo\n",
+        "# Check if we have a GPU available and use it\n",
+        "cuda = 1 if torch.cuda.is_available() else 0\n",
+        "\n",
+        "trainer_config = OmegaConf.create(dict(\n",
+        "    gpus=cuda,\n",
+        "    max_epochs=5,\n",
+        "    max_steps=None,  # computed at runtime if not set\n",
+        "    num_nodes=1,\n",
+        "    accumulate_grad_batches=1,\n",
+        "    checkpoint_callback=False,  # Provided by exp_manager\n",
+        "    logger=False,  # Provided by exp_manager\n",
+        "    row_log_interval=1,  # Interval of logging.\n",
+        "    val_check_interval=1.0,  # Set to 0.25 to check 4 times per epoch, or an int for number of iterations\n",
+        "))\n",
+        "print(OmegaConf.to_yaml(trainer_config))"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "bRz-8-xzUHKZ",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "trainer_finetune = pl.Trainer(**trainer_config)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "EOwHTkW-UUy8",
+        "colab_type": "text"
+      },
+      "source": [
+        "## Setting the trainer to the restored model"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "0FhYQQQOUPIk",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "restored_model.set_trainer(trainer_finetune)\n",
+        "log_dir_finetune = exp_manager(trainer_finetune, config.get(\"exp_manager\", None))\n",
+        "print(log_dir_finetune)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "ptexCJ7tUmgs",
+        "colab_type": "text"
+      },
+      "source": [
+        "## Setup optimizer + scheduler\n",
+        "For a fine-tuning experiment, let's set up the optimizer and scheduler!\n",
+        "We will use a much lower learning rate than before.\n"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "TUyjEAeSUjf2",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "import copy\n",
+        "optim_sched_cfg = copy.deepcopy(restored_model._cfg.optim)\n",
+        "# Struct mode prevents us from popping off elements from the config, so let's disable it\n",
+        "OmegaConf.set_struct(optim_sched_cfg, False)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "5JViMr7pUzvi",
"colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "# Let's change the maximum learning rate to the previous minimum learning rate\n",
+        "optim_sched_cfg.lr = 0.001\n",
+        "\n",
+        "# Set \"min_lr\" to a lower value\n",
+        "optim_sched_cfg.sched.min_lr = 1e-4\n",
+        "\n",
+        "print(OmegaConf.to_yaml(optim_sched_cfg))"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "AjqdCggzVFrY",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "# Now let's update the optimizer settings\n",
+        "restored_model.setup_optimization(optim_sched_cfg)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "3mWlJZiOVIuO",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "# We can also just directly replace the config in-place if we choose to\n",
+        "restored_model._cfg.optim = optim_sched_cfg"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "lc3fzGYVVTyi",
+        "colab_type": "text"
+      },
+      "source": [
+        "## Fine-tune training step\n",
+        "We fine-tune on the an4 test-set speakers, which the model has not seen during the original training, using the new decoder created above.\n",
+        "\n",
+        "When fine-tuning on a truly new dataset like this, we will not see a dramatic improvement in performance. However, it should still converge a little faster than if it were trained from scratch."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "uFIOsuFYVLzr",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "## Fine-tuning for 5 epochs\n",
+        "trainer_finetune.fit(restored_model)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "5DNidtl4VplU",
+        "colab_type": "text"
+      },
+      "source": [
+        "# Saving .nemo file\n",
+        "Now we can save the whole config and model parameters in a single .nemo file, and we can restore from it at any time."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "am5wej6-VdZW",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "restored_model.save_to(os.path.join(log_dir_finetune, '..', \"SpeakerNet.nemo\"))"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "WnBhFJefV-Pf",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "!ls {log_dir_finetune}/.."
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "kVx1hNP_V_iz",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "# Restore from the saved model\n",
+        "restored_model_2 = nemo_asr.models.EncDecSpeakerLabelModel.restore_from(os.path.join(log_dir_finetune, '..', \"SpeakerNet.nemo\"))\n"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "80tLWTN40uaB",
+        "colab_type": "text"
+      },
+      "source": [
+        "# Speaker Verification"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "VciRUIRz0y6P",
+        "colab_type": "text"
+      },
+      "source": [
+        "Training a speaker verification model is almost the same as training a speaker recognition model, with a change in the loss function. Angular loss is a better choice for speaker verification: the model is trained end to end with a loss that pushes the embedding clusters of different speakers away from each other by maximizing the angle between these clusters."
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "ULTjBuFI19Js",
+        "colab_type": "text"
+      },
+      "source": [
+        "To train for verification, we just need to toggle the `angular` flag with `config.model.decoder.params.angular = True`.\n",
+        "Once we set this, the loss is changed to angular loss, and we can follow the steps above to train the model.\n",
+        "Note that the scale and margin values for the loss function are set at `config.model.loss.scale` and `config.model.loss.margin`."
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "LcKiNEY032-t",
+        "colab_type": "text"
+      },
+      "source": [
+        "## Extract Speaker Embeddings\n",
+        "Once you have a trained model, you can use it (or one of our pretrained NeMo checkpoints) to get speaker embeddings for any speaker.\n",
+        "\n",
+        "To demonstrate this, we shall use `nemo_asr.models.ExtractSpeakerEmbeddingsModel` with, say, 5 audio samples from our dev manifest set. This model is specifically for inference purposes, to extract embeddings from a trained `.nemo` model."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "uXEzKMHf3r6-",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "verification_model = nemo_asr.models.ExtractSpeakerEmbeddingsModel.restore_from(os.path.join(log_dir_finetune, '..', 'SpeakerNet.nemo'))"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "Y-XiLHMQ8BIk",
+        "colab_type": "text"
+      },
+      "source": [
+        "Now, we need to pass the necessary manifest_filepath and params to set up the dataloader for extracting embeddings."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "lk2vsDJk9PS8",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "!head -5 {validation_manifest} > embeddings_manifest.json"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "DEd5poCr9yrP",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "config.model.train_ds"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "JIHok6LD8g0F",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "test_config = OmegaConf.create(dict(\n",
+        "    manifest_filepath=os.path.join(NEMO_ROOT, 'embeddings_manifest.json'),\n",
+        "    sample_rate=16000,\n",
+        "    labels=None,\n",
+        "    batch_size=1,\n",
+        "    shuffle=False,\n",
+        "    time_length=8,\n",
+        "    embedding_dir='./'\n",
+        "))\n",
+        "print(OmegaConf.to_yaml(test_config))\n",
+        "verification_model.setup_test_data(test_config)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "-m86I-u1CXeJ",
+        "colab_type": "text"
+      },
+      "source": [
+        "Once we set up the test data, we need to create a trainer and just call `trainer.test` to save the embeddings in `embedding_dir`."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "u2FRecqD-ln5",
"colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        "trainer = pl.Trainer(gpus=cuda, distributed_backend=None)\n",
+        "trainer.test(verification_model)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "zfjXPsjzDOgr",
+        "colab_type": "text"
+      },
+      "source": [
+        "The embeddings are stored in a dict structure, where each key is a uniq_name generated from the audio_filepath of the sample in the manifest file, and the value is the corresponding embedding."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "hmTeSR6jD28k",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "source": [
+        ""
+      ],
+      "execution_count": null,
+      "outputs": []
+    }
+  ]
+}
\ No newline at end of file
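Note for reviewers (not part of the patch): the notebook ends after saving embeddings to `embedding_dir` and leaves an empty cell. A natural last step for the verification task is scoring a trial between two utterances by the cosine similarity of their embeddings. The sketch below is illustrative only, assuming the embeddings have already been loaded as NumPy vectors; `cosine_similarity`, `is_same_speaker`, and the 0.7 threshold are hypothetical names and a placeholder value (a real threshold would be tuned on a held-out trial list, e.g. to a target EER).

```python
import numpy as np

def cosine_similarity(emb1, emb2):
    """Cosine similarity between two speaker embedding vectors."""
    emb1 = np.asarray(emb1, dtype=float)
    emb2 = np.asarray(emb2, dtype=float)
    # Normalize by the product of norms so the score lies in [-1, 1]
    return float(np.dot(emb1, emb2) / (np.linalg.norm(emb1) * np.linalg.norm(emb2)))

def is_same_speaker(emb1, emb2, threshold=0.7):
    """Accept the claimed identity if similarity exceeds a tuned threshold.

    0.7 is a placeholder; in practice the threshold is chosen on a
    held-out set of same/different-speaker trials.
    """
    return cosine_similarity(emb1, emb2) >= threshold
```

Embeddings trained with the angular loss described above are particularly well suited to this kind of cosine scoring, since the loss directly optimizes angles between speaker clusters.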