notebooks' link, typo and import fix (#4158)
* redo missing pr 4007

Signed-off-by: fayejf <[email protected]>

* remove extremely unreliable links

Signed-off-by: fayejf <[email protected]>
fayejf authored and ericharper committed May 18, 2022
1 parent 76e98dd commit c3b7d33
Showing 4 changed files with 13 additions and 21 deletions.
8 changes: 4 additions & 4 deletions tutorials/asr/Offline_ASR_with_VAD_for_CTC_models.ipynb
@@ -43,7 +43,7 @@
"import torch\n",
"import os\n",
"from nemo.collections.asr.metrics.wer import word_error_rate\n",
"from nemo.collections.asr.parts.utils.vad_utils import stitch_segmented_asr_output, contruct_manfiest_eval"
"from nemo.collections.asr.parts.utils.vad_utils import stitch_segmented_asr_output, construct_manfiest_eval"
]
},
{
@@ -320,7 +320,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"If we have ground-truth <code>'text'</code> in input_manifest, we can evaluate our performance of stitched output. Let's align the <code>'text'</code> in input manifest and <code>'pred_text'</code> in stitched segmented asr output first, since some samples from input_manfiest might be pure noise and have been removed in VAD output and excluded for ASR inference. "
"If we have ground-truth <code>'text'</code> in input_manifest, we can evaluate our performance of stitched output. Let's align the <code>'text'</code> in input manifest and <code>'pred_text'</code> in stitched segmented asr output first, since some samples from input_manifest might be pure noise and have been removed in VAD output and excluded for ASR inference. "
]
},
{
@@ -329,7 +329,7 @@
"metadata": {},
"outputs": [],
"source": [
"aligned_vad_asr_output_manifest = contruct_manfiest_eval(input_manifest, stitched_output_manifest)"
"aligned_vad_asr_output_manifest = construct_manifest_eval(input_manifest, stitched_output_manifest)"
]
},
{
@@ -386,4 +386,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
}
}
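The hunks above import `word_error_rate` and then align the input and stitched-output manifests before scoring. As a dependency-free illustration of what that WER metric computes, here is a minimal sketch (a re-implementation for illustration only; the real `nemo.collections.asr.metrics.wer.word_error_rate` may differ in signature and edge-case handling):

```python
def word_error_rate(hypotheses, references):
    """Word-level edit distance over paired hypothesis/reference strings.

    Illustrative sketch of the metric the notebook imports from NeMo:
    total word-level Levenshtein errors divided by total reference words.
    """
    total_errors, total_words = 0, 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        # classic dynamic-programming edit distance over word sequences
        prev = list(range(len(h) + 1))
        for i, rw in enumerate(r, 1):
            curr = [i]
            for j, hw in enumerate(h, 1):
                cost = 0 if rw == hw else 1
                curr.append(min(prev[j] + 1,         # deletion
                                curr[j - 1] + 1,     # insertion
                                prev[j - 1] + cost)) # substitution
            prev = curr
        total_errors += prev[-1]
        total_words += len(r)
    return total_errors / total_words

# e.g. word_error_rate(["a b c"], ["a x c"]) -> 1/3 (one substitution in three words)
```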
10 changes: 3 additions & 7 deletions tutorials/asr/Speech_Commands.ipynb
@@ -643,17 +643,13 @@
"\n",
"We can dramatically improve the time taken to train this model by using Multi GPU training along with Mixed Precision.\n",
"\n",
"For multi-GPU training, take a look at [the PyTorch Lightning Multi-GPU training section](https://pytorch-lightning.readthedocs.io/en/latest/advanced/multi_gpu.html)\n",
"\n",
"For mixed-precision training, take a look at [the PyTorch Lightning Mixed-Precision training section](https://pytorch-lightning.readthedocs.io/en/latest/guides/speed.html#mixed-precision-16-bit-training)\n",
"\n",
"```python\n",
"# Mixed precision:\n",
"trainer = Trainer(amp_level='O1', precision=16)\n",
"\n",
"# Trainer with a distributed backend:\n",
"trainer = Trainer(devices=2, num_nodes=2, accelerator='gpu', strategy='dp')\n",
"\n",
"# Mixed precision:\n",
"trainer = Trainer(amp_level='O1', precision=16)\n",
"\n",
"# Of course, you can combine these flags as well.\n",
"```"
]
6 changes: 3 additions & 3 deletions tutorials/asr/Voice_Activity_Detection.ipynb
@@ -657,12 +657,12 @@
"We can dramatically improve the time taken to train this model by using Multi GPU training along with Mixed Precision.\n",
"\n",
"```python\n",
"# Mixed precision:\n",
"trainer = Trainer(amp_level='O1', precision=16)\n",
"\n",
"# Trainer with a distributed backend:\n",
"trainer = Trainer(devices=2, num_nodes=2, accelerator='gpu', strategy='dp')\n",
"\n",
"# Mixed precision:\n",
"trainer = Trainer(amp_level='O1', precision=16)\n",
"\n",
"# Of course, you can combine these flags as well.\n",
"```"
]
@@ -628,18 +628,14 @@
"## For Faster Training\n",
"We can dramatically improve the time taken to train this model by using Multi GPU training along with Mixed Precision.\n",
"\n",
"For multi-GPU training, take a look at the [PyTorch Lightning Multi-GPU training section](https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html)\n",
"\n",
"For mixed-precision training, take a look at the [PyTorch Lightning Mixed-Precision training section](https://pytorch-lightning.readthedocs.io/en/latest/guides/speed.html#mixed-precision-16-bit-training)\n",
"### Trainer with a distributed backend:\n",
"<pre><code>trainer = Trainer(devices=2, num_nodes=2, accelerator='gpu', strategy='dp')\n",
"</code></pre>\n",
"\n",
"### Mixed precision:\n",
"<pre><code>trainer = Trainer(amp_level='O1', precision=16)\n",
"</code></pre>\n",
"\n",
"### Trainer with a distributed backend:\n",
"<pre><code>trainer = Trainer(devices=2, num_nodes=2, accelerator='gpu', strategy='dp')\n",
"</code></pre>\n",
"\n",
"Of course, you can combine these flags as well."
]
},
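Each notebook ends its faster-training section with "you can combine these flags as well." A minimal sketch of that combination, using the PyTorch Lightning 1.x flag names shown in the diff (the `Trainer(...)` call itself is left commented out so the sketch has no hard dependency on `pytorch_lightning`):

```python
# The two separate snippets from the notebooks merged into one configuration.
# Flag names assume PyTorch Lightning 1.x, as used in these tutorials.
trainer_kwargs = {
    # distributed backend: 2 GPUs per node on 2 nodes, DataParallel strategy
    "devices": 2,
    "num_nodes": 2,
    "accelerator": "gpu",
    "strategy": "dp",
    # mixed precision: FP16 with Apex optimization level O1
    "precision": 16,
    "amp_level": "O1",
}

# from pytorch_lightning import Trainer
# trainer = Trainer(**trainer_kwargs)
```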
