Merge r1.9.0 main (#4331)

* update branch

Signed-off-by: ericharper <[email protected]>

* update package info

Signed-off-by: ericharper <[email protected]>

* cleaned up TN/ITN doc (#4119)

* cleaned up TN/ITN doc

Signed-off-by: Yang Zhang <[email protected]>

* fix typo

Signed-off-by: Yang Zhang <[email protected]>

* fix image

Signed-off-by: Yang Zhang <[email protected]>

* fix image

Signed-off-by: Yang Zhang <[email protected]>

* Draft: Fix restoring from checkpoint for case when `model.common_dataset_parameters.label_vocab_dir` is provided (#4136)

* Fix restoring from checkpoint with label vocab dir

Signed-off-by: PeganovAnton <[email protected]>

* Add tests for various ways to pass label ids to model

Signed-off-by: PeganovAnton <[email protected]>

* Fix typo

Signed-off-by: PeganovAnton <[email protected]>

* Fix typo

Signed-off-by: PeganovAnton <[email protected]>

* Do not create tmp directory

Signed-off-by: PeganovAnton <[email protected]>

* Fix parameter name

Signed-off-by: PeganovAnton <[email protected]>

* finish cherry-pick op

Signed-off-by: PeganovAnton <[email protected]>

* Fix labels errors

Signed-off-by: PeganovAnton <[email protected]>

* Remove duplicate stage

Signed-off-by: PeganovAnton <[email protected]>

* Change target branch

Signed-off-by: PeganovAnton <[email protected]>

* fix doc (#4146)

Signed-off-by: Yang Zhang <[email protected]>

* Tacotron2 retrain (#4103)

* fix yaml

Signed-off-by: treacker <[email protected]>

* Fix for new TTSDataset class

Signed-off-by: treacker <[email protected]>

* added wandb logging

Signed-off-by: treacker <[email protected]>

* added wandb logging

Signed-off-by: treacker <[email protected]>

* fix numpy version

Signed-off-by: treacker <[email protected]>

* fix numpy version

Signed-off-by: treacker <[email protected]>

* inference fix

Signed-off-by: treacker <[email protected]>

* removed old code

Signed-off-by: treacker <[email protected]>

* updated parser logic

Signed-off-by: treacker <[email protected]>

* reverted version update

Signed-off-by: treacker <[email protected]>

* refactored parser logic

Signed-off-by: treacker <[email protected]>

* Updated Jenkinsfile

Signed-off-by: treacker <[email protected]>

* Refactored tutorial for Tacotron2

Signed-off-by: treacker <[email protected]>

* Made backward compatibility

Signed-off-by: treacker <[email protected]>

* Made backward compatibility

Signed-off-by: treacker <[email protected]>

* Update Jenkinsfile

Signed-off-by: treacker <[email protected]>

* Update tacotron.yaml

Signed-off-by: treacker <[email protected]>

* Refactoring

Signed-off-by: treacker <[email protected]>

* cleaned up TN/ITN doc (#4119)

* cleaned up TN/ITN doc

Signed-off-by: Yang Zhang <[email protected]>

* fix typo

Signed-off-by: Yang Zhang <[email protected]>

* fix image

Signed-off-by: Yang Zhang <[email protected]>

* fix image

Signed-off-by: Yang Zhang <[email protected]>
Signed-off-by: treacker <[email protected]>

* Check implicit grad acc in GLUE dataset building (#4123)

* Check implicit grad acc in GLUE dataset building

Signed-off-by: MaximumEntropy <[email protected]>

* Fix jenkins test for GLUE/XNLI

Signed-off-by: MaximumEntropy <[email protected]>
Signed-off-by: treacker <[email protected]>

* Refactoring

Signed-off-by: treacker <[email protected]>

* Refactoring

Signed-off-by: treacker <[email protected]>

* Fixed jenkins

Signed-off-by: treacker <[email protected]>

* Refactoring

Signed-off-by: treacker <[email protected]>

* Refactoring

Signed-off-by: treacker <[email protected]>

* Refactoring

Signed-off-by: treacker <[email protected]>

Co-authored-by: Yang Zhang <[email protected]>
Co-authored-by: Sandeep Subramanian <[email protected]>

* Multiprocess improvements (#4127)

* initial commit

Signed-off-by: nithinraok <[email protected]>

* start fix

Signed-off-by: nithinraok <[email protected]>

* improve multiprocessing speed while creating speaker dataset

Signed-off-by: nithinraok <[email protected]>

* updated scp to filelist

Signed-off-by: nithinraok <[email protected]>

* notebooks' link, typo and import fix (#4158)

* redo missing pr 4007

Signed-off-by: fayejf <[email protected]>

* remove extremely unreliable links

Signed-off-by: fayejf <[email protected]>

* update speaker docs (#4164)

* update speaker docs

Signed-off-by: nithinraok <[email protected]>

* chunks -> segments

Signed-off-by: nithinraok <[email protected]>

* Khz -> kHz

Signed-off-by: nithinraok <[email protected]>

* small fix (#4180)

Signed-off-by: fayejf <[email protected]>

* fix the server key value problem (#4196)

Signed-off-by: Yi Dong <[email protected]>

* Fix/punctuation/trainer required for setting test data (#4199)

* Draft of fix

Signed-off-by: PeganovAnton <[email protected]>

* Add warnings and replace global_step with current_epoch

Signed-off-by: PeganovAnton <[email protected]>

* Small improvements to warnings

Signed-off-by: PeganovAnton <[email protected]>

* Error and warning messages improvements

Signed-off-by: PeganovAnton <[email protected]>

* Replace self.trainer with self._trainer

Signed-off-by: PeganovAnton <[email protected]>

* Update ContextNet version (#4207)

Signed-off-by: smajumdar <[email protected]>

* fix bugs for dialogue tutorial (#4211)

Signed-off-by: Zhilin Wang <[email protected]>

* Dialogue tutorial fix (#4214)

* fix bugs for dialogue tutorial

Signed-off-by: Zhilin Wang <[email protected]>

* update path for convert_datasets.py due to conflict PR

Signed-off-by: Zhilin Wang <[email protected]>

* Add docs for Thutmose Tagger (#4173)

* Add docs for Thutmose Tagger

Signed-off-by: Alexandra Antonova <[email protected]>

* add level in docs

Signed-off-by: Alexandra Antonova <[email protected]>

* delete folder to avoid error with running when folder exists from previous run

Signed-off-by: Alexandra Antonova <[email protected]>

Co-authored-by: Alexandra Antonova <[email protected]>
Co-authored-by: ekmb <[email protected]>

* Dialogue tutorial fix (#4218)

* fix bugs for dialogue tutorial

Signed-off-by: Zhilin Wang <[email protected]>

* update path for convert_datasets.py due to conflict PR

Signed-off-by: Zhilin Wang <[email protected]>

* restore previously deleted files

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* Dialogue tutorial fix (#4221)

* fix bugs for dialogue tutorial

Signed-off-by: Zhilin Wang <[email protected]>

* update path for convert_datasets.py due to conflict PR

Signed-off-by: Zhilin Wang <[email protected]>

* restore previously deleted files

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* update tutorial

Signed-off-by: Zhilin Wang <[email protected]>

* fix syntax error in ipynb-file (#4228)

Signed-off-by: Alexandra Antonova <[email protected]>

Co-authored-by: Alexandra Antonova <[email protected]>

* fix json serialize (#4235)

Signed-off-by: Yi Dong <[email protected]>

* Prompt Learning Typo Fixes (#4238)

* Prompt tuning notebook typo fixes

Signed-off-by: Virginia Adams <[email protected]>

* Update tutorials.rst

* Update prompt_learning.rst

* Update prompt_learning.rst

* fixing bug 3642622 (#4250)

* fixing bug 3642622

Signed-off-by: Ghasem Pasandi <[email protected]>

* fixing bug 3642622

Signed-off-by: Ghasem Pasandi <[email protected]>

Co-authored-by: Ghasem Pasandi <[email protected]>

* fix broken link in the tutorial (#4257)

Signed-off-by: Alexandra Antonova <[email protected]>

Co-authored-by: Alexandra Antonova <[email protected]>

* Typo fix, branch change, better download message (#4262)

Signed-off-by: Virginia Adams <[email protected]>

* Raise error if bicleaner is not installed in NMT Data preprocessing notebook (#4264)

* Raise error if bicleaner is not installed

Signed-off-by: MaximumEntropy <[email protected]>

* Clear cells

Signed-off-by: MaximumEntropy <[email protected]>

* Fix missing validation dataset, whitelist certain keywords for datasets (#4269)

* Fix missing validation dataset, whitelist certain keywords for datasets

Signed-off-by: smajumdar <[email protected]>

* Fix missing validation dataset, whitelist certain keywords for datasets

Signed-off-by: smajumdar <[email protected]>

* Update asr configs with num_workers and pin_memory (#4270)

Signed-off-by: smajumdar <[email protected]>

* Fix epoch end (#4265)

Signed-off-by: MaximumEntropy <[email protected]>

Co-authored-by: Eric Harper <[email protected]>

* Set Save on train end to false (#4274)

* Set Save on train end to false

Signed-off-by: Virginia Adams <[email protected]>

* Update prompt_learning.rst

* Update prompt_learning.rst

* Update YAML (#4261)

Signed-off-by: MaximumEntropy <[email protected]>

* Updated config to fix CI test OOM error (#4279)

* Updated config to fix CI test issue

Signed-off-by: Virginia Adams <[email protected]>

* Increased num workers

Signed-off-by: Virginia Adams <[email protected]>

* verbose k2 install, skip if failed (#4289)

Signed-off-by: Aleksandr Laptev <[email protected]>

Co-authored-by: Aleksandr Laptev <[email protected]>

* Changed total virtual prompt tokens (#4295)

* Changed total virtual prompt tokens

Signed-off-by: Virginia Adams <[email protected]>

* put number of workers back

Signed-off-by: Virginia Adams <[email protected]>

* upper bound lightning

Signed-off-by: ericharper <[email protected]>

* update branch

Signed-off-by: ericharper <[email protected]>

* update config

Signed-off-by: ericharper <[email protected]>

* remove duplicate test

Signed-off-by: ericharper <[email protected]>

* fix tn test cases

Signed-off-by: ericharper <[email protected]>

* add another safe.directory

Signed-off-by: ericharper <[email protected]>

* typo

Signed-off-by: ericharper <[email protected]>

Co-authored-by: Yang Zhang <[email protected]>
Co-authored-by: PeganovAnton <[email protected]>
Co-authored-by: treacker <[email protected]>
Co-authored-by: Sandeep Subramanian <[email protected]>
Co-authored-by: Nithin Rao <[email protected]>
Co-authored-by: fayejf <[email protected]>
Co-authored-by: Yi Dong <[email protected]>
Co-authored-by: Somshubra Majumdar <[email protected]>
Co-authored-by: Zhilin Wang <[email protected]>
Co-authored-by: bene-ges <[email protected]>
Co-authored-by: Alexandra Antonova <[email protected]>
Co-authored-by: ekmb <[email protected]>
Co-authored-by: Virginia Adams <[email protected]>
Co-authored-by: Ghasem <[email protected]>
Co-authored-by: Ghasem Pasandi <[email protected]>
Co-authored-by: Aleksandr Laptev <[email protected]>
Co-authored-by: Aleksandr Laptev <[email protected]>
18 people authored Jun 7, 2022
1 parent d4246c5 commit 62b0448
Showing 67 changed files with 2,223 additions and 1,777 deletions.
2 changes: 1 addition & 1 deletion Dockerfile
@@ -55,7 +55,7 @@ RUN for f in $(ls requirements*.txt); do pip install --disable-pip-version-check
 
 # install k2, skip if installation fails
 COPY scripts /tmp/nemo/scripts/
-RUN /bin/bash /tmp/nemo/scripts/speech_recognition/k2/setup.sh; exit 0
+RUN /bin/bash /tmp/nemo/scripts/speech_recognition/k2/setup.sh || exit 0
 
 # copy nemo source into a scratch image
 FROM scratch as nemo-src
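The change above swaps `; exit 0` for `|| exit 0` at the end of the RUN line. Both keep the image build alive when the optional k2 install fails (`cmd; exit 0` forces a zero status unconditionally, `cmd || exit 0` only when the installer fails), but the `||` form states the intent. A minimal Python sketch of the same try-log-continue pattern; this is illustrative rather than NeMo code and assumes a checkout with the script at the Dockerfile's path:

    import subprocess

    # Attempt the optional k2 install verbosely; tolerate failure instead of
    # failing the whole build (mirrors "setup.sh || exit 0" above).
    result = subprocess.run(["/bin/bash", "scripts/speech_recognition/k2/setup.sh"])
    if result.returncode != 0:
        print("k2 installation failed; continuing without k2")
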
54 changes: 26 additions & 28 deletions Jenkinsfile
@@ -15,6 +15,7 @@ pipeline {
 stage('Add git safe directory'){
 steps{
 sh 'git config --global --add safe.directory /var/lib/jenkins/workspace/NeMo_$GIT_BRANCH'
+sh 'git config --global --add safe.directory /raid/JenkinsWorkDir/workspace/NeMo_$GIT_BRANCH'
 }
 }
 
@@ -1590,22 +1591,20 @@ pipeline {
 }
 failFast true
 stages {
-stage('Punctuation & Capitalization, Using model.common_dataset_parameters.label_vocab_dir') {
+stage('Punctuation & Capitalization, Using model.common_datasest_parameters.label_vocab_dir') {
 steps {
 sh 'cd examples/nlp/token_classification && \
-output_dir="$(mktemp -d -p "$(pwd)")" && \
-data_dir="$(mktemp -d -p "$(pwd)")" && \
-cp /home/TestData/nlp/token_classification_punctuation/*.txt "${data_dir}"/ && \
-label_vocab_dir="$(mktemp -d -p "$(pwd)")" && \
+label_vocab_dir=label_vocab_dir && \
+mkdir -p ${label_vocab_dir} && \
 punct_label_vocab="${label_vocab_dir}/punct_label_vocab.csv" && \
 capit_label_vocab="${label_vocab_dir}/capit_label_vocab.csv" && \
 printf "O\n,\n.\n?\n" > "${punct_label_vocab}" && \
 printf "O\nU\n" > "${capit_label_vocab}" && \
-python punctuation_capitalization_train_evaluate.py \
+CUDA_LAUNCH_BLOCKING=1 python punctuation_capitalization_train_evaluate.py \
 model.train_ds.use_tarred_dataset=false \
-model.train_ds.ds_item="${data_dir}" \
-model.validation_ds.ds_item="${data_dir}" \
-model.test_ds.ds_item="${data_dir}" \
+model.train_ds.ds_item=/home/TestData/nlp/token_classification_punctuation \
+model.validation_ds.ds_item=/home/TestData/nlp/token_classification_punctuation \
+model.test_ds.ds_item=/home/TestData/nlp/token_classification_punctuation \
 model.language_model.pretrained_model_name=distilbert-base-uncased \
 model.common_dataset_parameters.label_vocab_dir="${label_vocab_dir}" \
 model.class_labels.punct_labels_file="$(basename "${punct_label_vocab}")" \
@@ -1616,69 +1615,68 @@ pipeline {
 trainer.devices=[0,1] \
 trainer.strategy=ddp \
 trainer.max_epochs=1 \
-+exp_manager.explicit_log_dir="${output_dir}" \
++exp_manager.explicit_log_dir=/home/TestData/nlp/token_classification_punctuation/output \
 +do_testing=false && \
-python punctuation_capitalization_train_evaluate.py \
+CUDA_LAUNCH_BLOCKING=1 python punctuation_capitalization_train_evaluate.py \
 +do_training=false \
 +do_testing=true \
 ~model.train_ds \
 ~model.validation_ds \
-model.test_ds.ds_item="${data_dir}" \
-pretrained_model="${output_dir}/checkpoints/Punctuation_and_Capitalization.nemo" \
+model.test_ds.ds_item=/home/TestData/nlp/token_classification_punctuation \
+pretrained_model=/home/TestData/nlp/token_classification_punctuation/output/checkpoints/Punctuation_and_Capitalization.nemo \
 +model.train_ds.use_cache=false \
 +model.validation_ds.use_cache=false \
 +model.test_ds.use_cache=false \
 trainer.devices=[0,1] \
 trainer.strategy=ddp \
 trainer.max_epochs=1 \
 exp_manager=null && \
-rm -rf "${label_vocab_dir}" "${data_dir}" "${output_dir}"'
+rm -r "${label_vocab_dir}" && \
+rm -rf /home/TestData/nlp/token_classification_punctuation/output/*'
 }
 }
-stage('Punctuation & Capitalization, Using model.common_dataset_parameters.{punct,capit}_label_ids') {
+stage('Punctuation & Capitalization, Using model.common_datasest_parameters.{punct,capit}_label_ids') {
 steps {
 sh 'cd examples/nlp/token_classification && \
-output_dir="$(mktemp -d -p "$(pwd)")" && \
-data_dir="$(mktemp -d -p "$(pwd)")" && \
-cp /home/TestData/nlp/token_classification_punctuation/*.txt "${data_dir}"/ && \
-conf_path="$(mktemp -d -p "$(pwd)")" && \
+conf_path=/home/TestData/nlp/token_classification_punctuation && \
 conf_name=punctuation_capitalization_config_with_ids && \
 cp conf/punctuation_capitalization_config.yaml "${conf_path}/${conf_name}.yaml" && \
 sed -i $\'s/punct_label_ids: null/punct_label_ids: {O: 0, \\\',\\\': 1, .: 2, \\\'?\\\': 3}/\' \
 "${conf_path}/${conf_name}.yaml" && \
 sed -i $\'s/capit_label_ids: null/capit_label_ids: {O: 0, U: 1}/\' \
 "${conf_path}/${conf_name}.yaml" && \
-python punctuation_capitalization_train_evaluate.py \
+CUDA_LAUNCH_BLOCKING=1 python punctuation_capitalization_train_evaluate.py \
 --config-path "${conf_path}" \
 --config-name "${conf_name}" \
 model.train_ds.use_tarred_dataset=false \
-model.train_ds.ds_item="${data_dir}" \
-model.validation_ds.ds_item="${data_dir}" \
-model.test_ds.ds_item="${data_dir}" \
+model.train_ds.ds_item=/home/TestData/nlp/token_classification_punctuation \
+model.validation_ds.ds_item=/home/TestData/nlp/token_classification_punctuation \
+model.test_ds.ds_item=/home/TestData/nlp/token_classification_punctuation \
 model.language_model.pretrained_model_name=distilbert-base-uncased \
 +model.train_ds.use_cache=false \
 +model.validation_ds.use_cache=false \
 +model.test_ds.use_cache=false \
 trainer.devices=[0,1] \
 trainer.strategy=ddp \
 trainer.max_epochs=1 \
-+exp_manager.explicit_log_dir="${output_dir}" \
++exp_manager.explicit_log_dir=/home/TestData/nlp/token_classification_punctuation/output \
 +do_testing=false && \
-python punctuation_capitalization_train_evaluate.py \
+CUDA_LAUNCH_BLOCKING=1 python punctuation_capitalization_train_evaluate.py \
 +do_training=false \
 +do_testing=true \
 ~model.train_ds \
 ~model.validation_ds \
-model.test_ds.ds_item="${data_dir}" \
-pretrained_model="${output_dir}/checkpoints/Punctuation_and_Capitalization.nemo" \
+model.test_ds.ds_item=/home/TestData/nlp/token_classification_punctuation \
+pretrained_model=/home/TestData/nlp/token_classification_punctuation/output/checkpoints/Punctuation_and_Capitalization.nemo \
 +model.train_ds.use_cache=false \
 +model.validation_ds.use_cache=false \
 +model.test_ds.use_cache=false \
 trainer.devices=[0,1] \
 trainer.strategy=ddp \
 trainer.max_epochs=1 \
 exp_manager=null && \
-rm -rf "${output_dir}" "${data_dir}" "${conf_path}"'
+rm -rf /home/TestData/nlp/token_classification_punctuation/output/* && \
+rm "${conf_path}/${conf_name}.yaml"'
 }
 }
 }
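The test commands above lean on Hydra's override grammar: bare `key=value` overrides an entry that exists in the YAML config, a leading `+` adds an entry the config does not define, and a leading `~` deletes one (here, dropping the train and validation dataset configs for the test-only run). A hedged sketch of the same overrides through Hydra's compose API; the config directory and name match examples/nlp/token_classification, while the data path is illustrative:

    from hydra import compose, initialize

    # Same grammar as the CLI: bare key=value to override, '+' to add, '~' to delete.
    with initialize(config_path="conf"):
        cfg = compose(
            config_name="punctuation_capitalization_config",
            overrides=[
                "model.train_ds.ds_item=/path/to/data",  # override an existing entry
                "+model.train_ds.use_cache=false",       # add a key absent from the YAML
                "~model.validation_ds",                  # delete the validation config
            ],
        )
        print(cfg.model.train_ds.ds_item)
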
31 changes: 21 additions & 10 deletions docs/source/nlp/prompt_learning.rst
@@ -10,6 +10,8 @@ Instead of selecting discrete text prompts in a manual or automated fashion, pro
 
 Our continuous learning capability for combined p-tuning and prompt tuning with GPT style models is a NeMo specific extension of the author's original work.
 
+Please also checkout our `prompt learning tutorial notebook. <https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Multitask_Prompt_and_PTuning.ipynb>`_
+
 
 Terminology
 ^^^^^^^^^^
@@ -89,14 +91,17 @@ the input will be translated into ``VVV Hypothesis: And he said, Mama, I'm home.
"prompt_template": "<|VIRTUAL_PROMPT_0|> {sentence} sentiment: {label}",
"total_virtual_tokens": 10,
"virtual_token_splits": [10],
"truncate_field": "sentence"
"truncate_field": "sentence",
"answer_only_loss": False,
},
{
"taskname": "intent_and_slot",
"prompt_template": "<|VIRTUAL_PROMPT_0|> Predict intent and slot <|VIRTUAL_PROMPT_1|> :\n{utterance}{label}",
"total_virtual_tokens": 10,
"virtual_token_splits": [7, 3],
"truncate_field": None
"truncate_field": None,
"answer_only_loss": True,
"answer_field": "label"
}
]
@@ -198,9 +203,9 @@ Setting New Tasks
 
 After you p-tune or prompt-tune your model, you can always go back and p-tune or prompt-tune your model on more tasks without over writing the virtual prompts who've trained already. You can also use a different number of ``total_virtual_tokens`` between each training session as long as tasks ptuned or prompt tuned at the same time have the same number of ``total_virtual_tokens``. For this reason, when you ptune on a new task, you need to tell your model which of your tasks are new and which ones already exist (and thus you don't want to tune them). You do this by setting the ``new_tasks`` and ``existing_tasks`` values in the config file.
 
-Example Multi-Task Prompt Tuning Command
+Example Multi-Task Prompt Tuning Config and Command
 ^^^^^^^^^^
-First define a config called ``multitask-prompt-learning.yaml`` that looks like:
+First define a config called ``multitask-prompt-learning.yaml`` demonstrated below. **In the** ``exp_manager`` **portion of the config,** ``save_on_train_end`` **should be set to** ``False`` **to avoid unnecessarily saving the incorrect model weights.** This is already done in the example `megatron_gpt_prompt_learning_config.yaml config <https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/language_modeling/conf/megatron_gpt_prompt_learning_config.yaml>`_ that you should use as your starting point. The correct prompt learning model will be saved at the ``model.nemo_path`` you set.
 
 .. code::
@@ -229,12 +234,15 @@ First define a config called ``multitask-prompt-learning.yaml`` that looks like:
 total_virtual_tokens: 100
 virtual_token_splits: [100]
 truncate_field: null
+answer_only_loss: False
 - taskname: "intent_and_slot"
 prompt_template: "<|VIRTUAL_PROMPT_0|> Predict intent and slot <|VIRTUAL_PROMPT_1|> :\n{utterance}{label}"
 total_virtual_tokens: 100
 virtual_token_splits: [80, 20]
 truncate_field: null
+answer_only_loss: True
+answer_field: "label"
 
 prompt_tuning:
 new_prompt_init_methods: ["text", "text"]
@@ -259,7 +267,7 @@ Then run the command
 
 python megatron_gpt_prompt_learning.py --config-name=multitask-prompt-learning.yaml
 
-Example Multi-Task P-Tuning Command After Prompt-Tuning
+Example Multi-Task P-Tuning Config and Command After Prompt-Tuning
 ^^^^^^^^^^
 Update ``multitask-prompt-learning.yaml`` from the example above with p-tuning parameters for the new task. Be sure to update ``model.existing_tasks`` with the tasknames from previous prompt learning runs and to use the ``.nemo`` file saved at the end of your last prompt learning session. Values different from the config above have stars commented next to them.
 
@@ -284,28 +292,31 @@ In this example, the SQuAD task includes the question context as part of the pro
language_model_path: models/megatron_125M_gpt.nemo
existing_tasks: ["sentiment", "intent_and_slot"] # ***
new_tasks: ["sentiment", "intent_and_slot"]
new_tasks: ["squad"]
task_templates:
- taskname: "sentiment"
prompt_template: "<|VIRTUAL_PROMPT_0|> {sentence} sentiment: {label}"
total_virtual_tokens: 100
virtual_token_splits: [100]
truncate_field: null
answer_only_loss: False
- taskname: "intent_and_slot"
prompt_template: "<|VIRTUAL_PROMPT_0|> Predict intent and slot <|VIRTUAL_PROMPT_1|> :\n{utterance}{label}"
total_virtual_tokens: 100
virtual_token_splits: [80, 20]
truncate_field: null
answer_only_loss: True
answer_field: "label"
- taskname: "squad" # ***
prompt_template: "<|VIRTUAL_PROMPT_0|> Answer the question from the context <|VIRTUAL_PROMPT_1|> {question} <|VIRTUAL_PROMPT_2|> {context} <|VIRTUAL_PROMPT_3|> Answer: {answer}" # ***
total_virtual_tokens: 16 # ***
virtual_token_splits: [4, 4, 4, 4] # ***
prompt_template: "<|VIRTUAL_PROMPT_0|> Answer the question from the context {question} {context} Answer: {answer}" # ***
total_virtual_tokens: 9 # ***
virtual_token_splits: [9] # ***
truncate_field: context # ***
answer_only_loss: True # ***
answer_field: 'answer # ***
answer_field: "answer" # ***
p_tuning: # ***
dropout: 0.0 # ***
Expand Down
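The revised ``squad`` entry stays consistent with how task templates are wired: every ``<|VIRTUAL_PROMPT_k|>`` marker in ``prompt_template`` pairs with one entry of ``virtual_token_splits``, and the splits sum to ``total_virtual_tokens`` (16 = 4+4+4+4 before the change, 9 = 9 after). A small validation sketch; the helper is illustrative, not NeMo API:

    import re

    def check_task_template(prompt_template, total_virtual_tokens, virtual_token_splits):
        # One split per <|VIRTUAL_PROMPT_k|> marker, and the splits must sum
        # to the declared total number of virtual tokens.
        markers = re.findall(r"<\|VIRTUAL_PROMPT_\d+\|>", prompt_template)
        assert len(markers) == len(virtual_token_splits)
        assert sum(virtual_token_splits) == total_virtual_tokens

    # The updated "squad" task: one marker, a single split of 9.
    check_task_template(
        "<|VIRTUAL_PROMPT_0|> Answer the question from the context "
        "{question} {context} Answer: {answer}",
        total_virtual_tokens=9,
        virtual_token_splits=[9],
    )
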
7 changes: 4 additions & 3 deletions docs/source/nlp/text_normalization/intro.rst
@@ -1,6 +1,8 @@
 (Inverse) Text Normalization
 ============================
 
+NeMo supports Text Normalization (TN) and Inverse Text Normalization (ITN) tasks via rule-based `nemo_text_processing` python package and Neural-based TN/ITN models.
+
 Rule-based (WFST) TN/ITN:
 
 .. toctree::
@@ -9,11 +11,10 @@ Rule-based (WFST) TN/ITN:
 wfst/intro
 
 
-Neural TN/ITN:
+Neural-based TN/ITN:
 
 .. toctree::
 :maxdepth: 1
 
-nn_text_normalization
-
+neural_models
 
23 changes: 23 additions & 0 deletions docs/source/nlp/text_normalization/neural_models.rst
@@ -0,0 +1,23 @@
+.. _neural_models:
+
+Neural Models for (Inverse) Text Normalization
+==============================================
+
+NeMo provides two types of neural models:
+
+
+Duplex T5-based TN/ITN:
+
+.. toctree::
+:maxdepth: 1
+
+nn_text_normalization
+
+
+Single-pass Tagger-based ITN:
+
+.. toctree::
+:maxdepth: 1
+
+text_normalization_as_tagging
+