
Integrate Whisper model with hg evaluation CLI interface #740

Closed
wants to merge 10 commits

Conversation


@Ahmedsaed Ahmedsaed commented Aug 9, 2024

What does this PR do? Please describe:
This PR integrates the Whisper model with the hg evaluation CLI interface.

In the process, I refactored some functions to be more generic and to support the Hugging Face Transformers API.

Demo:

$ fairseq2 hg asr /tmp/fairseq2 --config max_num_elements=20 model_name=openai/whisper-tiny.en dtype=torch.float32
[08/09/24 19:00:49] INFO     fairseq2.recipes.hg.evaluator - Running evaluation on 1 device(s).     
eval: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   147   
[08/09/24 19:03:37] INFO     fairseq2.recipes.hg.evaluator - Eval Metrics - BLEU: 0.497937 | Elapsed
                             Time: 167s | Wall Time: 169s | brevity_penalty: 1.0 | length_ratio:    
                             1.1557228282673901 | precisions: [0.676166231361788,                   
                             0.5496829119972201, 0.4500613362139993, 0.3675025152851946] |          
                             reference_length: 52343 | translation_length: 60494                    
                    INFO     fairseq2.recipes.hg.evaluator - Evaluation complete in 169 seconds     

Does your PR introduce any breaking changes? If yes, please list them:
Hopefully none.

Check list:

  • Was the content of this PR discussed and approved via a GitHub issue? (no need for typos or documentation improvements)
  • Did you read the contributor guideline?
  • Did you make sure that your PR does only one thing instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests?
  • Did you verify new and existing tests pass locally with your changes?
  • Did you update the CHANGELOG? (no need for typos, documentation, or minor internal changes)

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Aug 9, 2024
@antoine-tran antoine-tran self-requested a review August 9, 2024 18:15
Ahmedsaed commented Aug 9, 2024

I noticed that running the evaluation with max_samples > 20 on a 15GB VRAM setup was causing a CUDA out-of-memory error. To address this, I made improvements to better manage and free up memory. With these updates, you can now run the evaluation on the entire dataset, provided the batch_size fits within the available memory. I successfully tested this on the whole test split (approximately 2939 examples) using whisper-tiny and wav2vec2_asr_base_10h with max_num_elements=10.

Additionally, I found that the text data was being encoded only to be decoded again, so I eliminated this redundant step from the pipeline, along with making other refactoring adjustments.

Edit: max_num_elements can be increased significantly by defining max_audio_len. Some audio samples are quite long, and since each batch is padded to its longest sequence, a few batches required far more memory than expected.
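The padding effect described above can be sketched independently of fairseq2: when a batch is padded to its longest sample, a single outlier inflates the memory cost of the whole batch, which is why capping sample length (the max_audio_len idea) helps. A minimal sketch with hypothetical sample lengths and a simplified greedy batcher (batch_padded_cost is an illustrative helper, not a fairseq2 API):

```python
def batch_padded_cost(lengths, max_num_elements):
    """Greedily batch samples and report the padded size of each batch.

    Each batch is padded to its longest sample, so its cost is
    batch_size * max(lengths in batch), kept under max_num_elements.
    """
    batches, current = [], []
    for n in lengths:
        if current and (len(current) + 1) * max(max(current), n) > max_num_elements:
            batches.append(current)
            current = []
        current.append(n)
    if current:
        batches.append(current)
    return [len(b) * max(b) for b in batches]

# One very long sample dominates the padded cost of its batch:
lengths = [100, 110, 105, 4000, 95]
print(batch_padded_cost(lengths, max_num_elements=8000))  # → [330, 8000]

# Dropping samples longer than max_audio_len keeps every batch small:
max_audio_len = 1000
capped = [n for n in lengths if n <= max_audio_len]
print(batch_padded_cost(capped, max_num_elements=8000))  # → [440]
```

The second run packs all remaining samples into one cheap batch, which is why a length cap lets max_num_elements be raised safely.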

@Ahmedsaed Ahmedsaed marked this pull request as ready for review August 13, 2024 21:37
@antoine-tran antoine-tran left a comment

Thanks for this PR!

I can see you have made attempts to remove duplicates from the first iteration and to parameterize some functions such as prepare_dataset(), load_asr_evaluator(), etc.

I think we need a few more building blocks to make this PR a solid one: dynamic loading of the HG model and processor, extending AsrEvalConfig to cover output_dir, and better handling of inline functions. There should also be a separate PR for the HG model loader, but let's discuss this point in our call.

(Several resolved review comments on src/fairseq2/recipes/hg/asr_eval.py)
)


class HGModelWrapper:
Thought: A better way to handle this is to have an HG model loader with an HG model config, where we can specify the transformers class and the preprocessor class. But this can be left to another PR.

(Several resolved review comments on src/fairseq2/recipes/hg/asr_eval.py)
wall_watch = Stopwatch(start=True, device=init_device)

return HFEvaluator[Seq2SeqBatch](
model=cast(Model, HGModelWrapper(model)),
See my other comment.

This forced casting is artificial IMHO. The better option is to create a model loader that builds an HG model from its name, but let's do that in a separate PR.
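One way to drop the cast() without a full model loader is structural typing. The sketch below assumes, hypothetically, that the evaluator only ever calls generate() on the model; SupportsGenerate and FakeWhisper are illustrative names, not fairseq2 types:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class SupportsGenerate(Protocol):
    """Structural interface: anything with a generate() method qualifies.

    With a Protocol as the evaluator's model type, HGModelWrapper needs
    no cast() as long as it implements the methods actually called.
    """

    def generate(self, inputs): ...


class HGModelWrapper:
    """Thin wrapper delegating generate() to a Hugging Face model."""

    def __init__(self, model):
        self._model = model

    def generate(self, inputs):
        return self._model.generate(inputs)


class FakeWhisper:
    """Stand-in model used here only to demonstrate the interface."""

    def generate(self, inputs):
        return [f"transcribed:{x}" for x in inputs]


wrapper = HGModelWrapper(FakeWhisper())
assert isinstance(wrapper, SupportsGenerate)  # satisfied structurally, no cast
print(wrapper.generate(["a.wav"]))  # → ['transcribed:a.wav']
```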

Ahmedsaed commented Aug 15, 2024

I will close this PR in favor of two smaller PRs:

  • AsrDatasetConfig, which will include all the refactoring made in this PR along with a config for handling ASR datasets.
  • Whisper Integration, which will include just the Whisper-related code.

I will address all comments in the new PRs.

@Ahmedsaed
I have addressed all comments in the following PRs: #749 and #751.
