(简体中文|English)

FunASR: A Fundamental End-to-End Speech Recognition Toolkit

FunASR aims to build a bridge between academic research and industrial applications of speech recognition. By supporting the training and fine-tuning of industrial-grade speech recognition models, it lets researchers and developers conduct research on and production of speech recognition models more conveniently, and promotes the development of the speech recognition ecosystem. ASR for Fun!

Highlights | News | Installation | Quick Start | Runtime | Model Zoo | Contact

Highlights

  • FunASR is a fundamental speech recognition toolkit that offers a variety of features, including speech recognition (ASR), Voice Activity Detection (VAD), Punctuation Restoration, Language Models, Speaker Verification, Speaker Diarization and multi-talker ASR. FunASR provides convenient scripts and tutorials, supporting inference and fine-tuning of pre-trained models.
  • We have released a vast collection of academic and industrial pretrained models on ModelScope and Hugging Face, which can be accessed through our Model Zoo. The representative Paraformer-large, a non-autoregressive end-to-end speech recognition model, offers high accuracy, high efficiency, and convenient deployment, and supports the rapid construction of speech recognition services. For more details on service deployment, please refer to the service deployment document.

What's new:

  • 2024/01/09: The FunASR SDK for Windows version 2.0 has been released. It supports the offline file transcription service (CPU) of Mandarin 4.1, the offline file transcription service (CPU) of English 1.2, and the real-time transcription service (CPU) of Mandarin 1.6. For more details, please refer to the official documentation or release notes (FunASR-Runtime-Windows).
  • 2024/01/03: File Transcription Service 4.0 has been released. It adds support for 8k models, fixes timestamp mismatches and adds sentence-level timestamps, improves the effectiveness of English word FST hotwords, supports automated configuration of thread parameters, and fixes known crashes and memory leaks. Refer to (docs).
  • 2024/01/03: Real-time Transcription Service 1.6 has been released. The 2pass-offline mode now supports Ngram language model decoding and WFST hotwords; known crashes and memory leaks have also been fixed (docs).
  • 2024/01/03: Fixed known crash issues and memory leak problems (docs).
  • 2023/12/04: The FunASR SDK for Windows version 1.0 has been released. It supports the offline file transcription service (CPU) of Mandarin, the offline file transcription service (CPU) of English, and the real-time transcription service (CPU) of Mandarin. For more details, please refer to the official documentation or release notes (FunASR-Runtime-Windows).
  • 2023/11/08: The offline file transcription service 3.0 (CPU) of Mandarin has been released, adding a large punctuation model, an Ngram language model, and WFST hotwords. For detailed information, please refer to the docs.
  • 2023/10/17: The offline file transcription service (CPU) of English has been released. For more details, please refer to (docs).
  • 2023/10/13: SlideSpeech: A large scale multi-modal audio-visual corpus with a significant amount of real-time synchronized slides.
  • 2023/10/10: The combined ASR and speaker diarization pipeline Paraformer-VAD-SPK has been released. Try the model to get recognition results with speaker information.
  • 2023/10/07: FunCodec: A Fundamental, Reproducible and Integrable Open-source Toolkit for Neural Speech Codec.
  • 2023/09/01: The offline file transcription service 2.0 (CPU) of Mandarin has been released, with added support for ffmpeg, timestamp, and hotword models. For more details, please refer to (docs).
  • 2023/08/07: The real-time transcription service (CPU) of Mandarin has been released. For more details, please refer to (docs).
  • 2023/07/17: BAT is released, which is a low-latency and low-memory-consumption RNN-T model. For more details, please refer to (BAT).
  • 2023/06/26: ASRU2023 Multi-Channel Multi-Party Meeting Transcription Challenge 2.0 completed the competition and announced the results. For more details, please refer to (M2MeT2.0).

Installation

Please refer to the installation docs.

Model Zoo

FunASR has open-sourced a large number of pre-trained models on industrial data. You are free to use, copy, modify, and share FunASR models under the Model License Agreement. Below are some representative models, for more models please refer to the Model Zoo.

(Note: 🤗 represents the Hugging Face model zoo link, ⭐ represents the ModelScope model zoo link.)

Model Name | Task Details | Training Data | Parameters
paraformer-zh ( 🤗 ) | speech recognition, with timestamps, non-streaming | 60,000 hours, Mandarin | 220M
paraformer-zh-spk ( 🤗 ) | speech recognition with speaker diarization, with timestamps, non-streaming | 60,000 hours, Mandarin | 220M
paraformer-zh-online ( 🤗 ) | speech recognition, streaming | 60,000 hours, Mandarin | 220M
paraformer-en ( 🤗 ) | speech recognition, with timestamps, non-streaming | 50,000 hours, English | 220M
conformer-en ( 🤗 ) | speech recognition, non-streaming | 50,000 hours, English | 220M
ct-punc ( 🤗 ) | punctuation restoration | 100M, Mandarin and English | 1.1G
fsmn-vad ( 🤗 ) | voice activity detection | 5,000 hours, Mandarin and English | 0.4M
fa-zh ( 🤗 ) | timestamp prediction | 5,000 hours, Mandarin | 38M

Quick Start

Below is a quick start tutorial. Test audio files are provided for Mandarin and English.

Command-line usage

funasr +model=paraformer-zh +vad_model="fsmn-vad" +punc_model="ct-punc" +input=asr_example_zh.wav

Note: recognition of a single audio file is supported, as well as a file list in Kaldi-style wav.scp format, with one wav_id wav_path pair per line; see the example below.
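
For reference, a Kaldi-style wav.scp file lists one utterance ID and one audio path per line. The IDs and paths below are purely hypothetical:

asr_example_1 /data/audio/asr_example_1.wav
asr_example_2 /data/audio/asr_example_2.wav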

Speech Recognition (Non-streaming)

from funasr import AutoModel
# paraformer-zh is a multi-functional asr model
# use vad, punc, spk or not as you need
model = AutoModel(model="paraformer-zh", model_revision="v2.0.4",
                  vad_model="fsmn-vad", vad_model_revision="v2.0.4",
                  punc_model="ct-punc-c", punc_model_revision="v2.0.4",
                  # spk_model="cam++", spk_model_revision="v2.0.2",
                  )
res = model.generate(input=f"{model.model_path}/example/asr_example.wav", 
                     batch_size_s=300, 
                     hotword='魔搭')
print(res)
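
To read the transcript out of res, here is a minimal sketch; it assumes the usual FunASR output layout, a list of dicts with "key" and "text" fields (these field names are an assumption and may differ across model versions):

# Minimal sketch: assumes generate() returned a list of dicts,
# each carrying an utterance id ("key") and a transcript ("text").
for item in res:
    print(item.get("key", ""), item.get("text", ""))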

Note: model_hub specifies the model repository; ms selects download from ModelScope, and hf selects download from Hugging Face.

Speech Recognition (Streaming)

from funasr import AutoModel

chunk_size = [0, 10, 5] #[0, 10, 5] 600ms, [0, 8, 4] 480ms
encoder_chunk_look_back = 4 #number of chunks to lookback for encoder self-attention
decoder_chunk_look_back = 1 #number of encoder chunks to lookback for decoder cross-attention

model = AutoModel(model="paraformer-zh-streaming", model_revision="v2.0.4")

import soundfile
import os

wav_file = os.path.join(model.model_path, "example/asr_example.wav")
speech, sample_rate = soundfile.read(wav_file)
chunk_stride = chunk_size[1] * 960  # 10 * 960 = 9600 samples = 600ms at 16kHz

cache = {}
total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)
for i in range(total_chunk_num):
    speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
    is_final = i == total_chunk_num - 1
    res = model.generate(input=speech_chunk, cache=cache, is_final=is_final,
                         chunk_size=chunk_size,
                         encoder_chunk_look_back=encoder_chunk_look_back,
                         decoder_chunk_look_back=decoder_chunk_look_back)
    print(res)

Note: chunk_size is the configuration for streaming latency. [0,10,5] indicates that the real-time display granularity is 10*60=600ms, and the lookahead information is 5*60=300ms. Each inference input is 600ms (16000*0.6=9600 sample points), and the output is the corresponding text. For the last speech segment, is_final=True needs to be set to force the output of the last word.

Voice Activity Detection (Non-streaming)

from funasr import AutoModel

model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")
wav_file = f"{model.model_path}/example/asr_example.wav"
res = model.generate(input=wav_file)
print(res)
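
The detected segments can then be read from the result. The sketch below assumes the common output layout in which res[0]["value"] is a list of [start_ms, end_ms] pairs; treat the field names as an assumption:

# Minimal sketch: assumes res[0]["value"] holds speech segments
# as [start_ms, end_ms] pairs in milliseconds.
for start_ms, end_ms in res[0]["value"]:
    print(f"speech segment: {start_ms} ms - {end_ms} ms")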

Voice Activity Detection (Streaming)

from funasr import AutoModel

chunk_size = 200 # ms
model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")

import soundfile

wav_file = f"{model.model_path}/example/vad_example.wav"
speech, sample_rate = soundfile.read(wav_file)
chunk_stride = int(chunk_size * sample_rate / 1000)

cache = {}
total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)
for i in range(total_chunk_num):
    speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
    is_final = i == total_chunk_num - 1
    res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size)
    if len(res[0]["value"]):
        print(res)
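
Note: in streaming mode the result for a chunk is typically empty when no segment boundary falls inside it, a complete segment such as [[beg, end]], or a partial segment where -1 marks the side not yet detected ([[beg, -1]] for a start only, [[-1, end]] for an end only), with times in milliseconds; please check the FunASR docs for the exact output format of your model version.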

Punctuation Restoration

from funasr import AutoModel

model = AutoModel(model="ct-punc", model_revision="v2.0.4")
res = model.generate(input="那今天的会就到这里吧 happy new year 明年见")
print(res)

Timestamp Prediction

from funasr import AutoModel

model = AutoModel(model="fa-zh", model_revision="v2.0.4")
wav_file = f"{model.model_path}/example/asr_example.wav"
text_file = f"{model.model_path}/example/text.txt"
res = model.generate(input=(wav_file, text_file), data_type=("sound", "text"))
print(res)

For more examples, please refer to the docs.

Deployment Service

FunASR supports deploying pre-trained or further fine-tuned models for service. Currently, it supports the following types of service deployment:

  • File transcription service, Mandarin, CPU version, done
  • The real-time transcription service, Mandarin (CPU), done
  • File transcription service, English, CPU version, done
  • File transcription service, Mandarin, GPU version, in progress
  • and more.

For more detailed information, please refer to the service deployment documentation.

Community Communication

If you encounter problems in use, you can raise issues directly on the GitHub page.

You can also scan the following DingTalk group or WeChat group QR code to join the community group for communication and discussion.

(DingTalk group QR code | WeChat group QR code)

Contributors

The contributors can be found in the contributors list.

License

This project is licensed under the MIT License. FunASR also contains various third-party components and some code modified from other repos under other open source licenses. The use of pretrained models is subject to the model license.

Citations

@inproceedings{gao2023funasr,
  author={Zhifu Gao and Zerui Li and Jiaming Wang and Haoneng Luo and Xian Shi and Mengzhe Chen and Yabin Li and Lingyun Zuo and Zhihao Du and Zhangyu Xiao and Shiliang Zhang},
  title={FunASR: A Fundamental End-to-End Speech Recognition Toolkit},
  year={2023},
  booktitle={INTERSPEECH},
}
@inproceedings{An2023bat,
  author={Keyu An and Xian Shi and Shiliang Zhang},
  title={BAT: Boundary aware transducer for memory-efficient and low-latency ASR},
  year={2023},
  booktitle={INTERSPEECH},
}
@inproceedings{gao22b_interspeech,
  author={Zhifu Gao and ShiLiang Zhang and Ian McLoughlin and Zhijie Yan},
  title={Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition},
  year={2022},
  booktitle={Proc. Interspeech 2022},
  pages={2063--2067},
  doi={10.21437/Interspeech.2022-9996}
}
@inproceedings{shi2023seaco,
  author={Xian Shi and Yexin Yang and Zerui Li and Yanni Chen and Zhifu Gao and Shiliang Zhang},
  title={SeACo-Paraformer: A Non-Autoregressive ASR System with Flexible and Effective Hotword Customization Ability},
  year={2023},
  booktitle={ICASSP2024}
}
