
(简体中文|English)

FunASR: A Fundamental End-to-End Speech Recognition Toolkit

FunASR aims to build a bridge between academic research and industrial applications of speech recognition. By supporting the training and fine-tuning of industrial-grade speech recognition models released on ModelScope, it lets researchers and developers conduct research on and production of speech recognition models more conveniently, and promotes the development of the speech recognition ecosystem. ASR for Fun!

News | Highlights | Installation | Quick Start | Runtime | Model Zoo | Contact

What's new:

FunASR runtime

  • 2023.07.03: We have released FunASR runtime-SDK-0.1.0; the file transcription service (Mandarin) is now supported (ZH/EN)

Multi-Channel Multi-Party Meeting Transcription 2.0 (M2MeT2.0) Challenge

For challenge details, please refer to the announcement (CN/EN)


Highlights

  • FunASR is a fundamental speech recognition toolkit that offers a variety of features, including speech recognition (ASR), voice activity detection (VAD), punctuation restoration, language models, speaker verification, speaker diarization, and multi-talker ASR.
  • We have released a vast collection of academic and industrial pretrained models on ModelScope, which can be accessed through our Model Zoo. The representative Paraformer-large model has achieved SOTA performance on many speech recognition tasks.
  • FunASR offers a user-friendly pipeline for fine-tuning pretrained models from ModelScope (see the sketch after this list). Additionally, the optimized dataloader in FunASR enables faster training on large-scale datasets, improving efficiency for researchers and practitioners.
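
As a taste of that fine-tuning pipeline, here is a minimal sketch using the ModelScope trainer interface. The trainer name and argument set are assumptions for illustration, and data_dir/work_dir are placeholder paths; please consult the fine-tuning docs for the exact usage.

from modelscope.metainfo import Trainers
from modelscope.trainers import build_trainer

# Assumed trainer interface; see the fine-tuning docs for the authoritative arguments.
kwargs = dict(
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
    data_dir='./data',        # placeholder: your prepared train/dev data
    work_dir='./checkpoint',  # placeholder: output directory for checkpoints
)
trainer = build_trainer(Trainers.speech_asr_trainer, default_args=kwargs)
trainer.train()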

Installation

Install from pip

pip3 install -U funasr
# For users in China, you can install with the following command:
# pip3 install -U funasr -i https://mirror.sjtu.edu.cn/pypi/web/simple

Or install from source code

git clone https://github.com/alibaba/FunASR.git && cd FunASR
pip3 install -e ./
# For users in China, you can install with the following command:
# pip3 install -e ./ -i https://mirror.sjtu.edu.cn/pypi/web/simple

If you want to use the pretrained models from ModelScope, you should also install the modelscope package:

pip3 install -U modelscope
# For users in China, you can install with the following command:
# pip3 install -U modelscope -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html -i https://mirror.sjtu.edu.cn/pypi/web/simple
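
To confirm the installation, here is a quick sanity check; it assumes both packages expose a __version__ attribute, as is conventional for Python packages.

import funasr
import modelscope
# Print the installed versions of both packages.
print(funasr.__version__, modelscope.__version__)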

For more details, please refer to the installation guide

Quick Start

You can use FunASR in the following ways:

  • Service Deployment SDK
  • Industrial Model Egs
  • Academic Model Egs

Service Deployment SDK

Python version Example

This example supports real-time streaming speech recognition, uses non-streaming models to correct the recognition results, and outputs text with punctuation. Currently, only a single client is supported; for multi-concurrency, please refer to the C++ version of the service deployment SDK below.

Server Deployment
cd funasr/runtime/python/websocket
python funasr_wss_server.py --port 10095
Client Testing
python funasr_wss_client.py --host "127.0.0.1" --port 10095 --mode 2pass --chunk_size "5,10,5"
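
For reference, the sketch below shows the general shape of such a streaming client. The handshake fields (mode, chunk_size, is_speaking) and the raw-PCM framing are assumptions modeled on funasr_wss_client.py, which remains the authoritative reference.

import asyncio
import json
import websockets  # pip3 install websockets

async def stream_file(path, uri='ws://127.0.0.1:10095'):
    async with websockets.connect(uri) as ws:
        # Assumed handshake: send the decoding config as JSON before any audio.
        await ws.send(json.dumps({'mode': '2pass', 'chunk_size': [5, 10, 5],
                                  'wav_name': 'demo', 'is_speaking': True}))
        with open(path, 'rb') as f:            # assumes 16 kHz, 16-bit mono PCM
            while chunk := f.read(3200):       # ~100 ms of audio per message
                await ws.send(chunk)
        await ws.send(json.dumps({'is_speaking': False}))  # end-of-stream marker
        print(await ws.recv())                 # a recognition result (JSON text)

asyncio.run(stream_file('asr_example.wav'))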

For more examples, please refer to docs.

C++ version Example

Currently, the offline file transcription service (CPU) is supported, and it can handle hundreds of concurrent requests.

Server Deployment

You can use the following command to complete the deployment with one click:

curl -O https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/shell/funasr-runtime-deploy-offline-cpu-zh.sh
sudo bash funasr-runtime-deploy-offline-cpu-zh.sh install --workspace ./funasr-runtime-resources
Client Testing
python3 funasr_wss_client.py --host "127.0.0.1" --port 10095 --mode offline --audio_in "../audio/asr_example.wav"

For more examples, please refer to docs.

Industrial Model Egs

If you want to use the pre-trained industrial models from ModelScope for inference or fine-tuning, you can refer to the following example:

from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
)

rec_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav')
print(rec_result)
# {'text': '欢迎大家来体验达摩院推出的语音识别模型'}
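
The same pipeline interface covers the other tasks listed in the highlights. As an example, a VAD pipeline can be built the same way; the model ID below comes from the Model Zoo, and the exact output format is an assumption, so please verify both on the model card.

from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Voice activity detection through the same pipeline API.
vad_pipeline = pipeline(
    task=Tasks.voice_activity_detection,
    model='damo/speech_fsmn_vad_zh-cn-16k-common-pytorch',
)

segments = vad_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav')
print(segments)  # expected: speech segments as [start_ms, end_ms] pairs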

More examples can be found in docs.

Academic Model Egs

If you want to train from scratch, usually for academic models, you can start training and inference with the following command:

cd egs/aishell/paraformer
. ./run.sh --CUDA_VISIBLE_DEVICES="0,1" --gpu_num=2

More examples can be found in docs.

Contact

If you have any questions about FunASR, please contact us via:

DingTalk group | WeChat group

Contributors

Acknowledge

  1. We borrowed a lot of code from Kaldi for data preparation.
  2. We borrowed a lot of code from ESPnet. FunASR follows the training and fine-tuning pipelines of ESPnet.
  3. We referred to WeNet when building the dataloader for large-scale data training.
  4. We acknowledge ChinaTelecom for contributing the VAD runtime.
  5. We acknowledge RapidAI for contributing the Paraformer and CT_Transformer-punc runtime.
  6. We acknowledge AiHealthx for contributing the WebSocket service and HTML5 client.

License

This project is licensed under the MIT License. FunASR also contains various third-party components and some code modified from other repos under other open-source licenses. The use of pretrained models is subject to the model license.

Stargazers over time


Citations

@inproceedings{gao2023funasr,
  author={Zhifu Gao and Zerui Li and Jiaming Wang and Haoneng Luo and Xian Shi and Mengzhe Chen and Yabin Li and Lingyun Zuo and Zhihao Du and Zhangyu Xiao and Shiliang Zhang},
  title={FunASR: A Fundamental End-to-End Speech Recognition Toolkit},
  year={2023},
  booktitle={INTERSPEECH},
}
@inproceedings{gao22b_interspeech,
  author={Zhifu Gao and ShiLiang Zhang and Ian McLoughlin and Zhijie Yan},
  title={{Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition}},
  year={2022},
  booktitle={Proc. Interspeech 2022},
  pages={2063--2067},
  doi={10.21437/Interspeech.2022-9996}
}