(简体中文|English)
# FunASR: A Fundamental End-to-End Speech Recognition Toolkit
FunASR aims to build a bridge between academic research and industrial applications of speech recognition. By supporting the training and fine-tuning of industrial-grade speech recognition models released on ModelScope, it lets researchers and developers conduct research on and production of speech recognition models more conveniently, promoting the growth of the speech recognition ecosystem. ASR for Fun!
News | Highlights | Installation | Quick Start | Runtime | Model Zoo | Contact
## What's new
### FunASR runtime
- 2023.07.03: We have released FunASR runtime-SDK-0.1.0; the file transcription service (Mandarin) is now supported (ZH/EN).
### Multi-Channel Multi-Party Meeting Transcription 2.0 (M2MeT2.0) Challenge
- For challenge details, refer to the announcement (CN/EN).
### Speech Recognition

#### Academic Models
- Encoder-Decoder Models (AED): Transformer, Conformer, Branchformer
- Transducer Models (RNNT): RNNT streaming, BAT streaming/non-streaming
- Non-autoregressive Model (NAR): Paraformer
- Multi-speaker recognition model: MFCCA

#### Industrial-Level Models
- Paraformer Models (Mandarin): Paraformer-large, Paraformer-large-long, Paraformer-large streaming, Paraformer-large-contextual
- Conformer Models (English): Conformer
- UniASR streaming/offline unifying models: 16k UniASR Burmese, 16k UniASR Hebrew, 16k UniASR Urdu, 8k UniASR Mandarin financial domain, 16k UniASR Mandarin audio-visual domain, Southern Fujian Dialect model, French model, German model, Vietnamese model, Persian model

### Speaker Recognition

### Punctuation Restoration
- Chinese Punctuation Model: CT-Transformer, CT-Transformer streaming (see the usage sketch below)

### Endpoint Detection

### Timestamp Prediction
- Character-level FA Model: TP-Aligner
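
As a taste of how the models above are consumed, here is a minimal sketch that runs the CT-Transformer punctuation model through the ModelScope pipeline. The task constant, model ID, and `text_in` argument follow the ModelScope model zoo documentation; treat them as assumptions and verify them against the version you install.

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Punctuation restoration with CT-Transformer. The model ID and the
# text_in argument are assumed from the ModelScope model zoo docs.
punc_pipeline = pipeline(
    task=Tasks.punctuation,
    model='damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch',
)
result = punc_pipeline(text_in='我们都是木头人不会讲话不会动')
print(result)  # punctuated text
```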
## Highlights
- FunASR is a fundamental speech recognition toolkit that offers a variety of features, including speech recognition (ASR), voice activity detection (VAD), punctuation restoration, language models, speaker verification, speaker diarization, and multi-talker ASR.
- We have released a vast collection of academic and industrial pretrained models on ModelScope, which can be accessed through our Model Zoo. The representative Paraformer-large model has achieved SOTA performance on many speech recognition tasks.
- FunASR offers a user-friendly pipeline for fine-tuning pretrained models from ModelScope. In addition, its optimized dataloader enables faster training on large-scale datasets, improving efficiency for researchers and practitioners. A combined ASR + VAD + punctuation example is sketched below.
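
To make the first highlight concrete, here is a minimal sketch combining ASR, VAD, and punctuation restoration in a single ModelScope pipeline via the long-audio Paraformer model. The model ID is the one published in the Model Zoo, and the local file name is hypothetical; verify both against the version you use.

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Paraformer-large-long bundles ASR with VAD and punctuation restoration,
# so long recordings are segmented and punctuated in a single call.
# The model ID is assumed from the ModelScope Model Zoo.
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
)
rec_result = inference_pipeline(audio_in='long_audio_example.wav')  # hypothetical local file
print(rec_result['text'])
```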
## Installation
Install from pip
```shell
pip3 install -U funasr
# For users in China, you can install from a mirror:
# pip3 install -U funasr -i https://mirror.sjtu.edu.cn/pypi/web/simple
```
Or install from source code
```shell
git clone https://github.com/alibaba/FunASR.git && cd FunASR
pip3 install -e ./
# For users in China, you can install from a mirror:
# pip3 install -e ./ -i https://mirror.sjtu.edu.cn/pypi/web/simple
```
If you want to use the pretrained models from ModelScope, you should also install ModelScope:
```shell
pip3 install -U modelscope
# For users in China, you can install from a mirror:
# pip3 install -U modelscope -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html -i https://mirror.sjtu.edu.cn/pypi/web/simple
```
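
After installation, a quick import check catches most environment problems early. Both packages expose a `__version__` attribute in current releases; treat that as an assumption for older ones.

```python
# Sanity check: both packages should import cleanly after installation.
import funasr
import modelscope

print(funasr.__version__)
print(modelscope.__version__)
```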
For more details, please refer to the installation docs.
## Quick Start
You can use FunASR in the following ways:
- Service Deployment SDK
- Industrial model egs
- Academic model egs
### Service Deployment SDK
#### Python Version Example
It supports real-time streaming speech recognition, uses non-streaming models for error correction, and outputs text with punctuation. Currently only a single client is supported; for multi-concurrency, please refer to the C++ version service deployment SDK below.
**Server Deployment**
```shell
cd funasr/runtime/python/websocket
python funasr_wss_server.py --port 10095
```
**Client Testing**
```shell
python funasr_wss_client.py --host "127.0.0.1" --port 10095 --mode 2pass --chunk_size "5,10,5"
```
For more examples, please refer to the docs.
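
For orientation, here is a stripped-down streaming client built on the `websockets` package. The handshake fields (`mode`, `chunk_size`, `is_speaking`) mirror the shape used by `funasr_wss_client.py`, but the exact message format is an assumption; consult that script for the authoritative protocol.

```python
# Minimal 2pass streaming client sketch. The JSON handshake below is an
# assumption modeled on funasr_wss_client.py; it is not a normative spec.
import asyncio
import json

import websockets  # pip3 install websockets


async def stream_pcm(path: str, uri: str = "ws://127.0.0.1:10095"):
    async with websockets.connect(uri) as ws:
        # Announce mode and chunking before sending audio (field names assumed).
        await ws.send(json.dumps({
            "mode": "2pass",
            "chunk_size": [5, 10, 5],
            "wav_name": path,
            "is_speaking": True,
        }))
        with open(path, "rb") as f:
            while chunk := f.read(3200):  # ~100 ms of 16 kHz, 16-bit mono PCM
                await ws.send(chunk)
        await ws.send(json.dumps({"is_speaking": False}))  # end of stream
        async for message in ws:  # partial (streaming) and final (2pass) results
            print(message)


asyncio.run(stream_pcm("asr_example.pcm"))
```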
#### C++ Version Example
Currently, the offline file transcription service (CPU) is supported, and it can handle hundreds of concurrent request channels.
**Server Deployment**
You can use the following commands to complete the deployment in one step:
```shell
curl -O https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/shell/funasr-runtime-deploy-offline-cpu-zh.sh
sudo bash funasr-runtime-deploy-offline-cpu-zh.sh install --workspace ./funasr-runtime-resources
```
**Client Testing**
```shell
python3 funasr_wss_client.py --host "127.0.0.1" --port 10095 --mode offline --audio_in "../audio/asr_example.wav"
```
For more examples, please refer to the docs.
### Industrial Model Egs
If you want to use the pre-trained industrial models from ModelScope for inference or fine-tuning, you can refer to the following examples (inference first, then a fine-tuning sketch):
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
)
rec_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav')
print(rec_result)
# {'text': '欢迎大家来体验达摩院推出的语音识别模型'}
```
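
For fine-tuning, the same model can be handed to the ModelScope trainer interface. The sketch below follows the pattern in the ModelScope/FunASR fine-tuning docs; the trainer key and the `data_dir`/`work_dir` arguments are assumptions to verify against your installed version.

```python
# Fine-tuning sketch via the ModelScope trainer API. The trainer name and
# the kwargs are assumptions based on the ModelScope fine-tuning docs.
from modelscope.metainfo import Trainers
from modelscope.trainers import build_trainer

kwargs = dict(
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
    data_dir='./example_data',  # hypothetical dir containing wav.scp and text
    work_dir='./checkpoint',    # fine-tuned checkpoints are written here
)
trainer = build_trainer(Trainers.speech_asr_trainer, default_args=kwargs)
trainer.train()
```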
More examples can be found in the docs.
### Academic Model Egs
If you want to train from scratch, usually for academic models, you can start training and inference with the following commands:
```shell
cd egs/aishell/paraformer
. ./run.sh --CUDA_VISIBLE_DEVICES="0,1" --gpu_num=2
```
More examples can be found in the docs.
## Contact
If you have any questions about FunASR, please contact us via:
- Email: funasr@list.alibaba-inc.com
| DingTalk group | WeChat group |
|---|---|
| (QR code) | (QR code) |
## Contributors
## Acknowledge
- We borrowed a lot of code from Kaldi for data preparation.
- We borrowed a lot of code from ESPnet. FunASR follows the training and fine-tuning pipelines of ESPnet.
- We referred to Wenet for building the dataloader for large-scale data training.
- We acknowledge ChinaTelecom for contributing the VAD runtime.
- We acknowledge RapidAI for contributing the Paraformer and CT_Transformer-punc runtime.
- We acknowledge AiHealthx for contributing the websocket service and HTML5 demo.
## License
This project is licensed under the MIT License. FunASR also contains various third-party components and some code modified from other repos under other open-source licenses. The use of pretrained models is subject to the model license.
## Stargazers over time
## Citations
```bibtex
@inproceedings{gao2023funasr,
  author={Zhifu Gao and Zerui Li and Jiaming Wang and Haoneng Luo and Xian Shi and Mengzhe Chen and Yabin Li and Lingyun Zuo and Zhihao Du and Zhangyu Xiao and Shiliang Zhang},
  title={FunASR: A Fundamental End-to-End Speech Recognition Toolkit},
  year={2023},
  booktitle={INTERSPEECH},
}
@inproceedings{gao22b_interspeech,
  author={Zhifu Gao and Shiliang Zhang and Ian McLoughlin and Zhijie Yan},
  title={{Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition}},
  year={2022},
  booktitle={Proc. Interspeech 2022},
  pages={2063--2067},
  doi={10.21437/Interspeech.2022-9996},
}
```