FunASR is a fundamental speech recognition toolkit that offers a variety of features, including speech recognition (ASR), voice activity detection (VAD), punctuation restoration, language modeling, speaker verification, speaker diarization, and multi-talker speech recognition. FunASR provides convenient scripts and tutorials, and supports inference and fine-tuning of pretrained models.

FunASR: A Fundamental End-to-End Speech Recognition Toolkit

FunASR aims to build a bridge between academic research and industrial applications of speech recognition. By supporting the training and fine-tuning of industrial-grade speech recognition models released on ModelScope, it enables researchers and developers to conduct research on and production of speech recognition models more conveniently, and promotes the growth of the speech recognition ecosystem. See the FunASR Model Zoo for the released models.

Release Notes:

2023.1.16, funasr-0.1.6

  • We release a new model, Paraformer-large-long, which integrates the VAD model, ASR, the punctuation model, and timestamp prediction in a single pipeline. The model can take inputs that are several hours long.
  • We release a new type of model, VAD, which detects segments of non-silent speech. It can be freely combined with any ASR model in the Model Zoo.
  • We release a new type of model, Punctuation, which predicts punctuation for ASR results. It can be freely combined with any ASR model in the Model Zoo.
  • We release a new model, Data2vec, an unsupervised pretraining model which can be fine-tuned on ASR and other downstream tasks.
  • We release a new model, Paraformer-Tiny, a lightweight Paraformer model which supports Mandarin command-word recognition.
  • We release a new type of model, SV, which extracts speaker embeddings and performs speaker verification on paired utterances. Speaker diarization will be supported in a future version.
  • We improve the ModelScope pipeline to speed up inference by integrating model building into pipeline building.
  • The ModelScope inference pipeline now supports various audio input types, including wav.scp, wav files, audio bytes, and waveform samples; a usage sketch follows this list.
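As an illustration of the pipeline-based inference described above, here is a minimal sketch using the ModelScope pipeline API. The model ID below is one Paraformer model from the Model Zoo, and 'example.wav' is a placeholder for your own audio; substitute whichever ASR model and input you need.

from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Build the ASR inference pipeline; any ASR model ID from the Model Zoo works here.
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch')

# The pipeline accepts several input types: a wav file path, a wav.scp file,
# raw audio bytes, or waveform samples. 'example.wav' is a placeholder.
rec_result = inference_pipeline(audio_in='example.wav')
print(rec_result)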

Key Features

  • Many typical model architectures are supported, e.g., Transformer, Conformer, Paraformer.
  • We have released a large number of academic and industrial pretrained models on ModelScope.
  • The pretrained model Paraformer-large obtains the best performance on many tasks of the SpeechIO leaderboard.
  • FunASR provides an easy-to-use pipeline to fine-tune pretrained models from ModelScope; see the sketch after this list.
  • Compared to the ESPnet framework, training on large-scale datasets in FunASR is much faster owing to the optimized dataloader.
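As a sketch of the fine-tuning workflow mentioned above: the recipes under egs_modelscope include entry scripts such as infer_after_finetune.py. The directory name below is illustrative, not an actual path; pick the recipe that matches your model.

cd egs_modelscope/<model_recipe>   # illustrative path; choose the recipe for your model
python finetune.py                 # fine-tune the pretrained ModelScope model
python infer_after_finetune.py     # run inference with the fine-tuned model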

Installation

  • Install Conda:
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
sh Miniconda3-latest-Linux-x86_64.sh
source ~/.bashrc
conda create -n funasr python=3.7
conda activate funasr
  • Install PyTorch (version >= 1.7.0):
pip3 install torch torchvision torchaudio

For more versions, please see https://pytorch.org/get-started/locally

  • Install ModelScope:

If you are in mainland China, you can set the pip index to a mirror to speed up downloading.

pip config set global.index-url https://mirror.sjtu.edu.cn/pypi/web/simple

Install or upgrade modelscope.

pip install "modelscope[audio]" --upgrade -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html

For more details about ModelScope, please see the ModelScope installation guide.

  • Install FunASR and other packages:
git clone https://github.com/alibaba/FunASR.git && cd FunASR
pip install --editable ./
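
After installation, a quick sanity check (a minimal sketch, assuming the funasr, modelscope, and torch packages installed above) confirms that everything imports correctly:

python -c "import funasr, modelscope, torch; print(torch.__version__)"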

Contact

If you have any questions about FunASR, please contact us via the DingTalk group or the WeChat group.

Contributors

Acknowledgements

  1. We borrowed a lot of code from Kaldi for data preparation.
  2. We borrowed a lot of code from ESPnet. FunASR follows the training and fine-tuning pipelines of ESPnet.
  3. We referred to WeNet when building the dataloader for large-scale data training.
  4. We acknowledge DeepScience for contributing the gRPC service.

License

This project is licensed under the MIT License. FunASR also contains various third-party components and some code modified from other repositories under other open-source licenses.

Citations

@article{gao2020universal,
  title={Universal ASR: Unifying Streaming and Non-Streaming ASR Using a Single Encoder-Decoder Model},
  author={Gao, Zhifu and Zhang, Shiliang and Lei, Ming and McLoughlin, Ian},
  journal={arXiv preprint arXiv:2010.14099},
  year={2020}
}

@inproceedings{gao2022paraformer,
  title={Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition},
  author={Gao, Zhifu and Zhang, Shiliang and McLoughlin, Ian and Yan, Zhijie},
  booktitle={INTERSPEECH},
  year={2022}
}