## Acknowledgements
1. We borrowed a lot of code from [Kaldi](http://kaldi-asr.org/) for data preparation.
2. We borrowed a lot of code from [ESPnet](https://github.com/espnet/espnet). FunASR follows the training and finetuning pipelines of ESPnet.
3. We referred to [Wenet](https://github.com/wenet-e2e/wenet) when building the dataloader for large-scale data training.
4. We acknowledge [DeepScience](https://www.deepscience.cn) for contributing the gRPC service.
## License
This project is licensed under the [MIT License](https://opensource.org/licenses/MIT). FunASR also contains various third-party components and some code modified from other repositories under other open-source licenses.
## Citations
```bibtex
@article{gao2020universal,
  title={Universal ASR: Unifying Streaming and Non-Streaming ASR Using a Single Encoder-Decoder Model},
  author={Gao, Zhifu and Zhang, Shiliang and Lei, Ming and McLoughlin, Ian},
  journal={arXiv preprint arXiv:2010.14099},
  year={2020}
}
@inproceedings{gao2022paraformer,
  title={Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition},
  author={Gao, Zhifu and Zhang, Shiliang and McLoughlin, Ian and Yan, Zhijie},
  booktitle={INTERSPEECH},
  year={2022}
}
@article{Shi2023AchievingTP,
  title={Achieving Timestamp Prediction While Recognizing with Non-Autoregressive End-to-End ASR Model},
  author={Shi, Xian and Chen, Yanni and Zhang, Shiliang and Yan, Zhijie},
  journal={arXiv preprint arXiv:2301.12343},
  year={2023}
}
```