<a name="whats-new"></a>

## What's New

### FunASR runtime

- 2023.07.03: We have released FunASR runtime-SDK-0.1.0; the file transcription service (Mandarin) is now supported ([ZH](funasr/runtime/readme_cn.md)/[EN](funasr/runtime/readme.md)).

### Multi-Channel Multi-Party Meeting Transcription 2.0 (M2MeT2.0) Challenge

We are pleased to announce that the M2MeT2.0 challenge has been accepted as an ASRU 2023 challenge special session, and registration is now open. The baseline system is built on FunASR and is provided as a recipe for the AliMeeting corpus. For more details, see the M2MeT2.0 guidance ([CN](https://alibaba-damo-academy.github.io/FunASR/m2met2_cn/index.html)/[EN](https://alibaba-damo-academy.github.io/FunASR/m2met2/index.html)).

### Speech Recognition

- Academic Models
  - Encoder-Decoder Models (AED): [Transformer](egs/aishell/transformer), [Conformer](egs/aishell/conformer), [Branchformer](egs/aishell/branchformer)
  - Transducer Models (RNNT): [RNNT streaming](egs/aishell/rnnt), [BAT streaming/non-streaming](egs/aishell/bat)
  - Non-autoregressive Models (NAR): [Paraformer](egs/aishell/paraformer)
  - Multi-speaker recognition model: [MFCCA](egs_modelscope/asr/mfcca)

- Industrial-level Models
  - Paraformer Models (Mandarin): [Paraformer-large](egs_modelscope/asr/paraformer/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch), [Paraformer-large-long](egs_modelscope/asr_vad_punc/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch), [Paraformer-large streaming](egs_modelscope/asr/paraformer/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online), [Paraformer-large-contextual](egs_modelscope/asr/paraformer/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404)
  - Conformer Models (English): [Conformer]()
  - UniASR streaming/offline unified models: [16k UniASR Burmese](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-my-16k-common-vocab696-pytorch/summary), [16k UniASR Hebrew](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-he-16k-common-vocab1085-pytorch/summary), [16k UniASR Urdu](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-ur-16k-common-vocab877-pytorch/summary), [8k UniASR Mandarin financial domain](https://www.modelscope.cn/models/damo/speech_UniASR_asr_2pass-zh-cn-8k-finance-vocab3445-online/summary), [16k UniASR Mandarin audio-visual domain](https://www.modelscope.cn/models/damo/speech_UniASR_asr_2pass-zh-cn-16k-audio_and_video-vocab3445-online/summary), [Southern Fujian Dialect model](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-minnan-16k-common-vocab3825/summary), [French model](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-fr-16k-common-vocab3472-tensorflow1-online/summary), [German model](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-de-16k-common-vocab3690-tensorflow1-online/summary), [Vietnamese model](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-vi-16k-common-vocab1001-pytorch-online/summary), [Persian model](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-fa-16k-common-vocab1257-pytorch-online/summary)

- Speaker Recognition
  - Speaker Verification Model: [xvector](egs_modelscope/speaker_verification)
  - Speaker Diarization Model: [SOND](egs/callhome/diarization/sond)

- Punctuation Restoration
  - Chinese Punctuation Model: [CT-Transformer](egs_modelscope/punctuation/punc_ct-transformer_zh-cn-common-vocab272727-pytorch), [CT-Transformer streaming](egs_modelscope/punctuation/punc_ct-transformer_zh-cn-common-vadrealtime-vocab272727)

- Endpoint Detection
  - [FSMN-VAD](egs_modelscope/vad/speech_fsmn_vad_zh-cn-16k-common)

- Timestamp Prediction
  - Character-level FA Model: [TP-Aligner](egs_modelscope/tp/speech_timestamp_prediction-v1-16k-offline)

For the release notes, please refer to [news](https://github.com/alibaba-damo-academy/FunASR/releases).

<a name="highlights"></a>

## Highlights

For more details, please refer to [installation](https://alibaba-damo-academy.github.io/FunASR/en/installation/installation.html).

<a name="quick-start"></a>

## Quick Start

You can use FunASR in the following ways:

- Service Deployment SDK
- Industrial model egs
- Academic model egs

### Service Deployment SDK

#### Python version Example

Supports real-time streaming speech recognition, uses non-streaming models for error correction, and outputs text with punctuation. Currently, only a single client is supported; for multiple concurrent requests, please refer to the C++ version service deployment SDK below.

##### Server Deployment

```shell
cd funasr/runtime/python/websocket
python funasr_wss_server.py --port 10095
```

##### Client Testing

```shell
python funasr_wss_client.py --host "127.0.0.1" --port 10095 --mode 2pass --chunk_size "5,10,5"
# batch mode over a wav.scp list:
# python funasr_wss_client.py --host "127.0.0.1" --port 10095 --mode 2pass --chunk_size "8,8,4" --audio_in "./data/wav.scp" --output_dir "./results"
```
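
If you would rather drive the service from your own code than use the bundled client, below is a minimal, hypothetical sketch of a scripted 2pass client. The handshake fields (`mode`, `chunk_size`, `wav_name`, `is_speaking`) are assumptions inferred from the flags of `funasr_wss_client.py`, not a documented protocol, so verify them against that script before relying on this.

```python
# Minimal 2pass websocket client sketch. The JSON handshake fields below are
# assumptions mirroring funasr_wss_client.py's command-line flags.
import asyncio
import json

import websockets  # pip install websockets


async def transcribe(uri: str, pcm_path: str) -> None:
    # Expects raw 16 kHz, 16-bit, mono PCM, e.g. produced with:
    #   ffmpeg -i input.wav -f s16le -ar 16000 -ac 1 input.pcm
    with open(pcm_path, "rb") as f:
        audio = f.read()

    async with websockets.connect(uri) as ws:
        # Announce mode and chunking before streaming audio (assumed handshake).
        await ws.send(json.dumps({
            "mode": "2pass",
            "chunk_size": [5, 10, 5],
            "wav_name": "demo",
            "is_speaking": True,
        }))

        step = 3200  # 3200 bytes = 100 ms of 16 kHz 16-bit mono audio
        for i in range(0, len(audio), step):
            await ws.send(audio[i:i + step])
            await asyncio.sleep(0.1)  # pace the stream like a live microphone

        await ws.send(json.dumps({"is_speaking": False}))  # end of utterance

        # Print partial (streaming) and final (2pass-corrected) results until
        # the server closes the connection (Ctrl-C if it keeps it open).
        async for message in ws:
            print(message)


asyncio.run(transcribe("ws://127.0.0.1:10095", "input.pcm"))
```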

For more examples, please refer to [docs](https://alibaba-damo-academy.github.io/FunASR/en/runtime/websocket_python.html#id2).

#### C++ version Example

Currently, the offline file transcription service (CPU) is supported, and it can handle hundreds of concurrent request channels.

##### Server Deployment

You can use the following command to complete the deployment with one click:

```shell
curl -O https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/shell/funasr-runtime-deploy-offline-cpu-zh.sh
sudo bash funasr-runtime-deploy-offline-cpu-zh.sh install --workspace ./funasr-runtime-resources
```

##### Client Testing

```shell
python3 funasr_wss_client.py --host "127.0.0.1" --port 10095 --mode offline --audio_in "../audio/asr_example.wav"
```
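
Since the C++ service is designed for many concurrent channels, a quick way to smoke-test concurrency is to launch several copies of the bundled client in parallel. This is only an illustrative harness around the command above; the client count is arbitrary.

```python
# Launch several offline clients in parallel to exercise concurrent channels.
import subprocess

NUM_CLIENTS = 8  # arbitrary; the service is built for hundreds of channels

procs = [
    subprocess.Popen([
        "python3", "funasr_wss_client.py",
        "--host", "127.0.0.1", "--port", "10095",
        "--mode", "offline",
        "--audio_in", "../audio/asr_example.wav",
    ])
    for _ in range(NUM_CLIENTS)
]

# Wait for all transcriptions to finish and report exit codes.
for p in procs:
    p.wait()
print("all clients finished:", [p.returncode for p in procs])
```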

For more examples, please refer to [docs](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/docs/SDK_tutorial_zh.md).

### Industrial Model Egs

If you want to use the pre-trained industrial models from ModelScope for inference or fine-tuning, you can refer to the following example:

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Paraformer-large, from the Industrial-level Models list above
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch')

rec_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav')
print(rec_result)
# {'text': '欢迎大家来体验达摩院推出的语音识别模型'}
```
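
The same pipeline pattern applies to the other industrial models listed above. As an illustration, here is a hedged sketch for the CT-Transformer punctuation model; the `Tasks.punctuation` task name and the `text_in` keyword are assumptions to be checked against the model card on ModelScope.

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# CT-Transformer punctuation model (model id from the list above);
# Tasks.punctuation and text_in are assumed from the ModelScope model card.
punc_pipeline = pipeline(
    task=Tasks.punctuation,
    model='damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch')

result = punc_pipeline(text_in='欢迎大家来体验达摩院推出的语音识别模型')
print(result)  # punctuated text, e.g. {'text': '欢迎大家来体验达摩院推出的语音识别模型。'}
```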

More examples can be found in the [docs](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_pipeline/quick_start.html).

### Academic model egs

If you want to train from scratch, usually for academic models, you can start training and inference with the following command:

```shell
cd egs/aishell/paraformer
. ./run.sh --CUDA_VISIBLE_DEVICES="0,1" --gpu_num=2
```

More examples can be found in the [docs](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_pipeline/quick_start.html).

<a name="contact"></a>

## Contact

FunASR hopes to build a bridge between academic research and industrial applications of speech recognition.

- Chinese general-purpose models: [Paraformer-large](egs_modelscope/asr/paraformer/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch), [Paraformer-large long-audio version](egs_modelscope/asr_vad_punc/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch), [Paraformer-large streaming version](egs_modelscope/asr/paraformer/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online)
- Chinese general-purpose hotword model: [Paraformer-large-contextual](egs_modelscope/asr/paraformer/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404)
- English general-purpose model: [Conformer]()
- Streaming/offline unified models: [16k UniASR Southern Fujian Dialect](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-minnan-16k-common-vocab3825/summary), [16k UniASR French](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-fr-16k-common-vocab3472-tensorflow1-online/summary), [16k UniASR German](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-de-16k-common-vocab3690-tensorflow1-online/summary), [16k UniASR Vietnamese](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-vi-16k-common-vocab1001-pytorch-online/summary), [16k UniASR Persian](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-fa-16k-common-vocab1257-pytorch-online/summary), [16k UniASR Burmese](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-my-16k-common-vocab696-pytorch/summary), [16k UniASR Hebrew](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-he-16k-common-vocab1085-pytorch/summary), [16k UniASR Urdu](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-ur-16k-common-vocab877-pytorch/summary), [8k UniASR Mandarin financial domain](https://www.modelscope.cn/models/damo/speech_UniASR_asr_2pass-zh-cn-8k-finance-vocab3445-online/summary), [16k UniASR Mandarin audio-visual domain](https://www.modelscope.cn/models/damo/speech_UniASR_asr_2pass-zh-cn-16k-audio_and_video-vocab3445-online/summary)

### Speaker Recognition

- Speaker Verification Model: [xvector](egs_modelscope/speaker_verification)