Mirror of https://github.com/modelscope/FunASR (synced 2025-09-15 14:48:36 +08:00)

commit 977dc3eb98 (parent eba1fccfa0): docs
@ -178,6 +178,9 @@ text_file = f"{model.model_path}/example/text.txt"
res = model.generate(input=(wav_file, text_file), data_type=("sound", "text"))
print(res)
```
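The hunk above shows only the changed lines. For context, a self-contained version might look like the sketch below; the model id `fa-zh` (FunASR's timestamp-prediction model), the revision, and the wav file name are assumptions, not part of this commit:

```python
from funasr import AutoModel

# Minimal sketch: pair an audio file with its transcript so the model
# can predict timestamps. Model id and revision are assumed.
model = AutoModel(model="fa-zh", model_revision="v2.0.4")

wav_file = f"{model.model_path}/example/asr_example.wav"  # assumed file name
text_file = f"{model.model_path}/example/text.txt"        # from the hunk header

res = model.generate(input=(wav_file, text_file), data_type=("sound", "text"))
print(res)
```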
More examples can be found in the [docs](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining).
[//]: # (FunASR supports inference and fine-tuning of models trained on industrial datasets of tens of thousands of hours; for details, see [modelscope_egs](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_pipeline/quick_start.html). It also supports training and fine-tuning on standard academic datasets; for details, see [egs](https://alibaba-damo-academy.github.io/FunASR/en/academic_recipe/asr_recipe.html). The models cover speech recognition (ASR), voice activity detection (VAD), punctuation restoration, language modeling, speaker verification, speaker separation, and multi-party conversational speech recognition. For the full list, see the [Model Zoo](https://github.com/alibaba-damo-academy/FunASR/blob/main/docs/model_zoo/modelscope_models.md).)
## Deployment Service
@ -182,7 +182,7 @@ text_file = f"{model.model_path}/example/text.txt"
res = model.generate(input=(wav_file, text_file), data_type=("sound", "text"))
print(res)
```
For more detailed usage, see the [examples](examples/industrial_data_pretraining).
For more detailed usage, see the [examples](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining).
<a name="服务部署"></a>
@ -7,5 +7,6 @@ from funasr import AutoModel
model = AutoModel(model="damo/emotion2vec_base", model_revision="v2.0.1")
res = model.generate(input="https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav", output_dir="./outputs")
wav_file = f"{model.model_path}/example/example/test.wav"
res = model.generate(wav_file, output_dir="./outputs", granularity="utterance")
print(res)
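Taken together, the added lines form a runnable utterance-level emotion-recognition example. The inspection loop at the end is illustrative only; the output keys `labels` and `scores` are assumptions based on emotion2vec's documented output and may differ:

```python
from funasr import AutoModel

# Model id and revision come from the hunk above.
model = AutoModel(model="damo/emotion2vec_base", model_revision="v2.0.1")

# Path as written in the commit; replace with any local 16 kHz wav file.
wav_file = f"{model.model_path}/example/example/test.wav"
res = model.generate(wav_file, output_dir="./outputs", granularity="utterance")
print(res)

# Illustrative post-processing (key names assumed): pick the top emotion.
for item in res:
    labels, scores = item.get("labels"), item.get("scores")
    if labels and scores:
        best = max(zip(labels, scores), key=lambda p: p[1])
        print("predicted emotion:", best)
```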