Mirror of https://github.com/modelscope/FunASR, synced 2025-09-15 14:48:36 +08:00
ModelScope Model
How to fine-tune and run inference with a pretrained ModelScope model
Finetune
- Modify the fine-tuning parameters in `conf/train_asr_uniasr_40e1_12d1_20e2_12d2_1280_320_lfr6.yaml`
- Set the following parameters in `modelscope_common_finetune.sh`:
  - `dataset`: the dataset directory; it must contain `train/wav.scp` and `train/text`, and may optionally contain `dev/wav.scp`, `dev/text`, `test/wav.scp`, and `test/text`
  - `tag`: the experiment tag
  - `init_model_name`: `speech_UniASR_asr_2pass-zh-cn-8k-common-vocab3445-pytorch-online` # the pretrained model, downloaded from ModelScope during fine-tuning
- Then run the pipeline to fine-tune with the model downloaded from ModelScope:

sh ./modelscope_common_finetune.sh
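The `dataset` layout above can be sketched as follows. The utterance IDs, audio paths, and transcripts here are illustrative assumptions; the format itself is the Kaldi-style convention of one `<utt-id> <value>` pair per line, with `wav.scp` and `text` sharing the same IDs:

```shell
# Sketch of the expected dataset layout (IDs, paths, and transcripts are made up).
# wav.scp: <utt-id> <path-to-wav>
# text:    <utt-id> <transcript>
dataset=./data
mkdir -p ${dataset}/train

cat > ${dataset}/train/wav.scp <<'EOF'
utt001 /path/to/audio/utt001.wav
utt002 /path/to/audio/utt002.wav
EOF

cat > ${dataset}/train/text <<'EOF'
utt001 今天天气怎么样
utt002 打开空调
EOF

# Sanity check: wav.scp and text should cover the same utterance IDs.
ids_wav=$(awk '{print $1}' ${dataset}/train/wav.scp)
ids_txt=$(awk '{print $1}' ${dataset}/train/text)
[ "${ids_wav}" = "${ids_txt}" ] && echo "IDs match"
```

The optional `dev/` and `test/` subdirectories follow the same two-file layout.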
Inference
You can also run inference directly with a pretrained model, without fine-tuning first.
- Set the following parameters in `modelscope_common_infer.sh`:
  - `data_dir`: the wav list directory; inference reads `${data_dir}/wav.scp`
  - `exp_dir`: the directory for the inference results
  - `model_name`: `speech_UniASR_asr_2pass-zh-cn-8k-common-vocab3445-pytorch-online` # the pretrained model, downloaded from ModelScope
- Then run the inference pipeline:

sh ./modelscope_common_infer.sh
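A minimal sketch of the inference setup, assuming illustrative directory names (`./infer_data`, `./exp/infer_results`) and a made-up audio path; only the `${data_dir}/wav.scp` filename and its `<utt-id> <wav-path>` line format come from the script's documented inputs:

```shell
# Sketch only: data_dir, exp_dir, and the audio path are assumptions.
data_dir=./infer_data
exp_dir=./exp/infer_results
mkdir -p ${data_dir} ${exp_dir}

# Inference reads ${data_dir}/wav.scp, one "<utt-id> <wav-path>" pair per line.
cat > ${data_dir}/wav.scp <<'EOF'
test001 /path/to/audio/test001.wav
EOF

# Run the script once wav.scp points at real audio, e.g.:
# sh ./modelscope_common_infer.sh
echo "prepared $(wc -l < ${data_dir}/wav.scp | tr -d ' ') wav.scp entry"
```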