ModelScope Model

How to finetune and infer using a pretrained ModelScope Model

Finetune

  • Modify the finetune-related training parameters in conf/train_asr_paraformer_sanm_50e_16d_2048_512_lfr6.yaml
  • Set the following parameters in modelscope_common_finetune.sh
    • dataset: the dataset directory; it must contain train/wav.scp and train/text, while dev/wav.scp, dev/text, test/wav.scp, and test/text are optional (see the example layout after this list)
    • tag: the experiment tag
    • init_model_name: speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch, the pretrained model, which is downloaded from ModelScope during finetuning
  • Then run the pipeline to finetune with the model downloaded from ModelScope:
    sh ./modelscope_common_finetune.sh
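
The wav.scp and text files use the common Kaldi-style layout: one utterance per line, an utterance ID followed by the audio path (wav.scp) or the transcript (text). A minimal sketch of the expected dataset directory; the utterance IDs, paths, and transcripts below are purely illustrative:

    ${dataset}/
    ├── train/
    │   ├── wav.scp      # <utt_id> <path to wav>, e.g. "utt_0001 /data/wav/utt_0001.wav"
    │   └── text         # <utt_id> <transcript>, e.g. "utt_0001 今天天气怎么样"
    ├── dev/             # optional, same format as train/
    └── test/            # optional, same format as train/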

Inference

Alternatively, you can use a pretrained or finetuned model for inference directly (see modelscope_common_infer_after_finetune.sh for decoding with a finetuned model).

  • Set the following parameters in modelscope_common_infer.sh
    • data_dir: the data directory containing the wav list, ${data_dir}/wav.scp (see the sketch after this list)
    • exp_dir: the directory where decoding results are written
    • model_name: speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch, the pretrained model downloaded from ModelScope
  • Then run the inference pipeline with:
    sh ./modelscope_common_infer.sh
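
As a concrete illustration, the variables listed above might be set inside modelscope_common_infer.sh roughly as follows; only the variable names come from the list above, and the paths are placeholders to adapt to your setup:

    # illustrative values only; edit the real script to match your data
    data_dir="./data/test"                   # must contain ${data_dir}/wav.scp
    exp_dir="./exp/paraformer_decode"        # decoding results are written here
    model_name="speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"

After editing the script, run it as shown above; the recognition results are written under exp_dir.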