FunASR/egs_modelscope/common_uniasr

ModelScope Model

How to fine-tune and run inference with a pretrained ModelScope model

Finetune

  • Modify the fine-tuning parameters in conf/train_asr_uniasr_40e1_12d1_20e2_12d2_1280_320_lfr6.yaml
  • Set the parameters in modelscope_common_finetune.sh:
    • dataset: the dataset directory, which must contain train/wav.scp and train/text; optionally dev/wav.scp, dev/text, test/wav.scp, and test/text
    • tag: experiment tag
    • init_model_name: speech_UniASR_asr_2pass-zh-cn-8k-common-vocab3445-pytorch-online # pre-trained model, downloaded from ModelScope during fine-tuning
  • Then run the pipeline to fine-tune with the model downloaded from ModelScope:
    sh ./modelscope_common_finetune.sh
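
The dataset directory above follows the common Kaldi-style convention: each line of wav.scp maps an utterance ID to an audio path, and each line of text maps the same ID to its transcript. A minimal sketch of preparing such a layout (the utterance IDs, paths, and transcripts are hypothetical examples):

```shell
# Build a minimal training-set layout (hypothetical IDs, paths, transcripts).
mkdir -p dataset/train

# wav.scp: <utterance_id> <path to audio file>
cat > dataset/train/wav.scp <<'EOF'
utt_0001 /data/audio/utt_0001.wav
utt_0002 /data/audio/utt_0002.wav
EOF

# text: <utterance_id> <transcript>
cat > dataset/train/text <<'EOF'
utt_0001 今天天气怎么样
utt_0002 打开空调
EOF

# Sanity check: every utterance in wav.scp should also have a transcript.
join -j 1 <(sort dataset/train/wav.scp) <(sort dataset/train/text) | wc -l
```

The same two-column format applies to the optional dev/ and test/ subdirectories.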

Inference

Alternatively, you can run inference directly, without fine-tuning first.

  • Set the parameters in modelscope_common_infer.sh:
    • data_dir: the directory containing the wav list, ${data_dir}/wav.scp
    • exp_dir: the result path
    • model_name: speech_UniASR_asr_2pass-zh-cn-8k-common-vocab3445-pytorch-online # pre-trained model, downloaded from ModelScope
  • Then run the inference pipeline with:
    sh ./modelscope_common_infer.sh
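
Before launching inference, it can help to sanity-check that ${data_dir}/wav.scp is well-formed. A small sketch (the data_dir value and utterance entry are hypothetical):

```shell
# Sanity-check a wav.scp before inference (data_dir is a hypothetical example).
data_dir=data/test
mkdir -p "$data_dir"
printf 'utt_0001 /data/audio/utt_0001.wav\n' > "$data_dir/wav.scp"

# Every line must have exactly two fields: <utterance_id> <audio path>.
awk 'NF != 2 { print "bad line " NR; exit 1 }' "$data_dir/wav.scp"

# Utterance IDs must be unique.
dups=$(cut -d' ' -f1 "$data_dir/wav.scp" | sort | uniq -d | wc -l)
[ "$dups" -eq 0 ] && echo "wav.scp OK"   # prints "wav.scp OK"
```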