diff --git a/docs/academic_recipe/asr_recipe.md b/docs/academic_recipe/asr_recipe.md
index 5a11dc5ee..9e19c61ef 100644
--- a/docs/academic_recipe/asr_recipe.md
+++ b/docs/academic_recipe/asr_recipe.md
@@ -12,7 +12,7 @@ cd egs/aishell/paraformer
 Then you can directly start the recipe as follows:
 ```sh
 conda activate funasr
-. ./run.sh --CUDA_VISIBLE_DEVICES="0,1" --gpu_num=2
+bash run.sh --CUDA_VISIBLE_DEVICES "0,1" --gpu_num 2
 ```
 
 The training log files are saved in `${exp_dir}/exp/${model_dir}/log/train.log.*`, which can be viewed using the following command:
@@ -264,4 +264,4 @@ Users can use ModelScope for inference and fine-tuning based on a trained academ
 
 ### Decoding by CPU or GPU
 
-We support CPU and GPU decoding. For CPU decoding, set `gpu_inference=false` and `njob` to specific the total number of CPU jobs. For GPU decoding, first set `gpu_inference=true`. Then set `gpuid_list` to specific which GPUs for decoding and `njob` to specific the number of decoding jobs on each GPU.
\ No newline at end of file
+We support CPU and GPU decoding. For CPU decoding, set `gpu_inference=false` and `njob` to specify the total number of CPU jobs. For GPU decoding, first set `gpu_inference=true`, then set `gpuid_list` to specify which GPUs to use for decoding and `njob` to specify the number of decoding jobs on each GPU.
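
For illustration, the decoding settings described in the last hunk (`gpu_inference`, `gpuid_list`, `njob`) might be combined as follows. This is a sketch only: the variable names come from the text above, but passing them as `run.sh` options rather than editing them inside the script is an assumption.

```sh
# Sketch of the decoding settings described above (not taken verbatim from the recipe).

# CPU decoding: disable GPU inference and run 32 decoding jobs in total.
bash run.sh --gpu_inference false --njob 32

# GPU decoding: enable GPU inference, decode on GPUs 0 and 1,
# with 4 decoding jobs per GPU (8 jobs in total).
bash run.sh --gpu_inference true --gpuid_list "0,1" --njob 4
```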