Mirror of https://github.com/modelscope/FunASR (synced 2025-09-15 14:48:36 +08:00)
Merge pull request #1000 from alibaba-damo-academy/dev_lhn
update github io
Commit 0f95934e80
@@ -12,7 +12,7 @@ cd egs/aishell/paraformer
 Then you can directly start the recipe as follows:
 ```sh
 conda activate funasr
-. ./run.sh --CUDA_VISIBLE_DEVICES="0,1" --gpu_num=2
+bash run.sh --CUDA_VISIBLE_DEVICES "0,1" --gpu_num 2
 ```

 The training log files are saved in `${exp_dir}/exp/${model_dir}/log/train.log.*`, which can be viewed using the following command:
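The hunk ends before the log-viewing command itself is shown. A minimal sketch of how such per-worker logs are commonly followed (the exact command in the FunASR docs is not visible here, so this is an assumption):

```sh
# Follow all per-worker training logs as they are written.
# ${exp_dir} and ${model_dir} are the variables referenced above; using
# `tail -f` is an assumption, not necessarily the command from the docs.
tail -f ${exp_dir}/exp/${model_dir}/log/train.log.*
```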
@@ -264,4 +264,4 @@ Users can use ModelScope for inference and fine-tuning based on a trained academ

 ### Decoding by CPU or GPU

 We support CPU and GPU decoding. For CPU decoding, set `gpu_inference=false` and use `njob` to specify the total number of CPU jobs. For GPU decoding, first set `gpu_inference=true`, then set `gpuid_list` to specify which GPUs to use for decoding and `njob` to specify the number of decoding jobs on each GPU.
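As a sketch of how these switches are typically combined on the command line (the parameter names come from the passage above; invoking them through `bash run.sh` is an assumption based on the earlier example, and the recipe's actual stage flags may differ):

```sh
# CPU decoding: disable GPU inference and run 32 parallel CPU decoding jobs
# (hypothetical values; adjust to the machine).
bash run.sh --gpu_inference false --njob 32

# GPU decoding: decode on GPUs 0 and 1 with 4 jobs per GPU (8 jobs in total).
bash run.sh --gpu_inference true --gpuid_list "0,1" --njob 4
```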