# Speech Recognition

## Inference

### Quick start

#### Paraformer model

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# build the ASR inference pipeline with the Paraformer-large model
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
)

# decode a 16 kHz Mandarin example hosted on OSS
rec_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav')
print(rec_result)
```
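With this model, `rec_result` is a dict; the recognized transcript is stored under its `text` key.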

### API-reference

#### Define pipeline
- `task`: `Tasks.auto_speech_recognition`
- `model`: model name in the model zoo, or model path on local disk
- `ngpu`: `1` (default), decode on GPU; if `ngpu=0`, decode on CPU
- `ncpu`: `1` (default), sets the number of threads used for intra-op parallelism on CPU
- `output_dir`: `None` (default), the output path for results, if set
- `batch_size`: `1` (default), batch size when decoding

#### Infer pipeline
- `audio_in`: the input to decode, which can be:
  - a wav file path, e.g. `asr_example.wav`
  - a pcm file path, e.g. `asr_example.pcm`
  - an audio bytes stream, e.g. bytes data from a microphone
  - audio samples, e.g. `audio, rate = soundfile.read("asr_example_zh.wav")`, with dtype `numpy.ndarray` or `torch.Tensor` (see the sketch after this list)
  - a `wav.scp` file, a Kaldi-style wav list (`wav_id \t wav_path`), e.g.:

        asr_example1  ./audios/asr_example1.wav
        asr_example2  ./audios/asr_example2.wav

    For `wav.scp` input, `output_dir` must be set to save the output results
- `audio_fs`: audio sampling rate; set it only when `audio_in` is pcm audio
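
For the audio-samples input above, a minimal sketch (it assumes a local 16 kHz copy of `asr_example_zh.wav`, matching this model's sampling rate):

```python
import soundfile

# read the waveform into a numpy.ndarray and decode it directly;
# inference_pipeline is the pipeline built in the quick start above
audio, rate = soundfile.read("asr_example_zh.wav")
rec_result = inference_pipeline(audio_in=audio)
print(rec_result)
```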

### Inference with your data
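
A minimal sketch for decoding your own files, using the Kaldi-style `wav.scp` format from the API reference; with list input, `output_dir` must be set (the paths below are placeholders):

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
    output_dir='./results',  # required for wav.scp input
)

# ./wav.scp lists "wav_id \t wav_path" pairs, one per line
rec_result = inference_pipeline(audio_in='./wav.scp')
```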

### Inference with multiple threads on CPU
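
A sketch built from the `ngpu`/`ncpu` parameters documented above; the thread count here is an assumption to tune for your machine:

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
    ngpu=0,  # ngpu=0 switches decoding to CPU
    ncpu=8,  # assumption: 8 intra-op threads; match your core count
)
rec_result = inference_pipeline(audio_in='asr_example.wav')
```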

### Inference with multiple GPUs
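
The pipeline itself decodes on a single device, so multi-GPU decoding comes down to process-level parallelism. The sketch below is a generic pattern, not an API of this pipeline: shard `wav.scp`, then run one decoding process per GPU selected via `CUDA_VISIBLE_DEVICES` (`decode_shard.py` is a hypothetical driver that builds the pipeline on its shard with a per-GPU `output_dir`):

```python
import os
import subprocess

# round-robin split of wav.scp into one shard per GPU
lines = open('wav.scp').read().splitlines()
num_gpus = 2  # assumption: adjust to the GPUs available
shards = [lines[i::num_gpus] for i in range(num_gpus)]

procs = []
for gpu_id, shard in enumerate(shards):
    shard_scp = f'wav_{gpu_id}.scp'
    with open(shard_scp, 'w') as f:
        f.write('\n'.join(shard) + '\n')
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    # decode_shard.py (hypothetical) decodes one shard into one results dir
    procs.append(subprocess.Popen(
        ['python', 'decode_shard.py', shard_scp, f'./results_{gpu_id}'],
        env=env,
    ))
for p in procs:
    p.wait()
```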

## Finetune with pipeline

### Quick start
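
A minimal sketch following the generic ModelScope trainer pattern; the trainer name, data layout, and keyword arguments below are assumptions, not confirmed by this page:

```python
from modelscope.metainfo import Trainers
from modelscope.trainers import build_trainer

# assumption: ./data holds train/validation sets in a layout the trainer accepts
kwargs = dict(
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
    data_dir='./data',
    work_dir='./checkpoint',  # the finetuned model is written here
)
trainer = build_trainer(Trainers.speech_asr_trainer, default_args=kwargs)
trainer.train()
```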

### Finetune with your data
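
For illustration only, one common Kaldi-style layout; only the `wav.scp` format is documented above, and the `text` transcript file is an assumption:

```
data/
├── train/
│   ├── wav.scp   # wav_id \t wav_path
│   └── text      # wav_id \t transcript (assumed)
└── validation/
    ├── wav.scp
    └── text
```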

### Inference with your finetuned model
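
Since the `model` argument accepts a model path on local disk (see the API reference), point it at the finetune work directory; the directory name below is the assumption from the quick-start sketch above:

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='./checkpoint',  # local path to the finetuned model
)
rec_result = inference_pipeline(audio_in='asr_example.wav')
print(rec_result)
```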