
Using funasr with ONNXRuntime

Steps:

  1. Export the model.

    • Command (tip: torch >= 1.11.0 is required):

      For more details, refer to the export docs.

      • e.g., export a model from ModelScope:
        python -m funasr.export.export_model --model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx --quantize False

      • e.g., export a model from a local path (the model file must be named model.pb):
        python -m funasr.export.export_model --model-name ./damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx --quantize False
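
      • Optional: sanity-check the exported model.onnx (a minimal sketch, assuming the export wrote model.onnx under ./export/<model-name>/ as in the commands above; it uses only the standard onnxruntime API):
        # Sketch: load the exported ONNX graph and print its inputs/outputs.
        # The path below assumes the ModelScope export layout from the command above.
        import onnxruntime as ort

        onnx_path = "./export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/model.onnx"
        sess = ort.InferenceSession(onnx_path)

        for i in sess.get_inputs():
            print("input:", i.name, i.shape, i.type)
        for o in sess.get_outputs():
            print("output:", o.name, o.shape, o.type)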
        
  2. Install funasr_onnx.

Install from pip:

pip install -U funasr_onnx
# For users in China, you can install from the mirror:
# pip install -U funasr_onnx -i https://mirror.sjtu.edu.cn/pypi/web/simple

Or install from source code:

git clone https://github.com/alibaba/FunASR.git && cd FunASR
cd funasr/runtime/python/onnxruntime
pip install -e ./
# For users in China, you can install from the mirror:
# pip install -e ./ -i https://mirror.sjtu.edu.cn/pypi/web/simple
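
To confirm the install, you can import the package from the command line (a one-line check; it only uses the Paraformer class shown in the demo below):

python -c "from funasr_onnx import Paraformer; print('funasr_onnx is ready')"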

  3. Run the demo.
    • model_dir: the model path, which contains model.onnx, config.yaml, and am.mvn.
    • Input: wav file(s); supported types: str, np.ndarray, List[str].
    • Output: List[str], the recognition results.
    • Example:
      from funasr_onnx import Paraformer
      
      model_dir = "/nfs/zhifu.gzf/export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
      model = Paraformer(model_dir, batch_size=1)
      
      wav_path = ['/nfs/zhifu.gzf/export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/example/asr_example.wav']
      
      result = model(wav_path)
      print(result)
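
    • Example with np.ndarray input (a minimal sketch; it assumes a 16 kHz mono wav read with soundfile, which is not used in the original demo):
      import soundfile as sf
      from funasr_onnx import Paraformer

      model_dir = "/nfs/zhifu.gzf/export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
      model = Paraformer(model_dir, batch_size=1)

      # Read the example wav as a float32 waveform (np.ndarray).
      speech, sample_rate = sf.read(model_dir + "/example/asr_example.wav", dtype="float32")

      result = model(speech)  # same call as with a path; returns List[str]
      print(result)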
      

Performance benchmark

Please refer to the benchmark.

Acknowledgements

  1. This project is maintained by the FunASR community.
  2. We acknowledge SWHL for contributing the onnxruntime implementation (for the Paraformer model).