mirror of
https://github.com/modelscope/FunASR
synced 2025-09-15 14:48:36 +08:00
Using funasr with ONNXRuntime
Introduction
- The model comes from speech_paraformer.
Steps:

- Export the model.

  Command (tip: torch >= 1.11.0 is required; more details in the export docs):

  e.g., export a model from ModelScope:

      python -m funasr.export.export_model --model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx --quantize False

  e.g., export a model from a local path (the model file must be named model.pb):

      python -m funasr.export.export_model --model-name ./damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx --quantize False
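After export, the directory given by --export-dir should contain the files the runtime expects (model.onnx, config.yaml, am.mvn, as listed in the demo section of this README). A minimal sketch to sanity-check the export output; the ./export path and the helper name check_export_dir are assumptions for illustration, not part of funasr:

```python
from pathlib import Path

# Files the funasr_onnx runtime expects in the model directory
# (listed in the "Run the demo" section of this README).
EXPECTED = ["model.onnx", "config.yaml", "am.mvn"]

def check_export_dir(export_dir):
    """Return the list of expected files missing from export_dir."""
    root = Path(export_dir)
    return [name for name in EXPECTED if not (root / name).is_file()]

# "./export" matches the --export-dir used in the command above.
missing = check_export_dir("./export")
if missing:
    print("export incomplete, missing:", missing)
else:
    print("export looks complete")
```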
- Install funasr_onnx.

  Install from pip:

      pip install --upgrade funasr_onnx -i https://pypi.Python.org/simple

  Or install from source code:

      git clone https://github.com/alibaba/FunASR.git && cd FunASR
      cd funasr/runtime/python/funasr_onnx
      python setup.py build
      python setup.py install
- Run the demo.
  - model_dir: the model path, which contains model.onnx, config.yaml, and am.mvn.
  - Input: wav file(s); supported types: str, np.ndarray, List[str]
  - Output: List[str], the recognition result.
  - Example:

        from funasr_onnx import Paraformer

        model_dir = "/nfs/zhifu.gzf/export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
        model = Paraformer(model_dir, batch_size=1)
        wav_path = ['/nfs/zhifu.gzf/export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/example/asr_example.wav']
        result = model(wav_path)
        print(result)
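Besides a path string, the input may be passed as an np.ndarray of samples. A minimal sketch of loading a 16-bit mono WAV into a float array with the standard-library wave module; the load_wav helper and the normalization to [-1, 1] are assumptions for illustration, not part of the funasr_onnx API:

```python
import wave

import numpy as np

def load_wav(path):
    """Read a 16-bit PCM mono WAV file into a float32 array in [-1, 1]."""
    with wave.open(path, "rb") as f:
        assert f.getsampwidth() == 2 and f.getnchannels() == 1, "expect 16-bit mono"
        pcm = np.frombuffer(f.readframes(f.getnframes()), dtype=np.int16)
    # Scale int16 samples to floats in [-1, 1] (assumed input convention).
    return pcm.astype(np.float32) / 32768.0

# Hypothetical usage with the model from the example above:
# samples = load_wav("asr_example.wav")
# result = model(samples)
```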
Performance benchmark
Please refer to the benchmark.
Acknowledgements
- This project is maintained by the FunASR community.
- We acknowledge SWHL for contributing the ONNX Runtime support (for the Paraformer model).