Mirror of https://github.com/modelscope/FunASR, synced 2025-09-15 14:48:36 +08:00
# Using Paraformer with ONNXRuntime

## Introduction

- The model comes from speech_paraformer.
## Steps

1. Download the whole directory (`funasr/runtime/python/onnxruntime`) to your local machine.
2. Install the required packages:
   `pip install -r requirements.txt`
3. Export the model: export your own model (docs), or use the Download Link.
4. Run the demo.
- `model_dir`: the root path, which contains `model.onnx`, `config.yaml`, and `am.mvn`.
- Input: wav file; supported formats: `str`, `np.ndarray`, `List[str]`.
- Output: `List[str]`, the recognition result.
- Example:
```python
from paraformer_onnx import Paraformer

model_dir = "/nfs/zhifu.gzf/export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
model = Paraformer(model_dir, batch_size=1)

wav_path = ['/nfs/zhifu.gzf/export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/example/asr_example.wav']
result = model(wav_path)
print(result)
```
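Since the model also accepts `np.ndarray` input, a wav file can first be decoded into a float array yourself. A minimal sketch using only the standard library and NumPy; the `load_wav` helper is hypothetical (not part of the package), and assumes a mono 16-bit PCM file as in the example above:

```python
import wave

import numpy as np


def load_wav(path: str) -> np.ndarray:
    """Read a mono 16-bit PCM wav file into a float32 array scaled to [-1, 1).

    Hypothetical helper for illustration; the Paraformer package does not
    ship this function.
    """
    with wave.open(path, "rb") as f:
        pcm = f.readframes(f.getnframes())
    # int16 samples -> float32 in [-1, 1)
    return np.frombuffer(pcm, dtype=np.int16).astype(np.float32) / 32768.0


# The array could then be passed to the model in place of a path, e.g.:
# result = model(load_wav("asr_example.wav"))
```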
## Acknowledgements

- We acknowledge SWHL for contributing the ONNXRuntime (Python API) support.