# Export models
## Environments
### Install modelscope and funasr

The installation is the same as for funasr:

```shell
pip3 install torch torchaudio
pip install -U modelscope funasr
# For users in China, you can install from a mirror:
# pip install -U modelscope funasr -i https://mirror.sjtu.edu.cn/pypi/web/simple
```
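A quick way to confirm the environment is ready is to import the packages just installed (a minimal sanity check; it assumes nothing beyond the commands above):

```shell
python -c "import torch, torchaudio, modelscope, funasr; print(torch.__version__)"
```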
### Install the quantization tools

```shell
pip install torch-quant  # Optional, for torchscript quantization
pip install onnxruntime  # Optional, for onnx quantization
```
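The optional backends can be verified the same way. Note that the module name `torch_quant` is assumed from the pip package name; adjust if your installation differs:

```shell
python -c "import onnxruntime; print(onnxruntime.__version__)"  # for onnx quantization
python -c "import torch_quant"  # for torchscript quantization (module name assumed)
```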
## Export model

Tip: torch>=1.11.0 is required.

```shell
python -m funasr.export.export_model \
    --model-name [model_name] \
    --export-dir [export_dir] \
    --type [onnx, torch] \
    --quantize [true, false] \
    --fallback-num [fallback_num]
```
- `model-name`: the model to export. It can be a model name from ModelScope, or the path to a local finetuned model (the file must be named `model.pb`).
- `export-dir`: the directory where the exported model is saved.
- `type`: `onnx` or `torch`, to export an onnx format model or a torchscript format model.
- `quantize`: `true` exports a quantized model alongside the fp32 model; `false` exports the fp32 model only.
- `fallback-num`: the number of fallback layers for automatic mixed-precision quantization (see the sketch after this list).
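Putting the quantization options together, a full invocation might look like the following. This is an illustrative sketch: the model name is the ModelScope Paraformer model used in the examples below, and `--fallback-num 3` is an arbitrary choice, not a recommended value:

```shell
python -m funasr.export.export_model \
    --model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch \
    --export-dir ./export \
    --type onnx \
    --quantize true \
    --fallback-num 3
```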
## Performance Benchmark of Runtime

- Paraformer on CPU
- Paraformer on GPU
## Examples
### Export onnx format model

Export a model from ModelScope:

```shell
python -m funasr.export.export_model --model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx
```

Export a model from a local path (the model file must be named `model.pb`):

```shell
python -m funasr.export.export_model --model-name /mnt/workspace/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx
```
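To confirm the export succeeded, the resulting file can be loaded with onnxruntime. This is a minimal sketch; it assumes the exporter writes the model to `<export-dir>/<model-name>/model.onnx`, so adjust the path to match your actual output:

```shell
python -c "
import onnxruntime as ort
sess = ort.InferenceSession('./export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/model.onnx')
print([(i.name, i.shape) for i in sess.get_inputs()])  # list the model's input tensors
"
```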
### Export torchscript format model

Export a model from ModelScope:

```shell
python -m funasr.export.export_model --model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type torch
```

Export a model from a local path (the model file must be named `model.pb`):

```shell
python -m funasr.export.export_model --model-name /mnt/workspace/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type torch
```
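The torchscript output can be checked the same way with `torch.jit.load`. Again a minimal sketch: the file name and its location under `<export-dir>/<model-name>/` are assumptions, so point the path at whatever file the exporter actually produced:

```shell
python -c "
import torch
m = torch.jit.load('./export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/model.torchscripts')
print(type(m))  # should be a torch.jit.ScriptModule wrapper
"
```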
## Acknowledgement

Torch model quantization is supported by BladeDISC, an end-to-end DynamIc Shape Compiler project for machine learning workloads. BladeDISC provides general, transparent, and easy-to-use performance optimization for TensorFlow/PyTorch workloads on GPGPU and CPU backends. If you are interested, please contact us.