# Export models

## Environments

### Install modelscope and funasr

The installation is the same as for funasr.
```shell
# pip3 install torch torchaudio
pip install -U modelscope funasr
# For users in China, you can install from a mirror:
# pip install -U modelscope funasr -i https://mirror.sjtu.edu.cn/pypi/web/simple
```
### Install the quantization tools

```shell
pip install torch-quant       # Optional, for torchscript quantization
pip install onnx onnxruntime  # Optional, for onnx quantization
```
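As a rough illustration of what the onnx quantization tooling does (this is a sketch, not part of the export script; the file names are placeholders), dynamic weight quantization with onnxruntime looks like this:

```python
# A minimal sketch of onnx dynamic quantization, independent of funasr.
# "model.onnx" / "model_quant.onnx" are placeholder file names.
from onnxruntime.quantization import QuantType, quantize_dynamic

quantize_dynamic(
    model_input="model.onnx",         # fp32 model to quantize
    model_output="model_quant.onnx",  # where the quantized model is written
    weight_type=QuantType.QUInt8,     # quantize weights to uint8
)
```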
## Usage

Tip: torch>=1.11.0 is required.

```shell
python -m funasr.export.export_model \
    --model-name [model_name] \
    --export-dir [export_dir] \
    --type [onnx, torch] \
    --quantize [true, false] \
    --fallback-num [fallback_num]
```
- `model-name`: the model to export. It can be a model name from modelscope, or the path to a local finetuned model (the model file must be named `model.pb`).
- `export-dir`: the directory where the exported model is saved.
- `type`: `onnx` or `torch`; export an onnx format model or a torchscript format model.
- `quantize`: `true` exports a quantized model in addition to the fp32 model; `false` exports the fp32 model only.
- `fallback-num`: the number of fallback layers for automatic mixed-precision quantization.
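After an export finishes, a quick sanity check is to load the file with the `onnx` package and run the graph checker. A minimal sketch (the path below is an assumption; use whatever file name appears in your `--export-dir`):

```python
# Sanity-check an exported onnx file.
# "./export/model.onnx" is a placeholder path.
import onnx

model = onnx.load("./export/model.onnx")
onnx.checker.check_model(model)  # raises if the graph is malformed
print([i.name for i in model.graph.input])  # inspect expected inputs
```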
## Export onnx format model

### Export model from modelscope

```shell
python -m funasr.export.export_model --model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx --quantize false
```

### Export model from local path

The model file must be named `model.pb`.

```shell
python -m funasr.export.export_model --model-name /mnt/workspace/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx --quantize false
```
### Test onnx model

Refer to the scripts under `test`.
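For a quick smoke test before wiring up the full runtime, a minimal sketch with onnxruntime (the model path is an assumption; use whatever the export step produced):

```python
# Load the exported model and inspect its I/O signature.
# "./export/model.onnx" is a placeholder path.
import onnxruntime as ort

sess = ort.InferenceSession("./export/model.onnx")
for inp in sess.get_inputs():
    print("input :", inp.name, inp.shape, inp.type)
for out in sess.get_outputs():
    print("output:", out.name, out.shape, out.type)
```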
## Export torchscript format model

### Export model from modelscope

```shell
python -m funasr.export.export_model --model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type torch --quantize false
```

### Export model from local path

The model file must be named `model.pb`.

```shell
python -m funasr.export.export_model --model-name /mnt/workspace/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type torch --quantize false
```
### Test torchscript model

Refer to the scripts under `test`.
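A similar smoke test for the torchscript export (the file name is an assumption; check what the exporter actually wrote into `--export-dir`):

```python
# Load the exported torchscript model on CPU and inspect it.
# "./export/model.torchscript" is a placeholder path.
import torch

model = torch.jit.load("./export/model.torchscript", map_location="cpu")
model.eval()
print(model.code)  # the scripted forward pass as TorchScript source
```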
## Runtime

### ONNXRuntime

#### ONNXRuntime-python

Refer to the docs.
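For reference, runtime usage in Python typically looks like the sketch below, based on the `funasr_onnx` package; the exact class name and constructor arguments should be checked against the linked docs:

```python
# A hedged sketch of offline recognition with the funasr_onnx runtime.
# model_dir should point at the directory produced by the export step.
from funasr_onnx import Paraformer

model_dir = "./export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
model = Paraformer(model_dir, batch_size=1)

result = model(["/path/to/your.wav"])  # list of wav paths in, text out
print(result)
```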
#### ONNXRuntime-cpp

Refer to the docs.
### Libtorch

#### Libtorch-python

Refer to the docs.
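Analogously, the `funasr_torch` runtime exposes a similar interface (again a sketch; verify the class name and arguments against the linked docs):

```python
# A hedged sketch of offline recognition with the funasr_torch runtime.
from funasr_torch import Paraformer

model_dir = "./export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
model = Paraformer(model_dir, batch_size=1)

result = model(["/path/to/your.wav"])
print(result)
```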
#### Libtorch-cpp

Not supported yet (TODO).
## Performance Benchmark

- Paraformer on CPU
- Paraformer on GPU
## Acknowledgement

Torch model quantization is supported by BladeDISC, an end-to-end DynamIc Shape Compiler for machine learning workloads. BladeDISC provides general, transparent, and easy-to-use performance optimization for TensorFlow/PyTorch workloads on GPGPU and CPU backends. If you are interested, please contact us.