Mirror of https://github.com/modelscope/FunASR (synced 2025-09-15 14:48:36 +08:00)

Commit 24932e7d76 (parent d2d17d3ff4): readme
@ -52,3 +52,7 @@ Usage: ./cmake/build/paraformer_server port thread_num /path/to/model_file quant
cd ../python/grpc
python grpc_main_client_mic.py --host $server_ip --port 10108
```

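For context, here is a minimal sketch (assuming the grpcio Python package) of how a client such as grpc_main_client_mic.py opens a channel to the server started above; the host/port values are placeholders, and the actual microphone streaming uses the FunASR proto stubs, which are not reproduced here.

```python
import grpc

# Hypothetical host/port values; match whatever was passed to paraformer_server.
server_ip, port = "127.0.0.1", 10108

channel = grpc.insecure_channel(f"{server_ip}:{port}")
# Block until the server is reachable, or raise after 10 seconds.
grpc.channel_ready_future(channel).result(timeout=10)
print("connected; grpc_main_client_mic.py streams microphone audio over this kind of channel")
```
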
## Acknowledge
1. This project is maintained by [FunASR community](https://github.com/alibaba-damo-academy/FunASR).
2. We acknowledge [DeepScience](https://www.deepscience.cn) for contributing the grpc service.

@ -1 +0,0 @@
Place model.onnx here!

File diff suppressed because it is too large
@ -12,7 +12,7 @@ See the bottom of this page: Building Guidance
### Running the program

tester /path/to/models/dir /path/to/wave/file quantize(true or false)
tester /path/to/models_dir /path/to/wave_file quantize(true or false)

For example: tester /data/models /data/test.wav false

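If many files need to be processed, a small sketch of wrapping the command above in Python follows; the directories and the assumption that the tester binary is on PATH are hypothetical.

```python
import subprocess
from pathlib import Path

MODELS_DIR = "/data/models"   # hypothetical models directory
WAV_DIR = Path("/data/wavs")  # hypothetical directory of wav files to recognize
QUANTIZE = "false"

# Invoke the tester once per wav, mirroring: tester /data/models /data/test.wav false
for wav in sorted(WAV_DIR.glob("*.wav")):
    subprocess.run(["tester", MODELS_DIR, str(wav), QUANTIZE], check=True)
```
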
@ -75,33 +75,11 @@ onnxruntime_xxx
└───lib
```

## Thread count vs. performance

Test environment: Rocky Linux 8; only the cpp version was tested (the python version was not), by @acely.

In brief:
The code was built and tested on 3 machines with different hardware configurations. With identical fftw and onnxruntime versions, the same 30-minute audio file was recognized while varying the number of onnx threads.

(benchmark chart image)

The rough pattern observed so far:

- More onnx threads is not always better.
- 2 threads bring a significant improvement over 1 thread; beyond that, additional threads help only marginally.
- Efficiency is best when the thread count equals the number of physical CPU cores.

Practical advice (see the sketch after this list):

- 3-4 threads give the best cost/performance in most scenarios.
- 2 threads are a good fit for low-end machines.

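As an illustration of that advice, here is a minimal sketch of capping the intra-op thread count through onnxruntime's Python API; the cpp runtime discussed above sets the same knob through its own session options, and the model path below is a placeholder.

```python
import onnxruntime as ort

# Cap the intra-op thread pool; 3-4 threads (or the physical core count) is the sweet spot above.
opts = ort.SessionOptions()
opts.intra_op_num_threads = 4  # assumption: a machine with roughly 4 physical cores

# "model.onnx" is a placeholder path, not a file shipped with this README.
session = ort.InferenceSession("model.onnx", sess_options=opts)
```
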
## Demo

(demo image)

## Note

This program only supports **mono** audio with a 16000 Hz sample rate and 16-bit depth.

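A small sketch, using only the Python standard library's wave module, for checking whether a file already matches these requirements; the file path is a placeholder.

```python
import wave

# Placeholder path; point this at the wav you plan to recognize.
with wave.open("/data/test.wav", "rb") as f:
    ok = (
        f.getframerate() == 16000   # 16000 Hz sample rate
        and f.getsampwidth() == 2   # 16-bit depth (2 bytes per sample)
        and f.getnchannels() == 1   # mono
    )
print("format ok" if ok else "convert this file to 16 kHz, 16-bit, mono first")
```
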
## Acknowledge
1. We acknowledge [mayong](https://github.com/RapidAI/RapidASR/tree/main/cpp_onnx) for contributing the onnxruntime(cpp api).
2. We borrowed a lot of code from [FastASR](https://github.com/chenkui164/FastASR) for audio frontend and text-postprocess.
1. This project is maintained by [FunASR community](https://github.com/alibaba-damo-academy/FunASR).
2. We acknowledge [mayong](https://github.com/RapidAI/RapidASR/tree/main/cpp_onnx) for contributing the onnxruntime(cpp api).
3. We borrowed a lot of code from [FastASR](https://github.com/chenkui164/FastASR) for audio frontend and text-postprocess.

@ -53,6 +53,10 @@
print(result)
```

## Performance benchmark

Please refer to [benchmark](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/python/benchmark_libtorch.md)

## Speed

Environment: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz

@ -54,17 +54,9 @@ python setup.py install
print(result)
```

## Speed

Environment: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz

Test [wav, 5.53s, 100 times avg.](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav)

| Backend | RTF   |
|:-------:|:-----:|
| Pytorch | 0.110 |
| Onnx    | 0.038 |

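RTF here is decoding time divided by audio duration, averaged over 100 runs. A rough sketch of how such a measurement could be taken is below; the recognition call itself is a placeholder, not an actual FunASR API.

```python
import time

def measure_rtf(recognize_once, wav_duration_s, runs=100):
    """Average real-time factor: decoding time per run divided by audio duration."""
    start = time.perf_counter()
    for _ in range(runs):
        recognize_once()  # placeholder for one decoding pass over the test wav
    elapsed = time.perf_counter() - start
    return (elapsed / runs) / wav_duration_s

# Hypothetical usage, where run_model_once() wraps a single recognition call:
# rtf = measure_rtf(run_model_once, 5.53)
```
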
## Performance benchmark

Please refer to [benchmark](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/python/benchmark_onnx.md)

## Acknowledge

1. This project is maintained by [FunASR community](https://github.com/alibaba-damo-academy/FunASR).