
# Service with websocket-cpp

## Export the model

### Install modelscope and funasr

```shell
# pip3 install torch torchaudio
pip install -U modelscope funasr
# For users in China, you can install from a mirror:
# pip install -U modelscope funasr -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html -i https://mirror.sjtu.edu.cn/pypi/web/simple
```

### Export the ONNX model

```shell
python -m funasr.export.export_model --model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx --quantize True
```
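Per the server's `--model-dir` description below, the exported model directory must contain `model.onnx`, `config.yaml`, and `am.mvn` (plus `model_quant.onnx` when quantization is enabled). A small sanity check before starting the server can be sketched as follows; `missing_files` is a hypothetical helper, not part of FunASR:

```python
import os

# Files the websocket server expects inside --model-dir
# (model_quant.onnx replaces model.onnx when --quantize is true).
REQUIRED = ("model.onnx", "config.yaml", "am.mvn")

def missing_files(model_dir):
    """Return the names of required files absent from model_dir."""
    return [name for name in REQUIRED
            if not os.path.isfile(os.path.join(model_dir, name))]
```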

## Building for Linux/Unix

### Download onnxruntime

```shell
# Download an appropriate onnxruntime build from
# https://github.com/microsoft/onnxruntime/releases/tag/v1.14.0
# Here we download the Linux x64 build:
wget https://github.com/microsoft/onnxruntime/releases/download/v1.14.0/onnxruntime-linux-x64-1.14.0.tgz
tar -zxvf onnxruntime-linux-x64-1.14.0.tgz
```

### Install OpenBLAS

```shell
sudo apt-get install libopenblas-dev   # ubuntu
# sudo yum -y install openblas-devel   # centos
```

### Build the runtime

```shell
git clone https://github.com/alibaba-damo-academy/FunASR.git && cd FunASR/funasr/runtime/websocket
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=release -DONNXRUNTIME_DIR=/path/to/onnxruntime-linux-x64-1.14.0 ..
make
```

## Run the websocket server

```shell
cd bin
./websocketmain [--model_thread_num <int>] [--decoder_thread_num <int>]
                [--io_thread_num <int>] [--port <int>] [--listen_ip <string>]
                [--punc-quant <string>] [--punc-dir <string>]
                [--vad-quant <string>] [--vad-dir <string>]
                [--quantize <string>] --model-dir <string> [--] [--version] [-h]
```
Where:

- `--model-dir <string>` (required): path to the ASR model directory, which contains `model.onnx`, `config.yaml`, and `am.mvn`
- `--quantize <string>`: `false` (default) loads `model.onnx` from `model-dir`; `true` loads `model_quant.onnx` instead
- `--vad-dir <string>`: path to the VAD model directory, which contains `model.onnx`, `vad.yaml`, and `vad.mvn`
- `--vad-quant <string>`: `false` (default) loads `model.onnx` from `vad-dir`; `true` loads `model_quant.onnx` instead
- `--punc-dir <string>`: path to the punctuation model directory, which contains `model.onnx` and `punc.yaml`
- `--punc-quant <string>`: `false` (default) loads `model.onnx` from `punc-dir`; `true` loads `model_quant.onnx` instead
- `--decoder_thread_num <int>`: number of decoder threads, default 8
- `--io_thread_num <int>`: number of network I/O threads, default 8
- `--port <int>`: listen port, default 8889

Only `--model-dir` is required. To enable VAD, add `--vad-dir`; to enable punctuation restoration, add `--punc-dir`.

Example:

```shell
./websocketmain --model-dir /FunASR/funasr/runtime/onnxruntime/export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch
```

## Run the websocket client test

Usage:

```shell
websocketclient <server_ip> <port> <wav_path> <threads_num>
```

Example:

```shell
websocketclient 127.0.0.1 8889 funasr/runtime/websocket/test.pcm.wav 64
```
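The model exported above is a 16 kHz one (note the `16k` in the model name), so the audio passed to the client should be 16 kHz, 16-bit mono PCM. A quick format check using Python's standard `wave` module can be sketched as follows; `check_wav_format` is a hypothetical helper, not part of FunASR:

```python
import wave

def check_wav_format(path, expected_rate=16000):
    """Return True when path holds 16-bit mono PCM at expected_rate."""
    with wave.open(path, "rb") as wav:
        return (wav.getnchannels() == 1              # mono
                and wav.getsampwidth() == 2          # 16-bit samples
                and wav.getframerate() == expected_rate)
```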

The result is returned as JSON, for example:

```json
{"text":"一二三四五六七八九十一二三四五六七八九十"}
```
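Based on the example reply above, the recognized text arrives under a `"text"` field, so any JSON parser can consume it; for instance, in Python:

```python
import json

# Parse an example server reply and pull out the recognized text.
reply = '{"text":"一二三四五六七八九十一二三四五六七八九十"}'
result = json.loads(reply)
print(result["text"])
```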

## Acknowledgements

1. This project is maintained by the FunASR community.
2. We thank zhaoming for contributing the websocket (cpp-api) implementation.