Using FunASR with websocket

We can send streaming audio data to the server in real time with the websocket client (for example, every 300 ms) and receive the transcribed text once the speaker stops. The audio is transmitted as a stream, while the ASR inference itself runs in offline (non-streaming) mode.
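
For reference, assuming 16 kHz, 16-bit mono PCM audio (matching the 16k Paraformer model; the actual format is determined by ASR_client.py), a 300 ms chunk works out to 9600 bytes:

SAMPLE_RATE = 16000       # samples per second (16k model)
BYTES_PER_SAMPLE = 2      # 16-bit PCM
CHUNK_MS = 300            # chunk duration sent to the server

chunk_bytes = SAMPLE_RATE * BYTES_PER_SAMPLE * CHUNK_MS // 1000
print(chunk_bytes)        # -> 9600 bytes per 300 ms chunk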

For the Server

Install ModelScope and FunASR

pip install "modelscope[audio_asr]" -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html
git clone https://github.com/alibaba/FunASR.git && cd FunASR
pip install --editable ./

Install the requirements for the server

cd funasr/runtime/python/websocket
pip install -r requirements_server.txt

Start the server

python ASR_server.py --host "0.0.0.0" --port 10095 --asr_model "damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
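
To show roughly what such a server does, here is a minimal sketch. It is illustrative only, not the contents of ASR_server.py: it assumes the websockets package (>= 10, single-argument handler), the documented ModelScope pipeline API, raw PCM bytes passed via audio_in, and a text frame used as the end-of-utterance signal.

import asyncio
import websockets
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Load the offline Paraformer model once at startup.
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model="damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch",
)

async def handle(websocket):
    # Buffer streamed PCM chunks; decode when the client signals end of utterance.
    audio = b""
    async for message in websocket:
        if isinstance(message, bytes):
            audio += message                    # binary frames carry audio
        else:
            # Text frame treated as end-of-utterance; run offline inference
            # (passing raw bytes via audio_in is an assumption).
            result = inference_pipeline(audio_in=audio)
            await websocket.send(str(result))
            audio = b""

async def main():
    async with websockets.serve(handle, "0.0.0.0", 10095):
        await asyncio.Future()                  # run forever

asyncio.run(main())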

For the Client

Install the requirements for the client

git clone https://github.com/alibaba/FunASR.git && cd FunASR
cd funasr/runtime/python/websocket
pip install -r requirements_client.txt

Start the client

python ASR_client.py --host "127.0.0.1" --port 10095 --chunk_size 300
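
For a rough picture of the client side, the sketch below streams a local wav file (test.wav is a hypothetical example file) in 300 ms chunks and prints the transcription. It is not ASR_client.py: the end-of-utterance text frame is an assumption paired with the server sketch above, and the real protocol is whatever ASR_server.py and ASR_client.py implement.

import asyncio
import wave
import websockets

async def run(uri="ws://127.0.0.1:10095", wav_path="test.wav", chunk_ms=300):
    async with websockets.connect(uri) as ws:
        with wave.open(wav_path, "rb") as wav:
            frames_per_chunk = wav.getframerate() * chunk_ms // 1000
            while True:
                data = wav.readframes(frames_per_chunk)
                if not data:
                    break
                await ws.send(data)                   # binary PCM chunk
                await asyncio.sleep(chunk_ms / 1000)  # pace the stream in real time
        await ws.send("end")                          # end-of-utterance signal (assumed)
        print(await ws.recv())                        # transcription from the server

asyncio.run(run())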

Acknowledgements

  1. We acknowledge cgisky1980 for contributing the websocket service.