This directory contains:

- paraformer.proto
- Readme.md
- workflow.png
The gRPC service and streaming protocol are defined in `paraformer.proto`:

```protobuf
syntax = "proto3";

service ASR {            // gRPC service
  rpc Recognize (stream Request) returns (stream Response) {}  // bidirectional streaming stub
}

message Request {        // request data
  bytes audio_data = 1;  // audio data in bytes
  string user = 2;       // allowed user
  string language = 3;   // language; zh-CN for now
  bool speaking = 4;     // speaking flag
  bool isEnd = 5;        // end flag; set isEnd to true when you stop ASR
  // VAD reports speech:  speaking = true,  isEnd = false; audio data is appended for the specified user
  // VAD reports silence: speaking = false, isEnd = false; the audio buffer is cleared and ASR inference runs
}

message Response {       // response data
  string sentence = 1;   // JSON string containing a success flag and the ASR text
  string user = 2;       // same as the request user
  string language = 3;   // same as the request language
  string action = 4;     // server status:
                         //   terminate: ASR has stopped
                         //   speaking:  the user is speaking; audio data is appended
                         //   decoding:  the server is decoding
                         //   finish:    the ASR text is ready (the most common case)
}
```
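For reference, here is a minimal sketch of a streaming client built against this protocol. It assumes the Python stubs have been generated with grpcio-tools (producing `paraformer_pb2` and `paraformer_pb2_grpc`); the server address `localhost:10100`, the user name, the input file, and the chunk size are placeholders, not values mandated by the service.

```python
# Minimal client sketch for the ASR service above. Assumptions: stubs generated via
#   python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. paraformer.proto
# and a server listening at localhost:10100 (placeholder address).
import wave

import grpc

import paraformer_pb2
import paraformer_pb2_grpc


def pcm_chunks(path, chunk_ms=100):
    """Yield raw PCM chunks of roughly chunk_ms milliseconds from a WAV file."""
    with wave.open(path, "rb") as wav:
        frames = wav.getframerate() * chunk_ms // 1000
        while True:
            data = wav.readframes(frames)
            if not data:
                break
            yield data


def request_stream(path, user="demo", language="zh-CN"):
    # While VAD reports speech: speaking=True, isEnd=False -> audio is appended.
    for chunk in pcm_chunks(path):
        yield paraformer_pb2.Request(
            audio_data=chunk, user=user, language=language,
            speaking=True, isEnd=False)
    # VAD reports silence: speaking=False, isEnd=False -> buffer is decoded.
    yield paraformer_pb2.Request(
        audio_data=b"", user=user, language=language,
        speaking=False, isEnd=False)
    # Client is done: isEnd=True ends the session.
    yield paraformer_pb2.Request(
        audio_data=b"", user=user, language=language,
        speaking=False, isEnd=True)


def main():
    with grpc.insecure_channel("localhost:10100") as channel:
        stub = paraformer_pb2_grpc.ASRStub(channel)
        for response in stub.Recognize(request_stream("test.wav")):
            # action is one of: terminate / speaking / decoding / finish
            print(response.action, response.sentence)
            if response.action == "terminate":
                break


if __name__ == "__main__":
    main()
```

Because `Recognize` is a bidirectional stream, the server can report intermediate `action` states (`speaking`, `decoding`) while the client is still sending audio; the client only needs to watch for `finish` to collect text and for `terminate` to close the session.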