streaming bugfix (#1271)

* funasr1.0 finetune

* funasr1.0 pbar

* update with main (#1260)

* Update websocket_protocol_zh.md

* update

---------

Co-authored-by: Yabin Li <wucong.lyb@alibaba-inc.com>
Co-authored-by: shixian.shi <shixian.shi@alibaba-inc.com>

* update with main (#1264)

* Funasr1.0 (#1261)

* funasr1.0 finetune

* funasr1.0 pbar

* update with main (#1260)

* Update websocket_protocol_zh.md

* update

---------

Co-authored-by: Yabin Li <wucong.lyb@alibaba-inc.com>
Co-authored-by: shixian.shi <shixian.shi@alibaba-inc.com>

---------

Co-authored-by: Yabin Li <wucong.lyb@alibaba-inc.com>
Co-authored-by: shixian.shi <shixian.shi@alibaba-inc.com>

* bug fix

---------

Co-authored-by: Yabin Li <wucong.lyb@alibaba-inc.com>
Co-authored-by: shixian.shi <shixian.shi@alibaba-inc.com>

* funasr1.0 sanm scama

* funasr1.0 infer_after_finetune

* funasr1.0 fsmn-vad bug fix

* funasr1.0 fsmn-vad bug fix

* funasr1.0 fsmn-vad bug fix

---------

Co-authored-by: Yabin Li <wucong.lyb@alibaba-inc.com>
Co-authored-by: shixian.shi <shixian.shi@alibaba-inc.com>
Author: zhifu gao, 2024-01-18 23:21:12 +08:00 (committed via GitHub)
Parent: b28f3c9da9
Commit: 12496e559f
3 changed files with 5 additions and 5 deletions


@@ -10,7 +10,6 @@ encoder_chunk_look_back = 4 #number of chunks to lookback for encoder self-attention
decoder_chunk_look_back = 1 #number of encoder chunks to lookback for decoder cross-attention
model = AutoModel(model="damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online", model_revision="v2.0.2")
cache = {}
res = model.generate(input="https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav",
                     chunk_size=chunk_size,
                     encoder_chunk_look_back=encoder_chunk_look_back,
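
Note for reviewers: the demo lines above belong to the chunk-wise streaming loop; a minimal end-to-end sketch of that loop is given below, adapted from the FunASR streaming examples. The chunk_size value [0, 10, 5], the local file asr_example_zh.wav, and the 16 kHz sample-rate assumption are illustrative and not part of this diff.

from funasr import AutoModel
import soundfile

chunk_size = [0, 10, 5]       # assumed: [0, 10, 5] corresponds to 600 ms streaming chunks
encoder_chunk_look_back = 4   # number of chunks to lookback for encoder self-attention
decoder_chunk_look_back = 1   # number of encoder chunks to lookback for decoder cross-attention

model = AutoModel(model="damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online",
                  model_revision="v2.0.2")

speech, sample_rate = soundfile.read("asr_example_zh.wav")  # assumed local 16 kHz wav
chunk_stride = chunk_size[1] * 960                          # samples per 600 ms chunk at 16 kHz

cache = {}                                                  # shared across all generate() calls
total_chunk_num = int(len(speech) - 1) // chunk_stride + 1
for i in range(total_chunk_num):
    speech_chunk = speech[i * chunk_stride:(i + 1) * chunk_stride]
    is_final = i == total_chunk_num - 1                     # last chunk flushes the streaming state
    res = model.generate(input=speech_chunk,
                         cache=cache,
                         is_final=is_final,
                         chunk_size=chunk_size,
                         encoder_chunk_look_back=encoder_chunk_look_back,
                         decoder_chunk_look_back=decoder_chunk_look_back)
    print(res)

The key point is that the same cache dict is passed to every generate() call, and is_final=True is sent with the last chunk so the model can flush and reset its state.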


@@ -501,7 +501,9 @@ class FsmnVADStreaming(nn.Module):
        # self.AllResetDetection()
        return segments

    def init_cache(self, cache: dict = {}, **kwargs):
        cache["frontend"] = {}
        cache["prev_samples"] = torch.empty(0)
        cache["encoder"] = {}

@@ -583,7 +585,7 @@ class FsmnVADStreaming(nn.Module):
        cache["prev_samples"] = audio_sample[:-m]
        if _is_final:
            cache = {}
            self.init_cache(cache)
        ibest_writer = None
        if ibest_writer is None and kwargs.get("output_dir") is not None:
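
The hunk above resets and re-initializes the streaming VAD cache when the final chunk arrives, so state from one utterance does not leak into the next. A minimal caller-side sketch of how this cache is used, adapted from the FunASR VAD streaming example (the file name vad_example.wav and the 200 ms chunk_size are assumptions, not part of this diff):

from funasr import AutoModel
import soundfile

chunk_size = 200                                   # ms per chunk (assumed value)
model = AutoModel(model="fsmn-vad")

speech, sample_rate = soundfile.read("vad_example.wav")  # assumed local 16 kHz wav
chunk_stride = int(chunk_size * sample_rate / 1000)

cache = {}                                         # reused across chunks; reset internally on is_final
total_chunk_num = int(len(speech) - 1) // chunk_stride + 1
for i in range(total_chunk_num):
    speech_chunk = speech[i * chunk_stride:(i + 1) * chunk_stride]
    is_final = i == total_chunk_num - 1
    res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size)
    if len(res[0]["value"]):
        print(res)  # emitted segments, e.g. [[beg_ms, -1]], [[-1, end_ms]], or [[beg_ms, end_ms]]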


@@ -503,7 +503,6 @@ class ParaformerStreaming(Paraformer):
            self.init_beam_search(**kwargs)
            self.nbest = kwargs.get("nbest", 1)
        if len(cache) == 0:
            self.init_cache(cache, **kwargs)
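
The hunk above shows ParaformerStreaming initializing its per-stream cache lazily: the state is built only when an empty cache dict is passed in, so a caller can keep handing the same dict to every chunk. A hypothetical minimal sketch of that pattern (StreamingDecoderSketch and its members are illustrative only, not FunASR API):

class StreamingDecoderSketch:
    """Hypothetical stand-in illustrating the lazy per-stream cache pattern."""

    def init_cache(self, cache: dict, **kwargs):
        # Populate the per-stream state once, on the first chunk.
        cache["encoder"] = {}
        cache["decoder"] = {}
        return cache

    def inference(self, speech_chunk, cache: dict, **kwargs):
        if len(cache) == 0:              # first chunk of a new stream
            self.init_cache(cache, **kwargs)
        # ... chunk-wise encode/decode against the mutable cache ...
        return [], cache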