FunASR/funasr/models/sense_voice/model.py
zhifu gao d19f48e174
Dev gzf exp (#1593)
* update with main (#1582, #1592)

* Expose the max_end_silence_time to the user (#1532)

* finetune

* fix: resolve IndexError when using spk model and the audio contains only 1 segment (#1535)

* install requirements automatically

* v1.0.19

* train / docs updates (#1548, #1551, #1553, #1554, #1555, #1557, #1559, #1561, #1562, #1567, #1574)

* <funasr>: <punc online> (#1552)

1. Fix the issue where the first and second English words were incorrectly merged when adding punctuation.

* Fix the issue introduced in commit 87b62d6895 where full-sentence English punctuation prediction deleted the space between the last two words. (#1556)

* bugfix seg_dict_file

* whisper_lib for sense voice

* aishell recipe

* sense voice (#1568)

* bugfix (#1580)

* v1.0.20

* update demo page (#1585)

* ctc

* sensevoice

---------

Co-authored-by: BOBOTANG <tzfjobmail@gmail.com>
Co-authored-by: Atomie CHEN <atomic_cwh@163.com>
Co-authored-by: Carl <415692979@qq.com>
Co-authored-by: carl.che <carl.che@cloudminds.com>
Co-authored-by: bltcn <blt@tom.com>
2024-04-08 18:51:53 +08:00


from dataclasses import dataclass
from typing import Dict
from typing import Iterable, Optional
import time

import numpy as np
import torch
import torch.nn.functional as F
from torch import Tensor
from torch import nn

from . import whisper_lib as whisper
from funasr.utils.load_utils import load_audio_text_image_video, extract_fbank
from funasr.register import tables


@tables.register("model_classes", "SenseVoice")
class SenseVoice(nn.Module):
    def __init__(self, *args, **kwargs):
        super().__init__()
        hub = kwargs.get("hub", "funasr")
        dims = kwargs.get("dims", {})
        dims = whisper.model.ModelDimensions(**dims)
        model = whisper.model.Whisper(dims=dims)
        self.model = model
        self.encoder_output_size = self.model.dims.n_audio_state

    def forward(self):
        pass

    def inference(
        self,
        data_in,
        data_lengths=None,
        key: list = None,
        tokenizer=None,
        frontend=None,
        **kwargs,
    ):
        if kwargs.get("batch_size", 1) > 1:
            raise NotImplementedError("batch decoding is not implemented")

        # Lazily build and cache a Whisper frontend if none was provided.
        if frontend is None and not hasattr(self, "frontend"):
            frontend_class = tables.frontend_classes.get("WhisperFrontend")
            frontend = frontend_class(
                n_mels=self.model.dims.n_mels,
                do_pad_trim=kwargs.get("do_pad_trim", True),
            )
            self.frontend = frontend
        else:
            frontend = frontend if frontend is not None else self.frontend

        meta_data = {}
        if isinstance(data_in, torch.Tensor) and kwargs.get("data_type", "sound") == "fbank":
            # Input is already fbank features.
            speech, speech_lengths = data_in, data_lengths
            if len(speech.shape) < 3:
                speech = speech[None, :, :]
            if speech_lengths is None:
                speech_lengths = speech.shape[1]
        else:
            # Load audio and extract fbank features.
            time1 = time.perf_counter()
            audio_sample_list = load_audio_text_image_video(
                data_in,
                fs=frontend.fs if hasattr(frontend, "fs") else 16000,
                audio_fs=kwargs.get("fs", 16000),
                data_type=kwargs.get("data_type", "sound"),
                tokenizer=tokenizer,
            )
            time2 = time.perf_counter()
            meta_data["load_data"] = f"{time2 - time1:0.3f}"
            speech, speech_lengths = extract_fbank(
                audio_sample_list,
                data_type=kwargs.get("data_type", "sound"),
                frontend=frontend,
            )
            time3 = time.perf_counter()
            meta_data["extract_feat"] = f"{time3 - time2:0.3f}"
            frame_shift = frontend.frame_shift if hasattr(frontend, "frame_shift") else 10
            lfr_n = frontend.lfr_n if hasattr(frontend, "lfr_n") else 1
            meta_data["batch_data_time"] = (
                speech_lengths.sum().item() * frame_shift * lfr_n / 1000
            )

        speech = speech.to(device=kwargs["device"])[0, :, :]
        speech_lengths = speech_lengths.to(device=kwargs["device"])

        # Build the decoding prompt: each task name is wrapped as a <|...|>
        # special token and appended to the start-of-transcript token.
        task = kwargs.get("task", "ASR")
        if isinstance(task, str):
            task = [task]
        task = "".join([f"<|{x}|>" for x in task])
        initial_prompt = kwargs.get("initial_prompt", f"<|startoftranscript|>{task}")

        language = kwargs.get("language", None)
        language = None if language == "auto" else language
        # if language is None:
        #     # detect the spoken language
        #     _, probs = self.model.detect_language(speech, initial_prompt=initial_prompt)
        #     print(f"Detected language: {max(probs, key=probs.get)}")
        #     language = max(probs, key=probs.get)
        # language = language if kwargs.get("language", None) is None else kwargs.get("language")

        # decode the audio
        vocab_path = kwargs.get("vocab_path", None)
        options = whisper.DecodingOptions(
            language=language,
            fp16=False,
            without_timestamps=True,
            initial_prompt=initial_prompt,
            vocab_path=vocab_path,
        )
        result = whisper.decode(self.model, speech, options)

        results = [{"key": key[0], "text": result.text}]
        return results, meta_data
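For reference, the prompt assembly inside `inference` can be sketched as a standalone helper. This is a hypothetical function (`build_initial_prompt` does not exist in the source); it only mirrors the token-formatting logic that `inference` applies to the `task` kwarg before passing `initial_prompt` to `whisper.DecodingOptions`:

```python
def build_initial_prompt(task="ASR"):
    # Mirrors SenseVoice.inference: a single task name is promoted to a
    # one-element list, each task is wrapped as a <|...|> special token,
    # and the result is appended to the start-of-transcript token.
    if isinstance(task, str):
        task = [task]
    task_tokens = "".join(f"<|{t}|>" for t in task)
    return f"<|startoftranscript|>{task_tokens}"


print(build_initial_prompt())              # <|startoftranscript|><|ASR|>
print(build_initial_prompt(["ASR", "LID"]))  # <|startoftranscript|><|ASR|><|LID|>
```

Callers can override this default entirely by passing `initial_prompt=...` to `inference`.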