mirror of
https://github.com/modelscope/FunASR
synced 2025-09-15 14:48:36 +08:00
117 lines
4.0 KiB
Python
import os
import json
import torch
import logging

import librosa
import random
import torch.distributed as dist

from funasr.register import tables

@tables.register("index_ds_classes", "OpenAIIndexDSJsonl")
class OpenAIIndexDSJsonl(torch.utils.data.Dataset):

    def __init__(self, path: str, **kwargs):
        super().__init__()

        self.max_source_length = kwargs.get("max_source_length", 3000)
        self.min_source_length = kwargs.get("min_source_length", 0)
        self.max_target_length = kwargs.get("max_target_length", 2048)
        self.min_target_length = kwargs.get("min_target_length", 0)
        self.max_token_length = kwargs.get("max_token_length", 2200)

        is_training = kwargs.get("is_training", True)
        if not (path.endswith(".jsonl") or path.endswith(".json")):
            # path is a plain-text file listing one jsonl path per line
            data_split_num = kwargs.get("data_split_num", 1)
            data_split_i = kwargs.get("data_split_i", 0)

            if not is_training:
                data_split_num = 1
                data_split_i = 0
            with open(path, encoding="utf-8") as fin:
                file_list_all = fin.readlines()

            # ceil division: each slice holds at most num_per_slice files
            num_per_slice = (len(file_list_all) - 1) // data_split_num + 1
            file_list = file_list_all[
                data_split_i * num_per_slice : (data_split_i + 1) * num_per_slice
            ]
            logging.info(
                f"is_training: {is_training}, data_split_num: {data_split_num}, "
                f"data_split_i: {data_split_i}, \nfile_list: {file_list}, "
                f"\nfile_list_all: {file_list_all}"
            )

        else:
            file_list = [path]

        contents = []
        for file_json in file_list:
            with open(file_json.strip(), encoding="utf-8") as fin:
                for line in fin:
                    data_dict = json.loads(line.strip())
                    data = data_dict["messages"]
                    speech_length = data_dict.get("speech_length", -1) // 8
                    text_length = data_dict.get("text_length", 0)
                    if speech_length > self.max_source_length:
                        logging.info(
                            f"speech_length: {speech_length} > {self.max_source_length}, drop it"
                        )
                        continue
                    if text_length > self.max_target_length:
                        continue

                    system, user, assistant = [], [], []
                    for item in data:
                        role = item["role"]
                        content = item["content"]
                        if role == "system":
                            system.append(content)
                        elif role == "user":
                            user.append(content)
                        elif role == "assistant":
                            assistant.append(content)

                    # replicate the system prompt once per user turn
                    system = system * len(user)

                    contents_i = {
                        "system": system,
                        "user": user,
                        "assistant": assistant,
                        "source_len": speech_length + text_length,
                    }
                    contents.append(contents_i)

        self.contents = contents

        logging.info("total number of samples: {}, {}".format(len(self.contents), path))

    def __len__(self):
        return len(self.contents)

    def __getitem__(self, index):
        data = self.contents[index]
        return data

    def get_source_len(self, data_dict):
        source_len = data_dict.get("source_len", -1)
        if source_len < 0:
            # fall back to the number of turns when no length metadata exists
            source_len = len(data_dict["system"]) + len(data_dict["user"])
        return source_len

    def get_target_len(self, data_dict):
        return 0


if __name__ == "__main__":
    index_ds = OpenAIIndexDSJsonl(
        path="/Users/zhifu/funasr1.0/test_local/data_tmp/tmp_wav_10.jsonl"
    )
    print(index_ds.contents)
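The slicing in `__init__` distributes the jsonl file list across `data_split_num` workers using ceil division, so every file lands in exactly one slice and only the last slice can be short. A minimal standalone sketch (`split_file_list` is a hypothetical helper name, not part of the class):

```python
def split_file_list(file_list_all, data_split_num, data_split_i):
    # ceil(len / data_split_num): every slice but possibly the last is full
    num_per_slice = (len(file_list_all) - 1) // data_split_num + 1
    return file_list_all[
        data_split_i * num_per_slice : (data_split_i + 1) * num_per_slice
    ]

files = [f"part_{i}.jsonl" for i in range(10)]
# 10 files over 4 splits -> slice sizes 3, 3, 3, 1
print([split_file_list(files, 4, i) for i in range(4)])
```

Note that when `is_training` is false, the class forces `data_split_num = 1` so evaluation always sees the full file list.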
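For reference, here is how one jsonl record in the OpenAI `messages` format maps to a `contents` entry. The field names (`messages`, `role`, `content`, `speech_length`, `text_length`) come from the parsing code above; the concrete values are invented for illustration, and the division of `speech_length` by 8 mirrors `__init__` (presumably a frame-subsampling factor):

```python
import json

line = json.dumps({
    "messages": [
        {"role": "system", "content": "You are a speech assistant."},
        {"role": "user", "content": "<speech>"},
        {"role": "assistant", "content": "hello world"},
    ],
    "speech_length": 800,
    "text_length": 2,
})

data_dict = json.loads(line)
msgs = data_dict["messages"]
system = [m["content"] for m in msgs if m["role"] == "system"]
user = [m["content"] for m in msgs if m["role"] == "user"]
assistant = [m["content"] for m in msgs if m["role"] == "assistant"]
system = system * len(user)  # one copy of the system prompt per user turn

entry = {
    "system": system,
    "user": user,
    "assistant": assistant,
    # source_len = speech_length // 8 + text_length, as in __init__
    "source_len": data_dict["speech_length"] // 8 + data_dict["text_length"],
}
print(entry["source_len"])  # 800 // 8 + 2 = 102
```

`source_len` is what `get_source_len` later reads back for length-based batching, which is why it is stored alongside the text fields.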