Mirror of https://github.com/modelscope/FunASR, synced 2025-09-15 14:48:36 +08:00
import os
import json
import torch
import logging

import librosa
import random
import torch.distributed as dist

from funasr.register import tables


@tables.register("index_ds_classes", "IndexDSJsonl")
@tables.register("index_ds_classes", "IndexDSJsonlRankFull")
@tables.register("index_ds_classes", "IndexDSJsonlRankSplit")
class IndexDSJsonlRankFull(torch.utils.data.Dataset):
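    """Jsonl-backed index dataset.

    ``path`` is either a single ``.jsonl``/``.json`` file or a plain-text list of
    jsonl paths (optionally sliced with ``data_split_num``/``data_split_i``).
    Each jsonl line becomes one sample: a raw string when it carries a ``text``
    field (SFT data), or a dict with ``source``/``prompt``/``target`` plus length
    fields (speech pretraining data), filtered by the configured length limits.
    Every rank keeps the full sample list; the per-rank sharding below is kept
    only as commented-out code.
    """
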
    def __init__(self, path: str, **kwargs):
        super().__init__()
        self.max_source_length = kwargs.get("max_source_length", 2048)
        self.min_source_length = kwargs.get("min_source_length", 0)
        self.max_target_length = kwargs.get("max_target_length", 2048)
        self.min_target_length = kwargs.get("min_target_length", 0)
        self.max_token_length = kwargs.get("max_token_length", 2200)

        is_training = kwargs.get("is_training", True)
        if not (path.endswith(".jsonl") or path.endswith(".json")):
            # path is a plain-text list file: one jsonl path per line
            data_split_num = kwargs.get("data_split_num", 1)
            data_split_i = kwargs.get("data_split_i", 0)

            if not is_training:
                data_split_num = 1
                data_split_i = 0
            with open(path, encoding="utf-8") as fin:
                file_list_all = fin.readlines()

            num_per_slice = (len(file_list_all) - 1) // data_split_num + 1
            file_list = file_list_all[
                data_split_i * num_per_slice : (data_split_i + 1) * num_per_slice
            ]
            logging.info(
                f"is_training: {is_training}, data_split_num: {data_split_num}, data_split_i: {data_split_i}, \nfile_list: {file_list}, \nfile_list_all: {file_list_all}"
            )

        else:
            file_list = [path]
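        # Worked example of the list-file slicing above (hypothetical numbers):
        # with 100 jsonl paths in the list file and data_split_num=8,
        # num_per_slice = (100 - 1) // 8 + 1 = 13, so data_split_i=0 reads
        # paths [0:13], i=1 reads [13:26], ..., and i=7 reads the remaining [91:100].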
        # total_num = len(file_list)
        # try:
        #     rank = dist.get_rank()
        #     world_size = dist.get_world_size()
        # except:
        #     rank = 0
        #     world_size = 1
        #     logging.info("distributed is not initialized, only single shard")
        #
        # if not kwargs.get("rank_split", False):
        #     logging.info(f"Warning, rank_split disenabled, batch and shuffle data in global")
        #     rank = 0
        #     world_size = 1
        #
        # num_per_rank = total_num // world_size
        # if num_per_rank * world_size < total_num:
        #     logging.info(f"Warning, jsonl file:{total_num} could not be divided by world_size: {world_size}, {path}")
        #     total_num_needed = num_per_rank * world_size
        #
        #     extra_num = total_num_needed - total_num
        #     file_list_tmp = random.choices(file_list, k=extra_num)
        #     file_list += file_list_tmp
        #     logging.info(f"Warning, after random choices: {file_list}")
        #
        # file_list_rank = file_list[rank * num_per_rank:(rank + 1) * num_per_rank]
        #
        # logging.info(
        #     f"is_training: {is_training}, file_list_rank: {file_list_rank}")

        # contents = []
        # for file_json in file_list_rank:
        contents = []
        for file_json in file_list:
            with open(file_json.strip(), encoding="utf-8") as fin:
                for line in fin:
                    data = json.loads(line.strip())
                    if "text" in data:  # for sft
                        contents.append(data["text"])
                    if "source" in data:  # for speech lab pretrain
                        prompt = data.get("prompt", "<ASR>")
                        source = data["source"].replace(
                            "/cpfs01", "/cpfs_speech/data"
                        )  # path rewrite only used in the Alibaba GPU group
                        target = data["target"]
                        source_len = data.get("source_len", 1)
                        target_len = data.get("target_len", 0)
                        if "aishell" in source:
                            target = target.replace(" ", "")
                        # drop samples whose source/target/total lengths fall
                        # outside the configured limits
                        if (
                            source_len < self.min_source_length
                            or source_len > self.max_source_length
                        ):
                            continue
                        if (
                            target_len < self.min_target_length
                            or target_len > self.max_target_length
                        ):
                            continue

                        if (source_len + target_len) > self.max_token_length:
                            continue

                        contents_i = {
                            "source": source,
                            "prompt": prompt,
                            "target": target,
                            "source_len": source_len,
                            "target_len": target_len,
                        }
                        text_language = data.get("text_language", None)
                        if text_language is not None:
                            contents_i["text_language"] = text_language
                        if "emo_target" in data:
                            contents_i["emo_target"] = data["emo_target"]
                        if "event_target" in data:
                            contents_i["event_target"] = data["event_target"]
                        if "with_or_wo_itn" in data:
                            contents_i["with_or_wo_itn"] = data["with_or_wo_itn"]
                        # audio_language = data.get("audio_language", None)
                        # if audio_language is not None:
                        #     contents_i["audio_language"] = audio_language
                        contents.append(contents_i)

        self.contents = contents

        logging.info("total number of samples: {}, {}".format(len(self.contents), path))
    def __len__(self):
        return len(self.contents)

    def __getitem__(self, index):
        data = self.contents[index]
        return data

    def get_source_len(self, data_dict):
        return data_dict.get("source_len", 1)

    def get_target_len(self, data_dict):
        return data_dict.get("target_len", 0)
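

# A minimal usage sketch, not part of the original module: it assumes a local
# "data/train.jsonl" file (hypothetical path) with one JSON object per line
# containing at least "source", "target", "source_len" and "target_len" keys.
# In FunASR the class is normally constructed through the "index_ds_classes"
# registry rather than instantiated by hand.
if __name__ == "__main__":
    ds = IndexDSJsonlRankFull(
        path="data/train.jsonl",  # hypothetical jsonl file
        max_token_length=2200,
        is_training=True,
    )
    print(f"loaded {len(ds)} samples")
    if len(ds) > 0:
        # a dict with "source", "prompt", "target", ... keys
        # (or a raw string for SFT-style "text" entries)
        print(ds[0])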