Dev gzf exp (#1650)

* sensevoice finetune

* bugfix

* update with main (#1631)

* update seaco finetune

* v1.0.24

---------

Co-authored-by: 维石 <shixian.shi@alibaba-inc.com>

* sensevoice

* update with main (#1638)

* update seaco finetune

* v1.0.24

* update rwkv template

---------

Co-authored-by: 维石 <shixian.shi@alibaba-inc.com>

* sensevoice

* sense voice

---------

Co-authored-by: 维石 <shixian.shi@alibaba-inc.com>
Author: zhifu gao (committed by GitHub)
Date: 2024-04-23 19:51:32 +08:00
Parent: 8795bf5bf1
Commit: 61d631fb5b

@@ -34,7 +34,7 @@ class IndexDSJsonlRankFull(torch.utils.data.Dataset):
         with open(path, encoding='utf-8') as fin:
             file_list_all = fin.readlines()
-            num_per_slice = len(file_list_all) // data_split_num
+            num_per_slice = (len(file_list_all)-1) // data_split_num + 1
             file_list = file_list_all[data_split_i * num_per_slice:(data_split_i + 1) * num_per_slice]
             logging.info(
                 f"is_training: {is_training}, data_split_num: {data_split_num}, data_split_i: {data_split_i}, \nfile_list: {file_list}, \nfile_list_all: {file_list_all}")