mirror of
https://github.com/modelscope/FunASR
synced 2025-09-15 14:48:36 +08:00
84 lines
2.6 KiB
Bash
# Copyright FunASR (https://github.com/alibaba-damo-academy/FunASR). All Rights Reserved.
# MIT License (https://opensource.org/licenses/MIT)

workspace=$(pwd)

# method 1: finetune from the model hub

# which gpu(s) to train or finetune on
export CUDA_VISIBLE_DEVICES="0,1"
gpu_num=$(echo "$CUDA_VISIBLE_DEVICES" | awk -F "," '{print NF}')
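The GPU count is derived by splitting `CUDA_VISIBLE_DEVICES` on commas and counting the fields with awk. A quick sanity check of that expression in isolation:

```shell
# count the devices listed in a comma-separated id string
devices="0,1"
n=$(echo "$devices" | awk -F "," '{print NF}')
echo "$n"   # prints 2 for "0,1"
```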
# model_name from the model hub, or model_dir as a local path

## option 1: download the model automatically
model_name_or_model_dir="iic/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch"

## option 2: download the model via git (model weights may require git-lfs)
#local_path_root=${workspace}/modelscope_models
#mkdir -p ${local_path_root}/${model_name_or_model_dir}
#git clone https://www.modelscope.cn/${model_name_or_model_dir}.git ${local_path_root}/${model_name_or_model_dir}
#model_name_or_model_dir=${local_path_root}/${model_name_or_model_dir}
# data dir, which contains: train.jsonl, val.jsonl
data_dir="../../../data/list"

train_data="${data_dir}/train.jsonl"
val_data="${data_dir}/val.jsonl"

# generate train.jsonl and val.jsonl from wav.scp and text.txt
scp2jsonl \
    ++scp_file_list='["../../../data/list/train_wav.scp", "../../../data/list/train_text.txt"]' \
    ++data_type_list='["source", "target"]' \
    ++jsonl_file_out="${train_data}"

scp2jsonl \
    ++scp_file_list='["../../../data/list/val_wav.scp", "../../../data/list/val_text.txt"]' \
    ++data_type_list='["source", "target"]' \
    ++jsonl_file_out="${val_data}"
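For reference, the scp2jsonl inputs follow the Kaldi-style convention: each line of wav.scp maps an utterance id to an audio path, and each line of text.txt maps the same id to its transcript; scp2jsonl joins them by id into one JSON object per line. A minimal sketch of the input files (ids and paths here are purely illustrative):

```shell
# toy wav.scp: "<utt_id> <audio_path>" per line
cat > /tmp/train_wav.scp <<'EOF'
utt_001 /data/audio/utt_001.wav
utt_002 /data/audio/utt_002.wav
EOF

# toy text.txt: "<utt_id> <transcript>" per line, ids matching wav.scp
cat > /tmp/train_text.txt <<'EOF'
utt_001 hello world
utt_002 good morning
EOF
```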
# experiment output dir
output_dir="./outputs"
log_file="${output_dir}/log.txt"

mkdir -p "${output_dir}"
echo "log_file: ${log_file}"
deepspeed_config=${workspace}/../../ds_stage1.json

DISTRIBUTED_ARGS="
    --nnodes ${WORLD_SIZE:-1} \
    --nproc_per_node $gpu_num \
    --node_rank ${RANK:-0} \
    --master_addr ${MASTER_ADDR:-127.0.0.1} \
    --master_port ${MASTER_PORT:-26669}
"

echo $DISTRIBUTED_ARGS

torchrun $DISTRIBUTED_ARGS \
    ../../../funasr/bin/train_ds.py \
    ++model="${model_name_or_model_dir}" \
    ++train_data_set_list="${train_data}" \
    ++valid_data_set_list="${val_data}" \
    ++dataset="AudioDataset" \
    ++dataset_conf.index_ds="IndexDSJsonl" \
    ++dataset_conf.data_split_num=1 \
    ++dataset_conf.batch_sampler="BatchSampler" \
    ++dataset_conf.batch_size=6000 \
    ++dataset_conf.sort_size=1024 \
    ++dataset_conf.batch_type="token" \
    ++dataset_conf.num_workers=4 \
    ++train_conf.max_epoch=50 \
    ++train_conf.log_interval=1 \
    ++train_conf.resume=true \
    ++train_conf.validate_interval=2000 \
    ++train_conf.save_checkpoint_interval=2000 \
    ++train_conf.keep_nbest_models=20 \
    ++train_conf.use_deepspeed=false \
    ++train_conf.deepspeed_config=${deepspeed_config} \
    ++optim_conf.lr=0.0002 \
    ++output_dir="${output_dir}" &> "${log_file}"
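The trailing `&>` is a bash-specific redirection that sends both stdout and stderr of the training run into the log file (the portable equivalent is `> file 2>&1`). A small demonstration, using a throwaway log path:

```shell
# '&>' captures stdout and stderr together (bash only)
{ echo out; echo err >&2; } &> /tmp/both.log
# /tmp/both.log now contains both the "out" and "err" lines
cat /tmp/both.log
```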