FunASR/examples/industrial_data_pretraining/sense_voice/demo_fsmn.py
#!/usr/bin/env python3
# -*- encoding: utf-8 -*-
# Copyright FunASR (https://github.com/alibaba-damo-academy/FunASR). All Rights Reserved.
# MIT License (https://opensource.org/licenses/MIT)
from funasr import AutoModel

# Load the SenseVoice FSMN model together with an FSMN VAD model for segmenting long audio.
# Note: the model path below is a developer-local checkpoint; replace it with your own model
# directory or a ModelScope model id.
model = AutoModel(
    model="/Users/zhifu/Downloads/modelscope_models/SenseVoiceModelscopeFSMN",
    vad_model="iic/speech_fsmn_vad_zh-cn-16k-common-pytorch",
    vad_kwargs={"max_single_segment_time": 30000},  # split VAD segments longer than 30 s (ms)
)

input_wav = (
    "https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav"
)

# Decoding options passed through to the SenseVoice model.
DecodingOptions = {
    "task": ("ASR", "AED", "SER"),
    "language": "auto",
    "fp16": True,
    "gain_event": True,
}

res = model.generate(input=input_wav, batch_size_s=0, DecodingOptions=DecodingOptions, beam_size=5)

print(res)
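
# A minimal sketch of reading the result, assuming the usual FunASR output format
# (a list of dicts that include a "text" field); adjust if your version returns a
# different structure.
for item in res:
    print(item.get("text", ""))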