docs
commit 048543b1cd
parent e950a92173
@@ -45,10 +45,9 @@ Where,
 - `max_single_segment_time`: The maximum duration of an audio segment produced by VAD, in `ms`.
 
 Suggestion: When encountering OOM (Out of Memory) issues with long audio inputs, since GPU memory usage grows with the square of the audio duration, there are three possible scenarios:
-
-a) In the initial inference stage, GPU memory usage depends primarily on `batch_size_token`; reducing this value appropriately lowers memory usage.
-b) In the middle of the inference process, when a long audio segment produced by VAD is encountered and the total number of tokens is still smaller than `batch_size_token` yet OOM persists, reduce `batch_size_token_threshold_s`; any segment whose duration exceeds this threshold is decoded with a forced batch size of 1.
-c) Towards the end of the inference process, when a long audio segment produced by VAD is encountered whose total number of tokens is smaller than `batch_size_token` but whose duration exceeds `batch_size_token_threshold_s`, even the forced batch size of 1 may still cause OOM; in that case, reduce `max_single_segment_time` to shorten the segments generated by VAD.
+- a) In the initial inference stage, GPU memory usage depends primarily on `batch_size_token`; reducing this value appropriately lowers memory usage.
+- b) In the middle of the inference process, when a long audio segment produced by VAD is encountered and the total number of tokens is still smaller than `batch_size_token` yet OOM persists, reduce `batch_size_token_threshold_s`; any segment whose duration exceeds this threshold is decoded with a forced batch size of 1.
+- c) Towards the end of the inference process, when a long audio segment produced by VAD is encountered whose total number of tokens is smaller than `batch_size_token` but whose duration exceeds `batch_size_token_threshold_s`, even the forced batch size of 1 may still cause OOM; in that case, reduce `max_single_segment_time` to shorten the segments generated by VAD.
 
 #### [Paraformer-online Model](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online/summary)
 
 ##### Streaming Decoding
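Put together, the a)–c) guidance above corresponds to three knobs passed through `param_dict`. Below is a minimal sketch, assuming the modelscope pipeline interface that the streaming example further down already uses; the model ID and the concrete values are illustrative assumptions, not recommended settings:

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Build the pipeline once; the model ID here is an assumed example.
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
)

# The three OOM knobs from the suggestion above; tighten them in the
# order a) -> b) -> c). The values below are placeholder assumptions.
param_dict = {
    'batch_size_token': 4000,            # a) lower this first if OOM occurs early
    'batch_size_token_threshold_s': 40,  # b) segments longer than this fall back to batch size 1
    'max_single_segment_time': 15000,    # c) cap VAD segment length (ms) if batch size 1 still OOMs
}

rec_result = inference_pipeline(audio_in='long_audio.wav', param_dict=param_dict)
print(rec_result)
```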
@@ -36,7 +36,7 @@ chunk_stride = 1600 # 100ms
 speech_chunk = speech[0:chunk_stride]
 rec_result = inference_pipeline(audio_in=speech_chunk, param_dict=param_dict)
 print(rec_result)
-# next chunk, 480ms
+# next chunk, 100ms
 speech_chunk = speech[chunk_stride:chunk_stride+chunk_stride]
 rec_result = inference_pipeline(audio_in=speech_chunk, param_dict=param_dict)
 print(rec_result)
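The corrected comment follows from the arithmetic: at a 16 kHz sample rate, 1600 samples cover 1600 / 16000 = 0.1 s, i.e. 100 ms, so each `chunk_stride`-sized slice is a 100 ms chunk. As a rough sketch of extending the two-chunk example into a full streaming loop, reusing the `inference_pipeline` from the sketch above (the `soundfile` loading, the file name, and the `cache`/`is_final` keys in `param_dict` are assumptions layered on the pipeline call style shown in the diff):

```python
import soundfile

# Assumed 16 kHz mono input; 1600 samples / 16000 Hz = 100 ms per chunk.
speech, sample_rate = soundfile.read('example.wav')
chunk_stride = 1600  # 100ms

# Streaming state carried across chunks; these key names are assumptions.
param_dict = {'cache': dict()}

for start in range(0, len(speech), chunk_stride):
    speech_chunk = speech[start:start + chunk_stride]
    # Mark the last chunk so the recognizer can flush its remaining state.
    param_dict['is_final'] = start + chunk_stride >= len(speech)
    rec_result = inference_pipeline(audio_in=speech_chunk, param_dict=param_dict)
    print(rec_result)
```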