Mirror of https://github.com/modelscope/FunASR, synced 2025-09-15 14:48:36 +08:00.
Latest commit: Fixed a bug where calling torch.cuda.empty_cache() caused extra memory usage on 'cuda:0', leading to unexpected 'out of memory' errors in multi-GPU environments. References:

- https://github.com/pytorch/pytorch/issues/25752
- https://github.com/pytorch/pytorch/issues/144025
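The linked PyTorch issues describe how `torch.cuda.empty_cache()`, when invoked outside a device context, can initialize a fresh CUDA context on `cuda:0` and consume several hundred MB there even if the process only works on another GPU. A minimal sketch of the usual workaround is below; the helper name `empty_cache_on` is hypothetical and not part of FunASR's or PyTorch's API — the point is scoping the call with `torch.cuda.device(...)` so it targets the intended device:

```python
import torch


def empty_cache_on(device: torch.device) -> None:
    """Release cached allocator blocks on `device` only.

    Hypothetical helper illustrating the workaround: wrapping
    empty_cache() in a torch.cuda.device(...) context keeps PyTorch
    from implicitly creating a CUDA context on 'cuda:0'.
    """
    # Skip CPU devices, and skip entirely if CUDA was never touched:
    # calling empty_cache() before any allocation would itself
    # initialize a context and waste memory.
    if device.type != "cuda" or not torch.cuda.is_initialized():
        return
    with torch.cuda.device(device):
        torch.cuda.empty_cache()


# Example: free cached memory on the second GPU without touching cuda:0.
empty_cache_on(torch.device("cpu"))  # no-op on CPU, safe to call anywhere
```

Calling `torch.cuda.empty_cache()` bare inside a `DataParallel`/multi-process worker is the pattern the commit removes; the scoped form above is one common replacement.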
Directory contents:

- auto/
- bin/
- datasets/
- download/
- frontends/
- losses/
- metrics/
- models/
- optimizers/
- schedulers/
- tokenizer/
- train_utils/
- utils/
- __init__.py
- register.py
- version.txt