Mirror of https://github.com/FunAudioLLM/SenseVoice.git (synced 2025-09-15 15:08:35 +08:00)

Commit 90b0b77745 ("docs"), parent 6156332296
38  .github/ISSUE_TEMPLATE/ask_questions.md (vendored, new file)

@@ -0,0 +1,38 @@
+---
+name: ❓ Questions/Help
+about: If you have questions, please first search existing issues and docs
+labels: 'question, needs triage'
+---
+
+Notice: In order to resolve issues more efficiently, please raise issues following the template.
+(注意:为了更加高效率解决您遇到的问题,请按照模板提问,补充细节)
+
+## ❓ Questions and Help
+
+### Before asking:
+
+1. Search the issues.
+2. Search the docs.
+
+<!-- If you still can't find what you need: -->
+
+#### What is your question?
+
+#### Code
+
+<!-- Please paste a code snippet if your question requires it! -->
+
+#### What have you tried?
+
+#### What's your environment?
+
+- OS (e.g., Linux):
+- FunASR Version (e.g., 1.0.0):
+- ModelScope Version (e.g., 1.11.0):
+- PyTorch Version (e.g., 2.0.0):
+- How you installed funasr (`pip`, source):
+- Python version:
+- GPU (e.g., V100M32):
+- CUDA/cuDNN version (e.g., cuda11.7):
+- Docker version (e.g., funasr-runtime-sdk-cpu-0.4.1):
+- Any other relevant information:
47  .github/ISSUE_TEMPLATE/bug_report.md (vendored, new file)

@@ -0,0 +1,47 @@
+---
+name: 🐛 Bug Report
+about: Submit a bug report to help us improve
+labels: 'bug, needs triage'
+---
+
+Notice: In order to resolve issues more efficiently, please raise issues following the template.
+(注意:为了更加高效率解决您遇到的问题,请按照模板提问,补充细节)
+
+## 🐛 Bug
+
+<!-- A clear and concise description of what the bug is. -->
+
+### To Reproduce
+
+Steps to reproduce the behavior (**always include the command you ran**):
+
+1. Run cmd '....'
+2. See error
+
+<!-- If you have a code sample, error messages, or stack traces, please provide them here as well -->
+
+#### Code sample
+
+<!-- Ideally attach a minimal code sample to reproduce the described issue.
+Minimal means having the shortest code but still preserving the bug. -->
+
+### Expected behavior
+
+<!-- A clear and concise description of what you expected to happen. -->
+
+### Environment
+
+- OS (e.g., Linux):
+- FunASR Version (e.g., 1.0.0):
+- ModelScope Version (e.g., 1.11.0):
+- PyTorch Version (e.g., 2.0.0):
+- How you installed funasr (`pip`, source):
+- Python version:
+- GPU (e.g., V100M32):
+- CUDA/cuDNN version (e.g., cuda11.7):
+- Docker version (e.g., funasr-runtime-sdk-cpu-0.4.1):
+- Any other relevant information:
+
+### Additional context
+
+<!-- Add any other context about the problem here. -->
1  .github/ISSUE_TEMPLATE/config.yaml (vendored, new file)

@@ -0,0 +1 @@
+blank_issues_enabled: false
15  .github/ISSUE_TEMPLATE/error_docs.md (vendored, new file)

@@ -0,0 +1,15 @@
+---
+name: 📚 Documentation/Typos
+about: Report an issue related to documentation or a typo
+labels: 'documentation, needs triage'
+---
+
+## 📚 Documentation
+
+For typos and doc fixes, please go ahead and:
+
+1. Create an issue.
+2. Fix the typo.
+3. Submit a PR.
+
+Thanks!
@@ -121,7 +121,7 @@ model = AutoModel(
 res = model.generate(
     input=f"{model.model_path}/example/en.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size_s=60,
     merge_vad=True, #
@@ -150,7 +150,7 @@ model = AutoModel(model=model_dir, trust_remote_code=True, device="cuda:0")
 res = model.generate(
     input=f"{model.model_path}/example/en.mp3",
     cache={},
-    language="zh", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="zh", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=False,
     batch_size=64,
 )
@@ -172,7 +172,7 @@ m, kwargs = SenseVoiceSmall.from_pretrained(model=model_dir, device="cuda:0")
 
 res = m.inference(
     data_in=f"{kwargs['model_path']}/example/en.mp3",
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=False,
     **kwargs,
 )
@@ -121,7 +121,7 @@ model = AutoModel(
 res = model.generate(
     input=f"{model.model_path}/example/en.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size_s=60,
     merge_vad=True, #
@@ -150,7 +150,7 @@ model = AutoModel(model=model_dir, trust_remote_code=True, device="cuda:0")
 res = model.generate(
     input=f"{model.model_path}/example/en.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size=64,
 )
@@ -172,7 +172,7 @@ m, kwargs = SenseVoiceSmall.from_pretrained(model=model_dir, device="cuda:0")
 
 res = m.inference(
     data_in=f"{kwargs['model_path']}/example/en.mp3",
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=False,
     **kwargs,
 )
@@ -125,7 +125,7 @@ model = AutoModel(
 res = model.generate(
     input=f"{model.model_path}/example/en.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size_s=60,
     merge_vad=True, #
@@ -154,7 +154,7 @@ model = AutoModel(model=model_dir, trust_remote_code=True, device="cuda:0")
 res = model.generate(
     input=f"{model.model_path}/example/en.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size=64,
 )
@@ -176,7 +176,7 @@ m, kwargs = SenseVoiceSmall.from_pretrained(model=model_dir, device="cuda:0")
 
 res = m.inference(
     data_in=f"{kwargs['model_path']}/example/en.mp3",
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=False,
     **kwargs,
 )
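Every hunk above makes the same one-character fix: the stray "zn" in the comment listing accepted `language` values becomes "zh", the code the examples actually use for Mandarin. As a hedged sketch of why the typo matters (the helper below is purely illustrative and not part of the funasr/SenseVoice API), a caller that validated its `language` argument against the documented set would reject the misspelled code:

```python
# Hypothetical validation helper (illustrative only, not part of funasr):
# checks a requested language code against the set documented in the
# corrected comment: "auto", "zh", "en", "yue", "ja", "ko", "nospeech".
SUPPORTED_LANGUAGES = {"auto", "zh", "en", "yue", "ja", "ko", "nospeech"}

def check_language(language: str) -> str:
    """Return the code unchanged if supported, else raise ValueError."""
    if language not in SUPPORTED_LANGUAGES:
        raise ValueError(
            f"unsupported language {language!r}; "
            f"choose one of {sorted(SUPPORTED_LANGUAGES)}"
        )
    return language

print(check_language("zh"))  # prints: zh
```

With such a check in place, `check_language("zn")` raises immediately, whereas a code that only appears in a comment fails silently, which is why the docs fix is worth a commit of its own.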
10  demo1.py

@@ -22,7 +22,7 @@ model = AutoModel(
 res = model.generate(
     input=f"{model.model_path}/example/en.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size_s=60,
     merge_vad=True, #
@@ -35,7 +35,7 @@ print(text)
 res = model.generate(
     input=f"{model.model_path}/example/zh.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size_s=60,
     merge_vad=True, #
@@ -48,7 +48,7 @@ print(text)
 res = model.generate(
     input=f"{model.model_path}/example/yue.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size_s=60,
     merge_vad=True, #
@@ -61,7 +61,7 @@ print(text)
 res = model.generate(
     input=f"{model.model_path}/example/ja.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size_s=60,
     merge_vad=True, #
@@ -75,7 +75,7 @@ print(text)
 res = model.generate(
     input=f"{model.model_path}/example/ko.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size_s=60,
     merge_vad=True, #
2  demo2.py

@@ -13,7 +13,7 @@ m, kwargs = SenseVoiceSmall.from_pretrained(model=model_dir, device="cuda:0")
 
 res = m.inference(
     data_in=f"{kwargs['model_path']}/example/en.mp3",
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=False,
     **kwargs,
 )