fix format error

yhliang 2023-05-10 17:12:30 +08:00
parent 555ca020b7
commit 48ee4dd536
20 changed files with 19 additions and 15 deletions

View File

@@ -141,9 +141,10 @@
|—— Test_Ali_near
|—— Train_Ali_far
|—— Train_Ali_near
Before running `run_m2met_2023_infer.sh`, you need to place the new test set `Test_2023_Ali_far` (to be released after the challenge starts), which contains only raw audio, in the `./dataset` directory. Then put the given `wav.scp`, `wav_raw.scp`, `segments`, `utt2spk` and `spk2utt` in the `./data/Test_2023_Ali_far` directory.
```shell
data/Test_2023_Ali_far
</pre></div>
</div>
<p>Before running <code>run_m2met_2023_infer.sh</code>, you need to place the new test set <code>Test_2023_Ali_far</code> (to be released after the challenge starts), which contains only raw audio, in the <code>./dataset</code> directory. Then put the given <code>wav.scp</code>, <code>wav_raw.scp</code>, <code>segments</code>, <code>utt2spk</code> and <code>spk2utt</code> in the <code>./data/Test_2023_Ali_far</code> directory.</p>
<div class="highlight-shell"><pre>data/Test_2023_Ali_far
|—— wav.scp
|—— wav_raw.scp
|—— segments

View File

@@ -130,7 +130,7 @@
<p>Automatic speech recognition (ASR) and speaker diarization have made significant strides in recent years, resulting in a surge of speech technology applications across various domains. However, meetings present unique challenges to speech technologies due to their complex acoustic conditions and diverse speaking styles, including overlapping speech, variable numbers of speakers, far-field signals in large conference rooms, and environmental noise and reverberation.</p>
<p>Over the years, several challenges have been organized to advance the development of meeting transcription, including the Rich Transcription evaluation and Computational Hearing in Multisource Environments (CHIME) challenges. The latest iteration of the CHIME challenge has a particular focus on distant automatic speech recognition and developing systems that can generalize across various array topologies and application scenarios. However, while progress has been made in English meeting transcription, language differences remain a significant barrier to achieving comparable results in non-English languages, such as Mandarin. The Multimodal Information Based Speech Processing (MISP) and Multi-Channel Multi-Party Meeting Transcription (M2MeT) challenges have been instrumental in advancing Mandarin meeting transcription. The MISP challenge seeks to address the problem of audio-visual distant multi-microphone signal processing in everyday home environments, while the M2MeT challenge focuses on tackling the speech overlap issue in offline meeting rooms.</p>
<p>The ICASSP2022 M2MeT challenge focuses on meeting scenarios, and it comprises two main tasks: speaker diarization and multi-speaker automatic speech recognition. The former involves identifying who spoke when in the meeting, while the latter aims to transcribe speech from multiple speakers simultaneously, which poses significant technical difficulties due to overlapping speech and acoustic interferences.</p>
<p>Building on the success of the previous M2MeT challenge, we are excited to propose the M2MeT2.0 challenge as an ASRU2023 challenge special session. In the original M2MeT challenge, the evaluation metric was speaker-independent, which meant that the transcription could be determined, but not the corresponding speaker. To address this limitation and further advance the current multi-talker ASR system towards practicality, the M2MeT2.0 challenge proposes the speaker-attributed ASR task with two sub-tracks: fixed and open training conditions. The speaker-attributed automatic speech recognition (ASR) task aims to tackle the practical and challenging problem of identifying “who spoke what at when”. To facilitate reproducible research in this field, we offer a comprehensive overview of the dataset, rules, evaluation metrics, and baseline systems. Furthermore, we will release a carefully curated test set, comprising approximately 10 hours of audio, according to the timeline. The new test set is designed to enable researchers to validate and compare their models’ performance and advance the state of the art in this area.</p>
<p>Building on the success of the previous M2MeT challenge, we are excited to propose the M2MeT2.0 challenge as an ASRU 2023 challenge special session. In the original M2MeT challenge, the evaluation metric was speaker-independent, which meant that the transcription could be determined, but not the corresponding speaker. To address this limitation and further advance the current multi-talker ASR system towards practicality, the M2MeT2.0 challenge proposes the speaker-attributed ASR task with two sub-tracks: fixed and open training conditions. The speaker-attributed automatic speech recognition (ASR) task aims to tackle the practical and challenging problem of identifying “who spoke what at when”. To facilitate reproducible research in this field, we offer a comprehensive overview of the dataset, rules, evaluation metrics, and baseline systems. Furthermore, we will release a carefully curated test set, comprising approximately 10 hours of audio, according to the timeline. The new test set is designed to enable researchers to validate and compare their models’ performance and advance the state of the art in this area.</p>
</section>
<section id="timeline-aoe-time">
<h2>Timeline (AOE Time)<a class="headerlink" href="#timeline-aoe-time" title="Permalink to this heading"></a></h2>

View File

@@ -16,6 +16,7 @@ dataset
|—— Test_Ali_near
|—— Train_Ali_far
|—— Train_Ali_near
```
Before running `run_m2met_2023_infer.sh`, you need to place the new test set `Test_2023_Ali_far` (to be released after the challenge starts), which contains only raw audio, in the `./dataset` directory. Then put the given `wav.scp`, `wav_raw.scp`, `segments`, `utt2spk` and `spk2utt` in the `./data/Test_2023_Ali_far` directory.
```shell
data/Test_2023_Ali_far
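# --- Hedged example (added for illustration, not part of the original
# --- instructions): one way to stage the released files, assuming they were
# --- unpacked to ~/m2met2023_release; adjust the paths to the actual release.
mkdir -p dataset/Test_2023_Ali_far data/Test_2023_Ali_far
cp ~/m2met2023_release/Test_2023_Ali_far/*.wav dataset/Test_2023_Ali_far/   # raw audio only
cp ~/m2met2023_release/{wav.scp,wav_raw.scp,segments,utt2spk,spk2utt} data/Test_2023_Ali_far/
bash run_m2met_2023_infer.sh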

View File

@@ -6,7 +6,7 @@ Over the years, several challenges have been organized to advance the developmen
The ICASSP2022 M2MeT challenge focuses on meeting scenarios, and it comprises two main tasks: speaker diarization and multi-speaker automatic speech recognition. The former involves identifying who spoke when in the meeting, while the latter aims to transcribe speech from multiple speakers simultaneously, which poses significant technical difficulties due to overlapping speech and acoustic interferences.
Building on the success of the previous M2MeT challenge, we are excited to propose the M2MeT2.0 challenge as an ASRU2023 challenge special session. In the original M2MeT challenge, the evaluation metric was speaker-independent, which meant that the transcription could be determined, but not the corresponding speaker. To address this limitation and further advance the current multi-talker ASR system towards practicality, the M2MeT2.0 challenge proposes the speaker-attributed ASR task with two sub-tracks: fixed and open training conditions. The speaker-attributed automatic speech recognition (ASR) task aims to tackle the practical and challenging problem of identifying "who spoke what at when". To facilitate reproducible research in this field, we offer a comprehensive overview of the dataset, rules, evaluation metrics, and baseline systems. Furthermore, we will release a carefully curated test set, comprising approximately 10 hours of audio, according to the timeline. The new test set is designed to enable researchers to validate and compare their models' performance and advance the state of the art in this area.
Building on the success of the previous M2MeT challenge, we are excited to propose the M2MeT2.0 challenge as an ASRU 2023 challenge special session. In the original M2MeT challenge, the evaluation metric was speaker-independent, which meant that the transcription could be determined, but not the corresponding speaker. To address this limitation and further advance the current multi-talker ASR system towards practicality, the M2MeT2.0 challenge proposes the speaker-attributed ASR task with two sub-tracks: fixed and open training conditions. The speaker-attributed automatic speech recognition (ASR) task aims to tackle the practical and challenging problem of identifying "who spoke what at when". To facilitate reproducible research in this field, we offer a comprehensive overview of the dataset, rules, evaluation metrics, and baseline systems. Furthermore, we will release a carefully curated test set, comprising approximately 10 hours of audio, according to the timeline. The new test set is designed to enable researchers to validate and compare their models' performance and advance the state of the art in this area.
## Timeline (AOE Time)
- $ April~29, 2023: $ Challenge and registration open.

File diff suppressed because one or more lines are too long

View File

@@ -6,7 +6,7 @@
## Quick Start
First, install FunASR and ModelScope. ([installation](https://alibaba-damo-academy.github.io/FunASR/en/installation.html))
The baseline system has two scripts for training and testing `run.sh` is used to train the baseline system and evaluate it on the M2MeT Eval and Test sets `run_m2met_2023_infer.sh` runs inference on the new test set to be released for this challenge and generates files in the final submission format.
The baseline system has two scripts for training and testing: `run.sh` is used to train the baseline system and evaluate it on the M2MeT Eval and Test sets, while `run_m2met_2023_infer.sh` runs inference on the new test set to be released for this challenge and generates files in the final submission format.
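A minimal setup sketch follows; the pip package names (`modelscope`, `funasr`) are an assumption on top of the linked installation guide, which remains the authoritative reference.
```shell
# Hedged setup sketch -- the package names are assumed; follow the linked
# installation guide if they differ.
pip install -U modelscope funasr
# After the AliMeeting data has been placed under ./dataset (see below),
# train and evaluate the baseline with:
bash run.sh
```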
Before running `run.sh`, you need to download and extract the [AliMeeting](http://www.openslr.org/119/) dataset yourself and place it under the `./dataset` directory:
```shell
dataset
@@ -16,7 +16,8 @@ dataset
|—— Test_Ali_near
|—— Train_Ali_far
|—— Train_Ali_near
Before running `run_m2met_2023_infer.sh`, you need to place the test set `Test_2023_Ali_far` (containing audio only, to be released on June 16) under the `./dataset` directory. Then place the organizer-provided `wav.scp`, `wav_raw.scp`, `segments`, `utt2spk` and `spk2utt` under the `./data/Test_2023_Ali_far` directory.
```
Before running `run_m2met_2023_infer.sh`, you need to place the test set `Test_2023_Ali_far` (containing audio only, to be released on June 16) under the `./dataset` directory. Then place the organizer-provided `wav.scp`, `wav_raw.scp`, `segments`, `utt2spk` and `spk2utt` under the `./data/Test_2023_Ali_far` directory.
```shell
data/Test_2023_Ali_far
|—— wav.scp

View File

@@ -7,7 +7,7 @@
The ICASSP2022 M2MeT challenge focuses on meeting scenarios and includes two tracks: speaker diarization and multi-speaker automatic speech recognition. The former involves identifying "who spoke when" in a meeting, while the latter aims to recognize speech from multiple speakers simultaneously, where overlapping speech and various kinds of noise pose major technical difficulties.
Building on the success of the previous M2MeT challenge, we will continue with the M2MeT2.0 challenge at ASRU2023. In the previous M2MeT challenge, the evaluation metric was speaker-independent: we could only obtain the recognized text, but could not determine the corresponding speaker.
Building on the success of the previous M2MeT challenge, we will continue with the M2MeT2.0 challenge at ASRU 2023. In the previous M2MeT challenge, the evaluation metric was speaker-independent: we could only obtain the recognized text, but could not determine the corresponding speaker.
To address this limitation and push current multi-speaker speech recognition systems towards practical use, the M2MeT2.0 challenge will be evaluated on a speaker-attributed task, with two sub-tracks for fixed and open training data. By attributing speech to specific speakers, this task aims to improve the accuracy and applicability of multi-speaker ASR systems in real-world environments.
We give a detailed description of the dataset, rules, baseline system, and evaluation methods to further promote research in multi-speaker speech recognition. In addition, we will release a brand-new test set of approximately 10 hours of audio according to the timeline.

View File

File diff suppressed because one or more lines are too long

View File

@@ -133,7 +133,7 @@
<section id="id3">
<h2>Quick Start<a class="headerlink" href="#id3" title="Permalink to this heading"></a></h2>
<p>First, install FunASR and ModelScope. (<a class="reference external" href="https://alibaba-damo-academy.github.io/FunASR/en/installation.html">installation</a>)<br />
The baseline system has two scripts for training and testing <code>run.sh</code> is used to train the baseline system and evaluate it on the M2MeT Eval and Test sets <code>run_m2met_2023_infer.sh</code> runs inference on the new test set to be released for this challenge and generates files in the final submission format.
The baseline system has two scripts for training and testing: <code>run.sh</code> is used to train the baseline system and evaluate it on the M2MeT Eval and Test sets, while <code>run_m2met_2023_infer.sh</code> runs inference on the new test set to be released for this challenge and generates files in the final submission format.
Before running <code>run.sh</code>, you need to download and extract the <a class="reference external" href="http://www.openslr.org/119/">AliMeeting</a> dataset yourself and place it under the <code>./dataset</code> directory:</p>
<div class="highlight-shell"><pre>dataset
|—— Eval_Ali_far
@@ -142,9 +142,10 @@
|—— Test_Ali_near
|—— Train_Ali_far
|—— Train_Ali_near
Before running `run_m2met_2023_infer.sh`, you need to place the test set `Test_2023_Ali_far` (containing audio only, to be released on June 16) under the `./dataset` directory. Then place the organizer-provided `wav.scp`, `wav_raw.scp`, `segments`, `utt2spk` and `spk2utt` under the `./data/Test_2023_Ali_far` directory.
```shell
data/Test_2023_Ali_far
</pre></div>
</div>
<p>Before running <code>run_m2met_2023_infer.sh</code>, you need to place the test set <code>Test_2023_Ali_far</code> (containing audio only, to be released on June 16) under the <code>./dataset</code> directory. Then place the organizer-provided <code>wav.scp</code>, <code>wav_raw.scp</code>, <code>segments</code>, <code>utt2spk</code> and <code>spk2utt</code> under the <code>./data/Test_2023_Ali_far</code> directory.</p>
<div class="highlight-shell"><pre>data/Test_2023_Ali_far
|—— wav.scp
|—— wav_raw.scp
|—— segments

View File

@@ -131,7 +131,7 @@
<p>Recent advances in speech processing technologies such as automatic speech recognition (ASR) and speaker diarization have enabled a wide range of intelligent speech applications. However, the meeting scenario remains a highly challenging task due to its complex acoustic conditions and diverse speaking styles, including overlapping speech, varying numbers of speakers, far-field signals in large conference rooms, and environmental noise and reverberation.</p>
<p>To advance speech recognition for meeting scenarios, several challenges have been organized, such as the Rich Transcription evaluation and the CHiME (Computational Hearing in Multisource Environments) challenges. The latest CHiME challenge focuses on distant automatic speech recognition and on developing systems that generalize across array topologies and application scenarios. However, language differences have limited progress in non-English meeting transcription. The MISP (Multimodal Information Based Speech Processing) and M2MeT (Multi-Channel Multi-Party Meeting Transcription) challenges have contributed to advancing Mandarin meeting transcription. The MISP challenge focuses on audio-visual multimodal approaches to distant multi-microphone signal processing in everyday home environments, while the M2MeT challenge focuses on the speech overlap problem in offline meeting-room transcription.</p>
<p>The ICASSP2022 M2MeT challenge focuses on meeting scenarios and includes two tracks: speaker diarization and multi-speaker automatic speech recognition. The former involves identifying "who spoke when" in a meeting, while the latter aims to recognize speech from multiple speakers simultaneously, where overlapping speech and various kinds of noise pose major technical difficulties.</p>
<p>Building on the success of the previous M2MeT challenge, we will continue with the M2MeT2.0 challenge at ASRU2023. In the previous M2MeT challenge, the evaluation metric was speaker-independent: we could only obtain the recognized text, but could not determine the corresponding speaker.
<p>Building on the success of the previous M2MeT challenge, we will continue with the M2MeT2.0 challenge at ASRU 2023. In the previous M2MeT challenge, the evaluation metric was speaker-independent: we could only obtain the recognized text, but could not determine the corresponding speaker.
To address this limitation and push current multi-speaker speech recognition systems towards practical use, the M2MeT2.0 challenge will be evaluated on a speaker-attributed task, with two sub-tracks for fixed and open training data. By attributing speech to specific speakers, this task aims to improve the accuracy and applicability of multi-speaker ASR systems in real-world environments.
We give a detailed description of the dataset, rules, baseline system, and evaluation methods to further promote research in multi-speaker speech recognition. In addition, we will release a brand-new test set of approximately 10 hours of audio according to the timeline.</p>
</section>