mirror of
https://github.com/modelscope/FunASR
synced 2025-09-15 14:48:36 +08:00
add m2met2 docs cn version
This commit is contained in:
parent: bd6ed1d7af
commit: e09d17de60
.github/workflows/main.yml (vendored, 2 lines changed)
@@ -24,7 +24,7 @@ jobs:
       - uses: ammaraskar/sphinx-action@master
         with:
           docs-folder: "docs_m2met2/"
-          pre-build-command: "pip install jinja2 sphinx_rtd_theme myst_parser"
+          pre-build-command: "pip install jinja2 sphinx_rtd_theme myst-parser"

       - name: deploy copy
         if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/dev_wjm' || github.ref == 'refs/heads/dev_lyh'
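The one-character fix above touches only the pip package name: the MyST distribution on PyPI is canonically named `myst-parser` (pip normalizes hyphens and underscores, so both spellings usually resolve to the same package), while the Sphinx extension is enabled under its module name `myst_parser`. A minimal reminder of the two spellings, matching the conf.py added in this commit:

```python
# Install the distribution (hyphenated name on PyPI):
#   pip install myst-parser
# Enable the module (underscore) in Sphinx's conf.py, as docs_m2met2_cn/conf.py does:
extensions = ['myst_parser', 'sphinx_rtd_theme']
```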
@@ -1,6 +1,6 @@
 # Baseline
 ## Overview
-We provide an end-to-end SA-ASR baseline conducted on [FunASR](https://github.com/alibaba-damo-academy/FunASR) as a recipe. The model architecture is shown in Figure 3. The SpeakerEncoder is initialized with a pre-trained [speaker verification model](https://modelscope.cn/models/damo/speech_xvector_sv-zh-cn-cnceleb-16k-spk3465-pytorch/summary) from [ModelScope](https://modelscope.cn/home). This speaker verification model is also used to extract the speaker embeddings in the speaker profile.
+We provide an end-to-end SA-ASR baseline conducted on [FunASR](https://github.com/alibaba-damo-academy/FunASR) as a recipe. The model architecture is shown in Figure 2. The SpeakerEncoder is initialized with a pre-trained [speaker verification model](https://modelscope.cn/models/damo/speech_xvector_sv-zh-cn-cnceleb-16k-spk3465-pytorch/summary) from [ModelScope](https://modelscope.cn/home). This speaker verification model is also used to extract the speaker embeddings in the speaker profile.

 ![](images/sa_asr_arch.png)
@@ -17,7 +17,7 @@ Building on the success of the M2MeT challenge, we are pleased to announce the M

 ## Guidelines

-Potential participants from both academia and industry should send an email to **m2met.alimeeting@gmail.com** to register for the challenge on or before May 5 with the following requirements:
+Potential participants from both academia and industry should send an email to **m2met.alimeeting@gmail.com** to register for the challenge on or before May 5, 2023 with the following requirements:

 - Email subject: [ASRU2023 M2MeT2.0 Challenge Registration] – Team Name - Participating
@@ -7,7 +7,7 @@ ASRU 2023 MULTI-CHANNEL MULTI-PARTY MEETING TRANSCRIPTION CHALLENGE 2.0 (M2MeT2.
 ==================================================================================
 Building on the success of the M2MeT challenge, we are pleased to announce the M2MeT2.0 challenge as an ASRU2023 Signal Processing Grand Challenge.
 To further advance current multi-talker ASR systems toward practicality, the M2MeT2.0 challenge proposes the speaker-attributed ASR task with two sub-tracks under fixed and open training conditions.
-We provide a detailed introduction of the dataset, rules, evaluation methods, and baseline systems to further promote reproducible research in this field.
+We provide a detailed introduction of the dataset, rules, baseline systems, and evaluation methods to further promote reproducible research in this field.

 .. toctree::
    :maxdepth: 1
docs_m2met2_cn/Makefile (new file, 20 lines)
@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS    ?=
SPHINXBUILD   ?= sphinx-build
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
docs_m2met2_cn/conf.py (new file, 39 lines)
@@ -0,0 +1,39 @@
# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information

project = 'm2met2'
copyright = '2023, Speech Lab, Alibaba Group; Audio, Speech and Language Processing Group, Northwestern Polytechnical University'
author = 'Speech Lab, Alibaba Group; Audio, Speech and Language Processing Group, Northwestern Polytechnical University'

# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = [
    'myst_parser',
    'sphinx_rtd_theme',
]

myst_enable_extensions = [
    "colon_fence",
    "deflist",
    "dollarmath",
]

myst_heading_anchors = 2
myst_highlight_code_blocks = True
myst_update_mathjax = False
templates_path = ['_templates']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']

language = 'zh_CN'

# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output

html_theme = 'sphinx_rtd_theme'
html_static_path = ['_static']
docs_m2met2_cn/images/baseline_result.png (new binary file, 144 KiB, not shown)
docs_m2met2_cn/images/dataset_detail.png (new binary file, 502 KiB, not shown)
docs_m2met2_cn/images/meeting_room.png (new binary file, 610 KiB, not shown)
docs_m2met2_cn/images/sa_asr_arch.png (new binary file, 742 KiB, not shown)
docs_m2met2_cn/index.rst (new file, 28 lines)
@@ -0,0 +1,28 @@
.. m2met2 documentation master file, created by
   sphinx-quickstart on Wed Apr 12 17:49:45 2023.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

ASRU 2023 Multi-Channel Multi-Party Meeting Transcription Challenge 2.0
==================================================================================
Building on the success of the previous M2MeT challenge, we will hold the M2MeT2.0 challenge at ASRU2023.
To push current multi-talker ASR systems toward practical application, the M2MeT2.0 challenge is evaluated on a speaker-attributed task and sets up two sub-tracks with fixed and open training data.
We provide a detailed introduction of the dataset, rules, baseline system, and evaluation methods to further promote research in multi-talker ASR.

.. toctree::
   :maxdepth: 1
   :caption: Contents:

   ./简介
   ./数据集
   ./赛道设置与评估
   ./基线
   ./规则
   ./组委会

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
docs_m2met2_cn/make.bat (new file, 35 lines)
@@ -0,0 +1,35 @@
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=.
set BUILDDIR=_build

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.https://www.sphinx-doc.org/
	exit /b 1
)

if "%1" == "" goto help

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%

:end
popd
docs_m2met2_cn/基线.md (new file, 12 lines)
@@ -0,0 +1,12 @@
# Baseline
## Overview
We provide an end-to-end SA-ASR system implemented on [FunASR](https://github.com/alibaba-damo-academy/FunASR) as the baseline. The model architecture is shown in Figure 3. The SpeakerEncoder is initialized with a pre-trained [speaker verification model](https://modelscope.cn/models/damo/speech_xvector_sv-zh-cn-cnceleb-16k-spk3465-pytorch/summary) from [ModelScope](https://modelscope.cn/home). This speaker verification model is also used to extract the speaker embeddings in the speaker profile.

![](images/sa_asr_arch.png)

## Quick Start
#TODO: fill with the README.md of the baseline

## Baseline Results
The results of the baseline system are shown in Table 3. During training, the speaker profile uses oracle speaker embeddings. Since oracle speaker labels are unavailable during evaluation, speaker embeddings produced by an additional spectral clustering step are used instead. We also report results with oracle speaker profiles on the Eval and Test sets to show the impact of speaker-profile accuracy.

![](images/baseline_result.png)
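The baseline description above says that evaluation-time speaker profiles come from an additional spectral clustering step. As a rough sketch of that idea (not the baseline's actual code; `embeddings` are assumed to be segment-level x-vectors from the speaker verification model, and `build_speaker_profiles` is a hypothetical helper):

```python
# Hypothetical sketch: build speaker profiles by spectral clustering of
# segment-level speaker embeddings. Not the baseline's real implementation.
import numpy as np
from sklearn.cluster import SpectralClustering

def build_speaker_profiles(embeddings: np.ndarray, num_speakers: int) -> np.ndarray:
    """embeddings: (num_segments, dim) x-vectors from a speaker verification model."""
    # Cosine-similarity affinity matrix, shifted into [0, 1] to be non-negative.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    affinity = (normed @ normed.T + 1.0) / 2.0
    labels = SpectralClustering(
        n_clusters=num_speakers, affinity="precomputed", random_state=0
    ).fit_predict(affinity)
    # One profile embedding per cluster: the mean of its member x-vectors.
    return np.stack([embeddings[labels == k].mean(axis=0) for k in range(num_speakers)])
```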
docs_m2met2_cn/数据集.md (new file, 24 lines)
@@ -0,0 +1,24 @@
# Dataset
## Dataset Overview
Under the fixed training data condition, the training data is restricted to three publicly available corpora: AliMeeting, AISHELL-4, and CN-Celeb. To evaluate the performance of the submitted systems, we will release a new test set (Test-2023) for scoring and ranking. The AliMeeting dataset and the Test-2023 set are described in detail below.

## The AliMeeting Dataset
AliMeeting contains 118.75 hours of speech in total, divided into a 104.75-hour training set (Train), a 4-hour evaluation set (Eval), and a 10-hour test set (Test). The Train and Eval sets contain 212 and 8 meeting sessions respectively, each a 15- to 30-minute discussion among multiple speakers. The Train and Eval sets involve 456 and 25 participants respectively, with a balanced gender ratio.

The data was collected in 13 different meeting rooms, classified by size as small, medium, and large, with areas ranging from 8 to 55 square meters. Rooms differ in layout and acoustic properties, and the detailed parameters of each room will also be sent to participants. Wall materials include cement, glass, and so on; furnishings include sofas, TVs, blackboards, fans, air conditioners, plants, etc. During recording, the microphone array was placed on the table while the speakers sat around it in natural conversation, roughly 0.3 to 5.0 meters from the array. All speakers are native Mandarin speakers without strong accents. Various indoor noises may occur during recording, including keyboard clicks, door opening/closing, fan noise, bubble noise, etc. Speakers stayed in the same position for the whole session and did not walk around. There is no speaker overlap between the Train and Eval sets. Figure 1 shows the layout of one meeting room and the microphone topology.

![](images/meeting_room.png)

The number of speakers per meeting ranges from 2 to 4. To cover a variety of meeting content, we selected diverse topics, including regular meetings on medical care, education, business, organizational management, industrial production, and other subjects. The average speech overlap ratios of the Train and Eval sets are 42.27% and 34.76% respectively. Details of the AliMeeting Train and Eval sets are given in Table 1, and Table 2 shows the speech overlap ratio and the number of sessions broken down by the number of speakers per meeting.

![](images/dataset_detail.png)
The Test-2023 set consists of 20 meeting sessions recorded in the same acoustic environment as the AliMeeting dataset. Each session in Test-2023 involves 2 to 4 participants, with a configuration similar to the AliMeeting Test set.

We also recorded each speaker's near-field audio with a headset microphone and ensured that only that speaker's own speech was transcribed. Note that the far-field audio recorded by the microphone array and the near-field audio recorded by the headset microphones are synchronized in time. All transcriptions of a meeting are stored in TextGrid format and include the session duration, speaker information (number of speakers, speaker IDs, gender, etc.), the total number of segments per speaker, and the timestamp and transcription of each segment.
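Because the transcriptions are distributed in TextGrid format, a small reader is handy during data preparation. Below is a minimal sketch for Praat's long text format, assuming well-formed UTF-8 files; in practice a dedicated TextGrid library may be preferable, and the file name in the usage comment is hypothetical:

```python
# Minimal sketch: pull (xmin, xmax, text) intervals out of a Praat TextGrid.
# Assumes the plain long-text TextGrid format and UTF-8 encoding.
import re

INTERVAL_RE = re.compile(
    r'xmin = ([\d.]+)\s*\n\s*xmax = ([\d.]+)\s*\n\s*text = "(.*?)"', re.S
)

def read_intervals(path: str):
    with open(path, encoding="utf-8") as f:
        content = f.read()
    for xmin, xmax, text in INTERVAL_RE.findall(content):
        if text.strip():  # skip silent/empty intervals
            yield float(xmin), float(xmax), text.strip()

# Example (hypothetical file name): list each segment with its timestamps.
# for start, end, text in read_intervals("R0001_M0001.TextGrid"):
#     print(f"{start:.2f}-{end:.2f}: {text}")
```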
## Obtaining the Data
All three training corpora mentioned above can be downloaded from [OpenSLR](https://openslr.org/resources.php) via the links below. For the AliMeeting dataset, the baseline provided with the challenge includes a complete data processing pipeline.
- [AliMeeting](https://openslr.org/119/)
- [AISHELL-4](https://openslr.org/111/)
- [CN-Celeb](https://openslr.org/82/)
docs_m2met2_cn/简介.md (new file, 27 lines)
@@ -0,0 +1,27 @@
# Introduction
## About the Challenge
Recent advances in speech processing technologies such as automatic speech recognition (ASR) and speaker diarization have enabled a wide range of intelligent speech applications. The meeting scenario is one of the most valuable, and also most challenging, scenarios for speech technology: it features rich speaking styles and complex acoustic conditions, with challenges such as overlapping speech, an unknown number of speakers, far-field signals in large conference rooms, noise, and reverberation.

To advance speech recognition in meeting scenarios, several related challenges have been held, such as the Rich Transcription evaluation and the CHiME (Computational Hearing in Multisource Environments) challenge. However, differences between languages have limited progress on non-English meeting transcription. The MISP (Multimodal Information Based Speech Processing) and M2MeT (Multi-Channel Multi-Party Meeting Transcription) challenges have contributed to advancing Mandarin meeting transcription. The MISP challenge focuses on distant multi-microphone signal processing in everyday home environments using audio-visual multimodal methods, while the M2MeT challenge focuses on the overlapping-speech problem of meeting transcription in offline conference rooms.

Building on the success of the previous M2MeT challenge, we will hold the M2MeT2.0 challenge at ASRU2023. In the previous M2MeT challenge the evaluation metric was speaker-independent: only the recognized text was obtained, without determining the corresponding speaker.
To push current multi-talker ASR systems toward practical application, the M2MeT2.0 challenge is evaluated on a speaker-attributed task and sets up two sub-tracks with fixed and open training data.
We provide a detailed introduction of the dataset, rules, baseline system, and evaluation methods to further promote research in multi-talker ASR. The organizers will select the top three papers for inclusion in the ASRU2023 proceedings.

## Timeline (AOE time)

- 2023.5.5: participant registration deadline
- 2023.6.9: test data release
- 2023.6.13: final result submission deadline
- 2023.6.19: release of evaluation results and rankings
- 2023.7.3: paper submission deadline
- 2023.7.10: camera-ready paper submission deadline

## Registration

Potential participants from both academia and industry should send an email to **m2met.alimeeting@gmail.com** before May 5, 2023 to register for the challenge, following these requirements:
- Subject: [ASRU2023 M2MeT2.0 Challenge Registration] – team name (in English or pinyin) - participating sub-track(s);
- Provide the team name, affiliation, participating track(s), team leader, and contact information (team size is not limited);

Qualified teams will be notified by email within 3 business days. Teams must comply with the challenge rules, which will be published on the challenge website.
docs_m2met2_cn/组委会.md (new file, 1 line)
@@ -0,0 +1 @@
# Organizing Committee
docs_m2met2_cn/规则.md (new file, 16 lines)
@@ -0,0 +1,16 @@
# Challenge Rules
All participants must abide by the following rules:

- Data augmentation on the original training data is allowed, including but not limited to adding noise or reverberation, speed perturbation, and pitch shifting.

- Using the test data in any form is strictly forbidden, including but not limited to fine-tuning or training models on the test data.

- Multi-system fusion is allowed, but fusing sub-systems with identical architectures that differ only in parameters is discouraged.

- If two systems obtain the same cpCER on the test set, the system with lower computational complexity will be ranked higher.

- If frame-level classification labels are obtained with a forced-alignment model, that model must be trained only on the data allowed in the corresponding sub-track.

- Shallow-fusion language models are allowed in end-to-end approaches (e.g., LAS, RNN-T, and Transformer), but the training data of the shallow-fusion language model may only come from the transcriptions of the allowed training sets; see the sketch after this list.

- The organizers reserve the right of final interpretation. In case of special circumstances, the organizers will coordinate the interpretation.
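As referenced in the shallow-fusion rule above, the usual formulation adds a weighted external language-model score to the ASR score when ranking hypotheses. A minimal, framework-agnostic sketch (the weight 0.3 is an arbitrary example, not a challenge setting):

```python
# Minimal sketch of shallow fusion: rescore beam-search hypotheses by adding
# a weighted external language-model log-probability to the ASR score.
def shallow_fusion_score(asr_logprob: float, lm_logprob: float, lm_weight: float = 0.3) -> float:
    return asr_logprob + lm_weight * lm_logprob

def rescore(hypotheses, lm):
    """hypotheses: list of (text, asr_logprob); lm(text) -> log-probability."""
    return max(hypotheses, key=lambda h: shallow_fusion_score(h[1], lm(h[0])))
```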
docs_m2met2_cn/赛道设置与评估.md (new file, 15 lines)
@@ -0,0 +1,15 @@
# Track Setting and Evaluation
## Speaker-Attributed ASR (Main Track)
The speaker-attributed ASR task requires recognizing each speaker's speech from overlapping speech and assigning a speaker label to the recognized content. In this challenge, the AliMeeting, AISHELL-4, and CN-Celeb datasets serve as the fixed data sources. The AliMeeting dataset used in the M2MeT challenge contains Train, Eval, and Test sets, and may be used for training and evaluation in M2MeT2.0. In addition, a new Test-2023 set containing about 10 hours of meeting data will be released according to the schedule and used for scoring and ranking. Note that the organizers will not provide the near-field headset audio, transcriptions, or oracle timestamps. Instead of oracle per-speaker timestamps, the organizers provide segments containing multiple speakers on the Test-2023 set; such segments can be obtained with a simple VAD model.
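As a toy illustration of how such multi-speaker segments could be produced, the sketch below marks high-energy frames as speech and merges them into segments; an actual VAD model would be far more robust, and all thresholds here are arbitrary assumptions:

```python
# Toy energy-based VAD sketch: mark frames above an energy threshold as speech
# and merge adjacent speech frames into segments. Only illustrative; a trained
# VAD model would be used in practice.
import numpy as np

def energy_vad(wav: np.ndarray, sr: int, frame_ms: int = 25, hop_ms: int = 10,
               threshold_db: float = -35.0):
    frame, hop = sr * frame_ms // 1000, sr * hop_ms // 1000
    segments, start = [], None
    for i in range(0, len(wav) - frame, hop):
        energy_db = 10 * np.log10(np.mean(wav[i:i + frame] ** 2) + 1e-10)
        if energy_db > threshold_db and start is None:
            start = i / sr
        elif energy_db <= threshold_db and start is not None:
            segments.append((start, i / sr))
            start = None
    if start is not None:
        segments.append((start, len(wav) / sr))
    return segments  # list of (start_sec, end_sec); segments may contain overlapped speech
```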
## Evaluation Method
The concatenated minimum-permutation character error rate (cpCER) metric is used to evaluate the accuracy of the speaker-attributed ASR system. Computing cpCER takes three steps. First, the reference and hypothesis transcriptions of each speaker in a meeting session are concatenated in chronological order. Second, the character error rate (CER) between reference and hypothesis is computed, and this is repeated for all possible speaker permutations. Finally, the permutation with the lowest CER is selected as the cpCER of the session. CER is the total number of inserted (Ins), substituted (Sub), and deleted (Del) characters needed to transform the ASR output into the reference transcription, divided by the total number of characters in the reference. Specifically, CER is computed as:

$$ \text{CER} = \frac {\mathcal N_{\text{Ins}} + \mathcal N_{\text{Sub}} + \mathcal N_{\text{Del}} }{\mathcal N_{\text{Total}}} \times 100\%, $$

where $\mathcal N_{\text{Ins}}$, $\mathcal N_{\text{Sub}}$, and $\mathcal N_{\text{Del}}$ are the numbers of the three error types, and $\mathcal N_{\text{Total}}$ is the total number of characters.
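The three-step procedure above maps directly to code. A brute-force sketch follows (feasible because each session has only 2 to 4 speakers; it assumes the hypothesis proposes the same number of speakers as the reference):

```python
# Sketch of cpCER: concatenate per-speaker transcripts, try every speaker
# permutation, and keep the lowest character error rate.
from itertools import permutations

def edit_distance(ref: str, hyp: str) -> int:
    # Standard Levenshtein DP over characters (Ins + Sub + Del).
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def cpcer(refs: list[str], hyps: list[str]) -> float:
    """refs/hyps: per-speaker transcripts, each already concatenated in time order."""
    total = sum(len(r) for r in refs)
    best = min(
        sum(edit_distance(r, h) for r, h in zip(refs, perm))
        for perm in permutations(hyps)
    )
    return best / total * 100.0

# Example: two speakers whose order is swapped in the hypothesis -> cpCER is 0.
# print(cpcer(["你好世界", "今天开会"], ["今天开会", "你好世界"]))  # -> 0.0
```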
## Sub-Track Setting
### Sub-Track I (fixed training data):
Participants may build their systems using only the fixed data; additional data is strictly forbidden. Only AliMeeting, AISHELL-4, and CN-Celeb may be used during system construction. Participants may use open-source pre-trained models available on [Hugging Face](https://huggingface.co/models) and [ModelScope](https://www.modelscope.cn/models), and must list the names of and links to the pre-trained models used in the final system description.
### Sub-Track II (open training data):
In addition to the fixed data, participants may use any publicly available, privately recorded, or simulated datasets. However, participants must clearly list the data used. If simulated data is used, please describe the simulation scheme in detail.