Mirror of https://github.com/modelscope/FunASR (synced 2025-09-15 14:48:36 +08:00)
remove Indices and tables
This commit is contained in:
parent 9817785c66
commit 4306f2c49c
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -135,8 +135,8 @@
</section>
<section id="baseline-results">
<h2>Baseline results<a class="headerlink" href="#baseline-results" title="Permalink to this heading">¶</a></h2>
<p>The results of the baseline system are shown in Table 3. The speaker profile adopts the oracle speaker embedding during training. However, due to the lack of oracle speaker labels during evaluation, the speaker profile provided by an additional spectral clustering step is used. Meanwhile, the results of using the oracle speaker profile on the Eval and Test sets are also provided to show the impact of speaker profile accuracy.
<img alt="baseline result" src="_images/baseline_result.png" /></p>
<p>The results of the baseline system are shown in Table 3. The speaker profile adopts the oracle speaker embedding during training. However, due to the lack of oracle speaker labels during evaluation, the speaker profile provided by an additional spectral clustering step is used. Meanwhile, the results of using the oracle speaker profile on the Eval and Test sets are also provided to show the impact of speaker profile accuracy.</p>
<p><img alt="baseline result" src="_images/baseline_result.png" /></p>
</section>
</section>
@@ -128,10 +128,9 @@
<section id="call-for-participation">
<h2>Call for participation<a class="headerlink" href="#call-for-participation" title="Permalink to this heading">¶</a></h2>
<p>Automatic speech recognition (ASR) and speaker diarization have made significant strides in recent years, resulting in a surge of speech technology applications across various domains. However, meetings present unique challenges to speech technologies due to their complex acoustic conditions and diverse speaking styles, including overlapping speech, variable numbers of speakers, far-field signals in large conference rooms, and environmental noise and reverberation.</p>
<p>Over the years, several challenges have been organized to advance the development of meeting transcription, including the Rich Transcription evaluation and the Computational Hearing in Multisource Environments (CHiME) challenges. The latest iteration of the CHiME challenge has a particular focus on distant automatic speech recognition (ASR) and on developing systems that can generalize across various array topologies and application scenarios. However, while progress has been made in English meeting transcription, language differences remain a significant barrier to achieving comparable results in non-English languages, such as Mandarin.</p>
<p>The Multimodal Information Based Speech Processing (MISP) and Multi-Channel Multi-Party Meeting Transcription (M2MeT) challenges have been instrumental in advancing Mandarin meeting transcription. The MISP challenge seeks to address the problem of audio-visual distant multi-microphone signal processing in everyday home environments, while the M2MeT challenge focuses on tackling the speech overlap issue in offline meeting rooms.</p>
<p>The ICASSP2022 M2MeT challenge focuses on meeting scenarios, and it comprises two main tasks: speaker diarization and multi-speaker automatic speech recognition (ASR). The former involves identifying who spoke when in the meeting, while the latter aims to transcribe speech from multiple speakers simultaneously, which poses significant technical difficulties due to overlapping speech and acoustic interference.</p>
<p>Building on the success of the previous M2MeT challenge, we are excited to propose the M2MeT2.0 challenge as an ASRU2023 challenge special session. In the original M2MeT challenge, the evaluation metric was speaker-independent, which meant that the transcription could be determined, but not the corresponding speaker. To address this limitation and further advance current multi-talker ASR systems towards practicality, the M2MeT2.0 challenge proposes the speaker-attributed ASR task with two sub-tracks: fixed and open training conditions. By attributing speech to specific speakers, this task aims to improve the accuracy and applicability of multi-talker ASR systems in real-world settings. The challenge provides detailed datasets, rules, evaluation methods, and baseline systems to facilitate reproducible research in this field. The speaker-attributed automatic speech recognition (ASR) task aims to tackle the practical and challenging problem of identifying “who spoke what and when”. To facilitate reproducible research in this field, we offer a comprehensive overview of the dataset, rules, evaluation metrics, and baseline systems. Furthermore, we will release a carefully curated test set, comprising approximately 10 hours of audio, according to the timeline. The new test set is designed to enable researchers to validate and compare their models’ performance and advance the state of the art in this area.</p>
<p>Over the years, several challenges have been organized to advance the development of meeting transcription, including the Rich Transcription evaluation and the Computational Hearing in Multisource Environments (CHiME) challenges. The latest iteration of the CHiME challenge has a particular focus on distant automatic speech recognition and on developing systems that can generalize across various array topologies and application scenarios. However, while progress has been made in English meeting transcription, language differences remain a significant barrier to achieving comparable results in non-English languages, such as Mandarin. The Multimodal Information Based Speech Processing (MISP) and Multi-Channel Multi-Party Meeting Transcription (M2MeT) challenges have been instrumental in advancing Mandarin meeting transcription. The MISP challenge seeks to address the problem of audio-visual distant multi-microphone signal processing in everyday home environments, while the M2MeT challenge focuses on tackling the speech overlap issue in offline meeting rooms.</p>
<p>The ICASSP2022 M2MeT challenge focuses on meeting scenarios, and it comprises two main tasks: speaker diarization and multi-speaker automatic speech recognition. The former involves identifying who spoke when in the meeting, while the latter aims to transcribe speech from multiple speakers simultaneously, which poses significant technical difficulties due to overlapping speech and acoustic interference.</p>
<p>Building on the success of the previous M2MeT challenge, we are excited to propose the M2MeT2.0 challenge as an ASRU2023 challenge special session. In the original M2MeT challenge, the evaluation metric was speaker-independent, which meant that the transcription could be determined, but not the corresponding speaker. To address this limitation and further advance current multi-talker ASR systems towards practicality, the M2MeT2.0 challenge proposes the speaker-attributed ASR task with two sub-tracks: fixed and open training conditions. The speaker-attributed automatic speech recognition (ASR) task aims to tackle the practical and challenging problem of identifying “who spoke what and when”. To facilitate reproducible research in this field, we offer a comprehensive overview of the dataset, rules, evaluation metrics, and baseline systems. Furthermore, we will release a carefully curated test set, comprising approximately 10 hours of audio, according to the timeline. The new test set is designed to enable researchers to validate and compare their models’ performance and advance the state of the art in this area.</p>
</section>
<section id="timeline-aoe-time">
<h2>Timeline (AOE Time)<a class="headerlink" href="#timeline-aoe-time" title="Permalink to this heading">¶</a></h2>
@@ -147,7 +146,7 @@
</section>
<section id="guidelines">
<h2>Guidelines<a class="headerlink" href="#guidelines" title="Permalink to this heading">¶</a></h2>
<p>Possible improved version: Interested participants, whether from academia or industry, must register for the challenge by completing a Google form, which will be available here. The deadline for registration is May 5, 2023.</p>
<p>Interested participants, whether from academia or industry, must register for the challenge by completing a Google form, which will be available here. The deadline for registration is May 5, 2023.</p>
<p>Within three working days, the challenge organizer will send email invitations to eligible teams to participate in the challenge. All qualified teams are required to adhere to the challenge rules, which will be published on the challenge page. Prior to the ranking release time, each participant must submit a system description document detailing their approach and methods. The organizer will select the top three submissions to be included in the ASRU2023 Proceedings.</p>
</section>
</section>
@@ -127,36 +127,27 @@
<p><em><strong>Lei Xie, Professor, Northwestern Polytechnical University, China</strong></em></p>
<p>Email: <a class="reference external" href="mailto:lxie%40nwpu.edu.cn">lxie<span>@</span>nwpu<span>.</span>edu<span>.</span>cn</a></p>
<a class="reference internal image-reference" href="_images/lxie.jpeg"><img alt="lxie" src="_images/lxie.jpeg" style="width: 20%;" /></a>
<p>Lei Xie received the Ph.D. degree in computer science from Northwestern Polytechnical University, Xi’an, China, in 2004. From 2001 to 2002, he was with the Department of Electronics and Information Processing, Vrije Universiteit Brussel (VUB), Brussels, Belgium, as a Visiting Scientist. From 2004 to 2006, he was a Senior Research Associate with the Center for Media Technology, School of Creative Media, City University of Hong Kong, Hong Kong, China. From 2006 to 2007, he was a Postdoctoral Fellow with the Human-Computer Communications Laboratory (HCCL), The Chinese University of Hong Kong, Hong Kong, China. He is currently a Professor with the School of Computer Science, Northwestern Polytechnical University, Xi’an, China, where he leads the Audio, Speech and Language Processing Group (ASLP@NPU). He has published over 200 papers in refereed journals and conferences, such as IEEE/ACM Transactions on Audio, Speech and Language Processing, IEEE Transactions on Multimedia, Interspeech, ICASSP, ASRU, ACL and ACM Multimedia. He has received several best paper awards at flagship conferences. His current research interests include general topics in speech and language processing, multimedia, and human-computer interaction. Dr. Xie is currently an Associate Editor of IEEE/ACM Transactions on Audio, Speech and Language Processing. He has actively served as a chair for many conferences and technical committees, and serves as a member of the IEEE Speech and Language Processing Technical Committee.</p>
<p><em><strong>Kong Aik Lee, Senior Scientist at Institute for Infocomm Research, A*Star, Singapore</strong></em></p>
<p>Email: <a class="reference external" href="mailto:kongaik.lee%40ieee.org">kongaik<span>.</span>lee<span>@</span>ieee<span>.</span>org</a></p>
<a class="reference internal image-reference" href="_images/kong.png"><img alt="kong" src="_images/kong.png" style="width: 20%;" /></a>
<p>Kong Aik Lee started his career as a researcher, then a team leader and a strategic planning manager, at the Institute for Infocomm Research, A*STAR, Singapore, working on speaker and language recognition research. From 2018 to 2020, he spent two and a half years at NEC Corporation, Japan, focusing on voice biometrics and multi-modal biometrics products. He is proud to have worked with a great team on the voice biometrics featured on the NEC Bio-Idiom platform. He returned to Singapore in July 2020 and now leads speech and audio analytics research at the Institute for Infocomm Research as a Senior Scientist and PI. He also serves as an Editor for Elsevier Computer Speech and Language (since 2016), was an Associate Editor for IEEE/ACM Transactions on Audio, Speech and Language Processing (2017 - 2021), and was an elected member of the IEEE Speech and Language Technical Committee (2019 - 2021).</p>
<p><em><strong>Zhijie Yan, Principal Engineer at Alibaba, China</strong></em>
Email: <a class="reference external" href="mailto:zhijie.yzj%40alibaba-inc.com">zhijie<span>.</span>yzj<span>@</span>alibaba-inc<span>.</span>com</a></p>
<a class="reference internal image-reference" href="_images/zhijie.jpg"><img alt="zhijie" src="_images/zhijie.jpg" style="width: 20%;" /></a>
<p>Zhijie Yan holds a Ph.D. from the University of Science and Technology of China and is a senior member of the Institute of Electrical and Electronics Engineers (IEEE). He is also an expert reviewer for top academic conferences and journals in the speech field. His research fields include speech recognition, speech synthesis, voiceprints, and speech interaction. His research results are applied in speech services provided by Alibaba Group and Ant Financial. He was awarded the title of “One of the Top 100 Grassroots Scientists” by the China Association for Science and Technology.</p>
<p><em><strong>Shiliang Zhang, Senior Engineer at Alibaba, China</strong></em>
Email: <a class="reference external" href="mailto:sly.zsl%40alibaba-inc.com">sly<span>.</span>zsl<span>@</span>alibaba-inc<span>.</span>com</a></p>
<a class="reference internal image-reference" href="_images/zsl.JPG"><img alt="zsl" src="_images/zsl.JPG" style="width: 20%;" /></a>
<p>Shiliang Zhang received his Ph.D. from the University of Science and Technology of China in 2017. His research areas mainly include speech recognition, natural language understanding, and machine learning. He has published over 40 papers in mainstream academic journals and conferences in the fields of speech and machine learning and has applied for dozens of patents. After obtaining his doctorate, he joined the Alibaba Intelligent Speech team, and he currently leads the speech recognition and fundamental technology direction at DAMO Academy’s speech laboratory.</p>
<p><em><strong>Yanmin Qian, Professor, Shanghai Jiao Tong University, China</strong></em></p>
<p>Email: <a class="reference external" href="mailto:yanminqian%40sjtu.edu.cn">yanminqian<span>@</span>sjtu<span>.</span>edu<span>.</span>cn</a></p>
<a class="reference internal image-reference" href="_images/qian.jpeg"><img alt="qian" src="_images/qian.jpeg" style="width: 20%;" /></a>
<p>Yanmin Qian received the B.S. degree from the Department of Electronic and Information Engineering, Huazhong University of Science and Technology, Wuhan, China, in 2007, and the Ph.D. degree from the Department of Electronic Engineering, Tsinghua University, Beijing, China, in 2012. Since 2013, he has been with the Department of Computer Science and Engineering, Shanghai Jiao Tong University (SJTU), Shanghai, China, where he is currently an Associate Professor. From 2015 to 2016, he also worked as an Associate Researcher in the Speech Group, Cambridge University Engineering Department, Cambridge, U.K. He is a senior member of IEEE, a member of ISCA, and one of the founding members of the Kaldi speech recognition toolkit. He has published more than 110 papers on speech and language processing with 4000+ citations, including at top conferences such as ICASSP, INTERSPEECH and ASRU. His current research interests include acoustic and language modeling in speech recognition, speaker and language recognition, keyword spotting, and multimedia signal processing.</p>
<p><em><strong>Zhuo Chen, Applied Scientist in Microsoft, USA</strong></em></p>
<p>Email: <a class="reference external" href="mailto:zhuc%40microsoft.com">zhuc<span>@</span>microsoft<span>.</span>com</a></p>
<a class="reference internal image-reference" href="_images/chenzhuo.jpg"><img alt="chenzhuo" src="_images/chenzhuo.jpg" style="width: 20%;" /></a>
<p>Zhuo Chen received the Ph.D. degree from Columbia University, New York, NY, USA, in 2017. He is currently a Principal Applied Data Scientist with Microsoft. He has authored or coauthored more than 80 papers in peer-reviewed journals and conferences with around 6000 citations, and he is a reviewer or technical committee member for more than ten journals and conferences. His research interests include automatic conversation recognition, speech separation, diarization, and speaker information extraction. He has actively participated in academic events and challenges and won several awards. Meanwhile, he contributed to open-sourced datasets, such as WSJ0-2mix, LibriCSS, and AISHELL-4, that have become main benchmark datasets for multi-speaker processing research. In 2020, he was a team leader in the 2020 Jelinek workshop, leading more than 30 researchers and students to push the state of the art in conversation transcription.</p>
<p><em><strong>Jian Wu, Applied Scientist in Microsoft, USA</strong></em></p>
<p>Email: <a class="reference external" href="mailto:wujian%40microsoft.com">wujian<span>@</span>microsoft<span>.</span>com</a></p>
<a class="reference internal image-reference" href="_images/wujian.jpg"><img alt="wujian" src="_images/wujian.jpg" style="width: 20%;" /></a>
<p>Jian Wu received a master’s degree from Northwestern Polytechnical University, Xi’an, China, in 2020, and he is currently an Applied Scientist at Microsoft, USA. His research interests cover multi-channel signal processing, robust and multi-talker speech recognition, speech enhancement, dereverberation, and separation. He has around 30 conference publications with over 1200 citations in total. He has participated in several challenges such as CHiME-5, DNS 2020 and FFSVC 2020, and contributed to open-sourced datasets including LibriCSS and AISHELL-4. He is also a reviewer for several journals and conferences such as ICASSP, SLT, TASLP and SPL.</p>
<p><em><strong>Hui Bu, CEO, AISHELL foundation, China</strong></em></p>
<p>Email: <a class="reference external" href="mailto:buhui%40aishelldata.com">buhui<span>@</span>aishelldata<span>.</span>com</a></p>
<a class="reference internal image-reference" href="_images/buhui.jpeg"><img alt="buhui" src="_images/buhui.jpeg" style="width: 20%;" /></a>
<p>Hui Bu received his master’s degree from the Artificial Intelligence Laboratory of Korea University in 2014. He is the founder and CEO of AISHELL and the AISHELL foundation. He participated in the release of the AISHELL 1 & 2 & 3 & 4, DMASH and HI-MIA open-source database projects and is the co-founder of the China Kaldi offline Technology Forum.</p>
</section>
Binary file not shown.
Before: 742 KiB | After: 991 KiB
@@ -9,4 +9,5 @@ We will release an E2E SA-ASR~\cite{kanda21b_interspeech} baseline conducted on

## Baseline results

The results of the baseline system are shown in Table 3. The speaker profile adopts the oracle speaker embedding during training. However, due to the lack of oracle speaker labels during evaluation, the speaker profile provided by an additional spectral clustering step is used. Meanwhile, the results of using the oracle speaker profile on the Eval and Test sets are also provided to show the impact of speaker profile accuracy.

![baseline_result](images/baseline_result.png)
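As a rough illustration of the oracle-free profile extraction described above, the sketch below clusters segment-level speaker embeddings with spectral clustering and averages each cluster into a profile vector. It is a minimal sketch under assumptions, not the official baseline recipe: the segment embeddings are taken as given from some speaker encoder, the number of speakers is assumed known, and `build_speaker_profiles` is a hypothetical helper name.

```python
# A minimal sketch (not the official M2MeT2.0 baseline): estimate speaker
# profiles without oracle labels by spectral clustering of segment-level
# speaker embeddings, then average each cluster into one profile vector.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity


def build_speaker_profiles(segment_embeddings: np.ndarray, n_speakers: int) -> np.ndarray:
    """segment_embeddings: (num_segments, dim) array from any speaker encoder.
    Returns an (n_speakers, dim) matrix of cluster-centroid profiles."""
    # Cosine-similarity affinity between segments; clip negatives so the
    # affinity matrix is non-negative, as spectral clustering expects.
    affinity = np.clip(cosine_similarity(segment_embeddings), 0.0, 1.0)

    # Cluster segments into the assumed number of speakers.
    labels = SpectralClustering(
        n_clusters=n_speakers,
        affinity="precomputed",
        random_state=0,
    ).fit_predict(affinity)

    # One profile per speaker: the mean embedding of that speaker's cluster,
    # L2-normalized so it is comparable to an oracle speaker embedding.
    profiles = np.stack(
        [segment_embeddings[labels == spk].mean(axis=0) for spk in range(n_speakers)]
    )
    return profiles / np.linalg.norm(profiles, axis=1, keepdims=True)


if __name__ == "__main__":
    # Toy run with random vectors standing in for real d-vectors/x-vectors.
    rng = np.random.default_rng(0)
    fake_embeddings = rng.normal(size=(40, 192))
    print(build_speaker_profiles(fake_embeddings, n_speakers=4).shape)  # (4, 192)
```

In the actual baseline the number of speakers may itself be estimated and the profiles refined further, so treat this only as a picture of the clustering-then-averaging flow.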
@@ -2,13 +2,11 @@

## Call for participation

Automatic speech recognition (ASR) and speaker diarization have made significant strides in recent years, resulting in a surge of speech technology applications across various domains. However, meetings present unique challenges to speech technologies due to their complex acoustic conditions and diverse speaking styles, including overlapping speech, variable numbers of speakers, far-field signals in large conference rooms, and environmental noise and reverberation.

Over the years, several challenges have been organized to advance the development of meeting transcription, including the Rich Transcription evaluation and the Computational Hearing in Multisource Environments (CHiME) challenges. The latest iteration of the CHiME challenge has a particular focus on distant automatic speech recognition (ASR) and on developing systems that can generalize across various array topologies and application scenarios. However, while progress has been made in English meeting transcription, language differences remain a significant barrier to achieving comparable results in non-English languages, such as Mandarin.

Over the years, several challenges have been organized to advance the development of meeting transcription, including the Rich Transcription evaluation and the Computational Hearing in Multisource Environments (CHiME) challenges. The latest iteration of the CHiME challenge has a particular focus on distant automatic speech recognition and on developing systems that can generalize across various array topologies and application scenarios. However, while progress has been made in English meeting transcription, language differences remain a significant barrier to achieving comparable results in non-English languages, such as Mandarin. The Multimodal Information Based Speech Processing (MISP) and Multi-Channel Multi-Party Meeting Transcription (M2MeT) challenges have been instrumental in advancing Mandarin meeting transcription. The MISP challenge seeks to address the problem of audio-visual distant multi-microphone signal processing in everyday home environments, while the M2MeT challenge focuses on tackling the speech overlap issue in offline meeting rooms.

The Multimodal Information Based Speech Processing (MISP) and Multi-Channel Multi-Party Meeting Transcription (M2MeT) challenges have been instrumental in advancing Mandarin meeting transcription. The MISP challenge seeks to address the problem of audio-visual distant multi-microphone signal processing in everyday home environments, while the M2MeT challenge focuses on tackling the speech overlap issue in offline meeting rooms.

The ICASSP2022 M2MeT challenge focuses on meeting scenarios, and it comprises two main tasks: speaker diarization and multi-speaker automatic speech recognition. The former involves identifying who spoke when in the meeting, while the latter aims to transcribe speech from multiple speakers simultaneously, which poses significant technical difficulties due to overlapping speech and acoustic interference.

The ICASSP2022 M2MeT challenge focuses on meeting scenarios, and it comprises two main tasks: speaker diarization and multi-speaker automatic speech recognition (ASR). The former involves identifying who spoke when in the meeting, while the latter aims to transcribe speech from multiple speakers simultaneously, which poses significant technical difficulties due to overlapping speech and acoustic interference.

Building on the success of the previous M2MeT challenge, we are excited to propose the M2MeT2.0 challenge as an ASRU2023 challenge special session. In the original M2MeT challenge, the evaluation metric was speaker-independent, which meant that the transcription could be determined, but not the corresponding speaker. To address this limitation and further advance current multi-talker ASR systems towards practicality, the M2MeT2.0 challenge proposes the speaker-attributed ASR task with two sub-tracks: fixed and open training conditions. By attributing speech to specific speakers, this task aims to improve the accuracy and applicability of multi-talker ASR systems in real-world settings. The challenge provides detailed datasets, rules, evaluation methods, and baseline systems to facilitate reproducible research in this field. The speaker-attributed automatic speech recognition (ASR) task aims to tackle the practical and challenging problem of identifying "who spoke what and when". To facilitate reproducible research in this field, we offer a comprehensive overview of the dataset, rules, evaluation metrics, and baseline systems. Furthermore, we will release a carefully curated test set, comprising approximately 10 hours of audio, according to the timeline. The new test set is designed to enable researchers to validate and compare their models' performance and advance the state of the art in this area.

Building on the success of the previous M2MeT challenge, we are excited to propose the M2MeT2.0 challenge as an ASRU2023 challenge special session. In the original M2MeT challenge, the evaluation metric was speaker-independent, which meant that the transcription could be determined, but not the corresponding speaker. To address this limitation and further advance current multi-talker ASR systems towards practicality, the M2MeT2.0 challenge proposes the speaker-attributed ASR task with two sub-tracks: fixed and open training conditions. The speaker-attributed automatic speech recognition (ASR) task aims to tackle the practical and challenging problem of identifying "who spoke what and when". To facilitate reproducible research in this field, we offer a comprehensive overview of the dataset, rules, evaluation metrics, and baseline systems. Furthermore, we will release a carefully curated test set, comprising approximately 10 hours of audio, according to the timeline. The new test set is designed to enable researchers to validate and compare their models' performance and advance the state of the art in this area.

## Timeline (AOE Time)

@@ -22,6 +20,6 @@ Building on the success of the previous M2MeT challenge, we are excited to propo

## Guidelines

Possible improved version: Interested participants, whether from academia or industry, must register for the challenge by completing a Google form, which will be available here. The deadline for registration is May 5, 2023.

Interested participants, whether from academia or industry, must register for the challenge by completing a Google form, which will be available here. The deadline for registration is May 5, 2023.

Within three working days, the challenge organizer will send email invitations to eligible teams to participate in the challenge. All qualified teams are required to adhere to the challenge rules, which will be published on the challenge page. Prior to the ranking release time, each participant must submit a system description document detailing their approach and methods. The organizer will select the top three submissions to be included in the ASRU2023 Proceedings.
@@ -5,8 +5,6 @@ Email: [lxie@nwpu.edu.cn](mailto:lxie@nwpu.edu.cn)

<img src="images/lxie.jpeg" alt="lxie" width="20%">

Lei Xie received the Ph.D. degree in computer science from Northwestern Polytechnical University, Xi'an, China, in 2004. From 2001 to 2002, he was with the Department of Electronics and Information Processing, Vrije Universiteit Brussel (VUB), Brussels, Belgium, as a Visiting Scientist. From 2004 to 2006, he was a Senior Research Associate with the Center for Media Technology, School of Creative Media, City University of Hong Kong, Hong Kong, China. From 2006 to 2007, he was a Postdoctoral Fellow with the Human-Computer Communications Laboratory (HCCL), The Chinese University of Hong Kong, Hong Kong, China. He is currently a Professor with the School of Computer Science, Northwestern Polytechnical University, Xi'an, China, where he leads the Audio, Speech and Language Processing Group (ASLP@NPU). He has published over 200 papers in refereed journals and conferences, such as IEEE/ACM Transactions on Audio, Speech and Language Processing, IEEE Transactions on Multimedia, Interspeech, ICASSP, ASRU, ACL and ACM Multimedia. He has received several best paper awards at flagship conferences. His current research interests include general topics in speech and language processing, multimedia, and human-computer interaction. Dr. Xie is currently an Associate Editor of IEEE/ACM Transactions on Audio, Speech and Language Processing. He has actively served as a chair for many conferences and technical committees, and serves as a member of the IEEE Speech and Language Processing Technical Committee.

***Kong Aik Lee, Senior Scientist at Institute for Infocomm Research, A\*Star, Singapore***

@@ -14,55 +12,37 @@ Email: [kongaik.lee@ieee.org](mailto:kongaik.lee@ieee.org)

<img src="images/kong.png" alt="kong" width="20%">

Kong Aik Lee started his career as a researcher, then a team leader and a strategic planning manager, at the Institute for Infocomm Research, A*STAR, Singapore, working on speaker and language recognition research. From 2018 to 2020, he spent two and a half years at NEC Corporation, Japan, focusing on voice biometrics and multi-modal biometrics products. He is proud to have worked with a great team on the voice biometrics featured on the NEC Bio-Idiom platform. He returned to Singapore in July 2020 and now leads speech and audio analytics research at the Institute for Infocomm Research as a Senior Scientist and PI. He also serves as an Editor for Elsevier Computer Speech and Language (since 2016), was an Associate Editor for IEEE/ACM Transactions on Audio, Speech and Language Processing (2017 - 2021), and was an elected member of the IEEE Speech and Language Technical Committee (2019 - 2021).

***Zhijie Yan, Principal Engineer at Alibaba, China***
Email: [zhijie.yzj@alibaba-inc.com](mailto:zhijie.yzj@alibaba-inc.com)

<img src="images/zhijie.jpg" alt="zhijie" width="20%">

Zhijie Yan holds a Ph.D. from the University of Science and Technology of China and is a senior member of the Institute of Electrical and Electronics Engineers (IEEE). He is also an expert reviewer for top academic conferences and journals in the speech field. His research fields include speech recognition, speech synthesis, voiceprints, and speech interaction. His research results are applied in speech services provided by Alibaba Group and Ant Financial. He was awarded the title of "One of the Top 100 Grassroots Scientists" by the China Association for Science and Technology.

***Shiliang Zhang, Senior Engineer at Alibaba, China***
Email: [sly.zsl@alibaba-inc.com](mailto:sly.zsl@alibaba-inc.com)

<img src="images/zsl.JPG" alt="zsl" width="20%">

Shiliang Zhang received his Ph.D. from the University of Science and Technology of China in 2017. His research areas mainly include speech recognition, natural language understanding, and machine learning. He has published over 40 papers in mainstream academic journals and conferences in the fields of speech and machine learning and has applied for dozens of patents. After obtaining his doctorate, he joined the Alibaba Intelligent Speech team, and he currently leads the speech recognition and fundamental technology direction at DAMO Academy's speech laboratory.

***Yanmin Qian, Professor, Shanghai Jiao Tong University, China***

Email: [yanminqian@sjtu.edu.cn](mailto:yanminqian@sjtu.edu.cn)

<img src="images/qian.jpeg" alt="qian" width="20%">

Yanmin Qian received the B.S. degree from the Department of Electronic and Information Engineering, Huazhong University of Science and Technology, Wuhan, China, in 2007, and the Ph.D. degree from the Department of Electronic Engineering, Tsinghua University, Beijing, China, in 2012. Since 2013, he has been with the Department of Computer Science and Engineering, Shanghai Jiao Tong University (SJTU), Shanghai, China, where he is currently an Associate Professor. From 2015 to 2016, he also worked as an Associate Researcher in the Speech Group, Cambridge University Engineering Department, Cambridge, U.K. He is a senior member of IEEE, a member of ISCA, and one of the founding members of the Kaldi speech recognition toolkit. He has published more than 110 papers on speech and language processing with 4000+ citations, including at top conferences such as ICASSP, INTERSPEECH and ASRU. His current research interests include acoustic and language modeling in speech recognition, speaker and language recognition, keyword spotting, and multimedia signal processing.

***Zhuo Chen, Applied Scientist in Microsoft, USA***

Email: [zhuc@microsoft.com](mailto:zhuc@microsoft.com)

<img src="images/chenzhuo.jpg" alt="chenzhuo" width="20%">

Zhuo Chen received the Ph.D. degree from Columbia University, New York, NY, USA, in 2017. He is currently a Principal Applied Data Scientist with Microsoft. He has authored or coauthored more than 80 papers in peer-reviewed journals and conferences with around 6000 citations, and he is a reviewer or technical committee member for more than ten journals and conferences. His research interests include automatic conversation recognition, speech separation, diarization, and speaker information extraction. He has actively participated in academic events and challenges and won several awards. Meanwhile, he contributed to open-sourced datasets, such as WSJ0-2mix, LibriCSS, and AISHELL-4, that have become main benchmark datasets for multi-speaker processing research. In 2020, he was a team leader in the 2020 Jelinek workshop, leading more than 30 researchers and students to push the state of the art in conversation transcription.

***Jian Wu, Applied Scientist in Microsoft, USA***

Email: [wujian@microsoft.com](mailto:wujian@microsoft.com)

<img src="images/wujian.jpg" alt="wujian" width="20%">

Jian Wu received a master's degree from Northwestern Polytechnical University, Xi'an, China, in 2020, and he is currently an Applied Scientist at Microsoft, USA. His research interests cover multi-channel signal processing, robust and multi-talker speech recognition, speech enhancement, dereverberation, and separation. He has around 30 conference publications with over 1200 citations in total. He has participated in several challenges such as CHiME-5, DNS 2020 and FFSVC 2020, and contributed to open-sourced datasets including LibriCSS and AISHELL-4. He is also a reviewer for several journals and conferences such as ICASSP, SLT, TASLP and SPL.

***Hui Bu, CEO, AISHELL foundation, China***

Email: [buhui@aishelldata.com](mailto:buhui@aishelldata.com)

<img src="images/buhui.jpeg" alt="buhui" width="20%">

Hui Bu received his master's degree from the Artificial Intelligence Laboratory of Korea University in 2014. He is the founder and CEO of AISHELL and the AISHELL foundation. He participated in the release of the AISHELL 1 \& 2 \& 3 \& 4, DMASH and HI-MIA open-source database projects and is the co-founder of the China Kaldi offline Technology Forum.
@@ -20,10 +20,3 @@ To facilitate reproducible research, we provide a comprehensive overview of the

./Rules
./Organizers
./Contact

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
@@ -135,14 +135,6 @@
<li class="toctree-l1"><a class="reference internal" href="Contact.html">Contact</a></li>
</ul>
</div>
</section>
<section id="indices-and-tables">
<h1>Indices and tables<a class="headerlink" href="#indices-and-tables" title="Permalink to this heading">¶</a></h1>
<ul class="simple">
<li><p><a class="reference internal" href="genindex.html"><span class="std std-ref">Index</span></a></p></li>
<li><p><a class="reference internal" href="py-modindex.html"><span class="std std-ref">Module Index</span></a></p></li>
<li><p><a class="reference internal" href="search.html"><span class="std std-ref">Search Page</span></a></p></li>
</ul>
</section>
File diff suppressed because one or more lines are too long
@@ -20,10 +20,3 @@ To facilitate reproducible research, we provide a comprehensive overview of the

./Rules
./Organizers
./Contact

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
Binary file not shown.
Binary file not shown.
@@ -20,10 +20,3 @@ ASRU 2023 多通道多方会议转录挑战 2.0

./规则
./组委会
./联系方式

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
@@ -136,14 +136,6 @@
<li class="toctree-l1"><a class="reference internal" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html">联系方式</a></li>
</ul>
</div>
</section>
<section id="indices-and-tables">
<h1>Indices and tables<a class="headerlink" href="#indices-and-tables" title="此标题的永久链接">¶</a></h1>
<ul class="simple">
<li><p><a class="reference internal" href="genindex.html"><span class="std std-ref">索引</span></a></p></li>
<li><p><a class="reference internal" href="py-modindex.html"><span class="std std-ref">模块索引</span></a></p></li>
<li><p><a class="reference internal" href="search.html"><span class="std std-ref">搜索页面</span></a></p></li>
</ul>
</section>
@@ -1 +1 @@
Search.setIndex({"docnames": ["index", "\u57fa\u7ebf", "\u6570\u636e\u96c6", "\u7b80\u4ecb", "\u7ec4\u59d4\u4f1a", "\u8054\u7cfb\u65b9\u5f0f", "\u89c4\u5219", "\u8d5b\u9053\u8bbe\u7f6e\u4e0e\u8bc4\u4f30"], "filenames": ["index.rst", "\u57fa\u7ebf.md", "\u6570\u636e\u96c6.md", "\u7b80\u4ecb.md", "\u7ec4\u59d4\u4f1a.md", "\u8054\u7cfb\u65b9\u5f0f.md", "\u89c4\u5219.md", "\u8d5b\u9053\u8bbe\u7f6e\u4e0e\u8bc4\u4f30.md"], "titles": ["ASRU 2023 \u591a\u901a\u9053\u591a\u65b9\u4f1a\u8bae\u8f6c\u5f55\u6311\u6218 2.0", "\u57fa\u7ebf", "\u6570\u636e\u96c6", "\u7b80\u4ecb", "\u7ec4\u59d4\u4f1a", "\u8054\u7cfb\u65b9\u5f0f", "\u7ade\u8d5b\u89c4\u5219", "\u8d5b\u9053\u8bbe\u7f6e\u4e0e\u8bc4\u4f30"], "terms": {"m2met": [0, 3, 5, 7], "asru2023": [0, 3], "m2met2": [0, 3, 5, 7], "contact": [], "funasr": 1, "sa": 1, "asr": [1, 3, 7], "speakerencod": 1, "modelscop": [1, 7], "todo": 1, "fill": 1, "with": 1, "the": 1, "readm": 1, "md": 1, "of": 1, "baselin": [1, 2], "aishel": [2, 7], "cn": [2, 4, 7], "celeb": [2, 7], "test": [2, 6, 7], "2023": [2, 3, 6, 7], "118": 2, "75": 2, "104": 2, "train": 2, "eval": [2, 6], "10": [2, 3, 7], "212": 2, "15": 2, "30": 2, "456": 2, "25": 2, "13": [2, 3], "55": 2, "42": 2, "27": 2, "34": 2, "76": 2, "20": 2, "textgrid": 2, "id": 2, "openslr": 2, "automat": 3, "speech": 3, "recognit": 3, "speaker": 3, "diariz": 3, "rich": 3, "transcript": 3, "evalu": 3, "chime": 3, "comput": 3, "hear": 3, "in": 3, "multisourc": 3, "environ": 3, "misp": 3, "multimod": 3, "inform": 3, "base": 3, "process": 3, "multi": 3, "channel": 3, "parti": 3, "meet": 3, "assp2022": 3, "19": 3, "12": 3, "asru": 3, "workshop": 3, "lxie": 4, "nwpu": 4, "edu": 4, "kong": 4, "aik": 4, "lee": 4, "star": 4, "kongaik": 4, "ieee": 4, "org": 4, "zhiji": 4, "yzj": 4, "alibaba": 4, "inc": 4, "com": [4, 5], "sli": 4, "zsl": 4, "yanminqian": 4, "sjtu": 4, "zhuc": 4, "microsoft": 4, "wujian": 4, "ceo": 4, "buhui": 4, "aishelldata": 4, "alimeet": [5, 7], "gmail": 5, "cpcer": [6, 7], "las": 6, "rnnt": 6, "transform": 6, "aishell4": 7, "vad": 7, "cer": 7, "ins": 7, "sub": 7, "del": 7, "text": 7, "frac": 7, "mathcal": 7, "n_": 7, "total": 7, "time": 7, "100": 7, "hug": 7, "face": 7}, "objects": {}, "objtypes": {}, "objnames": {}, "titleterms": {"asru": 0, "2023": 0, "indic": 0, "and": 0, "tabl": 0, "alimeet": 2, "aoe": 3, "contact": []}, "envversion": {"sphinx.domains.c": 2, "sphinx.domains.changeset": 1, "sphinx.domains.citation": 1, "sphinx.domains.cpp": 8, "sphinx.domains.index": 1, "sphinx.domains.javascript": 2, "sphinx.domains.math": 2, "sphinx.domains.python": 3, "sphinx.domains.rst": 2, "sphinx.domains.std": 2, "sphinx": 57}, "alltitles": {"ASRU 2023 \u591a\u901a\u9053\u591a\u65b9\u4f1a\u8bae\u8f6c\u5f55\u6311\u6218 2.0": [[0, "asru-2023-2-0"]], "\u76ee\u5f55:": [[0, null]], "Indices and tables": [[0, "indices-and-tables"]], "\u57fa\u7ebf": [[1, "id1"]], "\u57fa\u7ebf\u6982\u8ff0": [[1, "id2"]], "\u5feb\u901f\u5f00\u59cb": [[1, "id3"]], "\u57fa\u7ebf\u7ed3\u679c": [[1, "id4"]], "\u6570\u636e\u96c6": [[2, "id1"]], "\u6570\u636e\u96c6\u6982\u8ff0": [[2, "id2"]], "Alimeeting\u6570\u636e\u96c6\u4ecb\u7ecd": [[2, "alimeeting"]], "\u83b7\u53d6\u6570\u636e": [[2, "id3"]], "\u7b80\u4ecb": [[3, "id1"]], "\u7ade\u8d5b\u4ecb\u7ecd": [[3, "id2"]], "\u65f6\u95f4\u5b89\u6392(AOE\u65f6\u95f4)": [[3, "aoe"]], "\u7ade\u8d5b\u62a5\u540d": [[3, "id3"]], "\u7ade\u8d5b\u89c4\u5219": [[6, "id1"]], "\u8d5b\u9053\u8bbe\u7f6e\u4e0e\u8bc4\u4f30": [[7, "id1"]], "\u8bf4\u8bdd\u4eba\u76f8\u5173\u7684\u8bed\u97f3\u8bc6\u522b (\u4e3b\u8d5b\u9053)": 
[[7, "id2"]], "\u8bc4\u4f30\u65b9\u6cd5": [[7, "id3"]], "\u5b50\u8d5b\u9053\u8bbe\u7f6e": [[7, "id4"]], "\u5b50\u8d5b\u9053\u4e00 (\u9650\u5b9a\u8bad\u7ec3\u6570\u636e):": [[7, "id5"]], "\u5b50\u8d5b\u9053\u4e8c (\u5f00\u653e\u8bad\u7ec3\u6570\u636e):": [[7, "id6"]], "\u7ec4\u59d4\u4f1a": [[4, "id1"]], "\u8054\u7cfb\u65b9\u5f0f": [[5, "id1"]]}, "indexentries": {}})
Search.setIndex({"docnames": ["index", "\u57fa\u7ebf", "\u6570\u636e\u96c6", "\u7b80\u4ecb", "\u7ec4\u59d4\u4f1a", "\u8054\u7cfb\u65b9\u5f0f", "\u89c4\u5219", "\u8d5b\u9053\u8bbe\u7f6e\u4e0e\u8bc4\u4f30"], "filenames": ["index.rst", "\u57fa\u7ebf.md", "\u6570\u636e\u96c6.md", "\u7b80\u4ecb.md", "\u7ec4\u59d4\u4f1a.md", "\u8054\u7cfb\u65b9\u5f0f.md", "\u89c4\u5219.md", "\u8d5b\u9053\u8bbe\u7f6e\u4e0e\u8bc4\u4f30.md"], "titles": ["ASRU 2023 \u591a\u901a\u9053\u591a\u65b9\u4f1a\u8bae\u8f6c\u5f55\u6311\u6218 2.0", "\u57fa\u7ebf", "\u6570\u636e\u96c6", "\u7b80\u4ecb", "\u7ec4\u59d4\u4f1a", "\u8054\u7cfb\u65b9\u5f0f", "\u7ade\u8d5b\u89c4\u5219", "\u8d5b\u9053\u8bbe\u7f6e\u4e0e\u8bc4\u4f30"], "terms": {"m2met": [0, 3, 5, 7], "asru2023": [0, 3], "m2met2": [0, 3, 5, 7], "contact": [], "funasr": 1, "sa": 1, "asr": [1, 3, 7], "speakerencod": 1, "modelscop": [1, 7], "todo": 1, "fill": 1, "with": 1, "the": 1, "readm": 1, "md": 1, "of": 1, "baselin": [1, 2], "aishel": [2, 7], "cn": [2, 4, 7], "celeb": [2, 7], "test": [2, 6, 7], "2023": [2, 3, 6, 7], "118": 2, "75": 2, "104": 2, "train": 2, "eval": [2, 6], "10": [2, 3, 7], "212": 2, "15": 2, "30": 2, "456": 2, "25": 2, "13": [2, 3], "55": 2, "42": 2, "27": 2, "34": 2, "76": 2, "20": 2, "textgrid": 2, "id": 2, "openslr": 2, "automat": 3, "speech": 3, "recognit": 3, "speaker": 3, "diariz": 3, "rich": 3, "transcript": 3, "evalu": 3, "chime": 3, "comput": 3, "hear": 3, "in": 3, "multisourc": 3, "environ": 3, "misp": 3, "multimod": 3, "inform": 3, "base": 3, "process": 3, "multi": 3, "channel": 3, "parti": 3, "meet": 3, "assp2022": 3, "19": 3, "12": 3, "asru": 3, "workshop": 3, "lxie": 4, "nwpu": 4, "edu": 4, "kong": 4, "aik": 4, "lee": 4, "star": 4, "kongaik": 4, "ieee": 4, "org": 4, "zhiji": 4, "yzj": 4, "alibaba": 4, "inc": 4, "com": [4, 5], "sli": 4, "zsl": 4, "yanminqian": 4, "sjtu": 4, "zhuc": 4, "microsoft": 4, "wujian": 4, "ceo": 4, "buhui": 4, "aishelldata": 4, "alimeet": [5, 7], "gmail": 5, "cpcer": [6, 7], "las": 6, "rnnt": 6, "transform": 6, "aishell4": 7, "vad": 7, "cer": 7, "ins": 7, "sub": 7, "del": 7, "text": 7, "frac": 7, "mathcal": 7, "n_": 7, "total": 7, "time": 7, "100": 7, "hug": 7, "face": 7}, "objects": {}, "objtypes": {}, "objnames": {}, "titleterms": {"asru": 0, "2023": 0, "indic": [], "and": [], "tabl": [], "alimeet": 2, "aoe": 3, "contact": []}, "envversion": {"sphinx.domains.c": 2, "sphinx.domains.changeset": 1, "sphinx.domains.citation": 1, "sphinx.domains.cpp": 8, "sphinx.domains.index": 1, "sphinx.domains.javascript": 2, "sphinx.domains.math": 2, "sphinx.domains.python": 3, "sphinx.domains.rst": 2, "sphinx.domains.std": 2, "sphinx": 57}, "alltitles": {"\u8054\u7cfb\u65b9\u5f0f": [[5, "id1"]], "\u57fa\u7ebf": [[1, "id1"]], "\u57fa\u7ebf\u6982\u8ff0": [[1, "id2"]], "\u5feb\u901f\u5f00\u59cb": [[1, "id3"]], "\u57fa\u7ebf\u7ed3\u679c": [[1, "id4"]], "\u6570\u636e\u96c6": [[2, "id1"]], "\u6570\u636e\u96c6\u6982\u8ff0": [[2, "id2"]], "Alimeeting\u6570\u636e\u96c6\u4ecb\u7ecd": [[2, "alimeeting"]], "\u83b7\u53d6\u6570\u636e": [[2, "id3"]], "\u7b80\u4ecb": [[3, "id1"]], "\u7ade\u8d5b\u4ecb\u7ecd": [[3, "id2"]], "\u65f6\u95f4\u5b89\u6392(AOE\u65f6\u95f4)": [[3, "aoe"]], "\u7ade\u8d5b\u62a5\u540d": [[3, "id3"]], "\u7ec4\u59d4\u4f1a": [[4, "id1"]], "\u7ade\u8d5b\u89c4\u5219": [[6, "id1"]], "\u8d5b\u9053\u8bbe\u7f6e\u4e0e\u8bc4\u4f30": [[7, "id1"]], "\u8bf4\u8bdd\u4eba\u76f8\u5173\u7684\u8bed\u97f3\u8bc6\u522b (\u4e3b\u8d5b\u9053)": [[7, "id2"]], "\u8bc4\u4f30\u65b9\u6cd5": [[7, "id3"]], "\u5b50\u8d5b\u9053\u8bbe\u7f6e": [[7, "id4"]], 
"\u5b50\u8d5b\u9053\u4e00 (\u9650\u5b9a\u8bad\u7ec3\u6570\u636e):": [[7, "id5"]], "\u5b50\u8d5b\u9053\u4e8c (\u5f00\u653e\u8bad\u7ec3\u6570\u636e):": [[7, "id6"]], "ASRU 2023 \u591a\u901a\u9053\u591a\u65b9\u4f1a\u8bae\u8f6c\u5f55\u6311\u6218 2.0": [[0, "asru-2023-2-0"]], "\u76ee\u5f55:": [[0, null]]}, "indexentries": {}})
@@ -102,7 +102,7 @@
</li>
<li class="toctree-l1"><a class="reference internal" href="%E8%A7%84%E5%88%99.html">竞赛规则</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E7%BB%84%E5%A7%94%E4%BC%9A.html">组委会</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html">Contact</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html">联系方式</a></li>
</ul>
@@ -102,7 +102,7 @@
</li>
<li class="toctree-l1"><a class="reference internal" href="%E8%A7%84%E5%88%99.html">竞赛规则</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E7%BB%84%E5%A7%94%E4%BC%9A.html">组委会</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html">Contact</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html">联系方式</a></li>
</ul>
@@ -103,7 +103,7 @@
</li>
<li class="toctree-l1"><a class="reference internal" href="%E8%A7%84%E5%88%99.html">竞赛规则</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E7%BB%84%E5%A7%94%E4%BC%9A.html">组委会</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html">Contact</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html">联系方式</a></li>
</ul>
@@ -27,7 +27,7 @@
<script src="_static/translations.js"></script>
<link rel="index" title="索引" href="genindex.html" />
<link rel="search" title="搜索" href="search.html" />
<link rel="next" title="Contact" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html" />
<link rel="next" title="联系方式" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html" />
<link rel="prev" title="竞赛规则" href="%E8%A7%84%E5%88%99.html" />
@@ -40,7 +40,7 @@
<a href="genindex.html" title="总索引"
accesskey="I">索引</a></li>
<li class="right" >
<a href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html" title="Contact"
<a href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html" title="联系方式"
accesskey="N">下一页</a> |</li>
<li class="right" >
<a href="%E8%A7%84%E5%88%99.html" title="竞赛规则"
@@ -102,7 +102,7 @@
</li>
<li class="toctree-l1"><a class="reference internal" href="%E8%A7%84%E5%88%99.html">竞赛规则</a></li>
<li class="toctree-l1 current"><a class="current reference internal" href="#">组委会</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html">Contact</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html">联系方式</a></li>
</ul>
@@ -168,7 +168,7 @@
</div>
<div class="pull-right">
<a class="btn btn-default" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html" title="下一章 (use the right arrow)">Contact</a>
<a class="btn btn-default" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html" title="下一章 (use the right arrow)">联系方式</a>
</div>
</div>
<div class="clearer"></div>
@@ -183,7 +183,7 @@
<a href="genindex.html" title="总索引"
>索引</a></li>
<li class="right" >
<a href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html" title="Contact"
<a href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html" title="联系方式"
>下一页</a> |</li>
<li class="right" >
<a href="%E8%A7%84%E5%88%99.html" title="竞赛规则"
@@ -102,7 +102,7 @@
</li>
<li class="toctree-l1 current"><a class="current reference internal" href="#">竞赛规则</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E7%BB%84%E5%A7%94%E4%BC%9A.html">组委会</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html">Contact</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html">联系方式</a></li>
</ul>
@@ -103,7 +103,7 @@
</li>
<li class="toctree-l1"><a class="reference internal" href="%E8%A7%84%E5%88%99.html">竞赛规则</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E7%BB%84%E5%A7%94%E4%BC%9A.html">组委会</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html">Contact</a></li>
<li class="toctree-l1"><a class="reference internal" href="%E8%81%94%E7%B3%BB%E6%96%B9%E5%BC%8F.html">联系方式</a></li>
</ul>
@@ -20,10 +20,3 @@ ASRU 2023 多通道多方会议转录挑战 2.0

./规则
./组委会
./联系方式

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`