mirror of https://github.com/modelscope/FunASR, synced 2025-09-15 14:48:36 +08:00
commit f77c5803f4

.gitignore (vendored)
@@ -16,4 +16,5 @@ MaaS-lib
.egg*
dist
build
funasr.egg-info
docs/_build
README.md
@@ -13,10 +13,10 @@
| [**Highlights**](#highlights)
| [**Installation**](#installation)
| [**Docs**](https://alibaba-damo-academy.github.io/FunASR/en/index.html)
| [**Tutorial**](https://github.com/alibaba-damo-academy/FunASR/wiki#funasr%E7%94%A8%E6%88%B7%E6%89%8B%E5%86%8C)
| [**Tutorial_CN**](https://github.com/alibaba-damo-academy/FunASR/wiki#funasr%E7%94%A8%E6%88%B7%E6%89%8B%E5%86%8C)
| [**Papers**](https://github.com/alibaba-damo-academy/FunASR#citations)
| [**Runtime**](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime)
| [**Model Zoo**](https://github.com/alibaba-damo-academy/FunASR/blob/main/docs/modelscope_models.md)
| [**Model Zoo**](https://github.com/alibaba-damo-academy/FunASR/blob/main/docs/model_zoo/modelscope_models.md)
| [**Contact**](#contact)
| [**M2MET2.0 Challenge**](https://github.com/alibaba-damo-academy/FunASR#multi-channel-multi-party-meeting-transcription-20-m2met20-challenge)
@@ -28,7 +28,7 @@ For the release notes, please ref to [news](https://github.com/alibaba-damo-acad

## Highlights
- FunASR supports speech recognition (ASR), Multi-talker ASR, Voice Activity Detection (VAD), Punctuation Restoration, Language Models, Speaker Verification and Speaker Diarization.
- We have released a large number of academic and industrial pretrained models on [ModelScope](https://www.modelscope.cn/models?page=1&tasks=auto-speech-recognition)
- We have released a large number of academic and industrial pretrained models on [ModelScope](https://www.modelscope.cn/models?page=1&tasks=auto-speech-recognition); refer to the [Model Zoo](https://github.com/alibaba-damo-academy/FunASR/blob/main/docs/model_zoo/modelscope_models.md)
- The pretrained model [Paraformer-large](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary) obtains the best performance on many tasks in the [SpeechIO leaderboard](https://github.com/SpeechColab/Leaderboard)
- FunASR provides an easy-to-use pipeline to finetune pretrained models from [ModelScope](https://www.modelscope.cn/models?page=1&tasks=auto-speech-recognition)
- Compared to the [ESPnet](https://github.com/espnet/espnet) framework, training on large-scale datasets in FunASR is much faster owing to the optimized dataloader.
@@ -60,12 +60,8 @@ pip install -U modelscope
# pip install -U modelscope -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html -i https://mirror.sjtu.edu.cn/pypi/web/simple
```

For more details, please refer to [installation](https://alibaba-damo-academy.github.io/FunASR/en/installation.html)
For more details, please refer to [installation](https://alibaba-damo-academy.github.io/FunASR/en/installation/installation.html)

[//]: # ()
[//]: # (## Usage)

[//]: # (For users who are new to FunASR and ModelScope, please refer to FunASR Docs([CN](https://alibaba-damo-academy.github.io/FunASR/cn/index.html) / [EN](https://alibaba-damo-academy.github.io/FunASR/en/index.html)))

## Contact
docs/README.md (new file)
@@ -0,0 +1,19 @@

# FunASR document generation

## Generate HTML
For convenience, we allow users to generate the HTML documentation locally.

First, install the following packages, which are required for building the HTML:
```sh
conda activate funasr
pip install requests sphinx nbsphinx sphinx_markdown_tables sphinx_rtd_theme recommonmark
```

Then generate the HTML:

```sh
cd docs
make html
```

The generated files are all contained in the "FunASR/docs/_build" directory. You can access the FunASR documentation by simply opening the "html/index.html" file in this directory in your browser.
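To preview the generated documentation, one convenient option (assuming Python 3 is available in the `funasr` environment) is to serve the build output with a local static file server:

```sh
# Serve the generated HTML at http://localhost:8000 (any static file server works).
cd docs/_build/html
python -m http.server 8000
```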
@@ -1,129 +1,3 @@

# Speech Recognition
Here we take "training a Paraformer model from scratch on the AISHELL-1 dataset" as an example to introduce how to use FunASR. Following this example, users can similarly employ other datasets (such as AISHELL-2) to train other models (such as Conformer or Transformer).

## Overall Introduction
We provide a recipe `egs/aishell/paraformer/run.sh` for training a Paraformer model on the AISHELL-1 dataset. The recipe consists of five stages and supports training on multiple GPUs as well as decoding on CPU or GPU. Before introducing each stage in detail, we first explain several parameters that should be set by users; a sketch of typical settings follows the list.
- `CUDA_VISIBLE_DEVICES`: list of visible GPUs
- `gpu_num`: the number of GPUs used for training
- `gpu_inference`: whether to use GPUs for decoding
- `njob`: for CPU decoding, the total number of CPU jobs; for GPU decoding, the number of jobs on each GPU
- `data_aishell`: the path of the raw AISHELL-1 dataset
- `feats_dir`: the path for saving processed data
- `nj`: the number of jobs for data preparation
- `speed_perturb`: the speed perturbation factors applied to the speech
- `exp_dir`: the path for saving experimental results
- `tag`: the suffix of the experimental result directory
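A minimal sketch of these settings as they might appear near the top of `run.sh` (the values are illustrative only and should be adapted to your environment):

```sh
# Illustrative settings inside run.sh; adjust paths and GPU ids to your machine.
CUDA_VISIBLE_DEVICES="0,1"              # visible GPU list
gpu_num=2                               # number of GPUs used for training
gpu_inference=True                      # whether to decode on GPU
njob=4                                  # total CPU jobs, or jobs per GPU when decoding on GPU
data_aishell=/nfs/ASR_DATA/AISHELL-1    # raw AISHELL-1 path
feats_dir=./dump_fbank                  # where processed data is written
nj=32                                   # parallel jobs for data preparation
speed_perturb="0.9 1.0 1.1"             # speed perturbation factors
exp_dir=./exp                           # experiment output directory
tag="baseline"                          # suffix of the result directory
```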
## Stage 0: Data preparation
This stage processes the raw AISHELL-1 dataset `$data_aishell` and generates the corresponding `wav.scp` and `text` files in `$feats_dir/data/xxx`, where `xxx` is one of `train/dev/test`. Here we assume users have already downloaded the AISHELL-1 dataset. If not, the data can be downloaded [here](https://www.openslr.org/33/) and `$data_aishell` should be set to its path. Examples of `wav.scp` and `text` are as follows:
* `wav.scp`
```
BAC009S0002W0122 /nfs/ASR_DATA/AISHELL-1/data_aishell/wav/train/S0002/BAC009S0002W0122.wav
BAC009S0002W0123 /nfs/ASR_DATA/AISHELL-1/data_aishell/wav/train/S0002/BAC009S0002W0123.wav
BAC009S0002W0124 /nfs/ASR_DATA/AISHELL-1/data_aishell/wav/train/S0002/BAC009S0002W0124.wav
...
```
* `text`
```
BAC009S0002W0122 而 对 楼 市 成 交 抑 制 作 用 最 大 的 限 购
BAC009S0002W0123 也 成 为 地 方 政 府 的 眼 中 钉
BAC009S0002W0124 自 六 月 底 呼 和 浩 特 市 率 先 宣 布 取 消 限 购 后
...
```
Both files have two columns: the first column contains the wav ids and the second column contains the corresponding wav paths or label tokens.
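As a quick sanity check (not part of the recipe), you can verify that `wav.scp` and `text` cover exactly the same utterance ids; mismatched ids are a common source of data preparation errors.

```sh
# Compare the id columns of the two files for the training set; no output means they match.
dir=${feats_dir}/data/train
diff <(awk '{print $1}' ${dir}/wav.scp | sort) <(awk '{print $1}' ${dir}/text | sort)
```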
## Stage 1: Feature Generation
This stage extracts FBank features from `wav.scp` and applies speed perturbation as data augmentation according to `speed_perturb`. Users can set `nj` to control the number of jobs for feature generation. The generated features are saved in `$feats_dir/dump/xxx/ark` and the corresponding `feats.scp` files are saved as `$feats_dir/dump/xxx/feats.scp`. An example of `feats.scp` is as follows:
* `feats.scp`
```
...
BAC009S0002W0122_sp0.9 /nfs/funasr_data/aishell-1/dump/fbank/train/ark/feats.16.ark:592751055
...
```
Note that the samples in this file have already been shuffled randomly. The file contains two columns: the first column is the wav ids and the second column is the kaldi-ark feature paths. In addition, `speech_shape` and `text_shape` are also generated in this stage, denoting the speech feature shape and the text length of each sample. Examples are shown as follows:
* `speech_shape`
```
...
BAC009S0002W0122_sp0.9 665,80
...
```
* `text_shape`
```
...
BAC009S0002W0122_sp0.9 15
...
```
Both files have two columns: the first column is the wav ids and the second column is the corresponding speech feature shape or text length.
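As an aside, the total amount of training speech can be estimated from `speech_shape`, assuming the common 10 ms frame shift for FBank features (the file location may differ in your setup):

```sh
# Sum the frame counts (the first field of the "frames,dims" shape) and convert to hours.
awk -F'[ ,]' '{frames += $2} END {printf "%.1f hours\n", frames * 0.01 / 3600}' \
    ${feats_dir}/dump/train/speech_shape
```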
## Stage 2: Dictionary Preparation
This stage processes the dictionary, which is used as a mapping between label characters and integer indices during ASR training. The processed dictionary file is saved as `$feats_dir/data/$lang_toekn_list/$token_type/tokens.txt`. An example of `tokens.txt` is as follows:
* `tokens.txt`
```
<blank>
<s>
</s>
一
丁
...
龚
龟
<unk>
```
* `<blank>`: indicates the blank token for CTC
* `<s>`: indicates the start-of-sentence token
* `</s>`: indicates the end-of-sentence token
* `<unk>`: indicates the out-of-vocabulary token
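The recipe builds `tokens.txt` automatically, but conceptually the mapping is just the sorted set of characters observed in the training transcripts wrapped with the special tokens. A rough, illustrative sketch (not the actual recipe script):

```sh
# Collect the distinct label characters from the token columns of `text`
# and add the special tokens expected by the model.
{
  echo "<blank>"; echo "<s>"; echo "</s>"
  cut -d' ' -f2- ${feats_dir}/data/train/text | tr ' ' '\n' | grep -v '^$' | sort -u
  echo "<unk>"
} > tokens.txt
```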
## Stage 3: Training
This stage performs the training of the specified model. To start training, users should manually set `exp_dir`, `CUDA_VISIBLE_DEVICES` and `gpu_num`, which have already been explained above. By default, the best `$keep_nbest_models` checkpoints on the validation set are averaged to obtain a better model, which is then used for decoding.

* DDP Training

We support DistributedDataParallel (DDP) training; details can be found [here](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html). To enable DDP training, set `gpu_num` to a value greater than 1. For example, if you set `CUDA_VISIBLE_DEVICES=0,1,5,6,7` and `gpu_num=3`, the GPUs with ids 0, 1 and 5 will be used for training.
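To make this concrete, the example above corresponds to the following settings inside `run.sh` (illustrative only):

```sh
# Five GPUs are visible, but gpu_num=3 means only the first three visible
# devices (ids 0, 1 and 5) are used for DDP training.
CUDA_VISIBLE_DEVICES="0,1,5,6,7"
gpu_num=3
```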
* DataLoader

We support an optional iterable-style DataLoader based on [PyTorch Iterable-style DataPipes](https://pytorch.org/data/beta/torchdata.datapipes.iter.html) for large datasets; users can set `dataset_type=large` to enable it.

* Configuration

The training parameters, including model, optimization and dataset settings, can be set in a YAML file in the `conf` directory. Users can also set the parameters directly in the `run.sh` recipe. Please avoid setting the same parameter in both the YAML file and the recipe.

* Training Steps

Two parameters specify the length of training: `max_epoch` and `max_update`. `max_epoch` is the total number of training epochs, while `max_update` is the total number of training steps. If both are specified, training stops as soon as either limit is reached.

* Tensorboard

Users can use TensorBoard to monitor the loss, learning rate, etc. Please run the following command:
```
tensorboard --logdir ${exp_dir}/exp/${model_dir}/tensorboard/train
```
## Stage 4: Decoding
This stage generates the recognition results and calculates the `CER` to verify the performance of the trained model.

* Mode Selection

Since FunASR supports paraformer, uniasr, conformer and other models, a `mode` parameter should be specified as `asr/paraformer/uniasr` according to the trained model.

* Configuration

We support CTC decoding, attention decoding and hybrid CTC-attention decoding, which can be selected via `ctc_weight` in a YAML file in the `conf` directory. Specifically, `ctc_weight=1.0` means CTC decoding, `ctc_weight=0.0` means attention decoding, and `0.0<ctc_weight<1.0` means hybrid CTC-attention decoding.

* CPU/GPU Decoding

We support both CPU and GPU decoding. For CPU decoding, set `gpu_inference=False` and use `njob` to specify the total number of CPU decoding jobs. For GPU decoding, set `gpu_inference=True`; in addition, set `gpuid_list` to indicate which GPUs are used for decoding and `njob` to indicate the number of decoding jobs on each GPU.
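The two decoding modes therefore correspond to settings like the following (illustrative values):

```sh
# CPU decoding: 16 decoding jobs in total.
gpu_inference=False
njob=16

# GPU decoding: decode on GPUs 0 and 1, with 4 jobs on each GPU.
gpu_inference=True
gpuid_list="0,1"
njob=4
```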
* Performance

We adopt `CER` to measure the performance. The results are stored in `$exp_dir/exp/$model_dir/$decoding_yaml_name/$average_model_name/$dset`, namely `text.cer` and `text.cer.txt`. `text.cer` saves the comparison between the recognized text and the reference text, while `text.cer.txt` saves the final `CER` result. The following is an example of `text.cer`:
* `text.cer`
```
...
BAC009S0764W0213(nwords=11,cor=11,ins=0,del=0,sub=0) corr=100.00%,cer=0.00%
ref: 构 建 良 好 的 旅 游 市 场 环 境
res: 构 建 良 好 的 旅 游 市 场 环 境
...
```
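After decoding finishes, the overall score can be read directly from `text.cer.txt` (path variables as defined above, with `$dset` set to the evaluated split), for example:

```sh
# Print the final CER summary for the test set.
cat ${exp_dir}/exp/${model_dir}/${decoding_yaml_name}/${average_model_name}/test/text.cer.txt
```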
@@ -1,129 +1,2 @@

# Punctuation Restoration
@@ -1,129 +1,2 @@

# Speaker Diarization
@@ -1,129 +1,2 @@

# Speaker Verification
@@ -1,129 +1,2 @@

# Voice Activity Detection
@@ -17,8 +17,8 @@ Overview
   :maxdepth: 1
   :caption: Installation

   ./installation.md
   ./docker.md
   ./installation/installation.md
   ./installation/docker.md

.. toctree::
   :maxdepth: 1

@@ -44,6 +44,7 @@ Overview
   ./modelscope_pipeline/tp_pipeline.md
   ./modelscope_pipeline/sv_pipeline.md
   ./modelscope_pipeline/sd_pipeline.md
   ./modelscope_pipeline/itn_pipeline.md

.. toctree::
   :maxdepth: 1

@@ -56,8 +57,8 @@ Overview
   :maxdepth: 1
   :caption: Model Zoo

   ./modelscope_models.md
   ./huggingface_models.md
   ./model_zoo/modelscope_models.md
   ./model_zoo/huggingface_models.md

.. toctree::
   :maxdepth: 1

@@ -70,6 +71,7 @@ Overview
   ./runtime/grpc_python.md
   ./runtime/grpc_cpp.md
   ./runtime/websocket_python.md
   ./runtime/websocket_cpp.md

.. toctree::
   :maxdepth: 1

@@ -84,25 +86,25 @@ Overview
   :maxdepth: 1
   :caption: Funasr Library

   ./build_task.md
   ./reference/build_task.md

.. toctree::
   :maxdepth: 1
   :caption: Papers

   ./papers.md
   ./reference/papers.md

.. toctree::
   :maxdepth: 1
   :caption: Application

   ./application.md
   ./reference/application.md

.. toctree::
   :maxdepth: 1
   :caption: FQA

   ./FQA.md
   ./reference/FQA.md


Indices and tables
@@ -15,7 +15,8 @@ Here we provided several pretrained models on different datasets. The details of
| [Paraformer-large-long](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary) | CN & EN | Alibaba Speech Data (60000hours) | 8404 | 220M | Offline | Which could deal with arbitrary length input wav |
| [Paraformer-large-contextual](https://www.modelscope.cn/models/damo/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404/summary) | CN & EN | Alibaba Speech Data (60000hours) | 8404 | 220M | Offline | Which supports the hotword customization based on the incentive enhancement, and improves the recall and precision of hotwords. |
| [Paraformer](https://modelscope.cn/models/damo/speech_paraformer_asr_nat-zh-cn-16k-common-vocab8358-tensorflow1/summary) | CN & EN | Alibaba Speech Data (50000hours) | 8358 | 68M | Offline | Duration of input wav <= 20s |
| [Paraformer-online](https://www.modelscope.cn/models/damo/speech_paraformer_asr_nat-zh-cn-16k-common-vocab8404-online/summary) | CN & EN | Alibaba Speech Data (50000hours) | 8404 | 68M | Online | Which could deal with streaming input |
| [Paraformer-large-online](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online/summary) | CN & EN | Alibaba Speech Data (60000hours) | 8404 | 220M | Online | Which could deal with streaming input |
| [Paraformer-tiny](https://www.modelscope.cn/models/damo/speech_paraformer-tiny-commandword_asr_nat-zh-cn-16k-vocab544-pytorch/summary) | CN | Alibaba Speech Data (200hours) | 544 | 5.2M | Offline | Lightweight Paraformer model which supports Mandarin command words recognition |
| [Paraformer-aishell](https://www.modelscope.cn/models/damo/speech_paraformer_asr_nat-aishell1-pytorch/summary) | CN | AISHELL (178hours) | 4234 | 43M | Offline | |
| [ParaformerBert-aishell](https://modelscope.cn/models/damo/speech_paraformerbert_asr_nat-zh-cn-16k-aishell1-vocab4234-pytorch/summary) | CN | AISHELL (178hours) | 4234 | 43M | Offline | |
@@ -25,13 +26,27 @@ Here we provided several pretrained models on different datasets. The details of

#### UniASR Models

| Model Name | Language | Training Data | Vocab Size | Parameter | Offline/Online | Notes |
|:----------:|:--------:|:-------------:|:----------:|:---------:|:--------------:|:------|
| [UniASR](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-zh-cn-16k-common-vocab8358-tensorflow1-online/summary) | CN & EN | Alibaba Speech Data (60000hours) | 8358 | 100M | Online | UniASR streaming offline unifying models |
| [UniASR-large](https://modelscope.cn/models/damo/speech_UniASR-large_asr_2pass-zh-cn-16k-common-vocab8358-tensorflow1-offline/summary) | CN & EN | Alibaba Speech Data (60000hours) | 8358 | 220M | Offline | UniASR streaming offline unifying models |
| [UniASR Burmese](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-my-16k-common-vocab696-pytorch/summary) | Burmese | Alibaba Speech Data (? hours) | 696 | 95M | Online | UniASR streaming offline unifying models |
| [UniASR Hebrew](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-he-16k-common-vocab1085-pytorch/summary) | Hebrew | Alibaba Speech Data (? hours) | 1085 | 95M | Online | UniASR streaming offline unifying models |
| [UniASR Urdu](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-ur-16k-common-vocab877-pytorch/summary) | Urdu | Alibaba Speech Data (? hours) | 877 | 95M | Online | UniASR streaming offline unifying models |
| Model Name | Language | Training Data | Vocab Size | Parameter | Offline/Online | Notes |
|:----------:|:--------:|:-------------:|:----------:|:---------:|:--------------:|:------|
| [UniASR](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-zh-cn-16k-common-vocab8358-tensorflow1-online/summary) | CN & EN | Alibaba Speech Data (60000 hours) | 8358 | 100M | Online | UniASR streaming offline unifying models |
| [UniASR-large](https://modelscope.cn/models/damo/speech_UniASR-large_asr_2pass-zh-cn-16k-common-vocab8358-tensorflow1-offline/summary) | CN & EN | Alibaba Speech Data (60000 hours) | 8358 | 220M | Offline | UniASR streaming offline unifying models |
| [UniASR English](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-en-16k-common-vocab1080-tensorflow1-online/summary) | EN | Alibaba Speech Data (10000 hours) | 1080 | 95M | Online | UniASR streaming online unifying models |
| [UniASR Russian](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-ru-16k-common-vocab1664-tensorflow1-online/summary) | RU | Alibaba Speech Data (5000 hours) | 1664 | 95M | Online | UniASR streaming online unifying models |
| [UniASR Japanese](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-ja-16k-common-vocab93-tensorflow1-online/summary) | JA | Alibaba Speech Data (5000 hours) | 5977 | 95M | Online | UniASR streaming offline unifying models |
| [UniASR Korean](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-ko-16k-common-vocab6400-tensorflow1-online/summary) | KO | Alibaba Speech Data (2000 hours) | 6400 | 95M | Online | UniASR streaming online unifying models |
| [UniASR Cantonese (CHS)](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-cantonese-CHS-16k-common-vocab1468-tensorflow1-online/summary) | Cantonese (CHS) | Alibaba Speech Data (5000 hours) | 1468 | 95M | Online | UniASR streaming online unifying models |
| [UniASR Indonesian](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-id-16k-common-vocab1067-tensorflow1-online/summary) | ID | Alibaba Speech Data (1000 hours) | 1067 | 95M | Online | UniASR streaming offline unifying models |
| [UniASR Vietnamese](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-vi-16k-common-vocab1001-pytorch-online/summary) | VI | Alibaba Speech Data (1000 hours) | 1001 | 95M | Online | UniASR streaming offline unifying models |
| [UniASR Spanish](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-es-16k-common-vocab3445-tensorflow1-online/summary) | ES | Alibaba Speech Data (1000 hours) | 3445 | 95M | Online | UniASR streaming online unifying models |
| [UniASR Portuguese](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-pt-16k-common-vocab1617-tensorflow1-online/summary) | PT | Alibaba Speech Data (1000 hours) | 1617 | 95M | Online | UniASR streaming offline unifying models |
| [UniASR French](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-fr-16k-common-vocab3472-tensorflow1-online/summary) | FR | Alibaba Speech Data (1000 hours) | 3472 | 95M | Online | UniASR streaming online unifying models |
| [UniASR German](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-de-16k-common-vocab3690-tensorflow1-online/summary) | GE | Alibaba Speech Data (1000 hours) | 3690 | 95M | Online | UniASR streaming online unifying models |
| [UniASR Persian](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-fa-16k-common-vocab1257-pytorch-online/summary) | FA | Alibaba Speech Data (1000 hours) | 1257 | 95M | Online | UniASR streaming offline unifying models |
| [UniASR Burmese](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-my-16k-common-vocab696-pytorch/summary) | MY | Alibaba Speech Data (1000 hours) | 696 | 95M | Online | UniASR streaming offline unifying models |
| [UniASR Hebrew](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-he-16k-common-vocab1085-pytorch/summary) | HE | Alibaba Speech Data (1000 hours) | 1085 | 95M | Online | UniASR streaming offline unifying models |
| [UniASR Urdu](https://modelscope.cn/models/damo/speech_UniASR_asr_2pass-ur-16k-common-vocab877-pytorch/summary) | UR | Alibaba Speech Data (1000 hours) | 877 | 95M | Online | UniASR streaming offline unifying models |
#### Conformer Models

@@ -39,6 +54,7 @@ Here we provided several pretrained models on different datasets. The details of
|:----------:|:--------:|:-------------:|:----------:|:---------:|:--------------:|:------|
| [Conformer](https://modelscope.cn/models/damo/speech_conformer_asr_nat-zh-cn-16k-aishell1-vocab4234-pytorch/summary) | CN | AISHELL (178hours) | 4234 | 44M | Offline | Duration of input wav <= 20s |
| [Conformer](https://www.modelscope.cn/models/damo/speech_conformer_asr_nat-zh-cn-16k-aishell2-vocab5212-pytorch/summary) | CN | AISHELL-2 (1000hours) | 5212 | 44M | Offline | Duration of input wav <= 20s |
| [Conformer](https://modelscope.cn/models/damo/speech_conformer_asr-en-16k-vocab4199-pytorch/summary) | EN | Alibaba Speech Data (10000hours) | 4199 | 220M | Offline | Duration of input wav <= 20s |
#### RNN-T Models

@ -92,3 +108,19 @@ Here we provided several pretrained models on different datasets. The details of
| Model Name | Language | Training Data | Parameters | Notes |
|:----------:|:--------:|:-------------:|:----------:|:------|
| [TP-Aligner](https://modelscope.cn/models/damo/speech_timestamp_prediction-v1-16k-offline/summary) | CN | Alibaba Speech Data (50000 hours) | 37.8M | Timestamp prediction, Mandarin, middle size |

### Inverse Text Normalization (ITN) Models

| Model Name | Language | Parameters | Notes |
|:----------:|:--------:|:----------:|:------|
| [English](https://modelscope.cn/models/damo/speech_inverse_text_processing_fun-text-processing-itn-en/summary) | EN | 1.54M | ITN, ASR post-processing |
| [Russian](https://modelscope.cn/models/damo/speech_inverse_text_processing_fun-text-processing-itn-ru/summary) | RU | 17.79M | ITN, ASR post-processing |
| [Japanese](https://modelscope.cn/models/damo/speech_inverse_text_processing_fun-text-processing-itn-ja/summary) | JA | 6.8M | ITN, ASR post-processing |
| [Korean](https://modelscope.cn/models/damo/speech_inverse_text_processing_fun-text-processing-itn-ko/summary) | KO | 1.28M | ITN, ASR post-processing |
| [Indonesian](https://modelscope.cn/models/damo/speech_inverse_text_processing_fun-text-processing-itn-id/summary) | ID | 2.06M | ITN, ASR post-processing |
| [Vietnamese](https://modelscope.cn/models/damo/speech_inverse_text_processing_fun-text-processing-itn-vi/summary) | VI | 0.92M | ITN, ASR post-processing |
| [Tagalog](https://modelscope.cn/models/damo/speech_inverse_text_processing_fun-text-processing-itn-tl/summary) | TL | 0.65M | ITN, ASR post-processing |
| [Spanish](https://modelscope.cn/models/damo/speech_inverse_text_processing_fun-text-processing-itn-es/summary) | ES | 1.32M | ITN, ASR post-processing |
| [Portuguese](https://modelscope.cn/models/damo/speech_inverse_text_processing_fun-text-processing-itn-pt/summary) | PT | 1.28M | ITN, ASR post-processing |
| [French](https://modelscope.cn/models/damo/speech_inverse_text_processing_fun-text-processing-itn-fr/summary) | FR | 4.39M | ITN, ASR post-processing |
| [German](https://modelscope.cn/models/damo/speech_inverse_text_processing_fun-text-processing-itn-de/summary) | GE | 3.95M | ITN, ASR post-processing |

63
docs/modelscope_pipeline/itn_pipeline.md
Normal file
63
docs/modelscope_pipeline/itn_pipeline.md
Normal file
@ -0,0 +1,63 @@
# Inverse Text Normalization (ITN)

> **Note**:
> The modelscope pipeline supports all the models in [model zoo](https://modelscope.cn/models?page=1&tasks=inverse-text-processing&type=audio) for inference. Here we take the Japanese ITN model as an example to demonstrate the usage.

## Inference

### Quick start
#### [Japanese ITN model](https://modelscope.cn/models/damo/speech_inverse_text_processing_fun-text-processing-itn-ja/summary)
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

itn_inference_pipline = pipeline(
    task=Tasks.inverse_text_processing,
    model='damo/speech_inverse_text_processing_fun-text-processing-itn-ja',
    model_revision=None)

itn_result = itn_inference_pipline(text_in='百二十三')
print(itn_result)
# 123
```
- Read text data directly.
```python
rec_result = itn_inference_pipline(text_in='一九九九年に誕生した同商品にちなみ、約三十年前、二十四歳の頃の幸四郎の写真を公開。')
# 1999年に誕生した同商品にちなみ、約30年前、24歳の頃の幸四郎の写真を公開。
```
- Text stored via URL, for example: https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_text/ja_itn_example.txt
```python
rec_result = itn_inference_pipline(text_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_text/ja_itn_example.txt')
```

For the full code of the demo, please ref to [demo](https://github.com/alibaba-damo-academy/FunASR/tree/main/fun_text_processing/inverse_text_normalization)

### API-reference
#### Define pipeline
- `task`: `Tasks.inverse_text_processing`
- `model`: model name in [model zoo](https://modelscope.cn/models?page=1&tasks=inverse-text-processing&type=audio), or model path on local disk
- `output_dir`: `None` (Default), the output path of the results if set
- `model_revision`: `None` (Default), setting the model version

#### Infer pipeline
- `text_in`: the input to decode, which could be:
  - text bytes, `e.g.`: "一九九九年に誕生した同商品にちなみ、約三十年前、二十四歳の頃の幸四郎の写真を公開。"
  - text file, `e.g.`: https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_text/ja_itn_example.txt

  In the case of `text file` input, `output_dir` must be set to save the output results.
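
A minimal sketch of the `text file` + `output_dir` case, reusing the Japanese model from the quick start; the exact files written under `output_dir` may vary by modelscope version:
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Sketch: decode a remote text file and save the results under ./itn_results/
itn_inference_pipline = pipeline(
    task=Tasks.inverse_text_processing,
    model='damo/speech_inverse_text_processing_fun-text-processing-itn-ja',
    output_dir='./itn_results/')

itn_inference_pipline(
    text_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_text/ja_itn_example.txt')
```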

## Modify Your Own ITN Model
The rule-based ITN code is open-sourced in [FunTextProcessing](https://github.com/alibaba-damo-academy/FunASR/tree/main/fun_text_processing); users can modify the grammar rules for different languages on their own. Taking Japanese as an example, users can add their own whitelist entries in ```FunASR/fun_text_processing/inverse_text_normalization/ja/data/whitelist.tsv``` (see the sketch below). After modifying the grammar rules, users can export and evaluate their own ITN models in a local directory.
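
For illustration only, a small Python sketch that appends a whitelist entry; the entry itself and the assumed two-column, tab-separated layout are hypothetical, so check the shipped `whitelist.tsv` for the exact format before editing:
```python
from pathlib import Path

# Hypothetical example: append one spoken-form/written-form pair to the Japanese whitelist.
# The tab-separated two-column layout is an assumption; verify it against the existing file.
whitelist = Path('FunASR/fun_text_processing/inverse_text_normalization/ja/data/whitelist.tsv')
with whitelist.open('a', encoding='utf-8') as f:
    f.write('株式会社\tK.K.\n')
```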

### Export ITN Model
Export the ITN model via ```FunASR/fun_text_processing/inverse_text_normalization/export_models.py```. An example of exporting an ITN model to a local folder is shown below.
```shell
cd FunASR/fun_text_processing/inverse_text_normalization/
python export_models.py --language ja --export_dir ./itn_models/
```

### Evaluate ITN Model
Users can evaluate their own ITN model in a local directory via ```FunASR/fun_text_processing/inverse_text_normalization/inverse_normalize.py```. Here is an example:
```shell
cd FunASR/fun_text_processing/inverse_text_normalization/
python inverse_normalize.py --input_file ja_itn_example.txt --cache_dir ./itn_models/ --output_file output.txt --language=ja
```
|
||||
@ -1,112 +0,0 @@
|
||||
# Punctuation Restoration
|
||||
# Voice Activity Detection
|
||||
|
||||
> **Note**:
|
||||
> The modelscope pipeline supports all the models in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_models.html#pretrained-models-on-modelscope) for inference and finetuning. Here we take the CT-Transformer punctuation model as an example to demonstrate the usage.
|
||||
|
||||
## Inference
|
||||
|
||||
### Quick start
|
||||
#### [CT-Transformer model](https://www.modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/summary)
|
||||
```python
|
||||
from modelscope.pipelines import pipeline
|
||||
from modelscope.utils.constant import Tasks
|
||||
|
||||
inference_pipline = pipeline(
|
||||
task=Tasks.punctuation,
|
||||
model='damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch',
|
||||
model_revision=None)
|
||||
|
||||
rec_result = inference_pipline(text_in='example/punc_example.txt')
|
||||
print(rec_result)
|
||||
```
|
||||
- Text binary data, e.g., bytes read directly from a file by the user
|
||||
```python
|
||||
rec_result = inference_pipline(text_in='我们都是木头人不会讲话不会动')
|
||||
```
|
||||
- Text file URL, e.g.: https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_text/punc_example.txt
|
||||
```python
|
||||
rec_result = inference_pipline(text_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_text/punc_example.txt')
|
||||
```
|
||||
|
||||
#### [CT-Transformer Realtime model](https://www.modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vad_realtime-vocab272727/summary)
|
||||
```python
|
||||
from modelscope.pipelines import pipeline
|
||||
from modelscope.utils.constant import Tasks
|
||||
|
||||
inference_pipeline = pipeline(
|
||||
task=Tasks.punctuation,
|
||||
model='damo/punc_ct-transformer_zh-cn-common-vad_realtime-vocab272727',
|
||||
model_revision=None,
|
||||
)
|
||||
|
||||
inputs = "跨境河流是养育沿岸|人民的生命之源长期以来为帮助下游地区防灾减灾中方技术人员|在上游地区极为恶劣的自然条件下克服巨大困难甚至冒着生命危险|向印方提供汛期水文资料处理紧急事件中方重视印方在跨境河流问题上的关切|愿意进一步完善双方联合工作机制|凡是|中方能做的我们|都会去做而且会做得更好我请印度朋友们放心中国在上游的|任何开发利用都会经过科学|规划和论证兼顾上下游的利益"
|
||||
vads = inputs.split("|")
|
||||
rec_result_all="outputs:"
|
||||
param_dict = {"cache": []}
|
||||
for vad in vads:
|
||||
rec_result = inference_pipeline(text_in=vad, param_dict=param_dict)
|
||||
rec_result_all += rec_result['text']
|
||||
|
||||
print(rec_result_all)
|
||||
```
|
||||
For the full code of the demo, please ref to [demo](https://github.com/alibaba-damo-academy/FunASR/discussions/238)
|
||||
|
||||
|
||||
#### API-reference
|
||||
##### Define pipeline
|
||||
- `task`: `Tasks.punctuation`
|
||||
- `model`: model name in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_models.html#pretrained-models-on-modelscope), or model path in local disk
|
||||
- `ngpu`: `1` (Default), decoding on GPU. If ngpu=0, decoding on CPU
|
||||
- `output_dir`: `None` (Default), the output path of results if set
|
||||
- `model_revision`: `None` (Default), setting the model version
|
||||
|
||||
##### Infer pipeline
|
||||
- `text_in`: the input to decode, which could be:
|
||||
- text bytes, `e.g.`: "我们都是木头人不会讲话不会动"
|
||||
- text file, `e.g.`: example/punc_example.txt
|
||||
In this case of `text file` input, `output_dir` must be set to save the output results
|
||||
- `param_dict`: holds the cache, which is required in realtime mode.
|
||||
|
||||
### Inference with multi-thread CPUs or multi GPUs
|
||||
FunASR also offers the recipe [egs_modelscope/punc/TEMPLATE/infer.sh](https://github.com/alibaba-damo-academy/FunASR/blob/main/egs_modelscope/punc/TEMPLATE/infer.sh) for decoding with multi-threaded CPUs or multiple GPUs. It is an offline recipe and only supports offline models.
|
||||
|
||||
- Setting parameters in `infer.sh`
|
||||
- `model`: model name in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_models.html#pretrained-models-on-modelscope), or model path in local disk
|
||||
- `data_dir`: the dataset dir needs to include `punc.txt`
|
||||
- `output_dir`: output dir of the recognition results
|
||||
- `gpu_inference`: `true` (Default), whether to perform gpu decoding, set false for CPU inference
|
||||
- `gpuid_list`: `0,1` (Default), which gpu_ids are used to infer
|
||||
- `njob`: only used for CPU inference (`gpu_inference`=`false`), `64` (Default), the number of jobs for CPU decoding
|
||||
- `checkpoint_dir`: only used for infer finetuned models, the path dir of finetuned models
|
||||
- `checkpoint_name`: only used for infer finetuned models, `punc.pb` (Default), which checkpoint is used to infer
|
||||
|
||||
- Decode with multi GPUs:
|
||||
```shell
|
||||
bash infer.sh \
|
||||
--model "damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch" \
|
||||
--data_dir "./data/test" \
|
||||
--output_dir "./results" \
|
||||
--batch_size 64 \
|
||||
--gpu_inference true \
|
||||
--gpuid_list "0,1"
|
||||
```
|
||||
- Decode with multi-thread CPUs:
|
||||
```shell
|
||||
bash infer.sh \
|
||||
--model "damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch" \
|
||||
--data_dir "./data/test" \
|
||||
--output_dir "./results" \
|
||||
--gpu_inference false \
|
||||
--njob 64
|
||||
```
|
||||
|
||||
|
||||
## Finetune with pipeline
|
||||
|
||||
### Quick start
|
||||
|
||||
### Finetune with your data
|
||||
|
||||
## Inference with your finetuned model
|
||||
|
||||
1
docs/modelscope_pipeline/punc_pipeline.md
Symbolic link
1
docs/modelscope_pipeline/punc_pipeline.md
Symbolic link
@ -0,0 +1 @@
|
||||
../../egs_modelscope/punctuation/TEMPLATE/README.md
|
||||
@ -1,7 +1,7 @@
|
||||
# Quick Start
|
||||
|
||||
> **Note**:
|
||||
> The modelscope pipeline supports all the models in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_models.html#pretrained-models-on-modelscope) to inference and finetine. Here we take typic model as example to demonstrate the usage.
|
||||
> The modelscope pipeline supports all the models in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/model_zoo/modelscope_models.html#pretrained-models-on-modelscope) for inference and finetuning. Here we take a typical model as an example to demonstrate the usage.
|
||||
|
||||
|
||||
## Inference with pipeline
|
||||
|
||||
1
docs/runtime/websocket_cpp.md
Symbolic link
1
docs/runtime/websocket_cpp.md
Symbolic link
@ -0,0 +1 @@
|
||||
../../funasr/runtime/websocket/readme.md
|
||||
79
egs/alimeeting/sa-asr/README.md
Normal file
79
egs/alimeeting/sa-asr/README.md
Normal file
@ -0,0 +1,79 @@
# Get Started
Speaker Attributed Automatic Speech Recognition (SA-ASR) is a task proposed to answer "who spoke what". Specifically, the goal of SA-ASR is not only to obtain multi-speaker transcriptions, but also to identify the corresponding speaker of each utterance. The method used in this example follows the paper: [End-to-End Speaker-Attributed ASR with Transformer](https://www.isca-speech.org/archive/pdfs/interspeech_2021/kanda21b_interspeech.pdf).
To run this recipe, you first need to install FunASR and ModelScope ([installation](https://alibaba-damo-academy.github.io/FunASR/en/installation.html)).
There are two startup scripts: `run.sh` for training and evaluating on the old eval and test sets, and `run_m2met_2023_infer.sh` for inference on the new test set of the Multi-Channel Multi-Party Meeting Transcription 2.0 ([M2MET2.0](https://alibaba-damo-academy.github.io/FunASR/m2met2/index.html)) Challenge.
Before running `run.sh`, you must manually download and unpack the [AliMeeting](http://www.openslr.org/119/) corpus and place it in the `./dataset` directory:
```shell
dataset
|—— Eval_Ali_far
|—— Eval_Ali_near
|—— Test_Ali_far
|—— Test_Ali_near
|—— Train_Ali_far
|—— Train_Ali_near
```
There are 18 stages in `run.sh`:
```shell
stage 1 - 5: Data preparation and processing.
stage 6: Generate speaker profiles (Stage 6 takes a lot of time).
stage 7 - 9: Language model training (Optional).
stage 10 - 11: ASR training (SA-ASR requires loading the pre-trained ASR model).
stage 12: SA-ASR training.
stage 13 - 18: Inference and evaluation.
```
Before running `run_m2met_2023_infer.sh`, you need to place the new test set `Test_2023_Ali_far` (to be released after the challenge starts) in the `./dataset` directory; it contains only raw audio. Then put the given `wav.scp`, `wav_raw.scp`, `segments`, `utt2spk` and `spk2utt` in the `./data/Test_2023_Ali_far` directory.
```shell
data/Test_2023_Ali_far
|—— wav.scp
|—— wav_raw.scp
|—— segments
|—— utt2spk
|—— spk2utt
```
There are 4 stages in `run_m2met_2023_infer.sh`:
```shell
stage 1: Data preparation and processing.
stage 2: Generate speaker profiles for inference.
stage 3: Inference.
stage 4: Generation of SA-ASR results required for final submission.
```
# Format of Final Submission
Finally, you need to submit a file called `text_spk_merge` with the following format:
```shell
Meeting_1 text_spk_1_A$text_spk_1_B$text_spk_1_C ...
Meeting_2 text_spk_2_A$text_spk_2_B$text_spk_2_C ...
...
```
Here, text_spk_1_A represents the full transcription of speaker_A of Meeting_1 (merged in chronological order), and $ represents the separator symbol. There is no need to worry about the speaker permutation, as the optimal permutation will be computed in the end. For more information, please refer to the results generated after executing the baseline code. A minimal sketch of how such a line can be assembled is shown below.
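
The following Python sketch illustrates the assembly of one submission line from hypothetical per-speaker utterance lists (already sorted by start time); it is only an illustration, not part of the recipe scripts:
```python
# Hypothetical per-speaker utterances of Meeting_1, already sorted by start time.
speaker_utts = {
    'A': ['今日の議題は', '予算について'],
    'B': ['了解しました'],
    'C': ['次回は来週です'],
}

# Merge each speaker's utterances in chronological order, then join speakers with '$'.
# The speaker order does not matter: scoring searches for the optimal permutation.
per_speaker_text = [''.join(utts) for _, utts in sorted(speaker_utts.items())]
line = 'Meeting_1 ' + '$'.join(per_speaker_text)
print(line)  # Meeting_1 今日の議題は予算について$了解しました$次回は来週です
```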
# Baseline Results
The results of the baseline system are as follows. The baseline results include the speaker independent character error rate (SI-CER) and the concatenated minimum-permutation character error rate (cpCER); the former is speaker independent and the latter is speaker dependent. The speaker profile adopts the oracle speaker embedding during training. However, due to the lack of oracle speaker labels during evaluation, the speaker profile provided by an additional spectral clustering is used. Meanwhile, the results of using the oracle speaker profile on the Eval and Test sets are also provided to show the impact of speaker profile accuracy. A toy sketch of the cpCER idea is given after the table.
<table>
<tr>
    <td rowspan="2"></td>
    <td colspan="2">SI-CER(%)</td>
    <td colspan="2">cpCER(%)</td>
</tr>
<tr>
    <td>Eval</td>
    <td>Test</td>
    <td>Eval</td>
    <td>Test</td>
</tr>
<tr>
    <td>oracle profile</td>
    <td>31.93</td>
    <td>32.75</td>
    <td>48.56</td>
    <td>53.33</td>
</tr>
<tr>
    <td>cluster profile</td>
    <td>31.94</td>
    <td>32.77</td>
    <td>55.49</td>
    <td>58.17</td>
</tr>
</table>
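
For intuition, cpCER matches hypothesis speakers to reference speakers under the best permutation before computing the character error rate. A toy Python sketch of that idea (plain character-level edit distance plus brute-force permutation search; the hypothetical `cp_cer` helper is for illustration only and is not the official scoring script):
```python
from itertools import permutations

def edit_distance(ref, hyp):
    # Plain character-level Levenshtein distance.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (r != h))
    return dp[-1]

def cp_cer(refs, hyps):
    # Toy cpCER: pick the speaker permutation of the hypotheses with the fewest edits.
    total_ref = sum(len(r) for r in refs)
    best = min(
        sum(edit_distance(r, h) for r, h in zip(refs, perm))
        for perm in permutations(hyps)
    )
    return best / total_ref

print(cp_cer(['こんにちは', 'さようなら'], ['さようなら', 'こんにちわ']))  # 0.1
```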
|
||||
# Reference
|
||||
N. Kanda, G. Ye, Y. Gaur, X. Wang, Z. Meng, Z. Chen, and T. Yoshioka, "End-to-end speaker-attributed ASR with transformer," in Interspeech. ISCA, 2021, pp. 4413–4417.
|
||||
1572
egs/alimeeting/sa-asr/asr_local.sh
Executable file
1572
egs/alimeeting/sa-asr/asr_local.sh
Executable file
File diff suppressed because it is too large
Load Diff
591
egs/alimeeting/sa-asr/asr_local_m2met_2023_infer.sh
Executable file
591
egs/alimeeting/sa-asr/asr_local_m2met_2023_infer.sh
Executable file
@ -0,0 +1,591 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Set bash to 'debug' mode, it will exit on :
|
||||
# -e 'error', -u 'undefined variable', -o ... 'error in pipeline', -x 'print commands',
|
||||
set -e
|
||||
set -u
|
||||
set -o pipefail
|
||||
|
||||
log() {
|
||||
local fname=${BASH_SOURCE[1]##*/}
|
||||
echo -e "$(date '+%Y-%m-%dT%H:%M:%S') (${fname}:${BASH_LINENO[0]}:${FUNCNAME[1]}) $*"
|
||||
}
|
||||
min() {
|
||||
local a b
|
||||
a=$1
|
||||
for b in "$@"; do
|
||||
if [ "${b}" -le "${a}" ]; then
|
||||
a="${b}"
|
||||
fi
|
||||
done
|
||||
echo "${a}"
|
||||
}
|
||||
SECONDS=0
|
||||
|
||||
# General configuration
|
||||
stage=1 # Processing starts from the specified stage.
stop_stage=10000 # Processing is stopped at the specified stage.
|
||||
skip_data_prep=false # Skip data preparation stages.
|
||||
skip_train=false # Skip training stages.
|
||||
skip_eval=false # Skip decoding and evaluation stages.
|
||||
skip_upload=true # Skip packing and uploading stages.
|
||||
ngpu=1 # The number of gpus ("0" uses cpu, otherwise use gpu).
|
||||
num_nodes=1 # The number of nodes.
|
||||
nj=16 # The number of parallel jobs.
|
||||
inference_nj=16 # The number of parallel jobs in decoding.
|
||||
gpu_inference=false # Whether to perform gpu decoding.
|
||||
njob_infer=4
|
||||
dumpdir=dump2 # Directory to dump features.
|
||||
expdir=exp # Directory to save experiments.
|
||||
python=python3 # Specify python to execute espnet commands.
|
||||
device=0
|
||||
|
||||
# Data preparation related
|
||||
local_data_opts= # The options given to local/data.sh.
|
||||
|
||||
# Speed perturbation related
|
||||
speed_perturb_factors= # perturbation factors, e.g. "0.9 1.0 1.1" (separated by space).
|
||||
|
||||
# Feature extraction related
|
||||
feats_type=raw # Feature type (raw or fbank_pitch).
|
||||
audio_format=flac # Audio format: wav, flac, wav.ark, flac.ark (only in feats_type=raw).
|
||||
fs=16000 # Sampling rate.
|
||||
min_wav_duration=0.1 # Minimum duration in second.
|
||||
max_wav_duration=20 # Maximum duration in second.
|
||||
|
||||
# Tokenization related
|
||||
token_type=bpe # Tokenization type (char or bpe).
|
||||
nbpe=30 # The number of BPE vocabulary.
|
||||
bpemode=unigram # Mode of BPE (unigram or bpe).
|
||||
oov="<unk>" # Out of vocabulary symbol.
|
||||
blank="<blank>" # CTC blank symbol
|
||||
sos_eos="<sos/eos>" # sos and eos symbol
|
||||
bpe_input_sentence_size=100000000 # Size of input sentence for BPE.
|
||||
bpe_nlsyms= # non-linguistic symbols list, separated by a comma, for BPE
|
||||
bpe_char_cover=1.0 # character coverage when modeling BPE
|
||||
|
||||
# Language model related
|
||||
use_lm=true # Use language model for ASR decoding.
|
||||
lm_tag= # Suffix to the result dir for language model training.
|
||||
lm_exp= # Specify the directory path for LM experiment.
|
||||
# If this option is specified, lm_tag is ignored.
|
||||
lm_stats_dir= # Specify the directory path for LM statistics.
|
||||
lm_config= # Config for language model training.
|
||||
lm_args= # Arguments for language model training, e.g., "--max_epoch 10".
|
||||
# Note that it will overwrite args in lm config.
|
||||
use_word_lm=false # Whether to use word language model.
|
||||
num_splits_lm=1 # Number of splitting for lm corpus.
|
||||
# shellcheck disable=SC2034
|
||||
word_vocab_size=10000 # Size of word vocabulary.
|
||||
|
||||
# ASR model related
|
||||
asr_tag= # Suffix to the result dir for asr model training.
|
||||
asr_exp= # Specify the direcotry path for ASR experiment.
|
||||
# If this option is specified, asr_tag is ignored.
|
||||
sa_asr_exp=
|
||||
asr_stats_dir= # Specify the directory path for ASR statistics.
|
||||
asr_config= # Config for asr model training.
|
||||
sa_asr_config=
|
||||
asr_args= # Arguments for asr model training, e.g., "--max_epoch 10".
|
||||
# Note that it will overwrite args in asr config.
|
||||
feats_normalize=global_mvn # Normalization layer type.
|
||||
num_splits_asr=1 # Number of splitting for lm corpus.
|
||||
|
||||
# Decoding related
|
||||
inference_tag= # Suffix to the result dir for decoding.
|
||||
inference_config= # Config for decoding.
|
||||
inference_args= # Arguments for decoding, e.g., "--lm_weight 0.1".
|
||||
# Note that it will overwrite args in inference config.
|
||||
sa_asr_inference_tag=
|
||||
sa_asr_inference_args=
|
||||
|
||||
inference_lm=valid.loss.ave.pb # Language model path for decoding.
|
||||
inference_asr_model=valid.acc.ave.pb # ASR model path for decoding.
|
||||
# e.g.
|
||||
# inference_asr_model=train.loss.best.pth
|
||||
# inference_asr_model=3epoch.pth
|
||||
# inference_asr_model=valid.acc.best.pth
|
||||
# inference_asr_model=valid.loss.ave.pth
|
||||
inference_sa_asr_model=valid.acc_spk.ave.pb
|
||||
download_model= # Download a model from Model Zoo and use it for decoding.
|
||||
|
||||
# [Task dependent] Set the datadir name created by local/data.sh
|
||||
train_set= # Name of training set.
|
||||
valid_set= # Name of validation set used for monitoring/tuning network training.
|
||||
test_sets= # Names of test sets. Multiple items (e.g., both dev and eval sets) can be specified.
|
||||
bpe_train_text= # Text file path of bpe training set.
|
||||
lm_train_text= # Text file path of language model training set.
|
||||
lm_dev_text= # Text file path of language model development set.
|
||||
lm_test_text= # Text file path of language model evaluation set.
|
||||
nlsyms_txt=none # Non-linguistic symbol list if existing.
|
||||
cleaner=none # Text cleaner.
|
||||
g2p=none # g2p method (needed if token_type=phn).
|
||||
lang=zh # The language type of corpus.
|
||||
score_opts= # The options given to sclite scoring
|
||||
local_score_opts= # The options given to local/score.sh.
|
||||
|
||||
help_message=$(cat << EOF
|
||||
Usage: $0 --train-set "<train_set_name>" --valid-set "<valid_set_name>" --test_sets "<test_set_names>"
|
||||
|
||||
Options:
|
||||
# General configuration
|
||||
--stage # Processing starts from the specified stage (default="${stage}").
--stop_stage # Processing is stopped at the specified stage (default="${stop_stage}").
|
||||
--skip_data_prep # Skip data preparation stages (default="${skip_data_prep}").
|
||||
--skip_train # Skip training stages (default="${skip_train}").
|
||||
--skip_eval # Skip decoding and evaluation stages (default="${skip_eval}").
|
||||
--skip_upload # Skip packing and uploading stages (default="${skip_upload}").
|
||||
--ngpu # The number of gpus ("0" uses cpu, otherwise use gpu, default="${ngpu}").
|
||||
--num_nodes # The number of nodes (default="${num_nodes}").
|
||||
--nj # The number of parallel jobs (default="${nj}").
|
||||
--inference_nj # The number of parallel jobs in decoding (default="${inference_nj}").
|
||||
--gpu_inference # Whether to perform gpu decoding (default="${gpu_inference}").
|
||||
--dumpdir # Directory to dump features (default="${dumpdir}").
|
||||
--expdir # Directory to save experiments (default="${expdir}").
|
||||
--python # Specify python to execute espnet commands (default="${python}").
|
||||
--device # Which GPUs are used for local training (default="${device}").
|
||||
|
||||
# Data preparation related
|
||||
--local_data_opts # The options given to local/data.sh (default="${local_data_opts}").
|
||||
|
||||
# Speed perturbation related
|
||||
--speed_perturb_factors # speed perturbation factors, e.g. "0.9 1.0 1.1" (separated by space, default="${speed_perturb_factors}").
|
||||
|
||||
# Feature extraction related
|
||||
--feats_type # Feature type (raw, fbank_pitch or extracted, default="${feats_type}").
|
||||
--audio_format # Audio format: wav, flac, wav.ark, flac.ark (only in feats_type=raw, default="${audio_format}").
|
||||
--fs # Sampling rate (default="${fs}").
|
||||
--min_wav_duration # Minimum duration in second (default="${min_wav_duration}").
|
||||
--max_wav_duration # Maximum duration in second (default="${max_wav_duration}").
|
||||
|
||||
# Tokenization related
|
||||
--token_type # Tokenization type (char or bpe, default="${token_type}").
|
||||
--nbpe # The number of BPE vocabulary (default="${nbpe}").
|
||||
--bpemode # Mode of BPE (unigram or bpe, default="${bpemode}").
|
||||
--oov # Out of vocabulary symbol (default="${oov}").
|
||||
--blank # CTC blank symbol (default="${blank}").
|
||||
--sos_eos # sos and eos symbol (default="${sos_eos}").
|
||||
--bpe_input_sentence_size # Size of input sentence for BPE (default="${bpe_input_sentence_size}").
|
||||
--bpe_nlsyms # Non-linguistic symbol list for sentencepiece, separated by a comma. (default="${bpe_nlsyms}").
|
||||
--bpe_char_cover # Character coverage when modeling BPE (default="${bpe_char_cover}").
|
||||
|
||||
# Language model related
|
||||
--lm_tag # Suffix to the result dir for language model training (default="${lm_tag}").
|
||||
--lm_exp # Specify the directory path for LM experiment.
|
||||
# If this option is specified, lm_tag is ignored (default="${lm_exp}").
|
||||
--lm_stats_dir # Specify the directory path for LM statistics (default="${lm_stats_dir}").
|
||||
--lm_config # Config for language model training (default="${lm_config}").
|
||||
--lm_args # Arguments for language model training (default="${lm_args}").
|
||||
# e.g., --lm_args "--max_epoch 10"
|
||||
# Note that it will overwrite args in lm config.
|
||||
--use_word_lm # Whether to use word language model (default="${use_word_lm}").
|
||||
--word_vocab_size # Size of word vocabulary (default="${word_vocab_size}").
|
||||
--num_splits_lm # Number of splitting for lm corpus (default="${num_splits_lm}").
|
||||
|
||||
# ASR model related
|
||||
--asr_tag # Suffix to the result dir for asr model training (default="${asr_tag}").
|
||||
--asr_exp # Specify the directory path for ASR experiment.
|
||||
# If this option is specified, asr_tag is ignored (default="${asr_exp}").
|
||||
--asr_stats_dir # Specify the directory path for ASR statistics (default="${asr_stats_dir}").
|
||||
--asr_config # Config for asr model training (default="${asr_config}").
|
||||
--asr_args # Arguments for asr model training (default="${asr_args}").
|
||||
# e.g., --asr_args "--max_epoch 10"
|
||||
# Note that it will overwrite args in asr config.
|
||||
--feats_normalize # Normalization layer type (default="${feats_normalize}").
|
||||
--num_splits_asr # Number of splitting for lm corpus (default="${num_splits_asr}").
|
||||
|
||||
# Decoding related
|
||||
--inference_tag # Suffix to the result dir for decoding (default="${inference_tag}").
|
||||
--inference_config # Config for decoding (default="${inference_config}").
|
||||
--inference_args # Arguments for decoding (default="${inference_args}").
|
||||
# e.g., --inference_args "--lm_weight 0.1"
|
||||
# Note that it will overwrite args in inference config.
|
||||
--inference_lm # Language model path for decoding (default="${inference_lm}").
|
||||
--inference_asr_model # ASR model path for decoding (default="${inference_asr_model}").
|
||||
--download_model # Download a model from Model Zoo and use it for decoding (default="${download_model}").
|
||||
|
||||
# [Task dependent] Set the datadir name created by local/data.sh
|
||||
--train_set # Name of training set (required).
|
||||
--valid_set # Name of validation set used for monitoring/tuning network training (required).
|
||||
--test_sets # Names of test sets.
|
||||
# Multiple items (e.g., both dev and eval sets) can be specified (required).
|
||||
--bpe_train_text # Text file path of bpe training set.
|
||||
--lm_train_text # Text file path of language model training set.
|
||||
--lm_dev_text # Text file path of language model development set (default="${lm_dev_text}").
|
||||
--lm_test_text # Text file path of language model evaluation set (default="${lm_test_text}").
|
||||
--nlsyms_txt # Non-linguistic symbol list if existing (default="${nlsyms_txt}").
|
||||
--cleaner # Text cleaner (default="${cleaner}").
|
||||
--g2p # g2p method (default="${g2p}").
|
||||
--lang # The language type of corpus (default=${lang}).
|
||||
--score_opts # The options given to sclite scoring (default="{score_opts}").
|
||||
--local_score_opts # The options given to local/score.sh (default="{local_score_opts}").
|
||||
EOF
|
||||
)
|
||||
|
||||
log "$0 $*"
|
||||
# Save command line args for logging (they will be lost after utils/parse_options.sh)
|
||||
run_args=$(python -m funasr.utils.cli_utils $0 "$@")
|
||||
. utils/parse_options.sh
|
||||
|
||||
if [ $# -ne 0 ]; then
|
||||
log "${help_message}"
|
||||
log "Error: No positional arguments are required."
|
||||
exit 2
|
||||
fi
|
||||
|
||||
. ./path.sh
|
||||
|
||||
|
||||
# Check required arguments
|
||||
[ -z "${train_set}" ] && { log "${help_message}"; log "Error: --train_set is required"; exit 2; };
|
||||
[ -z "${valid_set}" ] && { log "${help_message}"; log "Error: --valid_set is required"; exit 2; };
|
||||
[ -z "${test_sets}" ] && { log "${help_message}"; log "Error: --test_sets is required"; exit 2; };
|
||||
|
||||
# Check feature type
|
||||
if [ "${feats_type}" = raw ]; then
|
||||
data_feats=${dumpdir}/raw
|
||||
elif [ "${feats_type}" = fbank_pitch ]; then
|
||||
data_feats=${dumpdir}/fbank_pitch
|
||||
elif [ "${feats_type}" = fbank ]; then
|
||||
data_feats=${dumpdir}/fbank
|
||||
elif [ "${feats_type}" == extracted ]; then
|
||||
data_feats=${dumpdir}/extracted
|
||||
else
|
||||
log "${help_message}"
|
||||
log "Error: not supported: --feats_type ${feats_type}"
|
||||
exit 2
|
||||
fi
|
||||
|
||||
# Use the same text as ASR for bpe training if not specified.
|
||||
[ -z "${bpe_train_text}" ] && bpe_train_text="${data_feats}/${train_set}/text"
|
||||
# Use the same text as ASR for lm training if not specified.
|
||||
[ -z "${lm_train_text}" ] && lm_train_text="${data_feats}/${train_set}/text"
|
||||
# Use the same text as ASR for lm training if not specified.
|
||||
[ -z "${lm_dev_text}" ] && lm_dev_text="${data_feats}/${valid_set}/text"
|
||||
# Use the text of the 1st evaldir if lm_test is not specified
|
||||
[ -z "${lm_test_text}" ] && lm_test_text="${data_feats}/${test_sets%% *}/text"
|
||||
|
||||
# Check tokenization type
|
||||
if [ "${lang}" != noinfo ]; then
|
||||
token_listdir=data/${lang}_token_list
|
||||
else
|
||||
token_listdir=data/token_list
|
||||
fi
|
||||
bpedir="${token_listdir}/bpe_${bpemode}${nbpe}"
|
||||
bpeprefix="${bpedir}"/bpe
|
||||
bpemodel="${bpeprefix}".model
|
||||
bpetoken_list="${bpedir}"/tokens.txt
|
||||
chartoken_list="${token_listdir}"/char/tokens.txt
|
||||
# NOTE: keep for future development.
|
||||
# shellcheck disable=SC2034
|
||||
wordtoken_list="${token_listdir}"/word/tokens.txt
|
||||
|
||||
if [ "${token_type}" = bpe ]; then
|
||||
token_list="${bpetoken_list}"
|
||||
elif [ "${token_type}" = char ]; then
|
||||
token_list="${chartoken_list}"
|
||||
bpemodel=none
|
||||
elif [ "${token_type}" = word ]; then
|
||||
token_list="${wordtoken_list}"
|
||||
bpemodel=none
|
||||
else
|
||||
log "Error: not supported --token_type '${token_type}'"
|
||||
exit 2
|
||||
fi
|
||||
if ${use_word_lm}; then
|
||||
log "Error: Word LM is not supported yet"
|
||||
exit 2
|
||||
|
||||
lm_token_list="${wordtoken_list}"
|
||||
lm_token_type=word
|
||||
else
|
||||
lm_token_list="${token_list}"
|
||||
lm_token_type="${token_type}"
|
||||
fi
|
||||
|
||||
|
||||
# Set tag for naming of model directory
|
||||
if [ -z "${asr_tag}" ]; then
|
||||
if [ -n "${asr_config}" ]; then
|
||||
asr_tag="$(basename "${asr_config}" .yaml)_${feats_type}"
|
||||
else
|
||||
asr_tag="train_${feats_type}"
|
||||
fi
|
||||
if [ "${lang}" != noinfo ]; then
|
||||
asr_tag+="_${lang}_${token_type}"
|
||||
else
|
||||
asr_tag+="_${token_type}"
|
||||
fi
|
||||
if [ "${token_type}" = bpe ]; then
|
||||
asr_tag+="${nbpe}"
|
||||
fi
|
||||
# Add overwritten arg's info
|
||||
if [ -n "${asr_args}" ]; then
|
||||
asr_tag+="$(echo "${asr_args}" | sed -e "s/--/\_/g" -e "s/[ |=/]//g")"
|
||||
fi
|
||||
if [ -n "${speed_perturb_factors}" ]; then
|
||||
asr_tag+="_sp"
|
||||
fi
|
||||
fi
|
||||
if [ -z "${lm_tag}" ]; then
|
||||
if [ -n "${lm_config}" ]; then
|
||||
lm_tag="$(basename "${lm_config}" .yaml)"
|
||||
else
|
||||
lm_tag="train"
|
||||
fi
|
||||
if [ "${lang}" != noinfo ]; then
|
||||
lm_tag+="_${lang}_${lm_token_type}"
|
||||
else
|
||||
lm_tag+="_${lm_token_type}"
|
||||
fi
|
||||
if [ "${lm_token_type}" = bpe ]; then
|
||||
lm_tag+="${nbpe}"
|
||||
fi
|
||||
# Add overwritten arg's info
|
||||
if [ -n "${lm_args}" ]; then
|
||||
lm_tag+="$(echo "${lm_args}" | sed -e "s/--/\_/g" -e "s/[ |=/]//g")"
|
||||
fi
|
||||
fi
|
||||
|
||||
# The directory used for collect-stats mode
|
||||
if [ -z "${asr_stats_dir}" ]; then
|
||||
if [ "${lang}" != noinfo ]; then
|
||||
asr_stats_dir="${expdir}/asr_stats_${feats_type}_${lang}_${token_type}"
|
||||
else
|
||||
asr_stats_dir="${expdir}/asr_stats_${feats_type}_${token_type}"
|
||||
fi
|
||||
if [ "${token_type}" = bpe ]; then
|
||||
asr_stats_dir+="${nbpe}"
|
||||
fi
|
||||
if [ -n "${speed_perturb_factors}" ]; then
|
||||
asr_stats_dir+="_sp"
|
||||
fi
|
||||
fi
|
||||
if [ -z "${lm_stats_dir}" ]; then
|
||||
if [ "${lang}" != noinfo ]; then
|
||||
lm_stats_dir="${expdir}/lm_stats_${lang}_${lm_token_type}"
|
||||
else
|
||||
lm_stats_dir="${expdir}/lm_stats_${lm_token_type}"
|
||||
fi
|
||||
if [ "${lm_token_type}" = bpe ]; then
|
||||
lm_stats_dir+="${nbpe}"
|
||||
fi
|
||||
fi
|
||||
# The directory used for training commands
|
||||
if [ -z "${asr_exp}" ]; then
|
||||
asr_exp="${expdir}/asr_${asr_tag}"
|
||||
fi
|
||||
if [ -z "${lm_exp}" ]; then
|
||||
lm_exp="${expdir}/lm_${lm_tag}"
|
||||
fi
|
||||
|
||||
|
||||
if [ -z "${inference_tag}" ]; then
|
||||
if [ -n "${inference_config}" ]; then
|
||||
inference_tag="$(basename "${inference_config}" .yaml)"
|
||||
else
|
||||
inference_tag=inference
|
||||
fi
|
||||
# Add overwritten arg's info
|
||||
if [ -n "${inference_args}" ]; then
|
||||
inference_tag+="$(echo "${inference_args}" | sed -e "s/--/\_/g" -e "s/[ |=]//g")"
|
||||
fi
|
||||
if "${use_lm}"; then
|
||||
inference_tag+="_lm_$(basename "${lm_exp}")_$(echo "${inference_lm}" | sed -e "s/\//_/g" -e "s/\.[^.]*$//g")"
|
||||
fi
|
||||
inference_tag+="_asr_model_$(echo "${inference_asr_model}" | sed -e "s/\//_/g" -e "s/\.[^.]*$//g")"
|
||||
fi
|
||||
|
||||
if [ -z "${sa_asr_inference_tag}" ]; then
|
||||
if [ -n "${inference_config}" ]; then
|
||||
sa_asr_inference_tag="$(basename "${inference_config}" .yaml)"
|
||||
else
|
||||
sa_asr_inference_tag=sa_asr_inference
|
||||
fi
|
||||
# Add overwritten arg's info
|
||||
if [ -n "${sa_asr_inference_args}" ]; then
|
||||
sa_asr_inference_tag+="$(echo "${sa_asr_inference_args}" | sed -e "s/--/\_/g" -e "s/[ |=]//g")"
|
||||
fi
|
||||
if "${use_lm}"; then
|
||||
sa_asr_inference_tag+="_lm_$(basename "${lm_exp}")_$(echo "${inference_lm}" | sed -e "s/\//_/g" -e "s/\.[^.]*$//g")"
|
||||
fi
|
||||
sa_asr_inference_tag+="_asr_model_$(echo "${inference_sa_asr_model}" | sed -e "s/\//_/g" -e "s/\.[^.]*$//g")"
|
||||
fi
|
||||
|
||||
train_cmd="run.pl"
|
||||
cuda_cmd="run.pl"
|
||||
decode_cmd="run.pl"
|
||||
|
||||
# ========================== Main stages start from here. ==========================
|
||||
|
||||
if ! "${skip_data_prep}"; then
|
||||
|
||||
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
|
||||
if [ "${feats_type}" = raw ]; then
|
||||
log "Stage 1: Format wav.scp: data/ -> ${data_feats}"
|
||||
|
||||
# ====== Recreating "wav.scp" ======
|
||||
# Kaldi-wav.scp, which can describe the file path with unix-pipe, like "cat /some/path |",
|
||||
# shouldn't be used in training process.
|
||||
# "format_wav_scp.sh" dumps such pipe-style-wav to real audio file
|
||||
# and it can also change the audio-format and sampling rate.
|
||||
# If nothing is need, then format_wav_scp.sh does nothing:
|
||||
# i.e. the input file format and rate is same as the output.
|
||||
|
||||
for dset in "${test_sets}" ; do
|
||||
|
||||
_suf=""
|
||||
|
||||
local/copy_data_dir.sh --validate_opts --non-print data/"${dset}" "${data_feats}${_suf}/${dset}"
|
||||
|
||||
rm -f ${data_feats}${_suf}/${dset}/{segments,wav.scp,reco2file_and_channel,reco2dur}
|
||||
_opts=
|
||||
if [ -e data/"${dset}"/segments ]; then
|
||||
# "segments" is used for splitting wav files which are written in "wav".scp
|
||||
# into utterances. The file format of segments:
|
||||
# <segment_id> <record_id> <start_time> <end_time>
|
||||
# "e.g. call-861225-A-0050-0065 call-861225-A 5.0 6.5"
|
||||
# Where the time is written in seconds.
|
||||
_opts+="--segments data/${dset}/segments "
|
||||
fi
|
||||
# shellcheck disable=SC2086
|
||||
local/format_wav_scp.sh --nj "${nj}" --cmd "${train_cmd}" \
|
||||
--audio-format "${audio_format}" --fs "${fs}" ${_opts} \
|
||||
"data/${dset}/wav.scp" "${data_feats}${_suf}/${dset}"
|
||||
|
||||
echo "${feats_type}" > "${data_feats}${_suf}/${dset}/feats_type"
|
||||
done
|
||||
|
||||
else
|
||||
log "Error: not supported: --feats_type ${feats_type}"
|
||||
exit 2
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
|
||||
log "Stage 2: Generate speaker profile by spectral-cluster"
|
||||
mkdir -p "profile_log"
|
||||
for dset in "${test_sets}"; do
|
||||
# generate cluster_profile with spectral-cluster directly (for infering and without oracle information)
|
||||
python local/gen_cluster_profile_infer.py "${data_feats}/${dset}" "data/${dset}" 0.996 0.815 &> "profile_log/gen_cluster_profile_infer_${dset}.log"
|
||||
log "Successfully generate cluster profile for ${dset} (${data_feats}/${dset}/cluster_profile_infer.scp)"
|
||||
done
|
||||
fi
|
||||
|
||||
else
|
||||
log "Skip the stages for data preparation"
|
||||
fi
|
||||
|
||||
|
||||
# ========================== Data preparation is done here. ==========================
|
||||
|
||||
if ! "${skip_eval}"; then
|
||||
|
||||
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
|
||||
log "Stage 3: Decoding SA-ASR (cluster profile): training_dir=${sa_asr_exp}"
|
||||
|
||||
if ${gpu_inference}; then
|
||||
_cmd="${cuda_cmd}"
|
||||
inference_nj=$[${ngpu}*${njob_infer}]
|
||||
_ngpu=1
|
||||
|
||||
else
|
||||
_cmd="${decode_cmd}"
|
||||
inference_nj=$njob_infer
|
||||
_ngpu=0
|
||||
fi
|
||||
|
||||
_opts=
|
||||
if [ -n "${inference_config}" ]; then
|
||||
_opts+="--config ${inference_config} "
|
||||
fi
|
||||
if "${use_lm}"; then
|
||||
if "${use_word_lm}"; then
|
||||
_opts+="--word_lm_train_config ${lm_exp}/config.yaml "
|
||||
_opts+="--word_lm_file ${lm_exp}/${inference_lm} "
|
||||
else
|
||||
_opts+="--lm_train_config ${lm_exp}/config.yaml "
|
||||
_opts+="--lm_file ${lm_exp}/${inference_lm} "
|
||||
fi
|
||||
fi
|
||||
|
||||
# 2. Generate run.sh
|
||||
log "Generate '${sa_asr_exp}/${sa_asr_inference_tag}.cluster/run.sh'. You can resume the process from stage 17 using this script"
|
||||
mkdir -p "${sa_asr_exp}/${sa_asr_inference_tag}.cluster"; echo "${run_args} --stage 17 \"\$@\"; exit \$?" > "${sa_asr_exp}/${sa_asr_inference_tag}.cluster/run.sh"; chmod +x "${sa_asr_exp}/${sa_asr_inference_tag}.cluster/run.sh"
|
||||
|
||||
for dset in ${test_sets}; do
|
||||
_data="${data_feats}/${dset}"
|
||||
_dir="${sa_asr_exp}/${sa_asr_inference_tag}.cluster/${dset}"
|
||||
_logdir="${_dir}/logdir"
|
||||
mkdir -p "${_logdir}"
|
||||
|
||||
_feats_type="$(<${_data}/feats_type)"
|
||||
if [ "${_feats_type}" = raw ]; then
|
||||
_scp=wav.scp
|
||||
if [[ "${audio_format}" == *ark* ]]; then
|
||||
_type=kaldi_ark
|
||||
else
|
||||
_type=sound
|
||||
fi
|
||||
else
|
||||
_scp=feats.scp
|
||||
_type=kaldi_ark
|
||||
fi
|
||||
|
||||
# 1. Split the key file
|
||||
key_file=${_data}/${_scp}
|
||||
split_scps=""
|
||||
_nj=$(min "${inference_nj}" "$(<${key_file} wc -l)")
|
||||
for n in $(seq "${_nj}"); do
|
||||
split_scps+=" ${_logdir}/keys.${n}.scp"
|
||||
done
|
||||
# shellcheck disable=SC2086
|
||||
utils/split_scp.pl "${key_file}" ${split_scps}
|
||||
|
||||
# 2. Submit decoding jobs
|
||||
log "Decoding started... log: '${_logdir}/sa_asr_inference.*.log'"
|
||||
# shellcheck disable=SC2086
|
||||
${_cmd} --gpu "${_ngpu}" --max-jobs-run "${_nj}" JOB=1:"${_nj}" "${_logdir}"/asr_inference.JOB.log \
|
||||
python -m funasr.bin.asr_inference_launch \
|
||||
--batch_size 1 \
|
||||
--mc True \
|
||||
--nbest 1 \
|
||||
--ngpu "${_ngpu}" \
|
||||
--njob ${njob_infer} \
|
||||
--gpuid_list ${device} \
|
||||
--data_path_and_name_and_type "${_data}/${_scp},speech,${_type}" \
|
||||
--data_path_and_name_and_type "${_data}/cluster_profile_infer.scp,profile,npy" \
|
||||
--key_file "${_logdir}"/keys.JOB.scp \
|
||||
--allow_variable_data_keys true \
|
||||
--asr_train_config "${sa_asr_exp}"/config.yaml \
|
||||
--asr_model_file "${sa_asr_exp}"/"${inference_sa_asr_model}" \
|
||||
--output_dir "${_logdir}"/output.JOB \
|
||||
--mode sa_asr \
|
||||
${_opts}
|
||||
|
||||
# 3. Concatenate the output files from each job
|
||||
for f in token token_int score text text_id; do
|
||||
for i in $(seq "${_nj}"); do
|
||||
cat "${_logdir}/output.${i}/1best_recog/${f}"
|
||||
done | LC_ALL=C sort -k1 >"${_dir}/${f}"
|
||||
done
|
||||
done
|
||||
fi
|
||||
|
||||
if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
|
||||
log "Stage 4: Generate SA-ASR results (cluster profile)"
|
||||
|
||||
for dset in ${test_sets}; do
|
||||
_dir="${sa_asr_exp}/${sa_asr_inference_tag}.cluster/${dset}"
|
||||
|
||||
python local/process_text_spk_merge.py ${_dir}
|
||||
done
|
||||
|
||||
fi
|
||||
|
||||
else
|
||||
log "Skip the evaluation stages"
|
||||
fi
|
||||
|
||||
|
||||
log "Successfully finished. [elapsed=${SECONDS}s]"
|
||||
6
egs/alimeeting/sa-asr/conf/decode_asr_rnn.yaml
Normal file
6
egs/alimeeting/sa-asr/conf/decode_asr_rnn.yaml
Normal file
@ -0,0 +1,6 @@
|
||||
beam_size: 20
|
||||
penalty: 0.0
|
||||
maxlenratio: 0.0
|
||||
minlenratio: 0.0
|
||||
ctc_weight: 0.6
|
||||
lm_weight: 0.3
|
||||
87
egs/alimeeting/sa-asr/conf/train_asr_conformer.yaml
Normal file
87
egs/alimeeting/sa-asr/conf/train_asr_conformer.yaml
Normal file
@ -0,0 +1,87 @@
|
||||
# network architecture
|
||||
frontend: default
|
||||
frontend_conf:
|
||||
n_fft: 400
|
||||
win_length: 400
|
||||
hop_length: 160
|
||||
|
||||
# encoder related
|
||||
encoder: conformer
|
||||
encoder_conf:
|
||||
output_size: 256 # dimension of attention
|
||||
attention_heads: 4
|
||||
linear_units: 2048 # the number of units of position-wise feed forward
|
||||
num_blocks: 12 # the number of encoder blocks
|
||||
dropout_rate: 0.1
|
||||
positional_dropout_rate: 0.1
|
||||
attention_dropout_rate: 0.0
|
||||
input_layer: conv2d # encoder architecture type
|
||||
normalize_before: true
|
||||
rel_pos_type: latest
|
||||
pos_enc_layer_type: rel_pos
|
||||
selfattention_layer_type: rel_selfattn
|
||||
activation_type: swish
|
||||
macaron_style: true
|
||||
use_cnn_module: true
|
||||
cnn_module_kernel: 15
|
||||
|
||||
# decoder related
|
||||
decoder: transformer
|
||||
decoder_conf:
|
||||
attention_heads: 4
|
||||
linear_units: 2048
|
||||
num_blocks: 6
|
||||
dropout_rate: 0.1
|
||||
positional_dropout_rate: 0.1
|
||||
self_attention_dropout_rate: 0.0
|
||||
src_attention_dropout_rate: 0.0
|
||||
|
||||
# ctc related
|
||||
ctc_conf:
|
||||
ignore_nan_grad: true
|
||||
|
||||
# hybrid CTC/attention
|
||||
model_conf:
|
||||
ctc_weight: 0.3
|
||||
lsm_weight: 0.1 # label smoothing option
|
||||
length_normalized_loss: false
|
||||
|
||||
# minibatch related
|
||||
batch_type: numel
|
||||
batch_bins: 10000000 # reduce/increase this number according to your GPU memory
|
||||
|
||||
# optimization related
|
||||
accum_grad: 1
|
||||
grad_clip: 5
|
||||
max_epoch: 100
|
||||
val_scheduler_criterion:
|
||||
- valid
|
||||
- acc
|
||||
best_model_criterion:
|
||||
- - valid
|
||||
- acc
|
||||
- max
|
||||
keep_nbest_models: 10
|
||||
|
||||
optim: adam
|
||||
optim_conf:
|
||||
lr: 0.001
|
||||
scheduler: warmuplr
|
||||
scheduler_conf:
|
||||
warmup_steps: 25000
|
||||
|
||||
specaug: specaug
|
||||
specaug_conf:
|
||||
apply_time_warp: true
|
||||
time_warp_window: 5
|
||||
time_warp_mode: bicubic
|
||||
apply_freq_mask: true
|
||||
freq_mask_width_range:
|
||||
- 0
|
||||
- 30
|
||||
num_freq_mask: 2
|
||||
apply_time_mask: true
|
||||
time_mask_width_range:
|
||||
- 0
|
||||
- 40
|
||||
num_time_mask: 2
|
||||
29
egs/alimeeting/sa-asr/conf/train_lm_transformer.yaml
Normal file
29
egs/alimeeting/sa-asr/conf/train_lm_transformer.yaml
Normal file
@ -0,0 +1,29 @@
|
||||
lm: transformer
|
||||
lm_conf:
|
||||
pos_enc: null
|
||||
embed_unit: 128
|
||||
att_unit: 512
|
||||
head: 8
|
||||
unit: 2048
|
||||
layer: 16
|
||||
dropout_rate: 0.1
|
||||
|
||||
# optimization related
|
||||
grad_clip: 5.0
|
||||
batch_type: numel
|
||||
batch_bins: 500000 # 4gpus * 500000
|
||||
accum_grad: 1
|
||||
max_epoch: 15 # 15 epochs are enough
|
||||
|
||||
optim: adam
|
||||
optim_conf:
|
||||
lr: 0.001
|
||||
scheduler: warmuplr
|
||||
scheduler_conf:
|
||||
warmup_steps: 25000
|
||||
|
||||
best_model_criterion:
|
||||
- - valid
|
||||
- loss
|
||||
- min
|
||||
keep_nbest_models: 10 # 10 is good.
|
||||
115
egs/alimeeting/sa-asr/conf/train_sa_asr_conformer.yaml
Normal file
115
egs/alimeeting/sa-asr/conf/train_sa_asr_conformer.yaml
Normal file
@ -0,0 +1,115 @@
|
||||
# network architecture
|
||||
frontend: default
|
||||
frontend_conf:
|
||||
n_fft: 400
|
||||
win_length: 400
|
||||
hop_length: 160
|
||||
|
||||
# encoder related
|
||||
asr_encoder: conformer
|
||||
asr_encoder_conf:
|
||||
output_size: 256 # dimension of attention
|
||||
attention_heads: 4
|
||||
linear_units: 2048 # the number of units of position-wise feed forward
|
||||
num_blocks: 12 # the number of encoder blocks
|
||||
dropout_rate: 0.1
|
||||
positional_dropout_rate: 0.1
|
||||
attention_dropout_rate: 0.0
|
||||
input_layer: conv2d # encoder architecture type
|
||||
normalize_before: true
|
||||
pos_enc_layer_type: rel_pos
|
||||
selfattention_layer_type: rel_selfattn
|
||||
activation_type: swish
|
||||
macaron_style: true
|
||||
use_cnn_module: true
|
||||
cnn_module_kernel: 15
|
||||
|
||||
spk_encoder: resnet34_diar
|
||||
spk_encoder_conf:
|
||||
use_head_conv: true
|
||||
batchnorm_momentum: 0.5
|
||||
use_head_maxpool: false
|
||||
num_nodes_pooling_layer: 256
|
||||
layers_in_block:
|
||||
- 3
|
||||
- 4
|
||||
- 6
|
||||
- 3
|
||||
filters_in_block:
|
||||
- 32
|
||||
- 64
|
||||
- 128
|
||||
- 256
|
||||
pooling_type: statistic
|
||||
num_nodes_resnet1: 256
|
||||
num_nodes_last_layer: 256
|
||||
batchnorm_momentum: 0.5
|
||||
|
||||
# decoder related
|
||||
decoder: sa_decoder
|
||||
decoder_conf:
|
||||
attention_heads: 4
|
||||
linear_units: 2048
|
||||
asr_num_blocks: 6
|
||||
spk_num_blocks: 3
|
||||
dropout_rate: 0.1
|
||||
positional_dropout_rate: 0.1
|
||||
self_attention_dropout_rate: 0.0
|
||||
src_attention_dropout_rate: 0.0
|
||||
|
||||
# hybrid CTC/attention
|
||||
model_conf:
|
||||
spk_weight: 0.5
|
||||
ctc_weight: 0.3
|
||||
lsm_weight: 0.1 # label smoothing option
|
||||
length_normalized_loss: false
|
||||
|
||||
ctc_conf:
|
||||
ignore_nan_grad: true
|
||||
|
||||
# minibatch related
|
||||
batch_type: numel
|
||||
batch_bins: 10000000
|
||||
|
||||
# optimization related
|
||||
accum_grad: 1
|
||||
grad_clip: 5
|
||||
max_epoch: 60
|
||||
val_scheduler_criterion:
|
||||
- valid
|
||||
- loss
|
||||
best_model_criterion:
|
||||
- - valid
|
||||
- acc
|
||||
- max
|
||||
- - valid
|
||||
- acc_spk
|
||||
- max
|
||||
- - valid
|
||||
- loss
|
||||
- min
|
||||
keep_nbest_models: 10
|
||||
|
||||
optim: adam
|
||||
optim_conf:
|
||||
lr: 0.0005
|
||||
scheduler: warmuplr
|
||||
scheduler_conf:
|
||||
warmup_steps: 8000
|
||||
|
||||
specaug: specaug
|
||||
specaug_conf:
|
||||
apply_time_warp: true
|
||||
time_warp_window: 5
|
||||
time_warp_mode: bicubic
|
||||
apply_freq_mask: true
|
||||
freq_mask_width_range:
|
||||
- 0
|
||||
- 30
|
||||
num_freq_mask: 2
|
||||
apply_time_mask: true
|
||||
time_mask_width_range:
|
||||
- 0
|
||||
- 40
|
||||
num_time_mask: 2
|
||||
|
||||
162
egs/alimeeting/sa-asr/local/alimeeting_data_prep.sh
Executable file
162
egs/alimeeting/sa-asr/local/alimeeting_data_prep.sh
Executable file
@ -0,0 +1,162 @@
|
||||
#!/usr/bin/env bash
|
||||
# Set bash to 'debug' mode, it will exit on :
|
||||
# -e 'error', -u 'undefined variable', -o ... 'error in pipeline', -x 'print commands',
|
||||
set -e
|
||||
set -u
|
||||
set -o pipefail
|
||||
|
||||
log() {
|
||||
local fname=${BASH_SOURCE[1]##*/}
|
||||
echo -e "$(date '+%Y-%m-%dT%H:%M:%S') (${fname}:${BASH_LINENO[0]}:${FUNCNAME[1]}) $*"
|
||||
}
|
||||
|
||||
help_message=$(cat << EOF
|
||||
Usage: $0
|
||||
|
||||
Options:
|
||||
--no_overlap (bool): Whether to ignore the overlapping utterance in the training set.
|
||||
--tgt (string): Which set to process, test or train.
|
||||
EOF
|
||||
)
|
||||
|
||||
SECONDS=0
|
||||
tgt=Train #Train or Eval
|
||||
|
||||
|
||||
log "$0 $*"
|
||||
echo $tgt
|
||||
. ./utils/parse_options.sh
|
||||
|
||||
. ./path.sh
|
||||
|
||||
AliMeeting="${PWD}/dataset"
|
||||
|
||||
if [ $# -gt 2 ]; then
|
||||
log "${help_message}"
|
||||
exit 2
|
||||
fi
|
||||
|
||||
|
||||
if [ ! -d "${AliMeeting}" ]; then
|
||||
log "Error: ${AliMeeting} is empty."
|
||||
exit 2
|
||||
fi
|
||||
|
||||
# To absolute path
|
||||
AliMeeting=$(cd ${AliMeeting}; pwd)
|
||||
echo $AliMeeting
|
||||
far_raw_dir=${AliMeeting}/${tgt}_Ali_far/
|
||||
near_raw_dir=${AliMeeting}/${tgt}_Ali_near/
|
||||
|
||||
far_dir=data/local/${tgt}_Ali_far
|
||||
near_dir=data/local/${tgt}_Ali_near
|
||||
far_single_speaker_dir=data/local/${tgt}_Ali_far_correct_single_speaker
|
||||
mkdir -p $far_single_speaker_dir
|
||||
|
||||
stage=1
|
||||
stop_stage=4
|
||||
mkdir -p $far_dir
|
||||
mkdir -p $near_dir
|
||||
|
||||
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
|
||||
log "stage 1:process alimeeting near dir"
|
||||
|
||||
find -L $near_raw_dir/audio_dir -iname "*.wav" > $near_dir/wavlist
|
||||
awk -F '/' '{print $NF}' $near_dir/wavlist | awk -F '.' '{print $1}' > $near_dir/uttid
|
||||
find -L $near_raw_dir/textgrid_dir -iname "*.TextGrid" > $near_dir/textgrid.flist
|
||||
n1_wav=$(wc -l < $near_dir/wavlist)
|
||||
n2_text=$(wc -l < $near_dir/textgrid.flist)
|
||||
log near file found $n1_wav wav and $n2_text text.
|
||||
|
||||
paste $near_dir/uttid $near_dir/wavlist > $near_dir/wav_raw.scp
|
||||
|
||||
# cat $near_dir/wav_raw.scp | awk '{printf("%s sox -t wav %s -r 16000 -b 16 -c 1 -t wav - |\n", $1, $2)}' > $near_dir/wav.scp
|
||||
cat $near_dir/wav_raw.scp | awk '{printf("%s sox -t wav %s -r 16000 -b 16 -t wav - |\n", $1, $2)}' > $near_dir/wav.scp
|
||||
|
||||
python local/alimeeting_process_textgrid.py --path $near_dir --no-overlap False
|
||||
cat $near_dir/text_all | local/text_normalize.pl | local/text_format.pl | sort -u > $near_dir/text
|
||||
utils/filter_scp.pl -f 1 $near_dir/text $near_dir/utt2spk_all | sort -u > $near_dir/utt2spk
|
||||
#sed -e 's/ [a-z,A-Z,_,0-9,-]\+SPK/ SPK/' $near_dir/utt2spk_old >$near_dir/tmp1
|
||||
#sed -e 's/-[a-z,A-Z,0-9]\+$//' $near_dir/tmp1 | sort -u > $near_dir/utt2spk
|
||||
local/utt2spk_to_spk2utt.pl $near_dir/utt2spk > $near_dir/spk2utt
|
||||
utils/filter_scp.pl -f 1 $near_dir/text $near_dir/segments_all | sort -u > $near_dir/segments
|
||||
sed -e 's/ $//g' $near_dir/text> $near_dir/tmp1
|
||||
sed -e 's/!//g' $near_dir/tmp1> $near_dir/tmp2
|
||||
sed -e 's/?//g' $near_dir/tmp2> $near_dir/text
|
||||
|
||||
fi
|
||||
|
||||
|
||||
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
|
||||
log "stage 2:process alimeeting far dir"
|
||||
|
||||
find -L $far_raw_dir/audio_dir -iname "*.wav" > $far_dir/wavlist
|
||||
awk -F '/' '{print $NF}' $far_dir/wavlist | awk -F '.' '{print $1}' > $far_dir/uttid
|
||||
find -L $far_raw_dir/textgrid_dir -iname "*.TextGrid" > $far_dir/textgrid.flist
|
||||
n1_wav=$(wc -l < $far_dir/wavlist)
|
||||
n2_text=$(wc -l < $far_dir/textgrid.flist)
|
||||
log far file found $n1_wav wav and $n2_text text.
|
||||
|
||||
paste $far_dir/uttid $far_dir/wavlist > $far_dir/wav_raw.scp
|
||||
|
||||
cat $far_dir/wav_raw.scp | awk '{printf("%s sox -t wav %s -r 16000 -b 16 -t wav - |\n", $1, $2)}' > $far_dir/wav.scp
|
||||
|
||||
python local/alimeeting_process_overlap_force.py --path $far_dir \
|
||||
--no-overlap false --mars True \
|
||||
--overlap_length 0.8 --max_length 7
|
||||
|
||||
cat $far_dir/text_all | local/text_normalize.pl | local/text_format.pl | sort -u > $far_dir/text
|
||||
utils/filter_scp.pl -f 1 $far_dir/text $far_dir/utt2spk_all | sort -u > $far_dir/utt2spk
|
||||
#sed -e 's/ [a-z,A-Z,_,0-9,-]\+SPK/ SPK/' $far_dir/utt2spk_old >$far_dir/utt2spk
|
||||
|
||||
local/utt2spk_to_spk2utt.pl $far_dir/utt2spk > $far_dir/spk2utt
|
||||
utils/filter_scp.pl -f 1 $far_dir/text $far_dir/segments_all | sort -u > $far_dir/segments
|
||||
sed -e 's/SRC/$/g' $far_dir/text> $far_dir/tmp1
|
||||
sed -e 's/ $//g' $far_dir/tmp1> $far_dir/tmp2
|
||||
sed -e 's/!//g' $far_dir/tmp2> $far_dir/tmp3
|
||||
sed -e 's/?//g' $far_dir/tmp3> $far_dir/text
|
||||
fi
|
||||
|
||||
|
||||
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
|
||||
log "stage 3: finali data process"
|
||||
|
||||
local/copy_data_dir.sh $near_dir data/${tgt}_Ali_near
|
||||
local/copy_data_dir.sh $far_dir data/${tgt}_Ali_far
|
||||
|
||||
sort $far_dir/utt2spk_all_fifo > data/${tgt}_Ali_far/utt2spk_all_fifo
|
||||
sed -i "s/src/$/g" data/${tgt}_Ali_far/utt2spk_all_fifo
|
||||
|
||||
# remove space in text
|
||||
for x in ${tgt}_Ali_near ${tgt}_Ali_far; do
|
||||
cp data/${x}/text data/${x}/text.org
|
||||
paste -d " " <(cut -f 1 -d" " data/${x}/text.org) <(cut -f 2- -d" " data/${x}/text.org | tr -d " ") \
|
||||
> data/${x}/text
|
||||
rm data/${x}/text.org
|
||||
done
|
||||
|
||||
log "Successfully finished. [elapsed=${SECONDS}s]"
|
||||
fi
|
||||
|
||||
if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
|
||||
log "stage 4: process alimeeting far dir (single speaker by oracle time strap)"
|
||||
cp -r $far_dir/* $far_single_speaker_dir
|
||||
mv $far_single_speaker_dir/textgrid.flist $far_single_speaker_dir/textgrid_oldpath
|
||||
paste -d " " $far_single_speaker_dir/uttid $far_single_speaker_dir/textgrid_oldpath > $far_single_speaker_dir/textgrid.flist
|
||||
python local/process_textgrid_to_single_speaker_wav.py --path $far_single_speaker_dir
|
||||
|
||||
cp $far_single_speaker_dir/utt2spk $far_single_speaker_dir/text
|
||||
local/utt2spk_to_spk2utt.pl $far_single_speaker_dir/utt2spk > $far_single_speaker_dir/spk2utt
|
||||
|
||||
./local/fix_data_dir.sh $far_single_speaker_dir
|
||||
local/copy_data_dir.sh $far_single_speaker_dir data/${tgt}_Ali_far_single_speaker
|
||||
|
||||
# remove space in text
|
||||
for x in ${tgt}_Ali_far_single_speaker; do
|
||||
cp data/${x}/text data/${x}/text.org
|
||||
paste -d " " <(cut -f 1 -d" " data/${x}/text.org) <(cut -f 2- -d" " data/${x}/text.org | tr -d " ") \
|
||||
> data/${x}/text
|
||||
rm data/${x}/text.org
|
||||
done
|
||||
log "Successfully finished. [elapsed=${SECONDS}s]"
|
||||
fi
|
||||
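For reference, each wav.scp entry written in stage 2 above is a Kaldi-style extended filename: a sox pipe that resamples the far-field recording to 16 kHz / 16-bit on the fly. A minimal Python sketch of the same line construction (the recording id and path are made up):

# Sketch: build one wav.scp line like the awk command in stage 2 (id and path are hypothetical).
uttid = "R0003_M0046"
wav_path = "/path/to/Train_Ali_far/audio_dir/R0003_M0046.wav"
line = "%s sox -t wav %s -r 16000 -b 16 -t wav - |" % (uttid, wav_path)
print(line)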
129
egs/alimeeting/sa-asr/local/alimeeting_data_prep_test_2023.sh
Executable file
@ -0,0 +1,129 @@
|
||||
#!/usr/bin/env bash
|
||||
# Set bash to 'debug' mode, it will exit on :
|
||||
# -e 'error', -u 'undefined variable', -o ... 'error in pipeline', -x 'print commands',
|
||||
set -e
|
||||
set -u
|
||||
set -o pipefail
|
||||
|
||||
log() {
|
||||
local fname=${BASH_SOURCE[1]##*/}
|
||||
echo -e "$(date '+%Y-%m-%dT%H:%M:%S') (${fname}:${BASH_LINENO[0]}:${FUNCNAME[1]}) $*"
|
||||
}
|
||||
|
||||
help_message=$(cat << EOF
|
||||
Usage: $0
|
||||
|
||||
Options:
|
||||
--no_overlap (bool): Whether to ignore the overlapping utterances in the training set.
|
||||
--tgt (string): Which set to process, test or train.
|
||||
EOF
|
||||
)
|
||||
|
||||
SECONDS=0
|
||||
tgt=Train #Train or Eval
|
||||
|
||||
|
||||
log "$0 $*"
|
||||
echo $tgt
|
||||
. ./utils/parse_options.sh
|
||||
|
||||
. ./path.sh
|
||||
|
||||
AliMeeting="${PWD}/dataset"
|
||||
|
||||
if [ $# -gt 2 ]; then
|
||||
log "${help_message}"
|
||||
exit 2
|
||||
fi
|
||||
|
||||
|
||||
if [ ! -d "${AliMeeting}" ]; then
|
||||
log "Error: ${AliMeeting} is empty."
|
||||
exit 2
|
||||
fi
|
||||
|
||||
# To absolute path
|
||||
AliMeeting=$(cd ${AliMeeting}; pwd)
|
||||
echo $AliMeeting
|
||||
far_raw_dir=${AliMeeting}/${tgt}_Ali_far/
|
||||
|
||||
far_dir=data/local/${tgt}_Ali_far
|
||||
far_single_speaker_dir=data/local/${tgt}_Ali_far_correct_single_speaker
|
||||
mkdir -p $far_single_speaker_dir
|
||||
|
||||
stage=1
|
||||
stop_stage=3
|
||||
mkdir -p $far_dir
|
||||
|
||||
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
|
||||
log "stage 1:process alimeeting far dir"
|
||||
|
||||
find -L $far_raw_dir/audio_dir -iname "*.wav" > $far_dir/wavlist
|
||||
awk -F '/' '{print $NF}' $far_dir/wavlist | awk -F '.' '{print $1}' > $far_dir/uttid
|
||||
find -L $far_raw_dir/textgrid_dir -iname "*.TextGrid" > $far_dir/textgrid.flist
|
||||
n1_wav=$(wc -l < $far_dir/wavlist)
|
||||
n2_text=$(wc -l < $far_dir/textgrid.flist)
|
||||
log "far dir: found $n1_wav wav files and $n2_text TextGrid files."
|
||||
|
||||
paste $far_dir/uttid $far_dir/wavlist > $far_dir/wav_raw.scp
|
||||
|
||||
cat $far_dir/wav_raw.scp | awk '{printf("%s sox -t wav %s -r 16000 -b 16 -t wav - |\n", $1, $2)}' > $far_dir/wav.scp
|
||||
|
||||
python local/alimeeting_process_overlap_force.py --path $far_dir \
|
||||
--no-overlap false --mars True \
|
||||
--overlap_length 0.8 --max_length 7
|
||||
|
||||
cat $far_dir/text_all | local/text_normalize.pl | local/text_format.pl | sort -u > $far_dir/text
|
||||
utils/filter_scp.pl -f 1 $far_dir/text $far_dir/utt2spk_all | sort -u > $far_dir/utt2spk
|
||||
#sed -e 's/ [a-z,A-Z,_,0-9,-]\+SPK/ SPK/' $far_dir/utt2spk_old >$far_dir/utt2spk
|
||||
|
||||
local/utt2spk_to_spk2utt.pl $far_dir/utt2spk > $far_dir/spk2utt
|
||||
utils/filter_scp.pl -f 1 $far_dir/text $far_dir/segments_all | sort -u > $far_dir/segments
|
||||
sed -e 's/SRC/$/g' $far_dir/text> $far_dir/tmp1
|
||||
sed -e 's/ $//g' $far_dir/tmp1> $far_dir/tmp2
|
||||
sed -e 's/!//g' $far_dir/tmp2> $far_dir/tmp3
|
||||
sed -e 's/?//g' $far_dir/tmp3> $far_dir/text
|
||||
fi
|
||||
|
||||
|
||||
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
|
||||
log "stage 2: finali data process"
|
||||
|
||||
local/copy_data_dir.sh $far_dir data/${tgt}_Ali_far
|
||||
|
||||
sort $far_dir/utt2spk_all_fifo > data/${tgt}_Ali_far/utt2spk_all_fifo
|
||||
sed -i "s/src/$/g" data/${tgt}_Ali_far/utt2spk_all_fifo
|
||||
|
||||
# remove space in text
|
||||
for x in ${tgt}_Ali_far; do
|
||||
cp data/${x}/text data/${x}/text.org
|
||||
paste -d " " <(cut -f 1 -d" " data/${x}/text.org) <(cut -f 2- -d" " data/${x}/text.org | tr -d " ") \
|
||||
> data/${x}/text
|
||||
rm data/${x}/text.org
|
||||
done
|
||||
|
||||
log "Successfully finished. [elapsed=${SECONDS}s]"
|
||||
fi
|
||||
|
||||
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
|
||||
log "stage 3:process alimeeting far dir (single speaker by oracal time strap)"
|
||||
cp -r $far_dir/* $far_single_speaker_dir
|
||||
mv $far_single_speaker_dir/textgrid.flist $far_single_speaker_dir/textgrid_oldpath
|
||||
paste -d " " $far_single_speaker_dir/uttid $far_single_speaker_dir/textgrid_oldpath > $far_single_speaker_dir/textgrid.flist
|
||||
python local/process_textgrid_to_single_speaker_wav.py --path $far_single_speaker_dir
|
||||
|
||||
cp $far_single_speaker_dir/utt2spk $far_single_speaker_dir/text
|
||||
local/utt2spk_to_spk2utt.pl $far_single_speaker_dir/utt2spk > $far_single_speaker_dir/spk2utt
|
||||
|
||||
./local/fix_data_dir.sh $far_single_speaker_dir
|
||||
local/copy_data_dir.sh $far_single_speaker_dir data/${tgt}_Ali_far_single_speaker
|
||||
|
||||
# remove space in text
|
||||
for x in ${tgt}_Ali_far_single_speaker; do
|
||||
cp data/${x}/text data/${x}/text.org
|
||||
paste -d " " <(cut -f 1 -d" " data/${x}/text.org) <(cut -f 2- -d" " data/${x}/text.org | tr -d " ") \
|
||||
> data/${x}/text
|
||||
rm data/${x}/text.org
|
||||
done
|
||||
log "Successfully finished. [elapsed=${SECONDS}s]"
|
||||
fi
|
||||
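The sed substitutions above ('s/SRC/$/g' on text and "s/src/$/g" on utt2spk_all_fifo) turn the internal "src" joiner written by local/alimeeting_process_overlap_force.py (presumably upper-cased to "SRC" by the text normalization) into a "$" delimiter, so one far-field utterance carries one transcript per speaker turn separated by "$". A minimal sketch of splitting such a line back into per-speaker turns (the id and text are made up):

# Sketch: split a '$'-delimited multi-speaker transcript back into per-speaker turns.
line = "R0003_M0046-SPK0093-0001234-0001890 niceweathertoday$yesletsstartthemeeting"
utt_id, trans = line.split(" ", 1)
per_speaker = [t.strip() for t in trans.split("$")]
print(utt_id, per_speaker)  # two speaker turns for this utterance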
235
egs/alimeeting/sa-asr/local/alimeeting_process_overlap_force.py
Executable file
@ -0,0 +1,235 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
"""
|
||||
Process the textgrid files
|
||||
"""
|
||||
import argparse
|
||||
import codecs
|
||||
from distutils.util import strtobool
|
||||
from pathlib import Path
|
||||
import textgrid
|
||||
import pdb
|
||||
|
||||
class Segment(object):
|
||||
def __init__(self, uttid, spkr, stime, etime, text):
|
||||
self.uttid = uttid
|
||||
self.spkr = spkr
|
||||
self.spkr_all = uttid+"-"+spkr
|
||||
self.stime = round(stime, 2)
|
||||
self.etime = round(etime, 2)
|
||||
self.text = text
|
||||
self.spk_text = {uttid+"-"+spkr: text}
|
||||
|
||||
def change_stime(self, time):
|
||||
self.stime = time
|
||||
|
||||
def change_etime(self, time):
|
||||
self.etime = time
|
||||
|
||||
|
||||
def get_args():
|
||||
parser = argparse.ArgumentParser(description="process the textgrid files")
|
||||
parser.add_argument("--path", type=str, required=True, help="Data path")
|
||||
parser.add_argument(
|
||||
"--no-overlap",
|
||||
type=strtobool,
|
||||
default=False,
|
||||
help="Whether to ignore the overlapping utterances.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--max_length",
|
||||
default=100000,
|
||||
type=float,
|
||||
help="overlap speech max time,if longger than max length should cut",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--overlap_length",
|
||||
default=1,
|
||||
type=float,
|
||||
help="if length longer than max length, speech overlength shorter, is cut",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--mars",
|
||||
type=strtobool,
|
||||
default=False,
|
||||
help="Whether to process mars data set.",
|
||||
)
|
||||
args = parser.parse_args()
|
||||
return args
|
||||
|
||||
|
||||
def preposs_overlap(segments,max_length,overlap_length):
|
||||
new_segments = []
|
||||
# tmp_segments holds the overlapping group currently being merged
|
||||
tmp_segments = segments[0]
|
||||
min_stime = segments[0].stime
|
||||
max_etime = segments[0].etime
|
||||
overlap_length_big = 1.5
|
||||
max_length_big = 15
|
||||
for i in range(1, len(segments)):
|
||||
if segments[i].stime >= max_etime:
|
||||
# doesn't overlap with previous segments
|
||||
new_segments.append(tmp_segments)
|
||||
tmp_segments = segments[i]
|
||||
min_stime = segments[i].stime
|
||||
max_etime = segments[i].etime
|
||||
else:
|
||||
# overlap with previous segments
|
||||
dur_time = max_etime - min_stime
|
||||
if dur_time < max_length:
|
||||
if min_stime > segments[i].stime:
|
||||
min_stime = segments[i].stime
|
||||
if max_etime < segments[i].etime:
|
||||
max_etime = segments[i].etime
|
||||
tmp_segments.stime = min_stime
|
||||
tmp_segments.etime = max_etime
|
||||
tmp_segments.text = tmp_segments.text + "src" + segments[i].text
|
||||
spk_name =segments[i].uttid +"-" + segments[i].spkr
|
||||
if spk_name in tmp_segments.spk_text:
|
||||
tmp_segments.spk_text[spk_name] += segments[i].text
|
||||
else:
|
||||
tmp_segments.spk_text[spk_name] = segments[i].text
|
||||
tmp_segments.spkr_all = tmp_segments.spkr_all + "src" + spk_name
|
||||
else:
|
||||
overlap_time = max_etime - segments[i].stime
|
||||
if dur_time < max_length_big:
|
||||
overlap_length_option = overlap_length
|
||||
else:
|
||||
overlap_length_option = overlap_length_big
|
||||
if overlap_time > overlap_length_option:
|
||||
if min_stime > segments[i].stime:
|
||||
min_stime = segments[i].stime
|
||||
if max_etime < segments[i].etime:
|
||||
max_etime = segments[i].etime
|
||||
tmp_segments.stime = min_stime
|
||||
tmp_segments.etime = max_etime
|
||||
tmp_segments.text = tmp_segments.text + "src" + segments[i].text
|
||||
spk_name =segments[i].uttid +"-" + segments[i].spkr
|
||||
if spk_name in tmp_segments.spk_text:
|
||||
tmp_segments.spk_text[spk_name] += segments[i].text
|
||||
else:
|
||||
tmp_segments.spk_text[spk_name] = segments[i].text
|
||||
tmp_segments.spkr_all = tmp_segments.spkr_all + "src" + spk_name
|
||||
else:
|
||||
new_segments.append(tmp_segments)
|
||||
tmp_segments = segments[i]
|
||||
min_stime = segments[i].stime
|
||||
max_etime = segments[i].etime
|
||||
|
||||
return new_segments
|
||||
|
||||
def filter_overlap(segments):
|
||||
new_segments = []
|
||||
# init a helper list to store all overlap segments
|
||||
tmp_segments = [segments[0]]
|
||||
min_stime = segments[0].stime
|
||||
max_etime = segments[0].etime
|
||||
|
||||
for i in range(1, len(segments)):
|
||||
if segments[i].stime >= max_etime:
|
||||
# doesn't overlap with previous segments
|
||||
if len(tmp_segments) == 1:
|
||||
new_segments.append(tmp_segments[0])
|
||||
# TODO: for multi-spkr asr, we can reset the stime/etime to
|
||||
# min_stime/max_etime for generating a max-length mixture speech
|
||||
tmp_segments = [segments[i]]
|
||||
min_stime = segments[i].stime
|
||||
max_etime = segments[i].etime
|
||||
else:
|
||||
# overlap with previous segments
|
||||
tmp_segments.append(segments[i])
|
||||
if min_stime > segments[i].stime:
|
||||
min_stime = segments[i].stime
|
||||
if max_etime < segments[i].etime:
|
||||
max_etime = segments[i].etime
|
||||
|
||||
return new_segments
|
||||
|
||||
|
||||
def main(args):
|
||||
wav_scp = codecs.open(Path(args.path) / "wav.scp", "r", "utf-8")
|
||||
textgrid_flist = codecs.open(Path(args.path) / "textgrid.flist", "r", "utf-8")
|
||||
|
||||
# get the path of textgrid file for each utterance
|
||||
utt2textgrid = {}
|
||||
for line in textgrid_flist:
|
||||
path = Path(line.strip())
|
||||
uttid = path.stem
|
||||
utt2textgrid[uttid] = path
|
||||
|
||||
# parse the textgrid file for each utterance
|
||||
all_segments = []
|
||||
for line in wav_scp:
|
||||
uttid = line.strip().split(" ")[0]
|
||||
uttid_part=uttid
|
||||
if args.mars == True:
|
||||
uttid_list = uttid.split("_")
|
||||
uttid_part= uttid_list[0]+"_"+uttid_list[1]
|
||||
if uttid_part not in utt2textgrid:
|
||||
print("%s doesn't have transcription" % uttid)
|
||||
continue
|
||||
|
||||
segments = []
|
||||
tg = textgrid.TextGrid.fromFile(utt2textgrid[uttid_part])
|
||||
for i in range(tg.__len__()):
|
||||
for j in range(tg[i].__len__()):
|
||||
if tg[i][j].mark:
|
||||
segments.append(
|
||||
Segment(
|
||||
uttid,
|
||||
tg[i].name,
|
||||
tg[i][j].minTime,
|
||||
tg[i][j].maxTime,
|
||||
tg[i][j].mark.strip(),
|
||||
)
|
||||
)
|
||||
|
||||
segments = sorted(segments, key=lambda x: x.stime)
|
||||
|
||||
if args.no_overlap:
|
||||
segments = filter_overlap(segments)
|
||||
else:
|
||||
segments = preposs_overlap(segments,args.max_length,args.overlap_length)
|
||||
all_segments += segments
|
||||
|
||||
wav_scp.close()
|
||||
textgrid_flist.close()
|
||||
|
||||
segments_file = codecs.open(Path(args.path) / "segments_all", "w", "utf-8")
|
||||
utt2spk_file = codecs.open(Path(args.path) / "utt2spk_all", "w", "utf-8")
|
||||
text_file = codecs.open(Path(args.path) / "text_all", "w", "utf-8")
|
||||
utt2spk_file_fifo = codecs.open(Path(args.path) / "utt2spk_all_fifo", "w", "utf-8")
|
||||
|
||||
for i in range(len(all_segments)):
|
||||
utt_name = "%s-%s-%07d-%07d" % (
|
||||
all_segments[i].uttid,
|
||||
all_segments[i].spkr,
|
||||
all_segments[i].stime * 100,
|
||||
all_segments[i].etime * 100,
|
||||
)
|
||||
|
||||
segments_file.write(
|
||||
"%s %s %.2f %.2f\n"
|
||||
% (
|
||||
utt_name,
|
||||
all_segments[i].uttid,
|
||||
all_segments[i].stime,
|
||||
all_segments[i].etime,
|
||||
)
|
||||
)
|
||||
utt2spk_file.write(
|
||||
"%s %s-%s\n" % (utt_name, all_segments[i].uttid, all_segments[i].spkr)
|
||||
)
|
||||
utt2spk_file_fifo.write(
|
||||
"%s %s\n" % (utt_name, all_segments[i].spkr_all)
|
||||
)
|
||||
text_file.write("%s %s\n" % (utt_name, all_segments[i].text))
|
||||
|
||||
segments_file.close()
|
||||
utt2spk_file.close()
|
||||
text_file.close()
|
||||
utt2spk_file_fifo.close()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
args = get_args()
|
||||
main(args)
|
||||
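As a rough illustration of how preposs_overlap merges time-overlapping turns into one multi-speaker segment, here is a minimal sketch; it assumes Segment and preposs_overlap above are importable (e.g. run in the same file), and the ids, times, and text are made up:

# Sketch: two overlapping turns are merged; the group is flushed when a later non-overlapping turn arrives.
segs = [
    Segment("R0003_M0046", "SPK0093", 0.00, 3.20, "niceweathertoday"),
    Segment("R0003_M0046", "SPK0094", 2.50, 5.00, "yesindeed"),    # overlaps the first turn
    Segment("R0003_M0046", "SPK0093", 6.00, 7.50, "letsstart"),    # no overlap
]
merged = preposs_overlap(segs, max_length=7, overlap_length=0.8)
for s in merged:
    print(s.stime, s.etime, s.spkr_all, s.text)   # one merged 0.00-5.00 segment, text joined by "src"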
158
egs/alimeeting/sa-asr/local/alimeeting_process_textgrid.py
Executable file
@ -0,0 +1,158 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
"""
|
||||
Process the textgrid files
|
||||
"""
|
||||
import argparse
|
||||
import codecs
|
||||
from distutils.util import strtobool
|
||||
from pathlib import Path
|
||||
import textgrid
|
||||
import pdb
|
||||
|
||||
class Segment(object):
|
||||
def __init__(self, uttid, spkr, stime, etime, text):
|
||||
self.uttid = uttid
|
||||
self.spkr = spkr
|
||||
self.stime = round(stime, 2)
|
||||
self.etime = round(etime, 2)
|
||||
self.text = text
|
||||
|
||||
def change_stime(self, time):
|
||||
self.stime = time
|
||||
|
||||
def change_etime(self, time):
|
||||
self.etime = time
|
||||
|
||||
|
||||
def get_args():
|
||||
parser = argparse.ArgumentParser(description="process the textgrid files")
|
||||
parser.add_argument("--path", type=str, required=True, help="Data path")
|
||||
parser.add_argument(
|
||||
"--no-overlap",
|
||||
type=strtobool,
|
||||
default=False,
|
||||
help="Whether to ignore the overlapping utterances.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--mars",
|
||||
type=strtobool,
|
||||
default=False,
|
||||
help="Whether to process mars data set.",
|
||||
)
|
||||
args = parser.parse_args()
|
||||
return args
|
||||
|
||||
|
||||
def filter_overlap(segments):
|
||||
new_segments = []
|
||||
# init a helper list to store all overlap segments
|
||||
tmp_segments = [segments[0]]
|
||||
min_stime = segments[0].stime
|
||||
max_etime = segments[0].etime
|
||||
|
||||
for i in range(1, len(segments)):
|
||||
if segments[i].stime >= max_etime:
|
||||
# doesn't overlap with previous segments
|
||||
if len(tmp_segments) == 1:
|
||||
new_segments.append(tmp_segments[0])
|
||||
# TODO: for multi-spkr asr, we can reset the stime/etime to
|
||||
# min_stime/max_etime for generating a max-length mixture speech
|
||||
tmp_segments = [segments[i]]
|
||||
min_stime = segments[i].stime
|
||||
max_etime = segments[i].etime
|
||||
else:
|
||||
# overlap with previous segments
|
||||
tmp_segments.append(segments[i])
|
||||
if min_stime > segments[i].stime:
|
||||
min_stime = segments[i].stime
|
||||
if max_etime < segments[i].etime:
|
||||
max_etime = segments[i].etime
|
||||
|
||||
return new_segments
|
||||
|
||||
|
||||
def main(args):
|
||||
wav_scp = codecs.open(Path(args.path) / "wav.scp", "r", "utf-8")
|
||||
textgrid_flist = codecs.open(Path(args.path) / "textgrid.flist", "r", "utf-8")
|
||||
|
||||
# get the path of textgrid file for each utterance
|
||||
utt2textgrid = {}
|
||||
for line in textgrid_flist:
|
||||
path = Path(line.strip())
|
||||
uttid = path.stem
|
||||
utt2textgrid[uttid] = path
|
||||
|
||||
# parse the textgrid file for each utterance
|
||||
all_segments = []
|
||||
for line in wav_scp:
|
||||
uttid = line.strip().split(" ")[0]
|
||||
uttid_part=uttid
|
||||
if args.mars == True:
|
||||
uttid_list = uttid.split("_")
|
||||
uttid_part= uttid_list[0]+"_"+uttid_list[1]
|
||||
if uttid_part not in utt2textgrid:
|
||||
print("%s doesn't have transcription" % uttid)
|
||||
continue
|
||||
#pdb.set_trace()
|
||||
segments = []
|
||||
try:
|
||||
tg = textgrid.TextGrid.fromFile(utt2textgrid[uttid_part])
|
||||
except:
|
||||
pdb.set_trace()
|
||||
for i in range(tg.__len__()):
|
||||
for j in range(tg[i].__len__()):
|
||||
if tg[i][j].mark:
|
||||
segments.append(
|
||||
Segment(
|
||||
uttid,
|
||||
tg[i].name,
|
||||
tg[i][j].minTime,
|
||||
tg[i][j].maxTime,
|
||||
tg[i][j].mark.strip(),
|
||||
)
|
||||
)
|
||||
|
||||
segments = sorted(segments, key=lambda x: x.stime)
|
||||
|
||||
if args.no_overlap:
|
||||
segments = filter_overlap(segments)
|
||||
|
||||
all_segments += segments
|
||||
|
||||
wav_scp.close()
|
||||
textgrid_flist.close()
|
||||
|
||||
segments_file = codecs.open(Path(args.path) / "segments_all", "w", "utf-8")
|
||||
utt2spk_file = codecs.open(Path(args.path) / "utt2spk_all", "w", "utf-8")
|
||||
text_file = codecs.open(Path(args.path) / "text_all", "w", "utf-8")
|
||||
|
||||
for i in range(len(all_segments)):
|
||||
utt_name = "%s-%s-%07d-%07d" % (
|
||||
all_segments[i].uttid,
|
||||
all_segments[i].spkr,
|
||||
all_segments[i].stime * 100,
|
||||
all_segments[i].etime * 100,
|
||||
)
|
||||
|
||||
segments_file.write(
|
||||
"%s %s %.2f %.2f\n"
|
||||
% (
|
||||
utt_name,
|
||||
all_segments[i].uttid,
|
||||
all_segments[i].stime,
|
||||
all_segments[i].etime,
|
||||
)
|
||||
)
|
||||
utt2spk_file.write(
|
||||
"%s %s-%s\n" % (utt_name, all_segments[i].uttid, all_segments[i].spkr)
|
||||
)
|
||||
text_file.write("%s %s\n" % (utt_name, all_segments[i].text))
|
||||
|
||||
segments_file.close()
|
||||
utt2spk_file.close()
|
||||
text_file.close()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
args = get_args()
|
||||
main(args)
|
||||
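For readers unfamiliar with the textgrid package used above: each AliMeeting TextGrid holds one tier per speaker (the tier name is used as the speaker id), and the script keeps every non-empty interval as one segment. A standalone sketch of that traversal (the file path is hypothetical):

# Sketch: list non-empty intervals per speaker tier of one TextGrid file (path is made up).
import textgrid

tg = textgrid.TextGrid.fromFile("dataset/Train_Ali_far/textgrid_dir/R0003_M0046.TextGrid")
for tier in tg:                 # one tier per speaker
    for interval in tier:       # time-ordered intervals
        if interval.mark:       # skip empty (silence) intervals
            print(tier.name, round(interval.minTime, 2), round(interval.maxTime, 2), interval.mark.strip())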
97
egs/alimeeting/sa-asr/local/apply_map.pl
Executable file
@ -0,0 +1,97 @@
|
||||
#!/usr/bin/env perl
|
||||
use warnings; #sed replacement for -w perl parameter
|
||||
# Copyright 2012 Johns Hopkins University (Author: Daniel Povey)
|
||||
# Apache 2.0.
|
||||
|
||||
# This program is a bit like ./sym2int.pl in that it applies a map
|
||||
# to things in a file, but it's a bit more general in that it doesn't
|
||||
# assume the things being mapped to are single tokens, they could
|
||||
# be sequences of tokens. See the usage message.
|
||||
|
||||
|
||||
$permissive = 0;
|
||||
|
||||
for ($x = 0; $x <= 2; $x++) {
|
||||
|
||||
if (@ARGV > 0 && $ARGV[0] eq "-f") {
|
||||
shift @ARGV;
|
||||
$field_spec = shift @ARGV;
|
||||
if ($field_spec =~ m/^\d+$/) {
|
||||
$field_begin = $field_spec - 1; $field_end = $field_spec - 1;
|
||||
}
|
||||
if ($field_spec =~ m/^(\d*)[-:](\d*)/) { # accept e.g. 1:10 as a courtesy (properly, 1-10)
|
||||
if ($1 ne "") {
|
||||
$field_begin = $1 - 1; # Change to zero-based indexing.
|
||||
}
|
||||
if ($2 ne "") {
|
||||
$field_end = $2 - 1; # Change to zero-based indexing.
|
||||
}
|
||||
}
|
||||
if (!defined $field_begin && !defined $field_end) {
|
||||
die "Bad argument to -f option: $field_spec";
|
||||
}
|
||||
}
|
||||
|
||||
if (@ARGV > 0 && $ARGV[0] eq '--permissive') {
|
||||
shift @ARGV;
|
||||
# Mapping is optional (missing key is printed to output)
|
||||
$permissive = 1;
|
||||
}
|
||||
}
|
||||
|
||||
if(@ARGV != 1) {
|
||||
print STDERR "Invalid usage: " . join(" ", @ARGV) . "\n";
|
||||
print STDERR <<'EOF';
|
||||
Usage: apply_map.pl [options] map <input >output
|
||||
options: [-f <field-range> ] [--permissive]
|
||||
This applies a map to some specified fields of some input text:
|
||||
For each line in the map file: the first field is the thing we
|
||||
map from, and the remaining fields are the sequence we map it to.
|
||||
The -f (field-range) option says which fields of the input file the map
|
||||
should apply to.
|
||||
If the --permissive option is supplied, fields which are not present
|
||||
in the map will be left as they were.
|
||||
Applies the map 'map' to all input text, where each line of the map
|
||||
is interpreted as a map from the first field to the list of the other fields
|
||||
Note: <field-range> can look like 4-5, or 4-, or 5-, or 1, it means the field
|
||||
range in the input to apply the map to.
|
||||
e.g.: echo A B | apply_map.pl a.txt
|
||||
where a.txt is:
|
||||
A a1 a2
|
||||
B b
|
||||
will produce:
|
||||
a1 a2 b
|
||||
EOF
|
||||
exit(1);
|
||||
}
|
||||
|
||||
($map_file) = @ARGV;
|
||||
open(M, "<$map_file") || die "Error opening map file $map_file: $!";
|
||||
|
||||
while (<M>) {
|
||||
@A = split(" ", $_);
|
||||
@A >= 1 || die "apply_map.pl: empty line.";
|
||||
$i = shift @A;
|
||||
$o = join(" ", @A);
|
||||
$map{$i} = $o;
|
||||
}
|
||||
|
||||
while(<STDIN>) {
|
||||
@A = split(" ", $_);
|
||||
for ($x = 0; $x < @A; $x++) {
|
||||
if ( (!defined $field_begin || $x >= $field_begin)
|
||||
&& (!defined $field_end || $x <= $field_end)) {
|
||||
$a = $A[$x];
|
||||
if (!defined $map{$a}) {
|
||||
if (!$permissive) {
|
||||
die "apply_map.pl: undefined key $a in $map_file\n";
|
||||
} else {
|
||||
print STDERR "apply_map.pl: warning! missing key $a in $map_file\n";
|
||||
}
|
||||
} else {
|
||||
$A[$x] = $map{$a};
|
||||
}
|
||||
}
|
||||
}
|
||||
print join(" ", @A) . "\n";
|
||||
}
|
||||
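A rough Python equivalent of the mapping logic in apply_map.pl, for readers less familiar with Perl; it covers only the default field range and behaves like --permissive (unknown tokens pass through unchanged). The map file name follows the usage example above:

# Sketch: apply a "key -> token sequence" map to every whitespace field of stdin.
import sys

mapping = {}
with open("a.txt") as f:              # map file: "A a1 a2" maps token A to "a1 a2"
    for line in f:
        key, *rest = line.split()
        mapping[key] = " ".join(rest)

for line in sys.stdin:
    tokens = [mapping.get(tok, tok) for tok in line.split()]   # like --permissive
    print(" ".join(tokens))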
146
egs/alimeeting/sa-asr/local/combine_data.sh
Executable file
@ -0,0 +1,146 @@
|
||||
#!/usr/bin/env bash
|
||||
# Copyright 2012 Johns Hopkins University (Author: Daniel Povey). Apache 2.0.
|
||||
# 2014 David Snyder
|
||||
|
||||
# This script combines the data from multiple source directories into
|
||||
# a single destination directory.
|
||||
|
||||
# See http://kaldi-asr.org/doc/data_prep.html#data_prep_data for information
|
||||
# about what these directories contain.
|
||||
|
||||
# Begin configuration section.
|
||||
extra_files= # specify additional files in 'src-data-dir' to merge, ex. "file1 file2 ..."
|
||||
skip_fix=false # skip the fix_data_dir.sh in the end
|
||||
# End configuration section.
|
||||
|
||||
echo "$0 $@" # Print the command line for logging
|
||||
|
||||
if [ -f path.sh ]; then . ./path.sh; fi
|
||||
. parse_options.sh || exit 1;
|
||||
|
||||
if [ $# -lt 2 ]; then
|
||||
echo "Usage: combine_data.sh [--extra-files 'file1 file2'] <dest-data-dir> <src-data-dir1> <src-data-dir2> ..."
|
||||
echo "Note, files that don't appear in all source dirs will not be combined,"
|
||||
echo "with the exception of utt2uniq and segments, which are created where necessary."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
dest=$1;
|
||||
shift;
|
||||
|
||||
first_src=$1;
|
||||
|
||||
rm -r $dest 2>/dev/null || true
|
||||
mkdir -p $dest;
|
||||
|
||||
export LC_ALL=C
|
||||
|
||||
for dir in $*; do
|
||||
if [ ! -f $dir/utt2spk ]; then
|
||||
echo "$0: no such file $dir/utt2spk"
|
||||
exit 1;
|
||||
fi
|
||||
done
|
||||
|
||||
# Check that frame_shift are compatible, where present together with features.
|
||||
dir_with_frame_shift=
|
||||
for dir in $*; do
|
||||
if [[ -f $dir/feats.scp && -f $dir/frame_shift ]]; then
|
||||
if [[ $dir_with_frame_shift ]] &&
|
||||
! cmp -s $dir_with_frame_shift/frame_shift $dir/frame_shift; then
|
||||
echo "$0:error: different frame_shift in directories $dir and " \
|
||||
"$dir_with_frame_shift. Cannot combine features."
|
||||
exit 1;
|
||||
fi
|
||||
dir_with_frame_shift=$dir
|
||||
fi
|
||||
done
|
||||
|
||||
# W.r.t. utt2uniq file the script has different behavior compared to other files
|
||||
# it is not compulsory for it to exist in src directories, but if it exists in
|
||||
# even one it should exist in all. We will create the files where necessary
|
||||
has_utt2uniq=false
|
||||
for in_dir in $*; do
|
||||
if [ -f $in_dir/utt2uniq ]; then
|
||||
has_utt2uniq=true
|
||||
break
|
||||
fi
|
||||
done
|
||||
|
||||
if $has_utt2uniq; then
|
||||
# we are going to create an utt2uniq file in the destdir
|
||||
for in_dir in $*; do
|
||||
if [ ! -f $in_dir/utt2uniq ]; then
|
||||
# we assume that utt2uniq is a one to one mapping
|
||||
cat $in_dir/utt2spk | awk '{printf("%s %s\n", $1, $1);}'
|
||||
else
|
||||
cat $in_dir/utt2uniq
|
||||
fi
|
||||
done | sort -k1 > $dest/utt2uniq
|
||||
echo "$0: combined utt2uniq"
|
||||
else
|
||||
echo "$0 [info]: not combining utt2uniq as it does not exist"
|
||||
fi
|
||||
# some of the old scripts might provide utt2uniq as an extrafile, so just remove it
|
||||
extra_files=$(echo "$extra_files"|sed -e "s/utt2uniq//g")
|
||||
|
||||
# segments are treated similarly to utt2uniq. If it exists in some, but not all
|
||||
# src directories, then we generate segments where necessary.
|
||||
has_segments=false
|
||||
for in_dir in $*; do
|
||||
if [ -f $in_dir/segments ]; then
|
||||
has_segments=true
|
||||
break
|
||||
fi
|
||||
done
|
||||
|
||||
if $has_segments; then
|
||||
for in_dir in $*; do
|
||||
if [ ! -f $in_dir/segments ]; then
|
||||
echo "$0 [info]: will generate missing segments for $in_dir" 1>&2
|
||||
local/data/get_segments_for_data.sh $in_dir
|
||||
else
|
||||
cat $in_dir/segments
|
||||
fi
|
||||
done | sort -k1 > $dest/segments
|
||||
echo "$0: combined segments"
|
||||
else
|
||||
echo "$0 [info]: not combining segments as it does not exist"
|
||||
fi
|
||||
|
||||
for file in utt2spk utt2lang utt2dur utt2num_frames reco2dur feats.scp text cmvn.scp vad.scp reco2file_and_channel wav.scp spk2gender $extra_files; do
|
||||
exists_somewhere=false
|
||||
absent_somewhere=false
|
||||
for d in $*; do
|
||||
if [ -f $d/$file ]; then
|
||||
exists_somewhere=true
|
||||
else
|
||||
absent_somewhere=true
|
||||
fi
|
||||
done
|
||||
|
||||
if ! $absent_somewhere; then
|
||||
set -o pipefail
|
||||
( for f in $*; do cat $f/$file; done ) | sort -k1 > $dest/$file || exit 1;
|
||||
set +o pipefail
|
||||
echo "$0: combined $file"
|
||||
else
|
||||
if ! $exists_somewhere; then
|
||||
echo "$0 [info]: not combining $file as it does not exist"
|
||||
else
|
||||
echo "$0 [info]: **not combining $file as it does not exist everywhere**"
|
||||
fi
|
||||
fi
|
||||
done
|
||||
|
||||
local/utt2spk_to_spk2utt.pl <$dest/utt2spk >$dest/spk2utt
|
||||
|
||||
if [[ $dir_with_frame_shift ]]; then
|
||||
cp $dir_with_frame_shift/frame_shift $dest
|
||||
fi
|
||||
|
||||
if ! $skip_fix ; then
|
||||
local/fix_data_dir.sh $dest || exit 1;
|
||||
fi
|
||||
|
||||
exit 0
|
||||
91
egs/alimeeting/sa-asr/local/compute_cpcer.py
Normal file
@ -0,0 +1,91 @@
|
||||
import editdistance
|
||||
import sys
|
||||
import os
|
||||
from itertools import permutations
|
||||
|
||||
|
||||
def load_transcripts(file_path):
|
||||
trans_list = []
|
||||
for one_line in open(file_path, "rt"):
|
||||
meeting_id, trans = one_line.strip().split(" ")
|
||||
trans_list.append((meeting_id.strip(), trans.strip()))
|
||||
|
||||
return trans_list
|
||||
|
||||
def calc_spk_trans(trans):
|
||||
spk_trans_ = [x.strip() for x in trans.split("$")]
|
||||
spk_trans = []
|
||||
for i in range(len(spk_trans_)):
|
||||
spk_trans.append((str(i), spk_trans_[i]))
|
||||
return spk_trans
|
||||
|
||||
def calc_cer(ref_trans, hyp_trans):
|
||||
ref_spk_trans = calc_spk_trans(ref_trans)
|
||||
hyp_spk_trans = calc_spk_trans(hyp_trans)
|
||||
ref_spk_num, hyp_spk_num = len(ref_spk_trans), len(hyp_spk_trans)
|
||||
num_spk = max(len(ref_spk_trans), len(hyp_spk_trans))
|
||||
ref_spk_trans.extend([("", "")] * (num_spk - len(ref_spk_trans)))
|
||||
hyp_spk_trans.extend([("", "")] * (num_spk - len(hyp_spk_trans)))
|
||||
|
||||
errors, counts, permutes = [], [], []
|
||||
min_error = 0
|
||||
cost_dict = {}
|
||||
for perm in permutations(range(num_spk)):
|
||||
flag = True
|
||||
p_err, p_count = 0, 0
|
||||
for idx, p in enumerate(perm):
|
||||
if abs(len(ref_spk_trans[idx][1]) - len(hyp_spk_trans[p][1])) > min_error > 0:
|
||||
flag = False
|
||||
break
|
||||
cost_key = "{}-{}".format(idx, p)
|
||||
if cost_key in cost_dict:
|
||||
_e = cost_dict[cost_key]
|
||||
else:
|
||||
_e = editdistance.eval(ref_spk_trans[idx][1], hyp_spk_trans[p][1])
|
||||
cost_dict[cost_key] = _e
|
||||
if _e > min_error > 0:
|
||||
flag = False
|
||||
break
|
||||
p_err += _e
|
||||
p_count += len(ref_spk_trans[idx][1])
|
||||
|
||||
if flag:
|
||||
if p_err < min_error or min_error == 0:
|
||||
min_error = p_err
|
||||
|
||||
errors.append(p_err)
|
||||
counts.append(p_count)
|
||||
permutes.append(perm)
|
||||
|
||||
sd_cer = [(err, cnt, err/cnt, permute)
|
||||
for err, cnt, permute in zip(errors, counts, permutes)]
|
||||
# import ipdb;ipdb.set_trace()
|
||||
best_rst = min(sd_cer, key=lambda x: x[2])
|
||||
|
||||
return best_rst[0], best_rst[1], ref_spk_num, hyp_spk_num
|
||||
|
||||
|
||||
def main():
|
||||
ref=sys.argv[1]
|
||||
hyp=sys.argv[2]
|
||||
result_path=sys.argv[3]
|
||||
ref_list = load_transcripts(ref)
|
||||
hyp_list = load_transcripts(hyp)
|
||||
result_file = open(result_path,'w')
|
||||
error, count = 0, 0
|
||||
for (ref_id, ref_trans), (hyp_id, hyp_trans) in zip(ref_list, hyp_list):
|
||||
assert ref_id == hyp_id
|
||||
mid = ref_id
|
||||
dist, length, ref_spk_num, hyp_spk_num = calc_cer(ref_trans, hyp_trans)
|
||||
error, count = error + dist, count + length
|
||||
result_file.write("{} {:.2f} {} {}\n".format(mid, dist / length * 100.0, ref_spk_num, hyp_spk_num))
|
||||
|
||||
# print("{} {:.2f} {} {}".format(mid, dist / length * 100.0, ref_spk_num, hyp_spk_num))
|
||||
|
||||
result_file.write("CP-CER: {:.2f}\n".format(error / count * 100.0))
|
||||
result_file.close()
|
||||
# print("Sum/Avg: {:.2f}".format(error / count * 100.0))
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
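As a quick check of the permutation-invariant scoring above (assuming the editdistance package is installed and calc_cer is importable from this script; the strings are made up):

# Sketch: CP-CER picks the speaker permutation with the lowest character error rate.
ref = "abcd$xyz"      # two reference speakers separated by '$'
hyp = "xyz$abcde"     # same speakers hypothesised in the opposite order, one extra character
dist, length, ref_spk_num, hyp_spk_num = calc_cer(ref, hyp)
print(dist, length, ref_spk_num, hyp_spk_num)   # 1 7 2 2: one error over seven reference characters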
145
egs/alimeeting/sa-asr/local/copy_data_dir.sh
Executable file
@ -0,0 +1,145 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Copyright 2013 Johns Hopkins University (author: Daniel Povey)
|
||||
# Apache 2.0
|
||||
|
||||
# This script operates on a directory, such as in data/train/,
|
||||
# that contains some subset of the following files:
|
||||
# feats.scp
|
||||
# wav.scp
|
||||
# vad.scp
|
||||
# spk2utt
|
||||
# utt2spk
|
||||
# text
|
||||
#
|
||||
# It copies to another directory, possibly adding a specified prefix or a suffix
|
||||
# to the utterance and/or speaker names. Note, the recording-ids stay the same.
|
||||
#
|
||||
|
||||
|
||||
# begin configuration section
|
||||
spk_prefix=
|
||||
utt_prefix=
|
||||
spk_suffix=
|
||||
utt_suffix=
|
||||
validate_opts= # should rarely be needed.
|
||||
# end configuration section
|
||||
|
||||
. utils/parse_options.sh
|
||||
|
||||
if [ $# != 2 ]; then
|
||||
echo "Usage: "
|
||||
echo " $0 [options] <srcdir> <destdir>"
|
||||
echo "e.g.:"
|
||||
echo " $0 --spk-prefix=1- --utt-prefix=1- data/train data/train_1"
|
||||
echo "Options"
|
||||
echo " --spk-prefix=<prefix> # Prefix for speaker ids, default empty"
|
||||
echo " --utt-prefix=<prefix> # Prefix for utterance ids, default empty"
|
||||
echo " --spk-suffix=<suffix> # Suffix for speaker ids, default empty"
|
||||
echo " --utt-suffix=<suffix> # Suffix for utterance ids, default empty"
|
||||
exit 1;
|
||||
fi
|
||||
|
||||
|
||||
export LC_ALL=C
|
||||
|
||||
srcdir=$1
|
||||
destdir=$2
|
||||
|
||||
if [ ! -f $srcdir/utt2spk ]; then
|
||||
echo "copy_data_dir.sh: no such file $srcdir/utt2spk"
|
||||
exit 1;
|
||||
fi
|
||||
|
||||
if [ "$destdir" == "$srcdir" ]; then
|
||||
echo "$0: this script requires <srcdir> and <destdir> to be different."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
set -e;
|
||||
|
||||
mkdir -p $destdir
|
||||
|
||||
cat $srcdir/utt2spk | awk -v p=$utt_prefix -v s=$utt_suffix '{printf("%s %s%s%s\n", $1, p, $1, s);}' > $destdir/utt_map
|
||||
cat $srcdir/spk2utt | awk -v p=$spk_prefix -v s=$spk_suffix '{printf("%s %s%s%s\n", $1, p, $1, s);}' > $destdir/spk_map
|
||||
|
||||
if [ ! -f $srcdir/utt2uniq ]; then
|
||||
if [[ ! -z $utt_prefix || ! -z $utt_suffix ]]; then
|
||||
cat $srcdir/utt2spk | awk -v p=$utt_prefix -v s=$utt_suffix '{printf("%s%s%s %s\n", p, $1, s, $1);}' > $destdir/utt2uniq
|
||||
fi
|
||||
else
|
||||
cat $srcdir/utt2uniq | awk -v p=$utt_prefix -v s=$utt_suffix '{printf("%s%s%s %s\n", p, $1, s, $2);}' > $destdir/utt2uniq
|
||||
fi
|
||||
|
||||
cat $srcdir/utt2spk | local/apply_map.pl -f 1 $destdir/utt_map | \
|
||||
local/apply_map.pl -f 2 $destdir/spk_map >$destdir/utt2spk
|
||||
|
||||
local/utt2spk_to_spk2utt.pl <$destdir/utt2spk >$destdir/spk2utt
|
||||
|
||||
if [ -f $srcdir/feats.scp ]; then
|
||||
local/apply_map.pl -f 1 $destdir/utt_map <$srcdir/feats.scp >$destdir/feats.scp
|
||||
fi
|
||||
|
||||
if [ -f $srcdir/vad.scp ]; then
|
||||
local/apply_map.pl -f 1 $destdir/utt_map <$srcdir/vad.scp >$destdir/vad.scp
|
||||
fi
|
||||
|
||||
if [ -f $srcdir/segments ]; then
|
||||
local/apply_map.pl -f 1 $destdir/utt_map <$srcdir/segments >$destdir/segments
|
||||
cp $srcdir/wav.scp $destdir
|
||||
else # no segments->wav indexed by utt.
|
||||
if [ -f $srcdir/wav.scp ]; then
|
||||
local/apply_map.pl -f 1 $destdir/utt_map <$srcdir/wav.scp >$destdir/wav.scp
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ -f $srcdir/reco2file_and_channel ]; then
|
||||
cp $srcdir/reco2file_and_channel $destdir/
|
||||
fi
|
||||
|
||||
if [ -f $srcdir/text ]; then
|
||||
local/apply_map.pl -f 1 $destdir/utt_map <$srcdir/text >$destdir/text
|
||||
fi
|
||||
if [ -f $srcdir/utt2dur ]; then
|
||||
local/apply_map.pl -f 1 $destdir/utt_map <$srcdir/utt2dur >$destdir/utt2dur
|
||||
fi
|
||||
if [ -f $srcdir/utt2num_frames ]; then
|
||||
local/apply_map.pl -f 1 $destdir/utt_map <$srcdir/utt2num_frames >$destdir/utt2num_frames
|
||||
fi
|
||||
if [ -f $srcdir/reco2dur ]; then
|
||||
if [ -f $srcdir/segments ]; then
|
||||
cp $srcdir/reco2dur $destdir/reco2dur
|
||||
else
|
||||
local/apply_map.pl -f 1 $destdir/utt_map <$srcdir/reco2dur >$destdir/reco2dur
|
||||
fi
|
||||
fi
|
||||
if [ -f $srcdir/spk2gender ]; then
|
||||
local/apply_map.pl -f 1 $destdir/spk_map <$srcdir/spk2gender >$destdir/spk2gender
|
||||
fi
|
||||
if [ -f $srcdir/cmvn.scp ]; then
|
||||
local/apply_map.pl -f 1 $destdir/spk_map <$srcdir/cmvn.scp >$destdir/cmvn.scp
|
||||
fi
|
||||
for f in frame_shift stm glm ctm; do
|
||||
if [ -f $srcdir/$f ]; then
|
||||
cp $srcdir/$f $destdir
|
||||
fi
|
||||
done
|
||||
|
||||
rm $destdir/spk_map $destdir/utt_map
|
||||
|
||||
echo "$0: copied data from $srcdir to $destdir"
|
||||
|
||||
for f in feats.scp cmvn.scp vad.scp utt2lang utt2uniq utt2dur utt2num_frames text wav.scp reco2file_and_channel frame_shift stm glm ctm; do
|
||||
if [ -f $destdir/$f ] && [ ! -f $srcdir/$f ]; then
|
||||
echo "$0: file $f exists in dest $destdir but not in src $srcdir. Moving it to"
|
||||
echo " ... $destdir/.backup/$f"
|
||||
mkdir -p $destdir/.backup
|
||||
mv $destdir/$f $destdir/.backup/
|
||||
fi
|
||||
done
|
||||
|
||||
|
||||
[ ! -f $srcdir/feats.scp ] && validate_opts="$validate_opts --no-feats"
|
||||
[ ! -f $srcdir/text ] && validate_opts="$validate_opts --no-text"
|
||||
|
||||
local/validate_data_dir.sh $validate_opts $destdir
|
||||
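The utt_map and spk_map files built above are plain "old-id new-id" tables that apply_map.pl then uses to rename the first (or second) field of each data file. A minimal sketch of producing one such mapping line (the prefix, suffix, and utterance id are made up):

# Sketch: build an utt_map line "old-id new-id" with an optional prefix/suffix, like the awk above.
utt_prefix, utt_suffix = "sp1.1-", ""
old_id = "R0003_M0046-SPK0093-0000000-0000320"
print(old_id, utt_prefix + old_id + utt_suffix)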
143
egs/alimeeting/sa-asr/local/data/get_reco2dur.sh
Executable file
@ -0,0 +1,143 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Copyright 2016 Johns Hopkins University (author: Daniel Povey)
|
||||
# 2018 Andrea Carmantini
|
||||
# Apache 2.0
|
||||
|
||||
# This script operates on a data directory, such as in data/train/, and adds the
|
||||
# reco2dur file if it does not already exist. The file 'reco2dur' maps from
|
||||
# recording to the duration of the recording in seconds. This script works it
|
||||
# out from the 'wav.scp' file, or, if utterance-ids are the same as recording-ids, from the
|
||||
# utt2dur file (it first tries interrogating the headers, and if this fails, it reads the wave
|
||||
# files in entirely.)
|
||||
# We could use durations from segments file, but that's not the duration of the recordings
|
||||
# but the sum of utterance lengths (silence in between could be excluded from segments)
|
||||
# For sum of utterance lengths:
|
||||
# awk 'FNR==NR{uttdur[$1]=$2;next}
|
||||
# { for(i=2;i<=NF;i++){dur+=uttdur[$i];}
|
||||
# print $1 FS dur; dur=0 }' $data/utt2dur $data/reco2utt
|
||||
|
||||
|
||||
frame_shift=0.01
|
||||
cmd=run.pl
|
||||
nj=4
|
||||
|
||||
. utils/parse_options.sh
|
||||
. ./path.sh
|
||||
|
||||
if [ $# != 1 ]; then
|
||||
echo "Usage: $0 [options] <datadir>"
|
||||
echo "e.g.:"
|
||||
echo " $0 data/train"
|
||||
echo " Options:"
|
||||
echo " --frame-shift # frame shift in seconds. Only relevant when we are"
|
||||
echo " # getting duration from feats.scp (default: 0.01). "
|
||||
exit 1
|
||||
fi
|
||||
|
||||
export LC_ALL=C
|
||||
|
||||
data=$1
|
||||
|
||||
|
||||
if [ -s $data/reco2dur ] && \
|
||||
[ $(wc -l < $data/wav.scp) -eq $(wc -l < $data/reco2dur) ]; then
|
||||
echo "$0: $data/reco2dur already exists with the expected length. We won't recompute it."
|
||||
exit 0;
|
||||
fi
|
||||
|
||||
if [ -s $data/utt2dur ] && \
|
||||
[ $(wc -l < $data/utt2spk) -eq $(wc -l < $data/utt2dur) ] && \
|
||||
[ ! -s $data/segments ]; then
|
||||
|
||||
echo "$0: $data/wav.scp indexed by utt-id; copying utt2dur to reco2dur"
|
||||
cp $data/utt2dur $data/reco2dur && exit 0;
|
||||
|
||||
elif [ -f $data/wav.scp ]; then
|
||||
echo "$0: obtaining durations from recordings"
|
||||
|
||||
# if the wav.scp contains only lines of the form
|
||||
# utt1 /foo/bar/sph2pipe -f wav /baz/foo.sph |
|
||||
if cat $data/wav.scp | perl -e '
|
||||
while (<>) { s/\|\s*$/ |/; # make sure final | is preceded by space.
|
||||
@A = split; if (!($#A == 5 && $A[1] =~ m/sph2pipe$/ &&
|
||||
$A[2] eq "-f" && $A[3] eq "wav" && $A[5] eq "|")) { exit(1); }
|
||||
$reco = $A[0]; $sphere_file = $A[4];
|
||||
|
||||
if (!open(F, "<$sphere_file")) { die "Error opening sphere file $sphere_file"; }
|
||||
$sample_rate = -1; $sample_count = -1;
|
||||
for ($n = 0; $n <= 30; $n++) {
|
||||
$line = <F>;
|
||||
if ($line =~ m/sample_rate -i (\d+)/) { $sample_rate = $1; }
|
||||
if ($line =~ m/sample_count -i (\d+)/) { $sample_count = $1; }
|
||||
if ($line =~ m/end_head/) { break; }
|
||||
}
|
||||
close(F);
|
||||
if ($sample_rate == -1 || $sample_count == -1) {
|
||||
die "could not parse sphere header from $sphere_file";
|
||||
}
|
||||
$duration = $sample_count * 1.0 / $sample_rate;
|
||||
print "$reco $duration\n";
|
||||
} ' > $data/reco2dur; then
|
||||
echo "$0: successfully obtained recording lengths from sphere-file headers"
|
||||
else
|
||||
echo "$0: could not get recording lengths from sphere-file headers, using wav-to-duration"
|
||||
if ! command -v wav-to-duration >/dev/null; then
|
||||
echo "$0: wav-to-duration is not on your path"
|
||||
exit 1;
|
||||
fi
|
||||
|
||||
read_entire_file=false
|
||||
if grep -q 'sox.*speed' $data/wav.scp; then
|
||||
read_entire_file=true
|
||||
echo "$0: reading from the entire wav file to fix the problem caused by sox commands with speed perturbation. It is going to be slow."
|
||||
echo "... It is much faster if you call get_reco2dur.sh *before* doing the speed perturbation via e.g. perturb_data_dir_speed.sh or "
|
||||
echo "... perturb_data_dir_speed_3way.sh."
|
||||
fi
|
||||
|
||||
num_recos=$(wc -l <$data/wav.scp)
|
||||
if [ $nj -gt $num_recos ]; then
|
||||
nj=$num_recos
|
||||
fi
|
||||
|
||||
temp_data_dir=$data/wav${nj}split
|
||||
wavscps=$(for n in `seq $nj`; do echo $temp_data_dir/$n/wav.scp; done)
|
||||
subdirs=$(for n in `seq $nj`; do echo $temp_data_dir/$n; done)
|
||||
|
||||
if ! mkdir -p $subdirs >&/dev/null; then
|
||||
for n in `seq $nj`; do
|
||||
mkdir -p $temp_data_dir/$n
|
||||
done
|
||||
fi
|
||||
|
||||
utils/split_scp.pl $data/wav.scp $wavscps
|
||||
|
||||
|
||||
$cmd JOB=1:$nj $data/log/get_reco_durations.JOB.log \
|
||||
wav-to-duration --read-entire-file=$read_entire_file \
|
||||
scp:$temp_data_dir/JOB/wav.scp ark,t:$temp_data_dir/JOB/reco2dur || \
|
||||
{ echo "$0: there was a problem getting the durations"; exit 1; } # This could
|
||||
|
||||
for n in `seq $nj`; do
|
||||
cat $temp_data_dir/$n/reco2dur
|
||||
done > $data/reco2dur
|
||||
fi
|
||||
rm -r $temp_data_dir
|
||||
else
|
||||
echo "$0: Expected $data/wav.scp to exist"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
len1=$(wc -l < $data/wav.scp)
|
||||
len2=$(wc -l < $data/reco2dur)
|
||||
if [ "$len1" != "$len2" ]; then
|
||||
echo "$0: warning: length of reco2dur does not equal that of wav.scp, $len2 != $len1"
|
||||
if [ $len1 -gt $[$len2*2] ]; then
|
||||
echo "$0: less than half of recordings got a duration: failing."
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
echo "$0: computed $data/reco2dur"
|
||||
|
||||
exit 0
|
||||
29
egs/alimeeting/sa-asr/local/data/get_segments_for_data.sh
Executable file
@ -0,0 +1,29 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# This script operates on a data directory, such as in data/train/,
|
||||
# and writes new segments to stdout. The file 'segments' maps from
|
||||
# utterance to time offsets into a recording, with the format:
|
||||
# <utterance-id> <recording-id> <segment-begin> <segment-end>
|
||||
# This script assumes utterance and recording ids are the same (i.e., that
|
||||
# wav.scp is indexed by utterance), and uses durations from 'utt2dur',
|
||||
# created if necessary by get_utt2dur.sh.
|
||||
|
||||
. ./path.sh
|
||||
|
||||
if [ $# != 1 ]; then
|
||||
echo "Usage: $0 [options] <datadir>"
|
||||
echo "e.g.:"
|
||||
echo " $0 data/train > data/train/segments"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
data=$1
|
||||
|
||||
if [ ! -s $data/utt2dur ]; then
|
||||
local/data/get_utt2dur.sh $data 1>&2 || exit 1;
|
||||
fi
|
||||
|
||||
# <utt-id> <utt-id> 0 <utt-dur>
|
||||
awk '{ print $1, $1, 0, $2 }' $data/utt2dur
|
||||
|
||||
exit 0
|
||||
135
egs/alimeeting/sa-asr/local/data/get_utt2dur.sh
Executable file
@ -0,0 +1,135 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Copyright 2016 Johns Hopkins University (author: Daniel Povey)
|
||||
# Apache 2.0
|
||||
|
||||
# This script operates on a data directory, such as in data/train/, and adds the
|
||||
# utt2dur file if it does not already exist. The file 'utt2dur' maps from
|
||||
# utterance to the duration of the utterance in seconds. This script works it
|
||||
# out from the 'segments' file, or, if not present, from the wav.scp file (it
|
||||
# first tries interrogating the headers, and if this fails, it reads the wave
|
||||
# files in entirely.)
|
||||
|
||||
frame_shift=0.01
|
||||
cmd=run.pl
|
||||
nj=4
|
||||
read_entire_file=false
|
||||
|
||||
. utils/parse_options.sh
|
||||
. ./path.sh
|
||||
|
||||
if [ $# != 1 ]; then
|
||||
echo "Usage: $0 [options] <datadir>"
|
||||
echo "e.g.:"
|
||||
echo " $0 data/train"
|
||||
echo " Options:"
|
||||
echo " --frame-shift # frame shift in seconds. Only relevant when we are"
|
||||
echo " # getting duration from feats.scp, and only if the "
|
||||
echo " # file frame_shift does not exist (default: 0.01). "
|
||||
exit 1
|
||||
fi
|
||||
|
||||
export LC_ALL=C
|
||||
|
||||
data=$1
|
||||
|
||||
if [ -s $data/utt2dur ] && \
|
||||
[ $(wc -l < $data/utt2spk) -eq $(wc -l < $data/utt2dur) ]; then
|
||||
echo "$0: $data/utt2dur already exists with the expected length. We won't recompute it."
|
||||
exit 0;
|
||||
fi
|
||||
|
||||
if [ -s $data/segments ]; then
|
||||
echo "$0: working out $data/utt2dur from $data/segments"
|
||||
awk '{len=$4-$3; print $1, len;}' < $data/segments > $data/utt2dur
|
||||
elif [[ -s $data/frame_shift && -f $data/utt2num_frames ]]; then
|
||||
echo "$0: computing $data/utt2dur from $data/{frame_shift,utt2num_frames}."
|
||||
frame_shift=$(cat $data/frame_shift) || exit 1
|
||||
# The 1.5 correction is the typical value of (frame_length-frame_shift)/frame_shift.
|
||||
awk -v fs=$frame_shift '{ $2=($2+1.5)*fs; print }' <$data/utt2num_frames >$data/utt2dur
|
||||
elif [ -f $data/wav.scp ]; then
|
||||
echo "$0: segments file does not exist so getting durations from wave files"
|
||||
|
||||
# if the wav.scp contains only lines of the form
|
||||
# utt1 /foo/bar/sph2pipe -f wav /baz/foo.sph |
|
||||
if perl <$data/wav.scp -e '
|
||||
while (<>) { s/\|\s*$/ |/; # make sure final | is preceded by space.
|
||||
@A = split; if (!($#A == 5 && $A[1] =~ m/sph2pipe$/ &&
|
||||
$A[2] eq "-f" && $A[3] eq "wav" && $A[5] eq "|")) { exit(1); }
|
||||
$utt = $A[0]; $sphere_file = $A[4];
|
||||
|
||||
if (!open(F, "<$sphere_file")) { die "Error opening sphere file $sphere_file"; }
|
||||
$sample_rate = -1; $sample_count = -1;
|
||||
for ($n = 0; $n <= 30; $n++) {
|
||||
$line = <F>;
|
||||
if ($line =~ m/sample_rate -i (\d+)/) { $sample_rate = $1; }
|
||||
if ($line =~ m/sample_count -i (\d+)/) { $sample_count = $1; }
|
||||
if ($line =~ m/end_head/) { break; }
|
||||
}
|
||||
close(F);
|
||||
if ($sample_rate == -1 || $sample_count == -1) {
|
||||
die "could not parse sphere header from $sphere_file";
|
||||
}
|
||||
$duration = $sample_count * 1.0 / $sample_rate;
|
||||
print "$utt $duration\n";
|
||||
} ' > $data/utt2dur; then
|
||||
echo "$0: successfully obtained utterance lengths from sphere-file headers"
|
||||
else
|
||||
echo "$0: could not get utterance lengths from sphere-file headers, using wav-to-duration"
|
||||
if ! command -v wav-to-duration >/dev/null; then
|
||||
echo "$0: wav-to-duration is not on your path"
|
||||
exit 1;
|
||||
fi
|
||||
|
||||
if grep -q 'sox.*speed' $data/wav.scp; then
|
||||
read_entire_file=true
|
||||
echo "$0: reading from the entire wav file to fix the problem caused by sox commands with speed perturbation. It is going to be slow."
|
||||
echo "... It is much faster if you call get_utt2dur.sh *before* doing the speed perturbation via e.g. perturb_data_dir_speed.sh or "
|
||||
echo "... perturb_data_dir_speed_3way.sh."
|
||||
fi
|
||||
|
||||
|
||||
num_utts=$(wc -l <$data/utt2spk)
|
||||
if [ $nj -gt $num_utts ]; then
|
||||
nj=$num_utts
|
||||
fi
|
||||
|
||||
local/data/split_data.sh --per-utt $data $nj
|
||||
sdata=$data/split${nj}utt
|
||||
|
||||
$cmd JOB=1:$nj $data/log/get_durations.JOB.log \
|
||||
wav-to-duration --read-entire-file=$read_entire_file \
|
||||
scp:$sdata/JOB/wav.scp ark,t:$sdata/JOB/utt2dur || \
|
||||
{ echo "$0: there was a problem getting the durations"; exit 1; }
|
||||
|
||||
for n in `seq $nj`; do
|
||||
cat $sdata/$n/utt2dur
|
||||
done > $data/utt2dur
|
||||
fi
|
||||
elif [ -f $data/feats.scp ]; then
|
||||
echo "$0: wave file does not exist so getting durations from feats files"
|
||||
if [[ -s $data/frame_shift ]]; then
|
||||
frame_shift=$(cat $data/frame_shift) || exit 1
|
||||
echo "$0: using frame_shift=$frame_shift from file $data/frame_shift"
|
||||
fi
|
||||
# The 1.5 correction is the typical value of (frame_length-frame_shift)/frame_shift.
|
||||
feat-to-len scp:$data/feats.scp ark,t:- |
|
||||
awk -v frame_shift=$frame_shift '{print $1, ($2+1.5)*frame_shift}' >$data/utt2dur
|
||||
else
|
||||
echo "$0: Expected $data/wav.scp, $data/segments or $data/feats.scp to exist"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
len1=$(wc -l < $data/utt2spk)
|
||||
len2=$(wc -l < $data/utt2dur)
|
||||
if [ "$len1" != "$len2" ]; then
|
||||
echo "$0: warning: length of utt2dur does not equal that of utt2spk, $len2 != $len1"
|
||||
if [ $len1 -gt $[$len2*2] ]; then
|
||||
echo "$0: less than half of utterances got a duration: failing."
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
echo "$0: computed $data/utt2dur"
|
||||
|
||||
exit 0
|
||||
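The +1.5 frame correction used above approximates (frame_length - frame_shift) / frame_shift, i.e. it accounts for the analysis window being longer than the hop. A tiny worked example (assuming the common 25 ms window / 10 ms shift; the frame count is made up):

# Sketch: convert a Kaldi frame count into seconds, as in the awk lines above.
frame_shift = 0.01            # 10 ms hop
num_frames = 998              # value from utt2num_frames (hypothetical)
duration = (num_frames + 1.5) * frame_shift
print(round(duration, 3))     # 9.995 seconds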
160
egs/alimeeting/sa-asr/local/data/split_data.sh
Executable file
@ -0,0 +1,160 @@
|
||||
#!/usr/bin/env bash
|
||||
# Copyright 2010-2013 Microsoft Corporation
|
||||
# Johns Hopkins University (Author: Daniel Povey)
|
||||
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# THIS CODE IS PROVIDED *AS IS* BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
|
||||
# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY IMPLIED
|
||||
# WARRANTIES OR CONDITIONS OF TITLE, FITNESS FOR A PARTICULAR PURPOSE,
|
||||
# MERCHANTABLITY OR NON-INFRINGEMENT.
|
||||
# See the Apache 2 License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
split_per_spk=true
|
||||
if [ "$1" == "--per-utt" ]; then
|
||||
split_per_spk=false
|
||||
shift
|
||||
fi
|
||||
|
||||
if [ $# != 2 ]; then
|
||||
echo "Usage: $0 [--per-utt] <data-dir> <num-to-split>"
|
||||
echo "E.g.: $0 data/train 50"
|
||||
echo "It creates its output in e.g. data/train/split50/{1,2,3,...50}, or if the "
|
||||
echo "--per-utt option was given, in e.g. data/train/split50utt/{1,2,3,...50}."
|
||||
echo ""
|
||||
echo "This script will not split the data-dir if it detects that the output is newer than the input."
|
||||
echo "By default it splits per speaker (so each speaker is in only one split dir),"
|
||||
echo "but with the --per-utt option it will ignore the speaker information while splitting."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
data=$1
|
||||
numsplit=$2
|
||||
|
||||
if ! [ "$numsplit" -gt 0 ]; then
|
||||
echo "Invalid num-split argument $numsplit";
|
||||
exit 1;
|
||||
fi
|
||||
|
||||
if $split_per_spk; then
|
||||
warning_opt=
|
||||
else
|
||||
# suppress warnings from filter_scps.pl about 'some input lines were output
|
||||
# to multiple files'.
|
||||
warning_opt="--no-warn"
|
||||
fi
|
||||
|
||||
n=0;
|
||||
feats=""
|
||||
wavs=""
|
||||
utt2spks=""
|
||||
texts=""
|
||||
|
||||
nu=`cat $data/utt2spk | wc -l`
|
||||
nf=`cat $data/feats.scp 2>/dev/null | wc -l`
|
||||
nt=`cat $data/text 2>/dev/null | wc -l` # take it as zero if no such file
|
||||
if [ -f $data/feats.scp ] && [ $nu -ne $nf ]; then
|
||||
echo "** split_data.sh: warning, #lines is (utt2spk,feats.scp) is ($nu,$nf); you can "
|
||||
echo "** use local/fix_data_dir.sh $data to fix this."
|
||||
fi
|
||||
if [ -f $data/text ] && [ $nu -ne $nt ]; then
|
||||
echo "** split_data.sh: warning, #lines is (utt2spk,text) is ($nu,$nt); you can "
|
||||
echo "** use local/fix_data_dir.sh to fix this."
|
||||
fi
|
||||
|
||||
|
||||
if $split_per_spk; then
|
||||
utt2spk_opt="--utt2spk=$data/utt2spk"
|
||||
utt=""
|
||||
else
|
||||
utt2spk_opt=
|
||||
utt="utt"
|
||||
fi
|
||||
|
||||
s1=$data/split${numsplit}${utt}/1
|
||||
if [ ! -d $s1 ]; then
|
||||
need_to_split=true
|
||||
else
|
||||
need_to_split=false
|
||||
for f in utt2spk spk2utt spk2warp feats.scp text wav.scp cmvn.scp spk2gender \
|
||||
vad.scp segments reco2file_and_channel utt2lang; do
|
||||
if [[ -f $data/$f && ( ! -f $s1/$f || $s1/$f -ot $data/$f ) ]]; then
|
||||
need_to_split=true
|
||||
fi
|
||||
done
|
||||
fi
|
||||
|
||||
if ! $need_to_split; then
|
||||
exit 0;
|
||||
fi
|
||||
|
||||
utt2spks=$(for n in `seq $numsplit`; do echo $data/split${numsplit}${utt}/$n/utt2spk; done)
|
||||
|
||||
directories=$(for n in `seq $numsplit`; do echo $data/split${numsplit}${utt}/$n; done)
|
||||
|
||||
# if this mkdir fails due to argument-list being too long, iterate.
|
||||
if ! mkdir -p $directories >&/dev/null; then
|
||||
for n in `seq $numsplit`; do
|
||||
mkdir -p $data/split${numsplit}${utt}/$n
|
||||
done
|
||||
fi
|
||||
|
||||
# If lockfile is not installed, just don't lock it. It's not a big deal.
|
||||
which lockfile >&/dev/null && lockfile -l 60 $data/.split_lock
|
||||
trap 'rm -f $data/.split_lock' EXIT HUP INT PIPE TERM
|
||||
|
||||
utils/split_scp.pl $utt2spk_opt $data/utt2spk $utt2spks || exit 1
|
||||
|
||||
for n in `seq $numsplit`; do
|
||||
dsn=$data/split${numsplit}${utt}/$n
|
||||
local/utt2spk_to_spk2utt.pl $dsn/utt2spk > $dsn/spk2utt || exit 1;
|
||||
done
|
||||
|
||||
maybe_wav_scp=
|
||||
if [ ! -f $data/segments ]; then
|
||||
maybe_wav_scp=wav.scp # If there is no segments file, then wav file is
|
||||
# indexed per utt.
|
||||
fi
|
||||
|
||||
# split some things that are indexed by utterance.
|
||||
for f in feats.scp text vad.scp utt2lang $maybe_wav_scp utt2dur utt2num_frames; do
|
||||
if [ -f $data/$f ]; then
|
||||
utils/filter_scps.pl JOB=1:$numsplit \
|
||||
$data/split${numsplit}${utt}/JOB/utt2spk $data/$f $data/split${numsplit}${utt}/JOB/$f || exit 1;
|
||||
fi
|
||||
done
|
||||
|
||||
# split some things that are indexed by speaker
|
||||
for f in spk2gender spk2warp cmvn.scp; do
|
||||
if [ -f $data/$f ]; then
|
||||
utils/filter_scps.pl $warning_opt JOB=1:$numsplit \
|
||||
$data/split${numsplit}${utt}/JOB/spk2utt $data/$f $data/split${numsplit}${utt}/JOB/$f || exit 1;
|
||||
fi
|
||||
done
|
||||
|
||||
if [ -f $data/segments ]; then
|
||||
utils/filter_scps.pl JOB=1:$numsplit \
|
||||
$data/split${numsplit}${utt}/JOB/utt2spk $data/segments $data/split${numsplit}${utt}/JOB/segments || exit 1
|
||||
for n in `seq $numsplit`; do
|
||||
dsn=$data/split${numsplit}${utt}/$n
|
||||
awk '{print $2;}' $dsn/segments | sort | uniq > $dsn/tmp.reco # recording-ids.
|
||||
done
|
||||
if [ -f $data/reco2file_and_channel ]; then
|
||||
utils/filter_scps.pl $warning_opt JOB=1:$numsplit \
|
||||
$data/split${numsplit}${utt}/JOB/tmp.reco $data/reco2file_and_channel \
|
||||
$data/split${numsplit}${utt}/JOB/reco2file_and_channel || exit 1
|
||||
fi
|
||||
if [ -f $data/wav.scp ]; then
|
||||
utils/filter_scps.pl $warning_opt JOB=1:$numsplit \
|
||||
$data/split${numsplit}${utt}/JOB/tmp.reco $data/wav.scp \
|
||||
$data/split${numsplit}${utt}/JOB/wav.scp || exit 1
|
||||
fi
|
||||
for f in $data/split${numsplit}${utt}/*/tmp.reco; do rm $f; done
|
||||
fi
|
||||
|
||||
exit 0
|
||||
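split_scp.pl does the actual balancing; as a rough illustration of what a per-speaker split guarantees (all utterances of a speaker land in exactly one chunk), here is a simple round-robin sketch, not the exact algorithm:

# Sketch: per-speaker round-robin split of utt2spk into N chunks (not split_scp.pl's balancing).
from collections import defaultdict

def split_per_spk(utt2spk_lines, num_split):
    spk2utts = defaultdict(list)
    for line in utt2spk_lines:
        utt, spk = line.split()
        spk2utts[spk].append(utt)
    chunks = [[] for _ in range(num_split)]
    for i, spk in enumerate(sorted(spk2utts)):
        chunks[i % num_split].extend(spk2utts[spk])   # a whole speaker goes to one chunk
    return chunks

print(split_per_spk(["u1 spkA", "u2 spkA", "u3 spkB"], 2))   # [['u1', 'u2'], ['u3']]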
6
egs/alimeeting/sa-asr/local/download_xvector_model.py
Normal file
@ -0,0 +1,6 @@
|
||||
from modelscope.hub.snapshot_download import snapshot_download
|
||||
import sys
|
||||
|
||||
|
||||
cache_dir = sys.argv[1]
|
||||
model_dir = snapshot_download('damo/speech_xvector_sv-zh-cn-cnceleb-16k-spk3465-pytorch', cache_dir=cache_dir)
|
||||
22
egs/alimeeting/sa-asr/local/filter_utt2spk_all_fifo.py
Normal file
@ -0,0 +1,22 @@
|
||||
import sys
|
||||
if __name__=="__main__":
|
||||
uttid_path=sys.argv[1]
|
||||
src_path=sys.argv[2]
|
||||
tgt_path=sys.argv[3]
|
||||
uttid_file=open(uttid_path,'r')
|
||||
uttid_line=uttid_file.readlines()
|
||||
uttid_file.close()
|
||||
ori_utt2spk_all_fifo_file=open(src_path+'/utt2spk_all_fifo','r')
|
||||
ori_utt2spk_all_fifo_line=ori_utt2spk_all_fifo_file.readlines()
|
||||
ori_utt2spk_all_fifo_file.close()
|
||||
new_utt2spk_all_fifo_file=open(tgt_path+'/utt2spk_all_fifo','w')
|
||||
|
||||
uttid_list=[]
|
||||
for line in uttid_line:
|
||||
uttid_list.append(line.strip())
|
||||
|
||||
for line in ori_utt2spk_all_fifo_line:
|
||||
if line.strip().split(' ')[0] in uttid_list:
|
||||
new_utt2spk_all_fifo_file.write(line)
|
||||
|
||||
new_utt2spk_all_fifo_file.close()
|
||||
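Functionally, the script above keeps only the utt2spk_all_fifo lines whose utterance id appears in the given id list. A compact sketch of the same filtering with a set for constant-time membership tests (the file paths are hypothetical):

# Sketch: same filtering as above, using a set instead of a list for the id lookup.
with open("uttid_list") as f:
    keep = {l.strip() for l in f}

with open("src_dir/utt2spk_all_fifo") as src, open("tgt_dir/utt2spk_all_fifo", "w") as tgt:
    for line in src:
        if line.strip().split(" ")[0] in keep:
            tgt.write(line)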
215
egs/alimeeting/sa-asr/local/fix_data_dir.sh
Executable file
@ -0,0 +1,215 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# This script makes sure that only the segments present in
|
||||
# all of "feats.scp", "wav.scp" [if present], segments [if present]
|
||||
# text, and utt2spk are present in any of them.
|
||||
# It puts the original contents of data-dir into
|
||||
# data-dir/.backup
|
||||
|
||||
cmd="$@"
|
||||
|
||||
utt_extra_files=
|
||||
spk_extra_files=
|
||||
|
||||
. utils/parse_options.sh
|
||||
|
||||
if [ $# != 1 ]; then
|
||||
echo "Usage: utils/data/fix_data_dir.sh <data-dir>"
|
||||
echo "e.g.: utils/data/fix_data_dir.sh data/train"
|
||||
echo "This script helps ensure that the various files in a data directory"
|
||||
echo "are correctly sorted and filtered, for example removing utterances"
|
||||
echo "that have no features (if feats.scp is present)"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
data=$1
|
||||
|
||||
if [ -f $data/images.scp ]; then
|
||||
image/fix_data_dir.sh $cmd
|
||||
exit $?
|
||||
fi
|
||||
|
||||
mkdir -p $data/.backup
|
||||
|
||||
[ ! -d $data ] && echo "$0: no such directory $data" && exit 1;
|
||||
|
||||
[ ! -f $data/utt2spk ] && echo "$0: no such file $data/utt2spk" && exit 1;
|
||||
|
||||
set -e -o pipefail -u
|
||||
|
||||
tmpdir=$(mktemp -d /tmp/kaldi.XXXX);
|
||||
trap 'rm -rf "$tmpdir"' EXIT HUP INT PIPE TERM
|
||||
|
||||
export LC_ALL=C
|
||||
|
||||
function check_sorted {
|
||||
file=$1
|
||||
sort -k1,1 -u <$file >$file.tmp
|
||||
if ! cmp -s $file $file.tmp; then
|
||||
echo "$0: file $1 is not in sorted order or not unique, sorting it"
|
||||
mv $file.tmp $file
|
||||
else
|
||||
rm $file.tmp
|
||||
fi
|
||||
}
|
||||
|
||||
for x in utt2spk spk2utt feats.scp text segments wav.scp cmvn.scp vad.scp \
|
||||
reco2file_and_channel spk2gender utt2lang utt2uniq utt2dur reco2dur utt2num_frames; do
|
||||
if [ -f $data/$x ]; then
|
||||
cp $data/$x $data/.backup/$x
|
||||
check_sorted $data/$x
|
||||
fi
|
||||
done
|
||||
|
||||
|
||||
function filter_file {
|
||||
filter=$1
|
||||
file_to_filter=$2
|
||||
cp $file_to_filter ${file_to_filter}.tmp
|
||||
utils/filter_scp.pl $filter ${file_to_filter}.tmp > $file_to_filter
|
||||
if ! cmp ${file_to_filter}.tmp $file_to_filter >&/dev/null; then
|
||||
length1=$(cat ${file_to_filter}.tmp | wc -l)
|
||||
length2=$(cat ${file_to_filter} | wc -l)
|
||||
if [ $length1 -ne $length2 ]; then
|
||||
echo "$0: filtered $file_to_filter from $length1 to $length2 lines based on filter $filter."
|
||||
fi
|
||||
fi
|
||||
rm $file_to_filter.tmp
|
||||
}
|
||||
|
||||
function filter_recordings {
|
||||
# We call this once before the stage when we filter on utterance-id, and once
|
||||
# after.
|
||||
|
||||
if [ -f $data/segments ]; then
|
||||
# We have a segments file -> we need to filter this and the file wav.scp, and
|
||||
# reco2file_and_utt, if it exists, to make sure they have the same list of
|
||||
# recording-ids.
|
||||
|
||||
if [ ! -f $data/wav.scp ]; then
|
||||
echo "$0: $data/segments exists but not $data/wav.scp"
|
||||
exit 1;
|
||||
fi
|
||||
awk '{print $2}' < $data/segments | sort | uniq > $tmpdir/recordings
|
||||
n1=$(cat $tmpdir/recordings | wc -l)
|
||||
[ ! -s $tmpdir/recordings ] && \
|
||||
echo "Empty list of recordings (bad file $data/segments)?" && exit 1;
|
||||
utils/filter_scp.pl $data/wav.scp $tmpdir/recordings > $tmpdir/recordings.tmp
|
||||
mv $tmpdir/recordings.tmp $tmpdir/recordings
|
||||
|
||||
|
||||
cp $data/segments{,.tmp}; awk '{print $2, $1, $3, $4}' <$data/segments.tmp >$data/segments
|
||||
filter_file $tmpdir/recordings $data/segments
|
||||
cp $data/segments{,.tmp}; awk '{print $2, $1, $3, $4}' <$data/segments.tmp >$data/segments
|
||||
rm $data/segments.tmp
|
||||
|
||||
filter_file $tmpdir/recordings $data/wav.scp
|
||||
[ -f $data/reco2file_and_channel ] && filter_file $tmpdir/recordings $data/reco2file_and_channel
|
||||
[ -f $data/reco2dur ] && filter_file $tmpdir/recordings $data/reco2dur
|
||||
true
|
||||
fi
|
||||
}
|
||||
|
||||
function filter_speakers {
|
||||
# throughout this program, we regard utt2spk as primary and spk2utt as derived, so...
|
||||
local/utt2spk_to_spk2utt.pl $data/utt2spk > $data/spk2utt
|
||||
|
||||
cat $data/spk2utt | awk '{print $1}' > $tmpdir/speakers
|
||||
for s in cmvn.scp spk2gender; do
|
||||
f=$data/$s
|
||||
if [ -f $f ]; then
|
||||
filter_file $f $tmpdir/speakers
|
||||
fi
|
||||
done
|
||||
|
||||
filter_file $tmpdir/speakers $data/spk2utt
|
||||
local/spk2utt_to_utt2spk.pl $data/spk2utt > $data/utt2spk
|
||||
|
||||
for s in cmvn.scp spk2gender $spk_extra_files; do
|
||||
f=$data/$s
|
||||
if [ -f $f ]; then
|
||||
filter_file $tmpdir/speakers $f
|
||||
fi
|
||||
done
|
||||
}
|
||||
|
||||
function filter_utts {
|
||||
cat $data/utt2spk | awk '{print $1}' > $tmpdir/utts
|
||||
|
||||
! cat $data/utt2spk | sort | cmp - $data/utt2spk && \
|
||||
echo "utt2spk is not in sorted order (fix this yourself)" && exit 1;
|
||||
|
||||
! cat $data/utt2spk | sort -k2 | cmp - $data/utt2spk && \
|
||||
echo "utt2spk is not in sorted order when sorted first on speaker-id " && \
|
||||
echo "(fix this by making speaker-ids prefixes of utt-ids)" && exit 1;
|
||||
|
||||
! cat $data/spk2utt | sort | cmp - $data/spk2utt && \
|
||||
echo "spk2utt is not in sorted order (fix this yourself)" && exit 1;
|
||||
|
||||
if [ -f $data/utt2uniq ]; then
|
||||
! cat $data/utt2uniq | sort | cmp - $data/utt2uniq && \
|
||||
echo "utt2uniq is not in sorted order (fix this yourself)" && exit 1;
|
||||
fi
|
||||
|
||||
maybe_wav=
|
||||
maybe_reco2dur=
|
||||
[ ! -f $data/segments ] && maybe_wav=wav.scp # wav indexed by utts only if segments does not exist.
|
||||
[ -s $data/reco2dur ] && [ ! -f $data/segments ] && maybe_reco2dur=reco2dur # reco2dur indexed by utts
|
||||
|
||||
maybe_utt2dur=
|
||||
if [ -f $data/utt2dur ]; then
|
||||
cat $data/utt2dur | \
|
||||
awk '{ if (NF == 2 && $2 > 0) { print }}' > $data/utt2dur.ok || exit 1
|
||||
maybe_utt2dur=utt2dur.ok
|
||||
fi
|
||||
|
||||
maybe_utt2num_frames=
|
||||
if [ -f $data/utt2num_frames ]; then
|
||||
cat $data/utt2num_frames | \
|
||||
awk '{ if (NF == 2 && $2 > 0) { print }}' > $data/utt2num_frames.ok || exit 1
|
||||
maybe_utt2num_frames=utt2num_frames.ok
|
||||
fi
|
||||
|
||||
for x in feats.scp text segments utt2lang $maybe_wav $maybe_utt2dur $maybe_utt2num_frames; do
|
||||
if [ -f $data/$x ]; then
|
||||
utils/filter_scp.pl $data/$x $tmpdir/utts > $tmpdir/utts.tmp
|
||||
mv $tmpdir/utts.tmp $tmpdir/utts
|
||||
fi
|
||||
done
|
||||
rm $data/utt2dur.ok 2>/dev/null || true
|
||||
rm $data/utt2num_frames.ok 2>/dev/null || true
|
||||
|
||||
[ ! -s $tmpdir/utts ] && echo "fix_data_dir.sh: no utterances remained: not proceeding further." && \
|
||||
rm $tmpdir/utts && exit 1;
|
||||
|
||||
|
||||
if [ -f $data/utt2spk ]; then
|
||||
new_nutts=$(cat $tmpdir/utts | wc -l)
|
||||
old_nutts=$(cat $data/utt2spk | wc -l)
|
||||
if [ $new_nutts -ne $old_nutts ]; then
|
||||
echo "fix_data_dir.sh: kept $new_nutts utterances out of $old_nutts"
|
||||
else
|
||||
echo "fix_data_dir.sh: kept all $old_nutts utterances."
|
||||
fi
|
||||
fi
|
||||
|
||||
for x in utt2spk utt2uniq feats.scp vad.scp text segments utt2lang utt2dur utt2num_frames $maybe_wav $maybe_reco2dur $utt_extra_files; do
|
||||
if [ -f $data/$x ]; then
|
||||
cp $data/$x $data/.backup/$x
|
||||
if ! cmp -s $data/$x <( utils/filter_scp.pl $tmpdir/utts $data/$x ) ; then
|
||||
utils/filter_scp.pl $tmpdir/utts $data/.backup/$x > $data/$x
|
||||
fi
|
||||
fi
|
||||
done
|
||||
|
||||
}
|
||||
|
||||
filter_recordings
|
||||
filter_speakers
|
||||
filter_utts
|
||||
filter_speakers
|
||||
filter_recordings
|
||||
|
||||
local/utt2spk_to_spk2utt.pl $data/utt2spk > $data/spk2utt
|
||||
|
||||
echo "fix_data_dir.sh: old files are kept in $data/.backup"
|
||||
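A usage sketch for the script above, assuming an AliMeeting-style data directory (the directory name is an example):
# sort, filter and back up the files in the data directory
local/fix_data_dir.sh data/Train_Ali_far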
243
egs/alimeeting/sa-asr/local/format_wav_scp.py
Executable file
@ -0,0 +1,243 @@
|
||||
#!/usr/bin/env python3
|
||||
import argparse
|
||||
import logging
|
||||
from io import BytesIO
|
||||
from pathlib import Path
|
||||
from typing import Tuple, Optional
|
||||
|
||||
import kaldiio
|
||||
import humanfriendly
|
||||
import numpy as np
|
||||
import resampy
|
||||
import soundfile
|
||||
from tqdm import tqdm
|
||||
from typeguard import check_argument_types
|
||||
|
||||
from funasr.utils.cli_utils import get_commandline_args
|
||||
from funasr.fileio.read_text import read_2column_text
|
||||
from funasr.fileio.sound_scp import SoundScpWriter
|
||||
|
||||
|
||||
def humanfriendly_or_none(value: str):
|
||||
if value in ("none", "None", "NONE"):
|
||||
return None
|
||||
return humanfriendly.parse_size(value)
|
||||
|
||||
|
||||
def str2int_tuple(integers: str) -> Optional[Tuple[int, ...]]:
|
||||
"""
|
||||
|
||||
>>> str2int_tuple('3,4,5')
|
||||
(3, 4, 5)
|
||||
|
||||
"""
|
||||
assert check_argument_types()
|
||||
if integers.strip() in ("none", "None", "NONE", "null", "Null", "NULL"):
|
||||
return None
|
||||
return tuple(map(int, integers.strip().split(",")))
|
||||
|
||||
|
||||
def main():
|
||||
logfmt = "%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s"
|
||||
logging.basicConfig(level=logging.INFO, format=logfmt)
|
||||
logging.info(get_commandline_args())
|
||||
|
||||
parser = argparse.ArgumentParser(
|
||||
description='Create a wav file list from "wav.scp"',
|
||||
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
|
||||
)
|
||||
parser.add_argument("scp")
|
||||
parser.add_argument("outdir")
|
||||
parser.add_argument(
|
||||
"--name",
|
||||
default="wav",
|
||||
help="Specify the prefix word of output file name " 'such as "wav.scp"',
|
||||
)
|
||||
parser.add_argument("--segments", default=None)
|
||||
parser.add_argument(
|
||||
"--fs",
|
||||
type=humanfriendly_or_none,
|
||||
default=None,
|
||||
help="If the sampling rate specified, " "Change the sampling rate.",
|
||||
)
|
||||
parser.add_argument("--audio-format", default="wav")
|
||||
group = parser.add_mutually_exclusive_group()
|
||||
group.add_argument("--ref-channels", default=None, type=str2int_tuple)
|
||||
group.add_argument("--utt2ref-channels", default=None, type=str)
|
||||
args = parser.parse_args()
|
||||
|
||||
out_num_samples = Path(args.outdir) / f"utt2num_samples"
|
||||
|
||||
if args.ref_channels is not None:
|
||||
|
||||
def utt2ref_channels(x) -> Tuple[int, ...]:
|
||||
return args.ref_channels
|
||||
|
||||
elif args.utt2ref_channels is not None:
|
||||
utt2ref_channels_dict = read_2column_text(args.utt2ref_channels)
|
||||
|
||||
def utt2ref_channels(x, d=utt2ref_channels_dict) -> Tuple[int, ...]:
|
||||
chs_str = d[x]
|
||||
return tuple(map(int, chs_str.split()))
|
||||
|
||||
else:
|
||||
utt2ref_channels = None
|
||||
|
||||
Path(args.outdir).mkdir(parents=True, exist_ok=True)
|
||||
out_wavscp = Path(args.outdir) / f"{args.name}.scp"
|
||||
if args.segments is not None:
|
||||
# Note: kaldiio supports only wav-pcm-int16le file.
|
||||
loader = kaldiio.load_scp_sequential(args.scp, segments=args.segments)
|
||||
if args.audio_format.endswith("ark"):
|
||||
fark = open(Path(args.outdir) / f"data_{args.name}.ark", "wb")
|
||||
fscp = out_wavscp.open("w")
|
||||
else:
|
||||
writer = SoundScpWriter(
|
||||
args.outdir,
|
||||
out_wavscp,
|
||||
format=args.audio_format,
|
||||
)
|
||||
|
||||
with out_num_samples.open("w") as fnum_samples:
|
||||
for uttid, (rate, wave) in tqdm(loader):
|
||||
# wave: (Time,) or (Time, Nmic)
|
||||
if wave.ndim == 2 and utt2ref_channels is not None:
|
||||
wave = wave[:, utt2ref_channels(uttid)]
|
||||
|
||||
if args.fs is not None and args.fs != rate:
|
||||
# FIXME(kamo): To use sox?
|
||||
wave = resampy.resample(
|
||||
wave.astype(np.float64), rate, args.fs, axis=0
|
||||
)
|
||||
wave = wave.astype(np.int16)
|
||||
rate = args.fs
|
||||
if args.audio_format.endswith("ark"):
|
||||
if "flac" in args.audio_format:
|
||||
suf = "flac"
|
||||
elif "wav" in args.audio_format:
|
||||
suf = "wav"
|
||||
else:
|
||||
raise RuntimeError("wav.ark or flac")
|
||||
|
||||
# NOTE(kamo): Using extended ark format style here.
|
||||
# This format is incompatible with Kaldi
|
||||
kaldiio.save_ark(
|
||||
fark,
|
||||
{uttid: (wave, rate)},
|
||||
scp=fscp,
|
||||
append=True,
|
||||
write_function=f"soundfile_{suf}",
|
||||
)
|
||||
|
||||
else:
|
||||
writer[uttid] = rate, wave
|
||||
fnum_samples.write(f"{uttid} {len(wave)}\n")
|
||||
else:
|
||||
if args.audio_format.endswith("ark"):
|
||||
fark = open(Path(args.outdir) / f"data_{args.name}.ark", "wb")
|
||||
else:
|
||||
wavdir = Path(args.outdir) / f"data_{args.name}"
|
||||
wavdir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
with Path(args.scp).open("r") as fscp, out_wavscp.open(
|
||||
"w"
|
||||
) as fout, out_num_samples.open("w") as fnum_samples:
|
||||
for line in tqdm(fscp):
|
||||
uttid, wavpath = line.strip().split(None, 1)
|
||||
|
||||
if wavpath.endswith("|"):
|
||||
# Streaming input e.g. cat a.wav |
|
||||
with kaldiio.open_like_kaldi(wavpath, "rb") as f:
|
||||
with BytesIO(f.read()) as g:
|
||||
wave, rate = soundfile.read(g, dtype=np.int16)
|
||||
if wave.ndim == 2 and utt2ref_channels is not None:
|
||||
wave = wave[:, utt2ref_channels(uttid)]
|
||||
|
||||
if args.fs is not None and args.fs != rate:
|
||||
# FIXME(kamo): To use sox?
|
||||
wave = resampy.resample(
|
||||
wave.astype(np.float64), rate, args.fs, axis=0
|
||||
)
|
||||
wave = wave.astype(np.int16)
|
||||
rate = args.fs
|
||||
|
||||
if args.audio_format.endswith("ark"):
|
||||
if "flac" in args.audio_format:
|
||||
suf = "flac"
|
||||
elif "wav" in args.audio_format:
|
||||
suf = "wav"
|
||||
else:
|
||||
raise RuntimeError("wav.ark or flac")
|
||||
|
||||
# NOTE(kamo): Using extended ark format style here.
|
||||
# This format is incompatible with Kaldi
|
||||
kaldiio.save_ark(
|
||||
fark,
|
||||
{uttid: (wave, rate)},
|
||||
scp=fout,
|
||||
append=True,
|
||||
write_function=f"soundfile_{suf}",
|
||||
)
|
||||
else:
|
||||
owavpath = str(wavdir / f"{uttid}.{args.audio_format}")
|
||||
soundfile.write(owavpath, wave, rate)
|
||||
fout.write(f"{uttid} {owavpath}\n")
|
||||
else:
|
||||
wave, rate = soundfile.read(wavpath, dtype=np.int16)
|
||||
if wave.ndim == 2 and utt2ref_channels is not None:
|
||||
wave = wave[:, utt2ref_channels(uttid)]
|
||||
save_asis = False
|
||||
|
||||
elif args.audio_format.endswith("ark"):
|
||||
save_asis = False
|
||||
|
||||
elif Path(wavpath).suffix == "." + args.audio_format and (
|
||||
args.fs is None or args.fs == rate
|
||||
):
|
||||
save_asis = True
|
||||
|
||||
else:
|
||||
save_asis = False
|
||||
|
||||
if save_asis:
|
||||
# Neither --segments nor --fs are specified and
|
||||
# the line doesn't end with "|",
|
||||
# i.e. not using unix-pipe,
|
||||
# only in this case,
|
||||
# just using the original file as is.
|
||||
fout.write(f"{uttid} {wavpath}\n")
|
||||
else:
|
||||
if args.fs is not None and args.fs != rate:
|
||||
# FIXME(kamo): To use sox?
|
||||
wave = resampy.resample(
|
||||
wave.astype(np.float64), rate, args.fs, axis=0
|
||||
)
|
||||
wave = wave.astype(np.int16)
|
||||
rate = args.fs
|
||||
|
||||
if args.audio_format.endswith("ark"):
|
||||
if "flac" in args.audio_format:
|
||||
suf = "flac"
|
||||
elif "wav" in args.audio_format:
|
||||
suf = "wav"
|
||||
else:
|
||||
raise RuntimeError("wav.ark or flac")
|
||||
|
||||
# NOTE(kamo): Using extended ark format style here.
|
||||
# This format is not supported in Kaldi.
|
||||
kaldiio.save_ark(
|
||||
fark,
|
||||
{uttid: (wave, rate)},
|
||||
scp=fout,
|
||||
append=True,
|
||||
write_function=f"soundfile_{suf}",
|
||||
)
|
||||
else:
|
||||
owavpath = str(wavdir / f"{uttid}.{args.audio_format}")
|
||||
soundfile.write(owavpath, wave, rate)
|
||||
fout.write(f"{uttid} {owavpath}\n")
|
||||
fnum_samples.write(f"{uttid} {len(wave)}\n")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
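A hedged single-job invocation sketch for the script above; all paths and the sampling rate are illustrative:
python local/format_wav_scp.py --fs 16000 --audio-format wav \
    --segments data/Eval_Ali_far/segments \
    data/Eval_Ali_far/wav.scp dump/raw/Eval_Ali_far/format.1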
142
egs/alimeeting/sa-asr/local/format_wav_scp.sh
Executable file
@ -0,0 +1,142 @@
|
||||
#!/usr/bin/env bash
|
||||
set -euo pipefail
|
||||
SECONDS=0
|
||||
log() {
|
||||
local fname=${BASH_SOURCE[1]##*/}
|
||||
echo -e "$(date '+%Y-%m-%dT%H:%M:%S') (${fname}:${BASH_LINENO[0]}:${FUNCNAME[1]}) $*"
|
||||
}
|
||||
help_message=$(cat << EOF
|
||||
Usage: $0 <in-wav.scp> <out-datadir> [<logdir> [<outdir>]]
|
||||
e.g.
|
||||
$0 data/test/wav.scp data/test_format/
|
||||
|
||||
Format 'wav.scp': in short,
convert a "kaldi-datadir" into a "modified-kaldi-datadir"

The 'wav.scp' format in kaldi is very flexible,
e.g. it can use a unix pipe to describe the wav file,
but this sometimes looks confusing and makes scripts more complex.
This tool creates actual wav files from 'wav.scp'
and also segments the wav files using 'segments'.
|
||||
|
||||
Options
|
||||
--fs <fs>
|
||||
--segments <segments>
|
||||
--nj <nj>
|
||||
--cmd <cmd>
|
||||
EOF
|
||||
)
|
||||
|
||||
out_filename=wav.scp
|
||||
cmd=utils/run.pl
|
||||
nj=30
|
||||
fs=none
|
||||
segments=
|
||||
|
||||
ref_channels=
|
||||
utt2ref_channels=
|
||||
|
||||
audio_format=wav
|
||||
write_utt2num_samples=true
|
||||
|
||||
log "$0 $*"
|
||||
. utils/parse_options.sh
|
||||
|
||||
if [ $# -ne 2 ] && [ $# -ne 3 ] && [ $# -ne 4 ]; then
|
||||
log "${help_message}"
|
||||
log "Error: invalid command line arguments"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
. ./path.sh # Setup the environment
|
||||
|
||||
scp=$1
|
||||
if [ ! -f "${scp}" ]; then
|
||||
log "${help_message}"
|
||||
echo "$0: Error: No such file: ${scp}"
|
||||
exit 1
|
||||
fi
|
||||
dir=$2
|
||||
|
||||
|
||||
if [ $# -eq 2 ]; then
|
||||
logdir=${dir}/logs
|
||||
outdir=${dir}/data
|
||||
|
||||
elif [ $# -eq 3 ]; then
|
||||
logdir=$3
|
||||
outdir=${dir}/data
|
||||
|
||||
elif [ $# -eq 4 ]; then
|
||||
logdir=$3
|
||||
outdir=$4
|
||||
fi
|
||||
|
||||
|
||||
mkdir -p ${logdir}
|
||||
|
||||
rm -f "${dir}/${out_filename}"
|
||||
|
||||
|
||||
opts=
|
||||
if [ -n "${utt2ref_channels}" ]; then
|
||||
opts="--utt2ref-channels ${utt2ref_channels} "
|
||||
elif [ -n "${ref_channels}" ]; then
|
||||
opts="--ref-channels ${ref_channels} "
|
||||
fi
|
||||
|
||||
|
||||
if [ -n "${segments}" ]; then
|
||||
log "[info]: using ${segments}"
|
||||
nutt=$(<${segments} wc -l)
|
||||
nj=$((nj<nutt?nj:nutt))
|
||||
|
||||
split_segments=""
|
||||
for n in $(seq ${nj}); do
|
||||
split_segments="${split_segments} ${logdir}/segments.${n}"
|
||||
done
|
||||
|
||||
utils/split_scp.pl "${segments}" ${split_segments}
|
||||
|
||||
${cmd} "JOB=1:${nj}" "${logdir}/format_wav_scp.JOB.log" \
|
||||
local/format_wav_scp.py \
|
||||
${opts} \
|
||||
--fs ${fs} \
|
||||
--audio-format "${audio_format}" \
|
||||
"--segment=${logdir}/segments.JOB" \
|
||||
"${scp}" "${outdir}/format.JOB"
|
||||
|
||||
else
|
||||
log "[info]: without segments"
|
||||
nutt=$(<${scp} wc -l)
|
||||
nj=$((nj<nutt?nj:nutt))
|
||||
|
||||
split_scps=""
|
||||
for n in $(seq ${nj}); do
|
||||
split_scps="${split_scps} ${logdir}/wav.${n}.scp"
|
||||
done
|
||||
|
||||
utils/split_scp.pl "${scp}" ${split_scps}
|
||||
${cmd} "JOB=1:${nj}" "${logdir}/format_wav_scp.JOB.log" \
|
||||
local/format_wav_scp.py \
|
||||
${opts} \
|
||||
--fs "${fs}" \
|
||||
--audio-format "${audio_format}" \
|
||||
"${logdir}/wav.JOB.scp" ${outdir}/format.JOB""
|
||||
fi
|
||||
|
||||
# Workaround for the NFS problem
|
||||
ls ${outdir}/format.* > /dev/null
|
||||
|
||||
# concatenate the .scp files together.
|
||||
for n in $(seq ${nj}); do
|
||||
cat "${outdir}/format.${n}/wav.scp" || exit 1;
|
||||
done > "${dir}/${out_filename}" || exit 1
|
||||
|
||||
if "${write_utt2num_samples}"; then
|
||||
for n in $(seq ${nj}); do
|
||||
cat "${outdir}/format.${n}/utt2num_samples" || exit 1;
|
||||
done > "${dir}/utt2num_samples" || exit 1
|
||||
fi
|
||||
|
||||
log "Successfully finished. [elapsed=${SECONDS}s]"
|
||||
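A usage sketch following the help message above; the paths and --nj value are examples:
local/format_wav_scp.sh --nj 8 --fs 16000 --segments data/Eval_Ali_far/segments \
    data/Eval_Ali_far/wav.scp dump/raw/Eval_Ali_far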
167
egs/alimeeting/sa-asr/local/gen_cluster_profile_infer.py
Normal file
@ -0,0 +1,167 @@
|
||||
from modelscope.pipelines import pipeline
|
||||
from modelscope.utils.constant import Tasks
|
||||
import numpy as np
|
||||
import sys
|
||||
import os
|
||||
import soundfile
|
||||
from itertools import permutations
|
||||
from sklearn.metrics.pairwise import cosine_similarity
|
||||
from sklearn import cluster
|
||||
|
||||
|
||||
def custom_spectral_clustering(affinity, min_n_clusters=2, max_n_clusters=4, refine=True,
|
||||
threshold=0.995, laplacian_type="graph_cut"):
|
||||
if refine:
|
||||
# Symmetrization
|
||||
affinity = np.maximum(affinity, np.transpose(affinity))
|
||||
# Diffusion
|
||||
affinity = np.matmul(affinity, np.transpose(affinity))
|
||||
# Row-wise max normalization
|
||||
row_max = affinity.max(axis=1, keepdims=True)
|
||||
affinity = affinity / row_max
|
||||
|
||||
# a) Construct S and set diagonal elements to 0
|
||||
affinity = affinity - np.diag(np.diag(affinity))
|
||||
# b) Compute Laplacian matrix L and perform normalization:
|
||||
degree = np.diag(np.sum(affinity, axis=1))
|
||||
laplacian = degree - affinity
|
||||
if laplacian_type == "random_walk":
|
||||
degree_norm = np.diag(1 / (np.diag(degree) + 1e-10))
|
||||
laplacian_norm = degree_norm.dot(laplacian)
|
||||
else:
|
||||
degree_half = np.diag(degree) ** 0.5 + 1e-15
|
||||
laplacian_norm = laplacian / degree_half[:, np.newaxis] / degree_half
|
||||
|
||||
# c) Compute eigenvalues and eigenvectors of L_norm
|
||||
eigenvalues, eigenvectors = np.linalg.eig(laplacian_norm)
|
||||
eigenvalues = eigenvalues.real
|
||||
eigenvectors = eigenvectors.real
|
||||
index_array = np.argsort(eigenvalues)
|
||||
eigenvalues = eigenvalues[index_array]
|
||||
eigenvectors = eigenvectors[:, index_array]
|
||||
|
||||
# d) Compute the number of clusters k
|
||||
k = min_n_clusters
|
||||
for k in range(min_n_clusters, max_n_clusters + 1):
|
||||
if eigenvalues[k] > threshold:
|
||||
break
|
||||
k = max(k, min_n_clusters)
|
||||
spectral_embeddings = eigenvectors[:, :k]
|
||||
# print(mid, k, eigenvalues[:10])
|
||||
|
||||
spectral_embeddings = spectral_embeddings / np.linalg.norm(spectral_embeddings, axis=1, ord=2, keepdims=True)
|
||||
solver = cluster.KMeans(n_clusters=k, max_iter=1000, random_state=42)
|
||||
solver.fit(spectral_embeddings)
|
||||
return solver.labels_
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
path = sys.argv[1] # dump2/raw/Eval_Ali_far
|
||||
raw_path = sys.argv[2] # data/local/Eval_Ali_far
|
||||
threshold = float(sys.argv[3]) # 0.996
|
||||
sv_threshold = float(sys.argv[4]) # 0.815
|
||||
wav_scp_file = open(path+'/wav.scp', 'r')
|
||||
wav_scp = wav_scp_file.readlines()
|
||||
wav_scp_file.close()
|
||||
raw_meeting_scp_file = open(raw_path + '/wav_raw.scp', 'r')
|
||||
raw_meeting_scp = raw_meeting_scp_file.readlines()
|
||||
raw_meeting_scp_file.close()
|
||||
segments_scp_file = open(raw_path + '/segments', 'r')
|
||||
segments_scp = segments_scp_file.readlines()
|
||||
segments_scp_file.close()
|
||||
|
||||
segments_map = {}
|
||||
for line in segments_scp:
|
||||
line_list = line.strip().split(' ')
|
||||
meeting = line_list[1]
|
||||
seg = (float(line_list[-2]), float(line_list[-1]))
|
||||
if meeting not in segments_map.keys():
|
||||
segments_map[meeting] = [seg]
|
||||
else:
|
||||
segments_map[meeting].append(seg)
|
||||
|
||||
inference_sv_pipline = pipeline(
|
||||
task=Tasks.speaker_verification,
|
||||
model='damo/speech_xvector_sv-zh-cn-cnceleb-16k-spk3465-pytorch'
|
||||
)
|
||||
|
||||
chunk_len = int(1.5*16000) # 1.5 seconds
|
||||
hop_len = int(0.75*16000) # 0.75 seconds
|
||||
|
||||
os.system("mkdir -p " + path + "/cluster_profile_infer")
|
||||
cluster_spk_num_file = open(path + '/cluster_spk_num', 'w')
|
||||
meeting_map = {}
|
||||
for line in raw_meeting_scp:
|
||||
meeting = line.strip().split('\t')[0]
|
||||
wav_path = line.strip().split('\t')[1]
|
||||
wav = soundfile.read(wav_path)[0]
|
||||
# take the first channel
|
||||
if wav.ndim == 2:
|
||||
wav=wav[:, 0]
|
||||
# gen_seg_embedding
|
||||
segments_list = segments_map[meeting]
|
||||
|
||||
# import ipdb;ipdb.set_trace()
|
||||
all_seg_embedding_list = []
|
||||
for seg in segments_list:
|
||||
wav_seg = wav[int(seg[0] * 16000): int(seg[1] * 16000)]
|
||||
wav_seg_len = wav_seg.shape[0]
|
||||
i = 0
|
||||
while i < wav_seg_len:
|
||||
if i + chunk_len < wav_seg_len:
|
||||
cur_wav_chunk = wav_seg[i: i+chunk_len]
|
||||
else:
|
||||
cur_wav_chunk=wav_seg[i: ]
|
||||
# chunks under 0.2s are ignored
|
||||
if cur_wav_chunk.shape[0] >= 0.2 * 16000:
|
||||
cur_chunk_embedding = inference_sv_pipline(audio_in=cur_wav_chunk)["spk_embedding"]
|
||||
all_seg_embedding_list.append(cur_chunk_embedding)
|
||||
i += hop_len
|
||||
all_seg_embedding = np.vstack(all_seg_embedding_list)
|
||||
# all_seg_embedding (n, dim)
|
||||
|
||||
# compute affinity
|
||||
affinity=cosine_similarity(all_seg_embedding)
|
||||
|
||||
affinity = np.maximum(affinity - sv_threshold, 0.0001) / (affinity.max() - sv_threshold)
|
||||
|
||||
# clustering
|
||||
labels = custom_spectral_clustering(
|
||||
affinity=affinity,
|
||||
min_n_clusters=2,
|
||||
max_n_clusters=4,
|
||||
refine=True,
|
||||
threshold=threshold,
|
||||
laplacian_type="graph_cut")
|
||||
|
||||
|
||||
cluster_dict={}
|
||||
for j in range(labels.shape[0]):
|
||||
if labels[j] not in cluster_dict.keys():
|
||||
cluster_dict[labels[j]] = np.atleast_2d(all_seg_embedding[j])
|
||||
else:
|
||||
cluster_dict[labels[j]] = np.concatenate((cluster_dict[labels[j]], np.atleast_2d(all_seg_embedding[j])))
|
||||
|
||||
emb_list = []
|
||||
# get cluster center
|
||||
for k in cluster_dict.keys():
|
||||
cluster_dict[k] = np.mean(cluster_dict[k], axis=0)
|
||||
emb_list.append(cluster_dict[k])
|
||||
|
||||
spk_num = len(emb_list)
|
||||
profile_for_infer = np.vstack(emb_list)
|
||||
# save profile for each meeting
|
||||
np.save(path + '/cluster_profile_infer/' + meeting + '.npy', profile_for_infer)
|
||||
meeting_map[meeting] = (path + '/cluster_profile_infer/' + meeting + '.npy', spk_num)
|
||||
cluster_spk_num_file.write(meeting + ' ' + str(spk_num) + '\n')
|
||||
cluster_spk_num_file.flush()
|
||||
|
||||
cluster_spk_num_file.close()
|
||||
|
||||
profile_scp = open(path + "/cluster_profile_infer.scp", 'w')
|
||||
for line in wav_scp:
|
||||
uttid = line.strip().split(' ')[0]
|
||||
meeting = uttid.split('-')[0]
|
||||
profile_scp.write(uttid + ' ' + meeting_map[meeting][0] + '\n')
|
||||
profile_scp.flush()
|
||||
profile_scp.close()
|
||||
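An invocation sketch using the argument examples given in the script's own comments (dump dir, raw data dir, clustering eigenvalue threshold, speaker-verification threshold):
python local/gen_cluster_profile_infer.py dump2/raw/Eval_Ali_far data/local/Eval_Ali_far 0.996 0.815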
70
egs/alimeeting/sa-asr/local/gen_oracle_embedding.py
Normal file
@ -0,0 +1,70 @@
|
||||
from modelscope.pipelines import pipeline
|
||||
from modelscope.utils.constant import Tasks
|
||||
import numpy as np
|
||||
import sys
|
||||
import os
|
||||
import soundfile
|
||||
|
||||
|
||||
if __name__=="__main__":
|
||||
path = sys.argv[1] # dump2/raw/Eval_Ali_far
|
||||
raw_path = sys.argv[2] # data/local/Eval_Ali_far_correct_single_speaker
|
||||
raw_meeting_scp_file = open(raw_path + '/wav_raw.scp', 'r')
|
||||
raw_meeting_scp = raw_meeting_scp_file.readlines()
|
||||
raw_meeting_scp_file.close()
|
||||
segments_scp_file = open(raw_path + '/segments', 'r')
|
||||
segments_scp = segments_scp_file.readlines()
|
||||
segments_scp_file.close()
|
||||
|
||||
oracle_emb_dir = path + '/oracle_embedding/'
|
||||
os.system("mkdir -p " + oracle_emb_dir)
|
||||
oracle_emb_scp_file = open(path+'/oracle_embedding.scp', 'w')
|
||||
|
||||
raw_wav_map = {}
|
||||
for line in raw_meeting_scp:
|
||||
meeting = line.strip().split('\t')[0]
|
||||
wav_path = line.strip().split('\t')[1]
|
||||
raw_wav_map[meeting] = wav_path
|
||||
|
||||
spk_map = {}
|
||||
for line in segments_scp:
|
||||
line_list = line.strip().split(' ')
|
||||
meeting = line_list[1]
|
||||
spk_id = line_list[0].split('_')[3]
|
||||
spk = meeting + '_' + spk_id
|
||||
time_start = float(line_list[-2])
|
||||
time_end = float(line_list[-1])
|
||||
if time_end - time_start > 0.5:
|
||||
if spk not in spk_map.keys():
|
||||
spk_map[spk] = [(int(time_start * 16000), int(time_end * 16000))]
|
||||
else:
|
||||
spk_map[spk].append((int(time_start * 16000), int(time_end * 16000)))
|
||||
|
||||
inference_sv_pipline = pipeline(
|
||||
task=Tasks.speaker_verification,
|
||||
model='damo/speech_xvector_sv-zh-cn-cnceleb-16k-spk3465-pytorch'
|
||||
)
|
||||
|
||||
for spk in spk_map.keys():
|
||||
meeting = spk.split('_SPK')[0]
|
||||
wav_path = raw_wav_map[meeting]
|
||||
wav = soundfile.read(wav_path)[0]
|
||||
# take the first channel
|
||||
if wav.ndim == 2:
|
||||
wav = wav[:, 0]
|
||||
all_seg_embedding_list=[]
|
||||
# import ipdb;ipdb.set_trace()
|
||||
for seg_time in spk_map[spk]:
|
||||
if seg_time[0] < wav.shape[0] - 0.5 * 16000:
|
||||
if seg_time[1] > wav.shape[0]:
|
||||
cur_seg_embedding = inference_sv_pipline(audio_in=wav[seg_time[0]: ])["spk_embedding"]
|
||||
else:
|
||||
cur_seg_embedding = inference_sv_pipline(audio_in=wav[seg_time[0]: seg_time[1]])["spk_embedding"]
|
||||
all_seg_embedding_list.append(cur_seg_embedding)
|
||||
all_seg_embedding = np.vstack(all_seg_embedding_list)
|
||||
spk_embedding = np.mean(all_seg_embedding, axis=0)
|
||||
np.save(oracle_emb_dir + spk + '.npy', spk_embedding)
|
||||
oracle_emb_scp_file.write(spk + ' ' + oracle_emb_dir + spk + '.npy' + '\n')
|
||||
oracle_emb_scp_file.flush()
|
||||
|
||||
oracle_emb_scp_file.close()
|
||||
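An invocation sketch using the paths suggested in the script's own comments:
python local/gen_oracle_embedding.py dump2/raw/Eval_Ali_far data/local/Eval_Ali_far_correct_single_speaker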
59
egs/alimeeting/sa-asr/local/gen_oracle_profile_nopadding.py
Normal file
@ -0,0 +1,59 @@
|
||||
import random
|
||||
import numpy as np
|
||||
import os
|
||||
import sys
|
||||
|
||||
|
||||
if __name__=="__main__":
|
||||
path = sys.argv[1] # dump2/raw/Eval_Ali_far
|
||||
wav_scp_file = open(path+"/wav.scp", 'r')
|
||||
wav_scp = wav_scp_file.readlines()
|
||||
wav_scp_file.close()
|
||||
spk2id_file = open(path + "/spk2id", 'r')
|
||||
spk2id = spk2id_file.readlines()
|
||||
spk2id_file.close()
|
||||
embedding_scp_file = open(path + "/oracle_embedding.scp", 'r')
|
||||
embedding_scp = embedding_scp_file.readlines()
|
||||
embedding_scp_file.close()
|
||||
|
||||
embedding_map = {}
|
||||
for line in embedding_scp:
|
||||
spk = line.strip().split(' ')[0]
|
||||
if spk not in embedding_map.keys():
|
||||
emb=np.load(line.strip().split(' ')[1])
|
||||
embedding_map[spk] = emb
|
||||
|
||||
meeting_map_tmp = {}
|
||||
global_spk_list = []
|
||||
for line in spk2id:
|
||||
line_list = line.strip().split(' ')
|
||||
meeting = line_list[0].split('-')[0]
|
||||
spk_id = line_list[0].split('-')[-1].split('_')[-1]
|
||||
spk = meeting + '_' + spk_id
|
||||
global_spk_list.append(spk)
|
||||
if meeting in meeting_map_tmp.keys():
|
||||
meeting_map_tmp[meeting].append(spk)
|
||||
else:
|
||||
meeting_map_tmp[meeting] = [spk]
|
||||
|
||||
meeting_map = {}
|
||||
os.system('mkdir -p ' + path + '/oracle_profile_nopadding')
|
||||
for meeting in meeting_map_tmp.keys():
|
||||
emb_list = []
|
||||
for i in range(len(meeting_map_tmp[meeting])):
|
||||
spk = meeting_map_tmp[meeting][i]
|
||||
emb_list.append(embedding_map[spk])
|
||||
profile = np.vstack(emb_list)
|
||||
np.save(path + '/oracle_profile_nopadding/' + meeting + '.npy', profile)
|
||||
meeting_map[meeting] = path + '/oracle_profile_nopadding/' + meeting + '.npy'
|
||||
|
||||
profile_scp = open(path + '/oracle_profile_nopadding.scp', 'w')
|
||||
profile_map_scp = open(path + '/oracle_profile_nopadding_spk_list', 'w')
|
||||
|
||||
for line in wav_scp:
|
||||
uttid = line.strip().split(' ')[0]
|
||||
meeting = uttid.split('-')[0]
|
||||
profile_scp.write(uttid + ' ' + meeting_map[meeting] + '\n')
|
||||
profile_map_scp.write(uttid + ' ' + '$'.join(meeting_map_tmp[meeting]) + '\n')
|
||||
profile_scp.close()
|
||||
profile_map_scp.close()
|
||||
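A usage sketch for the script above; the dump directory is an example and must already contain wav.scp, spk2id and oracle_embedding.scp:
python local/gen_oracle_profile_nopadding.py dump2/raw/Eval_Ali_far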
68
egs/alimeeting/sa-asr/local/gen_oracle_profile_padding.py
Normal file
@ -0,0 +1,68 @@
|
||||
import random
|
||||
import numpy as np
|
||||
import os
|
||||
import sys
|
||||
|
||||
|
||||
if __name__=="__main__":
|
||||
path = sys.argv[1] # dump2/raw/Train_Ali_far
|
||||
wav_scp_file = open(path+"/wav.scp", 'r')
|
||||
wav_scp = wav_scp_file.readlines()
|
||||
wav_scp_file.close()
|
||||
spk2id_file = open(path+"/spk2id", 'r')
|
||||
spk2id = spk2id_file.readlines()
|
||||
spk2id_file.close()
|
||||
embedding_scp_file = open(path + "/oracle_embedding.scp", 'r')
|
||||
embedding_scp = embedding_scp_file.readlines()
|
||||
embedding_scp_file.close()
|
||||
|
||||
embedding_map = {}
|
||||
for line in embedding_scp:
|
||||
spk = line.strip().split(' ')[0]
|
||||
if spk not in embedding_map.keys():
|
||||
emb = np.load(line.strip().split(' ')[1])
|
||||
embedding_map[spk] = emb
|
||||
|
||||
meeting_map_tmp = {}
|
||||
global_spk_list = []
|
||||
for line in spk2id:
|
||||
line_list = line.strip().split(' ')
|
||||
meeting = line_list[0].split('-')[0]
|
||||
spk_id = line_list[0].split('-')[-1].split('_')[-1]
|
||||
spk = meeting+'_' + spk_id
|
||||
global_spk_list.append(spk)
|
||||
if meeting in meeting_map_tmp.keys():
|
||||
meeting_map_tmp[meeting].append(spk)
|
||||
else:
|
||||
meeting_map_tmp[meeting] = [spk]
|
||||
|
||||
for meeting in meeting_map_tmp.keys():
|
||||
num = len(meeting_map_tmp[meeting])
|
||||
if num < 4:
|
||||
global_spk_list_tmp = global_spk_list[: ]
|
||||
for spk in meeting_map_tmp[meeting]:
|
||||
global_spk_list_tmp.remove(spk)
|
||||
padding_spk = random.sample(global_spk_list_tmp, 4 - num)
|
||||
meeting_map_tmp[meeting] = meeting_map_tmp[meeting] + padding_spk
|
||||
|
||||
meeting_map = {}
|
||||
os.system('mkdir -p ' + path + '/oracle_profile_padding')
|
||||
for meeting in meeting_map_tmp.keys():
|
||||
emb_list = []
|
||||
for i in range(len(meeting_map_tmp[meeting])):
|
||||
spk = meeting_map_tmp[meeting][i]
|
||||
emb_list.append(embedding_map[spk])
|
||||
profile = np.vstack(emb_list)
|
||||
np.save(path + '/oracle_profile_padding/' + meeting + '.npy',profile)
|
||||
meeting_map[meeting] = path + '/oracle_profile_padding/' + meeting + '.npy'
|
||||
|
||||
profile_scp = open(path + '/oracle_profile_padding.scp', 'w')
|
||||
profile_map_scp = open(path + '/oracle_profile_padding_spk_list', 'w')
|
||||
|
||||
for line in wav_scp:
|
||||
uttid = line.strip().split(' ')[0]
|
||||
meeting = uttid.split('-')[0]
|
||||
profile_scp.write(uttid+' ' + meeting_map[meeting] + '\n')
|
||||
profile_map_scp.write(uttid+' ' + '$'.join(meeting_map_tmp[meeting]) + '\n')
|
||||
profile_scp.close()
|
||||
profile_map_scp.close()
|
||||
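A usage sketch mirroring the comment in the script above (profiles are padded to 4 speakers per meeting); the directory name is illustrative:
python local/gen_oracle_profile_padding.py dump2/raw/Train_Ali_far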
116
egs/alimeeting/sa-asr/local/perturb_data_dir_speed.sh
Executable file
@ -0,0 +1,116 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# 2020 @kamo-naoyuki
|
||||
# This file was copied from Kaldi and
|
||||
# I deleted parts related to wav duration
|
||||
# because we shouldn't use kaldi's command here
|
||||
# and we don't actually need those files.
|
||||
|
||||
# Copyright 2013 Johns Hopkins University (author: Daniel Povey)
|
||||
# 2014 Tom Ko
|
||||
# 2018 Emotech LTD (author: Pawel Swietojanski)
|
||||
# Apache 2.0
|
||||
|
||||
# This script operates on a directory, such as in data/train/,
|
||||
# that contains some subset of the following files:
|
||||
# wav.scp
|
||||
# spk2utt
|
||||
# utt2spk
|
||||
# text
|
||||
#
|
||||
# It generates the files which are used for perturbing the speed of the original data.
|
||||
|
||||
export LC_ALL=C
|
||||
set -euo pipefail
|
||||
|
||||
if [[ $# != 3 ]]; then
|
||||
echo "Usage: perturb_data_dir_speed.sh <warping-factor> <srcdir> <destdir>"
|
||||
echo "e.g.:"
|
||||
echo " $0 0.9 data/train_si284 data/train_si284p"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
factor=$1
|
||||
srcdir=$2
|
||||
destdir=$3
|
||||
label="sp"
|
||||
spk_prefix="${label}${factor}-"
|
||||
utt_prefix="${label}${factor}-"
|
||||
|
||||
# check that sox is on the path
|
||||
|
||||
! command -v sox &>/dev/null && echo "sox: command not found" && exit 1;
|
||||
|
||||
if [[ ! -f ${srcdir}/utt2spk ]]; then
|
||||
echo "$0: no such file ${srcdir}/utt2spk"
|
||||
exit 1;
|
||||
fi
|
||||
|
||||
if [[ ${destdir} == "${srcdir}" ]]; then
|
||||
echo "$0: this script requires <srcdir> and <destdir> to be different."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
mkdir -p "${destdir}"
|
||||
|
||||
<"${srcdir}"/utt2spk awk -v p="${utt_prefix}" '{printf("%s %s%s\n", $1, p, $1);}' > "${destdir}/utt_map"
|
||||
<"${srcdir}"/spk2utt awk -v p="${spk_prefix}" '{printf("%s %s%s\n", $1, p, $1);}' > "${destdir}/spk_map"
|
||||
<"${srcdir}"/wav.scp awk -v p="${spk_prefix}" '{printf("%s %s%s\n", $1, p, $1);}' > "${destdir}/reco_map"
|
||||
if [[ ! -f ${srcdir}/utt2uniq ]]; then
|
||||
<"${srcdir}/utt2spk" awk -v p="${utt_prefix}" '{printf("%s%s %s\n", p, $1, $1);}' > "${destdir}/utt2uniq"
|
||||
else
|
||||
<"${srcdir}/utt2uniq" awk -v p="${utt_prefix}" '{printf("%s%s %s\n", p, $1, $2);}' > "${destdir}/utt2uniq"
|
||||
fi
|
||||
|
||||
|
||||
<"${srcdir}"/utt2spk local/apply_map.pl -f 1 "${destdir}"/utt_map | \
|
||||
local/apply_map.pl -f 2 "${destdir}"/spk_map >"${destdir}"/utt2spk
|
||||
|
||||
local/utt2spk_to_spk2utt.pl <"${destdir}"/utt2spk >"${destdir}"/spk2utt
|
||||
|
||||
if [[ -f ${srcdir}/segments ]]; then
|
||||
|
||||
local/apply_map.pl -f 1 "${destdir}"/utt_map <"${srcdir}"/segments | \
|
||||
local/apply_map.pl -f 2 "${destdir}"/reco_map | \
|
||||
awk -v factor="${factor}" \
|
||||
'{s=$3/factor; e=$4/factor; if (e > s + 0.01) { printf("%s %s %.2f %.2f\n", $1, $2, $3/factor, $4/factor);} }' \
|
||||
>"${destdir}"/segments
|
||||
|
||||
local/apply_map.pl -f 1 "${destdir}"/reco_map <"${srcdir}"/wav.scp | sed 's/| *$/ |/' | \
|
||||
# Handle three cases of rxfilenames appropriately; "input piped command", "file offset" and "filename"
|
||||
awk -v factor="${factor}" \
|
||||
'{wid=$1; $1=""; if ($NF=="|") {print wid $_ " sox -t wav - -t wav - speed " factor " |"}
|
||||
else if (match($0, /:[0-9]+$/)) {print wid " wav-copy" $_ " - | sox -t wav - -t wav - speed " factor " |" }
|
||||
else {print wid " sox" $_ " -t wav - speed " factor " |"}}' \
|
||||
> "${destdir}"/wav.scp
|
||||
if [[ -f ${srcdir}/reco2file_and_channel ]]; then
|
||||
local/apply_map.pl -f 1 "${destdir}"/reco_map \
|
||||
<"${srcdir}"/reco2file_and_channel >"${destdir}"/reco2file_and_channel
|
||||
fi
|
||||
|
||||
else # no segments->wav indexed by utterance.
|
||||
if [[ -f ${srcdir}/wav.scp ]]; then
|
||||
local/apply_map.pl -f 1 "${destdir}"/utt_map <"${srcdir}"/wav.scp | sed 's/| *$/ |/' | \
|
||||
# Handle three cases of rxfilenames appropriately; "input piped command", "file offset" and "filename"
|
||||
awk -v factor="${factor}" \
|
||||
'{wid=$1; $1=""; if ($NF=="|") {print wid $_ " sox -t wav - -t wav - speed " factor " |"}
|
||||
else if (match($0, /:[0-9]+$/)) {print wid " wav-copy" $_ " - | sox -t wav - -t wav - speed " factor " |" }
|
||||
else {print wid " sox" $_ " -t wav - speed " factor " |"}}' \
|
||||
> "${destdir}"/wav.scp
|
||||
fi
|
||||
fi
|
||||
|
||||
if [[ -f ${srcdir}/text ]]; then
|
||||
local/apply_map.pl -f 1 "${destdir}"/utt_map <"${srcdir}"/text >"${destdir}"/text
|
||||
fi
|
||||
if [[ -f ${srcdir}/spk2gender ]]; then
|
||||
local/apply_map.pl -f 1 "${destdir}"/spk_map <"${srcdir}"/spk2gender >"${destdir}"/spk2gender
|
||||
fi
|
||||
if [[ -f ${srcdir}/utt2lang ]]; then
|
||||
local/apply_map.pl -f 1 "${destdir}"/utt_map <"${srcdir}"/utt2lang >"${destdir}"/utt2lang
|
||||
fi
|
||||
|
||||
rm "${destdir}"/spk_map "${destdir}"/utt_map "${destdir}"/reco_map 2>/dev/null
|
||||
echo "$0: generated speed-perturbed version of data in ${srcdir}, in ${destdir}"
|
||||
|
||||
local/validate_data_dir.sh --no-feats --no-text "${destdir}"
|
||||
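A usage sketch following the script's own usage line; the AliMeeting-style directory names are examples:
local/perturb_data_dir_speed.sh 0.9 data/Train_Ali_far data/Train_Ali_far_sp0.9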
86
egs/alimeeting/sa-asr/local/process_sot_fifo_textchar2spk.py
Executable file
@ -0,0 +1,86 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
"""
|
||||
Process the textgrid files
|
||||
"""
|
||||
import argparse
|
||||
import codecs
|
||||
from distutils.util import strtobool
|
||||
from pathlib import Path
|
||||
import textgrid
|
||||
import pdb
|
||||
|
||||
def get_args():
|
||||
parser = argparse.ArgumentParser(description="process the textgrid files")
|
||||
parser.add_argument("--path", type=str, required=True, help="Data path")
|
||||
args = parser.parse_args()
|
||||
return args
|
||||
|
||||
class Segment(object):
|
||||
def __init__(self, uttid, text):
|
||||
self.uttid = uttid
|
||||
self.text = text
|
||||
|
||||
def main(args):
|
||||
text = codecs.open(Path(args.path) / "text", "r", "utf-8")
|
||||
spk2utt = codecs.open(Path(args.path) / "spk2utt", "r", "utf-8")
|
||||
utt2spk = codecs.open(Path(args.path) / "utt2spk_all_fifo", "r", "utf-8")
|
||||
spk2id = codecs.open(Path(args.path) / "spk2id", "w", "utf-8")
|
||||
|
||||
spkid_map = {}
|
||||
meetingid_map = {}
|
||||
for line in spk2utt:
|
||||
spkid = line.strip().split(" ")[0]
|
||||
meeting_id_list = spkid.split("_")[:3]
|
||||
meeting_id = meeting_id_list[0] + "_" + meeting_id_list[1] + "_" + meeting_id_list[2]
|
||||
if meeting_id not in meetingid_map:
|
||||
meetingid_map[meeting_id] = 1
|
||||
else:
|
||||
meetingid_map[meeting_id] += 1
|
||||
spkid_map[spkid] = meetingid_map[meeting_id]
|
||||
spk2id.write("%s %s\n" % (spkid, meetingid_map[meeting_id]))
|
||||
|
||||
utt2spklist = {}
|
||||
for line in utt2spk:
|
||||
uttid = line.strip().split(" ")[0]
|
||||
spkid = line.strip().split(" ")[1]
|
||||
spklist = spkid.split("$")
|
||||
tmp = []
|
||||
for index in range(len(spklist)):
|
||||
tmp.append(spkid_map[spklist[index]])
|
||||
utt2spklist[uttid] = tmp
|
||||
# parse the textgrid file for each utterance
|
||||
all_segments = []
|
||||
for line in text:
|
||||
uttid = line.strip().split(" ")[0]
|
||||
context = line.strip().split(" ")[1]
|
||||
spklist = utt2spklist[uttid]
|
||||
length_text = len(context)
|
||||
cnt = 0
|
||||
tmp_text = ""
|
||||
for index in range(length_text):
|
||||
if context[index] != "$":
|
||||
tmp_text += str(spklist[cnt])
|
||||
else:
|
||||
tmp_text += "$"
|
||||
cnt += 1
|
||||
tmp_seg = Segment(uttid,tmp_text)
|
||||
all_segments.append(tmp_seg)
|
||||
|
||||
text.close()
|
||||
utt2spk.close()
|
||||
spk2utt.close()
|
||||
spk2id.close()
|
||||
|
||||
text_id = codecs.open(Path(args.path) / "text_id", "w", "utf-8")
|
||||
|
||||
for i in range(len(all_segments)):
|
||||
uttid_tmp = all_segments[i].uttid
|
||||
text_tmp = all_segments[i].text
|
||||
|
||||
text_id.write("%s %s\n" % (uttid_tmp, text_tmp))
|
||||
|
||||
text_id.close()
|
||||
|
||||
if __name__ == "__main__":
|
||||
args = get_args()
|
||||
main(args)
|
||||
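A hedged usage sketch; the data directory is an example and must contain text, spk2utt and utt2spk_all_fifo (spk2id and text_id are written back into it):
python local/process_sot_fifo_textchar2spk.py --path data/Train_Ali_far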
24
egs/alimeeting/sa-asr/local/process_text_id.py
Normal file
@ -0,0 +1,24 @@
|
||||
import sys
|
||||
if __name__=="__main__":
|
||||
path=sys.argv[1]
|
||||
|
||||
text_id_old_file=open(path+"/text_id",'r')
|
||||
text_id_old=text_id_old_file.readlines()
|
||||
text_id_old_file.close()
|
||||
|
||||
text_id=open(path+"/text_id_train",'w')
|
||||
for line in text_id_old:
|
||||
uttid=line.strip().split(' ')[0]
|
||||
old_id=line.strip().split(' ')[1]
|
||||
pre_id='0'
|
||||
new_id_list=[]
|
||||
for i in old_id:
|
||||
if i == '$':
|
||||
new_id_list.append(pre_id)
|
||||
else:
|
||||
new_id_list.append(str(int(i)-1))
|
||||
pre_id=str(int(i)-1)
|
||||
new_id_list.append(pre_id)
|
||||
new_id=' '.join(new_id_list)
|
||||
text_id.write(uttid+' '+new_id+'\n')
|
||||
text_id.close()
|
||||
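A usage sketch; the directory is an example and must contain text_id, from which text_id_train is derived:
python local/process_text_id.py data/Train_Ali_far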
55
egs/alimeeting/sa-asr/local/process_text_spk_merge.py
Normal file
@ -0,0 +1,55 @@
|
||||
import sys
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
path=sys.argv[1]
|
||||
text_scp_file = open(path + '/text', 'r')
|
||||
text_scp = text_scp_file.readlines()
|
||||
text_scp_file.close()
|
||||
text_id_scp_file = open(path + '/text_id', 'r')
|
||||
text_id_scp = text_id_scp_file.readlines()
|
||||
text_id_scp_file.close()
|
||||
text_spk_merge_file = open(path + '/text_spk_merge', 'w')
|
||||
assert len(text_scp) == len(text_id_scp)
|
||||
|
||||
meeting_map = {} # {meeting_id: [(start_time, text, text_id), (start_time, text, text_id), ...]}
|
||||
for i in range(len(text_scp)):
|
||||
text_line = text_scp[i].strip().split(' ')
|
||||
text_id_line = text_id_scp[i].strip().split(' ')
|
||||
assert text_line[0] == text_id_line[0]
|
||||
if len(text_line) > 1:
|
||||
uttid = text_line[0]
|
||||
text = text_line[1]
|
||||
text_id = text_id_line[1]
|
||||
meeting_id = uttid.split('-')[0]
|
||||
start_time = int(uttid.split('-')[-2])
|
||||
if meeting_id not in meeting_map:
|
||||
meeting_map[meeting_id] = [(start_time,text,text_id)]
|
||||
else:
|
||||
meeting_map[meeting_id].append((start_time,text,text_id))
|
||||
|
||||
for meeting_id in sorted(meeting_map.keys()):
|
||||
cur_meeting_list = sorted(meeting_map[meeting_id], key=lambda x: x[0])
|
||||
text_spk_merge_map = {} #{1: text1, 2: text2, ...}
|
||||
for cur_utt in cur_meeting_list:
|
||||
cur_text = cur_utt[1]
|
||||
cur_text_id = cur_utt[2]
|
||||
assert len(cur_text)==len(cur_text_id)
|
||||
if len(cur_text) != 0:
|
||||
cur_text_split = cur_text.split('$')
|
||||
cur_text_id_split = cur_text_id.split('$')
|
||||
assert len(cur_text_split) == len(cur_text_id_split)
|
||||
for i in range(len(cur_text_split)):
|
||||
if len(cur_text_split[i]) != 0:
|
||||
spk_id = int(cur_text_id_split[i][0])
|
||||
if spk_id not in text_spk_merge_map.keys():
|
||||
text_spk_merge_map[spk_id] = cur_text_split[i]
|
||||
else:
|
||||
text_spk_merge_map[spk_id] += cur_text_split[i]
|
||||
text_spk_merge_list = []
|
||||
for spk_id in sorted(text_spk_merge_map.keys()):
|
||||
text_spk_merge_list.append(text_spk_merge_map[spk_id])
|
||||
text_spk_merge_file.write(meeting_id + ' ' + '$'.join(text_spk_merge_list) + '\n')
|
||||
text_spk_merge_file.flush()
|
||||
|
||||
text_spk_merge_file.close()
|
||||
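A usage sketch; the directory is an example and must contain matching text and text_id files (text_spk_merge is written there):
python local/process_text_spk_merge.py data/Eval_Ali_far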
127
egs/alimeeting/sa-asr/local/process_textgrid_to_single_speaker_wav.py
Executable file
@ -0,0 +1,127 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
"""
|
||||
Process the textgrid files
|
||||
"""
|
||||
import argparse
|
||||
import codecs
|
||||
from distutils.util import strtobool
|
||||
from pathlib import Path
|
||||
import textgrid
|
||||
import pdb
|
||||
import numpy as np
|
||||
import sys
|
||||
import math
|
||||
|
||||
|
||||
class Segment(object):
|
||||
def __init__(self, uttid, spkr, stime, etime, text):
|
||||
self.uttid = uttid
|
||||
self.spkr = spkr
|
||||
self.stime = round(stime, 2)
|
||||
self.etime = round(etime, 2)
|
||||
self.text = text
|
||||
|
||||
def change_stime(self, time):
|
||||
self.stime = time
|
||||
|
||||
def change_etime(self, time):
|
||||
self.etime = time
|
||||
|
||||
|
||||
def get_args():
|
||||
parser = argparse.ArgumentParser(description="process the textgrid files")
|
||||
parser.add_argument("--path", type=str, required=True, help="Data path")
|
||||
args = parser.parse_args()
|
||||
return args
|
||||
|
||||
|
||||
|
||||
def main(args):
|
||||
textgrid_flist = codecs.open(Path(args.path) / "textgrid.flist", "r", "utf-8")
|
||||
segment_file = codecs.open(Path(args.path)/"segments", "w", "utf-8")
|
||||
utt2spk = codecs.open(Path(args.path)/"utt2spk", "w", "utf-8")
|
||||
|
||||
# get the path of textgrid file for each utterance
|
||||
for line in textgrid_flist:
|
||||
line_array = line.strip().split(" ")
|
||||
path = Path(line_array[1])
|
||||
uttid = line_array[0]
|
||||
|
||||
try:
|
||||
tg = textgrid.TextGrid.fromFile(path)
|
||||
except:
|
||||
pdb.set_trace()
|
||||
num_spk = tg.__len__()
|
||||
spk2textgrid = {}
|
||||
spk2weight = {}
|
||||
weight2spk = {}
|
||||
cnt = 2
|
||||
xmax = 0
|
||||
for i in range(tg.__len__()):
|
||||
spk_name = tg[i].name
|
||||
if spk_name not in spk2weight:
|
||||
spk2weight[spk_name] = cnt
|
||||
weight2spk[cnt] = spk_name
|
||||
cnt = cnt * 2
|
||||
segments = []
|
||||
for j in range(tg[i].__len__()):
|
||||
if tg[i][j].mark:
|
||||
if xmax < tg[i][j].maxTime:
|
||||
xmax = tg[i][j].maxTime
|
||||
segments.append(
|
||||
Segment(
|
||||
uttid,
|
||||
tg[i].name,
|
||||
tg[i][j].minTime,
|
||||
tg[i][j].maxTime,
|
||||
tg[i][j].mark.strip(),
|
||||
)
|
||||
)
|
||||
segments = sorted(segments, key=lambda x: x.stime)
|
||||
spk2textgrid[spk_name] = segments
|
||||
olp_label = np.zeros((num_spk, int(xmax/0.01)), dtype=np.int32)
|
||||
for spkid in spk2weight.keys():
|
||||
weight = spk2weight[spkid]
|
||||
segments = spk2textgrid[spkid]
|
||||
idx = int(math.log2(weight) )- 1
|
||||
for i in range(len(segments)):
|
||||
stime = segments[i].stime
|
||||
etime = segments[i].etime
|
||||
olp_label[idx, int(stime/0.01): int(etime/0.01)] = weight
|
||||
sum_label = olp_label.sum(axis=0)
|
||||
stime = 0
|
||||
pre_value = 0
|
||||
for pos in range(sum_label.shape[0]):
|
||||
if sum_label[pos] in weight2spk:
|
||||
if pre_value in weight2spk:
|
||||
if sum_label[pos] != pre_value:
|
||||
spkids = weight2spk[pre_value]
|
||||
spkid_array = spkids.split("_")
|
||||
spkid = spkid_array[-1]
|
||||
#spkid = uttid+spkid
|
||||
if round(stime*0.01, 2) != round((pos-1)*0.01, 2):
|
||||
segment_file.write("%s_%s_%s_%s %s %s %s\n" % (uttid, spkid, str(int(stime)).zfill(7), str(int(pos-1)).zfill(7), uttid, round(stime*0.01, 2) ,round((pos-1)*0.01, 2)))
|
||||
utt2spk.write("%s_%s_%s_%s %s\n" % (uttid, spkid, str(int(stime)).zfill(7), str(int(pos-1)).zfill(7), uttid+"_"+spkid))
|
||||
stime = pos
|
||||
pre_value = sum_label[pos]
|
||||
else:
|
||||
stime = pos
|
||||
pre_value = sum_label[pos]
|
||||
else:
|
||||
if pre_value in weight2spk:
|
||||
spkids = weight2spk[pre_value]
|
||||
spkid_array = spkids.split("_")
|
||||
spkid = spkid_array[-1]
|
||||
#spkid = uttid+spkid
|
||||
if round(stime*0.01, 2) != round((pos-1)*0.01, 2):
|
||||
segment_file.write("%s_%s_%s_%s %s %s %s\n" % (uttid, spkid, str(int(stime)).zfill(7), str(int(pos-1)).zfill(7), uttid, round(stime*0.01, 2) ,round((pos-1)*0.01, 2)))
|
||||
utt2spk.write("%s_%s_%s_%s %s\n" % (uttid, spkid, str(int(stime)).zfill(7), str(int(pos-1)).zfill(7), uttid+"_"+spkid))
|
||||
stime = pos
|
||||
pre_value = sum_label[pos]
|
||||
textgrid_flist.close()
|
||||
segment_file.close()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
args = get_args()
|
||||
main(args)
|
||||
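A hedged usage sketch; the directory is an example and must contain textgrid.flist (segments and utt2spk are written there):
python local/process_textgrid_to_single_speaker_wav.py --path data/local/Eval_Ali_far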
27
egs/alimeeting/sa-asr/local/spk2utt_to_utt2spk.pl
Executable file
@ -0,0 +1,27 @@
|
||||
#!/usr/bin/env perl
|
||||
# Copyright 2010-2011 Microsoft Corporation
|
||||
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# THIS CODE IS PROVIDED *AS IS* BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
|
||||
# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY IMPLIED
|
||||
# WARRANTIES OR CONDITIONS OF TITLE, FITNESS FOR A PARTICULAR PURPOSE,
|
||||
# MERCHANTABLITY OR NON-INFRINGEMENT.
|
||||
# See the Apache 2 License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
while(<>){
|
||||
@A = split(" ", $_);
|
||||
@A > 1 || die "Invalid line in spk2utt file: $_";
|
||||
$s = shift @A;
|
||||
foreach $u ( @A ) {
|
||||
print "$u $s\n";
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
14
egs/alimeeting/sa-asr/local/text_format.pl
Executable file
@ -0,0 +1,14 @@
|
||||
#!/usr/bin/env perl
|
||||
use warnings; #sed replacement for -w perl parameter
|
||||
# Copyright Chao Weng
|
||||
|
||||
# normalizations for hkust transcript
|
||||
# see the docs/trans-guidelines.pdf for details
|
||||
|
||||
while (<STDIN>) {
|
||||
@A = split(" ", $_);
|
||||
if (@A == 1) {
|
||||
next;
|
||||
}
|
||||
print $_
|
||||
}
|
||||
38
egs/alimeeting/sa-asr/local/text_normalize.pl
Executable file
@ -0,0 +1,38 @@
|
||||
#!/usr/bin/env perl
|
||||
use warnings; #sed replacement for -w perl parameter
|
||||
# Copyright Chao Weng
|
||||
|
||||
# normalizations for hkust transcript
|
||||
# see the docs/trans-guidelines.pdf for details
|
||||
|
||||
while (<STDIN>) {
|
||||
@A = split(" ", $_);
|
||||
print "$A[0] ";
|
||||
for ($n = 1; $n < @A; $n++) {
|
||||
$tmp = $A[$n];
|
||||
if ($tmp =~ /<sil>/) {$tmp =~ s:<sil>::g;}
|
||||
if ($tmp =~ /<%>/) {$tmp =~ s:<%>::g;}
|
||||
if ($tmp =~ /<->/) {$tmp =~ s:<->::g;}
|
||||
if ($tmp =~ /<\$>/) {$tmp =~ s:<\$>::g;}
|
||||
if ($tmp =~ /<#>/) {$tmp =~ s:<#>::g;}
|
||||
if ($tmp =~ /<_>/) {$tmp =~ s:<_>::g;}
|
||||
if ($tmp =~ /<space>/) {$tmp =~ s:<space>::g;}
|
||||
if ($tmp =~ /`/) {$tmp =~ s:`::g;}
|
||||
if ($tmp =~ /&/) {$tmp =~ s:&::g;}
|
||||
if ($tmp =~ /,/) {$tmp =~ s:,::g;}
|
||||
if ($tmp =~ /[a-zA-Z]/) {$tmp=uc($tmp);}
|
||||
if ($tmp =~ /Ａ/) {$tmp =~ s:Ａ:A:g;}
|
||||
if ($tmp =~ /a/) {$tmp =~ s:a:A:g;}
|
||||
if ($tmp =~ /b/) {$tmp =~ s:b:B:g;}
|
||||
if ($tmp =~ /c/) {$tmp =~ s:c:C:g;}
|
||||
if ($tmp =~ /k/) {$tmp =~ s:k:K:g;}
|
||||
if ($tmp =~ /t/) {$tmp =~ s:t:T:g;}
|
||||
if ($tmp =~ /，/) {$tmp =~ s:，::g;}
|
||||
if ($tmp =~ /丶/) {$tmp =~ s:丶::g;}
|
||||
if ($tmp =~ /。/) {$tmp =~ s:。::g;}
|
||||
if ($tmp =~ /、/) {$tmp =~ s:、::g;}
|
||||
if ($tmp =~ /？/) {$tmp =~ s:？::g;}
|
||||
print "$tmp ";
|
||||
}
|
||||
print "\n";
|
||||
}
|
||||
38
egs/alimeeting/sa-asr/local/utt2spk_to_spk2utt.pl
Executable file
@ -0,0 +1,38 @@
|
||||
#!/usr/bin/env perl
|
||||
# Copyright 2010-2011 Microsoft Corporation
|
||||
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# THIS CODE IS PROVIDED *AS IS* BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
|
||||
# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY IMPLIED
|
||||
# WARRANTIES OR CONDITIONS OF TITLE, FITNESS FOR A PARTICULAR PURPOSE,
|
||||
# MERCHANTABLITY OR NON-INFRINGEMENT.
|
||||
# See the Apache 2 License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# converts an utt2spk file to a spk2utt file.
|
||||
# Takes input from the stdin or from a file argument;
|
||||
# output goes to the standard out.
|
||||
|
||||
if ( @ARGV > 1 ) {
|
||||
die "Usage: utt2spk_to_spk2utt.pl [ utt2spk ] > spk2utt";
|
||||
}
|
||||
|
||||
while(<>){
|
||||
@A = split(" ", $_);
|
||||
@A == 2 || die "Invalid line in utt2spk file: $_";
|
||||
($u,$s) = @A;
|
||||
if(!$seen_spk{$s}) {
|
||||
$seen_spk{$s} = 1;
|
||||
push @spklist, $s;
|
||||
}
|
||||
push (@{$spk_hash{$s}}, "$u");
|
||||
}
|
||||
foreach $s (@spklist) {
|
||||
$l = join(' ',@{$spk_hash{$s}});
|
||||
print "$s $l\n";
|
||||
}
|
||||
404
egs/alimeeting/sa-asr/local/validate_data_dir.sh
Executable file
@ -0,0 +1,404 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
cmd="$@"
|
||||
|
||||
no_feats=false
|
||||
no_wav=false
|
||||
no_text=false
|
||||
no_spk_sort=false
|
||||
non_print=false
|
||||
|
||||
|
||||
function show_help
|
||||
{
|
||||
echo "Usage: $0 [--no-feats] [--no-text] [--non-print] [--no-wav] [--no-spk-sort] <data-dir>"
|
||||
echo "The --no-xxx options mean that the script does not require "
|
||||
echo "xxx.scp to be present, but it will check it if it is present."
|
||||
echo "--no-spk-sort means that the script does not require the utt2spk to be "
|
||||
echo "sorted by the speaker-id in addition to being sorted by utterance-id."
|
||||
echo "--non-print ignore the presence of non-printable characters."
|
||||
echo "By default, utt2spk is expected to be sorted by both, which can be "
|
||||
echo "achieved by making the speaker-id prefixes of the utterance-ids"
|
||||
echo "e.g.: $0 data/train"
|
||||
}
|
||||
|
||||
while [ $# -ne 0 ] ; do
|
||||
case "$1" in
|
||||
"--no-feats")
|
||||
no_feats=true;
|
||||
;;
|
||||
"--no-text")
|
||||
no_text=true;
|
||||
;;
|
||||
"--non-print")
|
||||
non_print=true;
|
||||
;;
|
||||
"--no-wav")
|
||||
no_wav=true;
|
||||
;;
|
||||
"--no-spk-sort")
|
||||
no_spk_sort=true;
|
||||
;;
|
||||
*)
|
||||
if ! [ -z "$data" ] ; then
|
||||
show_help;
|
||||
exit 1
|
||||
fi
|
||||
data=$1
|
||||
;;
|
||||
esac
|
||||
shift
|
||||
done
|
||||
|
||||
|
||||
|
||||
if [ ! -d $data ]; then
|
||||
echo "$0: no such directory $data"
|
||||
exit 1;
|
||||
fi
|
||||
|
||||
if [ -f $data/images.scp ]; then
|
||||
cmd=${cmd/--no-wav/} # remove --no-wav if supplied
|
||||
image/validate_data_dir.sh $cmd
|
||||
exit $?
|
||||
fi
|
||||
|
||||
for f in spk2utt utt2spk; do
|
||||
if [ ! -f $data/$f ]; then
|
||||
echo "$0: no such file $f"
|
||||
exit 1;
|
||||
fi
|
||||
if [ ! -s $data/$f ]; then
|
||||
echo "$0: empty file $f"
|
||||
exit 1;
|
||||
fi
|
||||
done
|
||||
|
||||
! cat $data/utt2spk | awk '{if (NF != 2) exit(1); }' && \
|
||||
echo "$0: $data/utt2spk has wrong format." && exit;
|
||||
|
||||
ns=$(wc -l < $data/spk2utt)
|
||||
if [ "$ns" == 1 ]; then
|
||||
echo "$0: WARNING: you have only one speaker. This probably a bad idea."
|
||||
echo " Search for the word 'bold' in http://kaldi-asr.org/doc/data_prep.html"
|
||||
echo " for more information."
|
||||
fi
|
||||
|
||||
|
||||
tmpdir=$(mktemp -d /tmp/kaldi.XXXX);
|
||||
trap 'rm -rf "$tmpdir"' EXIT HUP INT PIPE TERM
|
||||
|
||||
export LC_ALL=C
|
||||
|
||||
function check_sorted_and_uniq {
|
||||
! perl -ne '((substr $_,-1) eq "\n") or die "file $ARGV has invalid newline";' $1 && exit 1;
|
||||
! awk '{print $1}' < $1 | sort -uC && echo "$0: file $1 is not sorted or has duplicates" && exit 1;
|
||||
}
|
||||
|
||||
function partial_diff {
|
||||
diff -U1 $1 $2 | (head -n 6; echo "..."; tail -n 6)
|
||||
n1=`cat $1 | wc -l`
|
||||
n2=`cat $2 | wc -l`
|
||||
echo "[Lengths are $1=$n1 versus $2=$n2]"
|
||||
}
|
||||
|
||||
check_sorted_and_uniq $data/utt2spk
|
||||
|
||||
if ! $no_spk_sort; then
|
||||
! sort -k2 -C $data/utt2spk && \
|
||||
echo "$0: utt2spk is not in sorted order when sorted first on speaker-id " && \
|
||||
echo "(fix this by making speaker-ids prefixes of utt-ids)" && exit 1;
|
||||
fi
|
||||
|
||||
check_sorted_and_uniq $data/spk2utt
|
||||
|
||||
! cmp -s <(cat $data/utt2spk | awk '{print $1, $2;}') \
|
||||
<(local/spk2utt_to_utt2spk.pl $data/spk2utt) && \
|
||||
echo "$0: spk2utt and utt2spk do not seem to match" && exit 1;
|
||||
|
||||
cat $data/utt2spk | awk '{print $1;}' > $tmpdir/utts
|
||||
|
||||
if [ ! -f $data/text ] && ! $no_text; then
|
||||
echo "$0: no such file $data/text (if this is by design, specify --no-text)"
|
||||
exit 1;
|
||||
fi
|
||||
|
||||
num_utts=`cat $tmpdir/utts | wc -l`
|
||||
if ! $no_text; then
|
||||
if ! $non_print; then
|
||||
if locale -a | grep "C.UTF-8" >/dev/null; then
|
||||
L=C.UTF-8
|
||||
else
|
||||
L=en_US.UTF-8
|
||||
fi
|
||||
n_non_print=$(LC_ALL="$L" grep -c '[^[:print:][:space:]]' $data/text) && \
|
||||
echo "$0: text contains $n_non_print lines with non-printable characters" &&\
|
||||
exit 1;
|
||||
fi
|
||||
local/validate_text.pl $data/text || exit 1;
|
||||
check_sorted_and_uniq $data/text
|
||||
text_len=`cat $data/text | wc -l`
|
||||
illegal_sym_list="<s> </s> #0"
|
||||
for x in $illegal_sym_list; do
|
||||
if grep -w "$x" $data/text > /dev/null; then
|
||||
echo "$0: Error: in $data, text contains illegal symbol $x"
|
||||
exit 1;
|
||||
fi
|
||||
done
|
||||
awk '{print $1}' < $data/text > $tmpdir/utts.txt
|
||||
if ! cmp -s $tmpdir/utts{,.txt}; then
|
||||
echo "$0: Error: in $data, utterance lists extracted from utt2spk and text"
|
||||
echo "$0: differ, partial diff is:"
|
||||
partial_diff $tmpdir/utts{,.txt}
|
||||
exit 1;
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ -f $data/segments ] && [ ! -f $data/wav.scp ]; then
|
||||
echo "$0: in directory $data, segments file exists but no wav.scp"
|
||||
exit 1;
|
||||
fi
|
||||
|
||||
|
||||
if [ ! -f $data/wav.scp ] && ! $no_wav; then
|
||||
echo "$0: no such file $data/wav.scp (if this is by design, specify --no-wav)"
|
||||
exit 1;
|
||||
fi
|
||||
|
||||
if [ -f $data/wav.scp ]; then
|
||||
check_sorted_and_uniq $data/wav.scp
|
||||
|
||||
if grep -E -q '^\S+\s+~' $data/wav.scp; then
|
||||
# note: it's not a good idea to have any kind of tilde in wav.scp, even if
|
||||
# part of a command, as it would cause compatibility problems if run by
|
||||
# other users, but this used to be not checked for so we let it slide unless
|
||||
# it's something of the form "foo ~/foo.wav" (i.e. a plain file name) which
|
||||
# would definitely cause problems as the fopen system call does not do
|
||||
# tilde expansion.
|
||||
echo "$0: Please do not use tilde (~) in your wav.scp."
|
||||
exit 1;
|
||||
fi
|
||||
|
||||
if [ -f $data/segments ]; then
|
||||
|
||||
check_sorted_and_uniq $data/segments
|
||||
# We have a segments file -> interpret wav file as "recording-ids" not utterance-ids.
|
||||
! cat $data/segments | \
|
||||
awk '{if (NF != 4 || $4 <= $3) { print "Bad line in segments file", $0; exit(1); }}' && \
|
||||
echo "$0: badly formatted segments file" && exit 1;
|
||||
|
||||
segments_len=`cat $data/segments | wc -l`
|
||||
if [ -f $data/text ]; then
|
||||
! cmp -s $tmpdir/utts <(awk '{print $1}' <$data/segments) && \
|
||||
echo "$0: Utterance list differs between $data/utt2spk and $data/segments " && \
|
||||
echo "$0: Lengths are $segments_len vs $num_utts" && \
|
||||
exit 1
|
||||
fi
|
||||
|
||||
cat $data/segments | awk '{print $2}' | sort | uniq > $tmpdir/recordings
|
||||
awk '{print $1}' $data/wav.scp > $tmpdir/recordings.wav
|
||||
if ! cmp -s $tmpdir/recordings{,.wav}; then
|
||||
echo "$0: Error: in $data, recording-ids extracted from segments and wav.scp"
|
||||
echo "$0: differ, partial diff is:"
|
||||
partial_diff $tmpdir/recordings{,.wav}
|
||||
exit 1;
|
||||
fi
|
||||
if [ -f $data/reco2file_and_channel ]; then
|
||||
# this file is needed only for ctm scoring; it's indexed by recording-id.
|
||||
check_sorted_and_uniq $data/reco2file_and_channel
|
||||
! cat $data/reco2file_and_channel | \
|
||||
awk '{if (NF != 3 || ($3 != "A" && $3 != "B" )) {
|
||||
if ( NF == 3 && $3 == "1" ) {
|
||||
warning_issued = 1;
|
||||
} else {
|
||||
print "Bad line ", $0; exit 1;
|
||||
}
|
||||
}
|
||||
}
|
||||
END {
|
||||
if (warning_issued == 1) {
|
||||
print "The channel should be marked as A or B, not 1! You should change it ASAP! "
|
||||
}
|
||||
}' && echo "$0: badly formatted reco2file_and_channel file" && exit 1;
|
||||
cat $data/reco2file_and_channel | awk '{print $1}' > $tmpdir/recordings.r2fc
|
||||
if ! cmp -s $tmpdir/recordings{,.r2fc}; then
|
||||
echo "$0: Error: in $data, recording-ids extracted from segments and reco2file_and_channel"
|
||||
echo "$0: differ, partial diff is:"
|
||||
partial_diff $tmpdir/recordings{,.r2fc}
|
||||
exit 1;
|
||||
fi
|
||||
fi
|
||||
else
|
||||
# No segments file -> assume wav.scp indexed by utterance.
|
||||
cat $data/wav.scp | awk '{print $1}' > $tmpdir/utts.wav
|
||||
if ! cmp -s $tmpdir/utts{,.wav}; then
|
||||
echo "$0: Error: in $data, utterance lists extracted from utt2spk and wav.scp"
|
||||
echo "$0: differ, partial diff is:"
|
||||
partial_diff $tmpdir/utts{,.wav}
|
||||
exit 1;
|
||||
fi
|
||||
|
||||
if [ -f $data/reco2file_and_channel ]; then
|
||||
# this file is needed only for ctm scoring; it's indexed by recording-id.
|
||||
check_sorted_and_uniq $data/reco2file_and_channel
|
||||
! cat $data/reco2file_and_channel | \
|
||||
awk '{if (NF != 3 || ($3 != "A" && $3 != "B" )) {
|
||||
if ( NF == 3 && $3 == "1" ) {
|
||||
warning_issued = 1;
|
||||
} else {
|
||||
print "Bad line ", $0; exit 1;
|
||||
}
|
||||
}
|
||||
}
|
||||
END {
|
||||
if (warning_issued == 1) {
|
||||
print "The channel should be marked as A or B, not 1! You should change it ASAP! "
|
||||
}
|
||||
}' && echo "$0: badly formatted reco2file_and_channel file" && exit 1;
|
||||
cat $data/reco2file_and_channel | awk '{print $1}' > $tmpdir/utts.r2fc
|
||||
if ! cmp -s $tmpdir/utts{,.r2fc}; then
|
||||
echo "$0: Error: in $data, utterance-ids extracted from segments and reco2file_and_channel"
|
||||
echo "$0: differ, partial diff is:"
|
||||
partial_diff $tmpdir/utts{,.r2fc}
|
||||
exit 1;
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ ! -f $data/feats.scp ] && ! $no_feats; then
|
||||
echo "$0: no such file $data/feats.scp (if this is by design, specify --no-feats)"
|
||||
exit 1;
|
||||
fi
|
||||
|
||||
if [ -f $data/feats.scp ]; then
|
||||
check_sorted_and_uniq $data/feats.scp
|
||||
cat $data/feats.scp | awk '{print $1}' > $tmpdir/utts.feats
|
||||
if ! cmp -s $tmpdir/utts{,.feats}; then
|
||||
echo "$0: Error: in $data, utterance-ids extracted from utt2spk and features"
|
||||
echo "$0: differ, partial diff is:"
|
||||
partial_diff $tmpdir/utts{,.feats}
|
||||
exit 1;
|
||||
fi
|
||||
fi
|
||||
|
||||
|
||||
if [ -f $data/cmvn.scp ]; then
|
||||
check_sorted_and_uniq $data/cmvn.scp
|
||||
cat $data/cmvn.scp | awk '{print $1}' > $tmpdir/speakers.cmvn
|
||||
cat $data/spk2utt | awk '{print $1}' > $tmpdir/speakers
|
||||
if ! cmp -s $tmpdir/speakers{,.cmvn}; then
|
||||
echo "$0: Error: in $data, speaker lists extracted from spk2utt and cmvn"
|
||||
echo "$0: differ, partial diff is:"
|
||||
partial_diff $tmpdir/speakers{,.cmvn}
|
||||
exit 1;
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ -f $data/spk2gender ]; then
|
||||
check_sorted_and_uniq $data/spk2gender
|
||||
! cat $data/spk2gender | awk '{if (!((NF == 2 && ($2 == "m" || $2 == "f")))) exit 1; }' && \
|
||||
echo "$0: Mal-formed spk2gender file" && exit 1;
|
||||
cat $data/spk2gender | awk '{print $1}' > $tmpdir/speakers.spk2gender
|
||||
cat $data/spk2utt | awk '{print $1}' > $tmpdir/speakers
|
||||
if ! cmp -s $tmpdir/speakers{,.spk2gender}; then
|
||||
echo "$0: Error: in $data, speaker lists extracted from spk2utt and spk2gender"
|
||||
echo "$0: differ, partial diff is:"
|
||||
partial_diff $tmpdir/speakers{,.spk2gender}
|
||||
exit 1;
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ -f $data/spk2warp ]; then
|
||||
check_sorted_and_uniq $data/spk2warp
|
||||
! cat $data/spk2warp | awk '{if (!((NF == 2 && ($2 > 0.5 && $2 < 1.5)))){ print; exit 1; }}' && \
|
||||
echo "$0: Mal-formed spk2warp file" && exit 1;
|
||||
cat $data/spk2warp | awk '{print $1}' > $tmpdir/speakers.spk2warp
|
||||
cat $data/spk2utt | awk '{print $1}' > $tmpdir/speakers
|
||||
if ! cmp -s $tmpdir/speakers{,.spk2warp}; then
|
||||
echo "$0: Error: in $data, speaker lists extracted from spk2utt and spk2warp"
|
||||
echo "$0: differ, partial diff is:"
|
||||
partial_diff $tmpdir/speakers{,.spk2warp}
|
||||
exit 1;
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ -f $data/utt2warp ]; then
|
||||
check_sorted_and_uniq $data/utt2warp
|
||||
! cat $data/utt2warp | awk '{if (!((NF == 2 && ($2 > 0.5 && $2 < 1.5)))){ print; exit 1; }}' && \
|
||||
echo "$0: Mal-formed utt2warp file" && exit 1;
|
||||
cat $data/utt2warp | awk '{print $1}' > $tmpdir/utts.utt2warp
|
||||
cat $data/utt2spk | awk '{print $1}' > $tmpdir/utts
|
||||
if ! cmp -s $tmpdir/utts{,.utt2warp}; then
|
||||
echo "$0: Error: in $data, utterance lists extracted from utt2spk and utt2warp"
|
||||
echo "$0: differ, partial diff is:"
|
||||
partial_diff $tmpdir/utts{,.utt2warp}
|
||||
exit 1;
|
||||
fi
|
||||
fi
|
||||
|
||||
# check some optionally-required things
|
||||
for f in vad.scp utt2lang utt2uniq; do
|
||||
if [ -f $data/$f ]; then
|
||||
check_sorted_and_uniq $data/$f
|
||||
if ! cmp -s <( awk '{print $1}' $data/utt2spk ) \
|
||||
<( awk '{print $1}' $data/$f ); then
|
||||
echo "$0: error: in $data, $f and utt2spk do not have identical utterance-id list"
|
||||
exit 1;
|
||||
fi
|
||||
fi
|
||||
done
|
||||
|
||||
|
||||
if [ -f $data/utt2dur ]; then
|
||||
check_sorted_and_uniq $data/utt2dur
|
||||
cat $data/utt2dur | awk '{print $1}' > $tmpdir/utts.utt2dur
|
||||
if ! cmp -s $tmpdir/utts{,.utt2dur}; then
|
||||
echo "$0: Error: in $data, utterance-ids extracted from utt2spk and utt2dur file"
|
||||
echo "$0: differ, partial diff is:"
|
||||
partial_diff $tmpdir/utts{,.utt2dur}
|
||||
exit 1;
|
||||
fi
|
||||
cat $data/utt2dur | \
|
||||
awk '{ if (NF != 2 || !($2 > 0)) { print "Bad line utt2dur:" NR ":" $0; exit(1) }}' || exit 1
|
||||
fi
|
||||
|
||||
if [ -f $data/utt2num_frames ]; then
|
||||
check_sorted_and_uniq $data/utt2num_frames
|
||||
cat $data/utt2num_frames | awk '{print $1}' > $tmpdir/utts.utt2num_frames
|
||||
if ! cmp -s $tmpdir/utts{,.utt2num_frames}; then
|
||||
echo "$0: Error: in $data, utterance-ids extracted from utt2spk and utt2num_frames file"
|
||||
echo "$0: differ, partial diff is:"
|
||||
partial_diff $tmpdir/utts{,.utt2num_frames}
|
||||
exit 1
|
||||
fi
|
||||
awk <$data/utt2num_frames '{
|
||||
if (NF != 2 || !($2 > 0) || $2 != int($2)) {
|
||||
print "Bad line utt2num_frames:" NR ":" $0
|
||||
exit 1 } }' || exit 1
|
||||
fi
|
||||
|
||||
if [ -f $data/reco2dur ]; then
|
||||
check_sorted_and_uniq $data/reco2dur
|
||||
cat $data/reco2dur | awk '{print $1}' > $tmpdir/recordings.reco2dur
|
||||
if [ -f $tmpdir/recordings ]; then
|
||||
if ! cmp -s $tmpdir/recordings{,.reco2dur}; then
|
||||
echo "$0: Error: in $data, recording-ids extracted from segments and reco2dur file"
|
||||
echo "$0: differ, partial diff is:"
|
||||
partial_diff $tmpdir/recordings{,.reco2dur}
|
||||
exit 1;
|
||||
fi
|
||||
else
|
||||
if ! cmp -s $tmpdir/{utts,recordings.reco2dur}; then
|
||||
echo "$0: Error: in $data, recording-ids extracted from wav.scp and reco2dur file"
|
||||
echo "$0: differ, partial diff is:"
|
||||
partial_diff $tmpdir/{utts,recordings.reco2dur}
|
||||
exit 1;
|
||||
fi
|
||||
fi
|
||||
cat $data/reco2dur | \
|
||||
awk '{ if (NF != 2 || !($2 > 0)) { print "Bad line : " $0; exit(1) }}' || exit 1
|
||||
fi
|
||||
|
||||
|
||||
echo "$0: Successfully validated data-directory $data"
|
||||
136
egs/alimeeting/sa-asr/local/validate_text.pl
Executable file
@ -0,0 +1,136 @@
|
||||
#!/usr/bin/env perl
|
||||
#
|
||||
#===============================================================================
|
||||
# Copyright 2017 Johns Hopkins University (author: Yenda Trmal <jtrmal@gmail.com>)
|
||||
# Johns Hopkins University (author: Daniel Povey)
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# THIS CODE IS PROVIDED *AS IS* BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
|
||||
# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY IMPLIED
|
||||
# WARRANTIES OR CONDITIONS OF TITLE, FITNESS FOR A PARTICULAR PURPOSE,
|
||||
# MERCHANTABLITY OR NON-INFRINGEMENT.
|
||||
# See the Apache 2 License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
#===============================================================================
|
||||
|
||||
# validation script for data/<dataset>/text
|
||||
# to be called (preferably) from utils/validate_data_dir.sh
|
||||
use strict;
|
||||
use warnings;
|
||||
use utf8;
|
||||
use Fcntl qw< SEEK_SET >;
|
||||
|
||||
# this function reads the opened file (supplied as a first
|
||||
# parameter) into an array of lines. For each
|
||||
# line, it tests whether it's a valid utf-8 compatible
|
||||
# line. If all lines are valid utf-8, it returns the lines
|
||||
# decoded as utf-8, otherwise it assumes the file's encoding
|
||||
# is one of those 1-byte encodings, such as ISO-8859-x
|
||||
# or Windows CP-X.
|
||||
# Please recall we do not really care about
|
||||
# the actually encoding, we just need to
|
||||
# make sure the length of the (decoded) string
|
||||
# is correct (to make the output formatting looking right).
|
||||
sub get_utf8_or_bytestream {
|
||||
use Encode qw(decode encode);
|
||||
my $is_utf_compatible = 1;
|
||||
my @unicode_lines;
|
||||
my @raw_lines;
|
||||
my $raw_text;
|
||||
my $lineno = 0;
|
||||
my $file = shift;
|
||||
|
||||
while (<$file>) {
|
||||
$raw_text = $_;
|
||||
last unless $raw_text;
|
||||
if ($is_utf_compatible) {
|
||||
my $decoded_text = eval { decode("UTF-8", $raw_text, Encode::FB_CROAK) } ;
|
||||
$is_utf_compatible = $is_utf_compatible && defined($decoded_text);
|
||||
push @unicode_lines, $decoded_text;
|
||||
} else {
|
||||
#print STDERR "WARNING: the line $raw_text cannot be interpreted as UTF-8: $decoded_text\n";
|
||||
;
|
||||
}
|
||||
push @raw_lines, $raw_text;
|
||||
$lineno += 1;
|
||||
}
|
||||
|
||||
if (!$is_utf_compatible) {
|
||||
return (0, @raw_lines);
|
||||
} else {
|
||||
return (1, @unicode_lines);
|
||||
}
|
||||
}
|
||||
|
||||
# check if the given unicode string contain unicode whitespaces
|
||||
# other than the usual four: TAB, LF, CR and SPACE
|
||||
sub validate_utf8_whitespaces {
|
||||
my $unicode_lines = shift;
|
||||
use feature 'unicode_strings';
|
||||
for (my $i = 0; $i < scalar @{$unicode_lines}; $i++) {
|
||||
my $current_line = $unicode_lines->[$i];
|
||||
if ((substr $current_line, -1) ne "\n"){
|
||||
print STDERR "$0: The current line (nr. $i) has invalid newline\n";
|
||||
return 1;
|
||||
}
|
||||
my @A = split(" ", $current_line);
|
||||
my $utt_id = $A[0];
|
||||
# we replace TAB, LF, CR, and SPACE
|
||||
# this is to simplify the test
|
||||
if ($current_line =~ /\x{000d}/) {
|
||||
print STDERR "$0: The line for utterance $utt_id contains CR (0x0D) character\n";
|
||||
return 1;
|
||||
}
|
||||
$current_line =~ s/[\x{0009}\x{000a}\x{0020}]/./g;
|
||||
if ($current_line =~/\s/) {
|
||||
print STDERR "$0: The line for utterance $utt_id contains disallowed Unicode whitespaces\n";
|
||||
return 1;
|
||||
}
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
# checks if the text in the file (supplied as the argument) is utf-8 compatible
|
||||
# if yes, checks if it contains only allowed whitespaces. If no, then does not
|
||||
# do anything. The function seeks to the original position in the file after
|
||||
# reading the text.
|
||||
sub check_allowed_whitespace {
|
||||
my $file = shift;
|
||||
my $filename = shift;
|
||||
my $pos = tell($file);
|
||||
(my $is_utf, my @lines) = get_utf8_or_bytestream($file);
|
||||
seek($file, $pos, SEEK_SET);
|
||||
if ($is_utf) {
|
||||
my $has_invalid_whitespaces = validate_utf8_whitespaces(\@lines);
|
||||
if ($has_invalid_whitespaces) {
|
||||
print STDERR "$0: ERROR: text file '$filename' contains disallowed UTF-8 whitespace character(s)\n";
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
return 1;
|
||||
}
|
||||
|
||||
if(@ARGV != 1) {
|
||||
die "Usage: validate_text.pl <text-file>\n" .
|
||||
"e.g.: validate_text.pl data/train/text\n";
|
||||
}
|
||||
|
||||
my $text = shift @ARGV;
|
||||
|
||||
if (-z "$text") {
|
||||
print STDERR "$0: ERROR: file '$text' is empty or does not exist\n";
|
||||
exit 1;
|
||||
}
|
||||
|
||||
if(!open(FILE, "<$text")) {
|
||||
print STDERR "$0: ERROR: failed to open $text\n";
|
||||
exit 1;
|
||||
}
|
||||
|
||||
check_allowed_whitespace(\*FILE, $text) or exit 1;
|
||||
close(FILE);
|
||||
5
egs/alimeeting/sa-asr/path.sh
Executable file
@ -0,0 +1,5 @@
export FUNASR_DIR=$PWD/../../..

# NOTE(kan-bayashi): Use UTF-8 in Python to avoid UnicodeDecodeError when LC_ALL=C
export PYTHONIOENCODING=UTF-8
export PATH=$FUNASR_DIR/funasr/bin:$PATH
50
egs/alimeeting/sa-asr/run.sh
Executable file
@ -0,0 +1,50 @@
#!/usr/bin/env bash
# Set bash to 'debug' mode, it will exit on :
# -e 'error', -u 'undefined variable', -o ... 'error in pipeline', -x 'print commands',
set -e
set -u
set -o pipefail

ngpu=4
device="0,1,2,3"

stage=1
stop_stage=18


train_set=Train_Ali_far
valid_set=Eval_Ali_far
test_sets="Test_Ali_far"
asr_config=conf/train_asr_conformer.yaml
sa_asr_config=conf/train_sa_asr_conformer.yaml
inference_config=conf/decode_asr_rnn.yaml

lm_config=conf/train_lm_transformer.yaml
use_lm=false
use_wordlm=false
./asr_local.sh \
    --device ${device} \
    --ngpu ${ngpu} \
    --stage ${stage} \
    --stop_stage ${stop_stage} \
    --gpu_inference true \
    --njob_infer 4 \
    --asr_exp exp/asr_train_multispeaker_conformer_raw_zh_char_data_alimeeting \
    --sa_asr_exp exp/sa_asr_train_conformer_raw_zh_char_data_alimeeting \
    --asr_stats_dir exp/asr_stats_multispeaker_conformer_raw_zh_char_data_alimeeting \
    --lm_exp exp/lm_train_multispeaker_transformer_zh_char_data_alimeeting \
    --lm_stats_dir exp/lm_stats_multispeaker_zh_char_data_alimeeting \
    --lang zh \
    --audio_format wav \
    --feats_type raw \
    --token_type char \
    --use_lm ${use_lm} \
    --use_word_lm ${use_wordlm} \
    --lm_config "${lm_config}" \
    --asr_config "${asr_config}" \
    --sa_asr_config "${sa_asr_config}" \
    --inference_config "${inference_config}" \
    --train_set "${train_set}" \
    --valid_set "${valid_set}" \
    --test_sets "${test_sets}" \
    --lm_train_text "data/${train_set}/text" "$@"
50
egs/alimeeting/sa-asr/run_m2met_2023_infer.sh
Executable file
@ -0,0 +1,50 @@
#!/usr/bin/env bash
# Set bash to 'debug' mode, it will exit on :
# -e 'error', -u 'undefined variable', -o ... 'error in pipeline', -x 'print commands',
set -e
set -u
set -o pipefail

ngpu=4
device="0,1,2,3"

stage=1
stop_stage=4


train_set=Train_Ali_far
valid_set=Eval_Ali_far
test_sets="Test_2023_Ali_far"
asr_config=conf/train_asr_conformer.yaml
sa_asr_config=conf/train_sa_asr_conformer.yaml
inference_config=conf/decode_asr_rnn.yaml

lm_config=conf/train_lm_transformer.yaml
use_lm=false
use_wordlm=false
./asr_local_m2met_2023_infer.sh \
    --device ${device} \
    --ngpu ${ngpu} \
    --stage ${stage} \
    --stop_stage ${stop_stage} \
    --gpu_inference true \
    --njob_infer 4 \
    --asr_exp exp/asr_train_multispeaker_conformer_raw_zh_char_data_alimeeting \
    --sa_asr_exp exp/sa_asr_train_conformer_raw_zh_char_data_alimeeting \
    --asr_stats_dir exp/asr_stats_multispeaker_conformer_raw_zh_char_data_alimeeting \
    --lm_exp exp/lm_train_multispeaker_transformer_zh_char_data_alimeeting \
    --lm_stats_dir exp/lm_stats_multispeaker_zh_char_data_alimeeting \
    --lang zh \
    --audio_format wav \
    --feats_type raw \
    --token_type char \
    --use_lm ${use_lm} \
    --use_word_lm ${use_wordlm} \
    --lm_config "${lm_config}" \
    --asr_config "${asr_config}" \
    --sa_asr_config "${sa_asr_config}" \
    --inference_config "${inference_config}" \
    --train_set "${train_set}" \
    --valid_set "${valid_set}" \
    --test_sets "${test_sets}" \
    --lm_train_text "data/${train_set}/text" "$@"
1
egs/alimeeting/sa-asr/utils
Symbolic link
@ -0,0 +1 @@
../../aishell/transformer/utils
@ -1,7 +1,7 @@
# Speech Recognition

> **Note**:
> The modelscope pipeline supports all the models in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_models.html#pretrained-models-on-modelscope) to inference and finetine. Here we take the typic models as examples to demonstrate the usage.
> The modelscope pipeline supports all the models in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/model_zoo/modelscope_models.html#pretrained-models-on-modelscope) to inference and finetine. Here we take the typic models as examples to demonstrate the usage.

## Inference

@ -19,22 +19,24 @@ inference_pipeline = pipeline(
rec_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav')
print(rec_result)
```
#### [Paraformer-online Model](https://www.modelscope.cn/models/damo/speech_paraformer_asr_nat-zh-cn-16k-common-vocab8404-online/summary)
#### [Paraformer-online Model](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online/summary)
```python
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer_asr_nat-zh-cn-16k-common-vocab8404-online',
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online',
    model_revision='v1.0.4'
)
import soundfile
speech, sample_rate = soundfile.read("example/asr_example.wav")

param_dict = {"cache": dict(), "is_final": False}
chunk_stride = 7680 # 480ms
# first chunk, 480ms
chunk_size = [5, 10, 5] # [5, 10, 5] 600ms, [8, 8, 4] 480ms
param_dict = {"cache": dict(), "is_final": False, "chunk_size": chunk_size}
chunk_stride = chunk_size[1] * 960 # 600ms、480ms
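# note: chunk_size[1] * 960 = 10 * 960 = 9600 samples per chunk, i.e. 600 ms at 16 kHz;
# the [8, 8, 4] setting would give 8 * 960 = 7680 samples, i.e. 480 ms per chunk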
# first chunk, 600ms
speech_chunk = speech[0:chunk_stride]
rec_result = inference_pipeline(audio_in=speech_chunk, param_dict=param_dict)
print(rec_result)
# next chunk, 480ms
# next chunk, 600ms
speech_chunk = speech[chunk_stride:chunk_stride+chunk_stride]
rec_result = inference_pipeline(audio_in=speech_chunk, param_dict=param_dict)
print(rec_result)
@ -42,7 +44,7 @@ print(rec_result)
Full code of demo, please ref to [demo](https://github.com/alibaba-damo-academy/FunASR/discussions/241)

#### [UniASR Model](https://www.modelscope.cn/models/damo/speech_UniASR_asr_2pass-zh-cn-8k-common-vocab3445-pytorch-online/summary)
There are three decoding mode for UniASR model(`fast`、`normal`、`offline`), for more model detailes, please refer to [docs](https://www.modelscope.cn/models/damo/speech_UniASR_asr_2pass-zh-cn-8k-common-vocab3445-pytorch-online/summary)
There are three decoding mode for UniASR model(`fast`、`normal`、`offline`), for more model details, please refer to [docs](https://www.modelscope.cn/models/damo/speech_UniASR_asr_2pass-zh-cn-8k-common-vocab3445-pytorch-online/summary)
```python
decoding_model = "fast" # "fast"、"normal"、"offline"
inference_pipeline = pipeline(
@ -59,7 +61,7 @@ Full code of demo, please ref to [demo](https://github.com/alibaba-damo-academy/
Undo

#### [MFCCA Model](https://www.modelscope.cn/models/NPU-ASLP/speech_mfcca_asr-zh-cn-16k-alimeeting-vocab4950/summary)
For more model detailes, please refer to [docs](https://www.modelscope.cn/models/NPU-ASLP/speech_mfcca_asr-zh-cn-16k-alimeeting-vocab4950/summary)
For more model details, please refer to [docs](https://www.modelscope.cn/models/NPU-ASLP/speech_mfcca_asr-zh-cn-16k-alimeeting-vocab4950/summary)
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
@ -74,15 +76,15 @@ rec_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyu
print(rec_result)
```

#### API-reference
##### Define pipeline
### API-reference
#### Define pipeline
- `task`: `Tasks.auto_speech_recognition`
- `model`: model name in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_models.html#pretrained-models-on-modelscope), or model path in local disk
- `model`: model name in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/model_zoo/modelscope_models.html#pretrained-models-on-modelscope), or model path in local disk
- `ngpu`: `1` (Default), decoding on GPU. If ngpu=0, decoding on CPU
- `ncpu`: `1` (Default), sets the number of threads used for intraop parallelism on CPU
- `output_dir`: `None` (Default), the output path of results if set
- `batch_size`: `1` (Default), batch size when decoding
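Taken together, these options map directly onto the pipeline constructor. A minimal sketch is shown below; the model name is the Paraformer-large model used in the recipes further down, and the output path and wav file are illustrative, not prescribed values:

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# minimal sketch built from the "Define pipeline" parameters listed above;
# model, output_dir and audio_in are illustrative choices
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model="damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch",
    ngpu=1,       # decode on GPU; set ngpu=0 for CPU decoding
    ncpu=1,       # number of CPU threads for intraop parallelism
    output_dir="./results",
    batch_size=1,
)
rec_result = inference_pipeline(audio_in="asr_example.wav")
print(rec_result)
```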
##### Infer pipeline
#### Infer pipeline
- `audio_in`: the input to decode, which could be:
  - wav_path, `e.g.`: asr_example.wav,
  - pcm_path, `e.g.`: asr_example.pcm,
@ -100,20 +102,20 @@ print(rec_result)
### Inference with multi-thread CPUs or multi GPUs
FunASR also offer recipes [egs_modelscope/asr/TEMPLATE/infer.sh](https://github.com/alibaba-damo-academy/FunASR/blob/main/egs_modelscope/asr/TEMPLATE/infer.sh) to decode with multi-thread CPUs, or multi GPUs.

- Setting parameters in `infer.sh`
    - `model`: model name in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_models.html#pretrained-models-on-modelscope), or model path in local disk
    - `data_dir`: the dataset dir needs to include `wav.scp`. If `${data_dir}/text` is also exists, CER will be computed
    - `output_dir`: output dir of the recognition results
    - `batch_size`: `64` (Default), batch size of inference on gpu
    - `gpu_inference`: `true` (Default), whether to perform gpu decoding, set false for CPU inference
    - `gpuid_list`: `0,1` (Default), which gpu_ids are used to infer
    - `njob`: only used for CPU inference (`gpu_inference`=`false`), `64` (Default), the number of jobs for CPU decoding
    - `checkpoint_dir`: only used for infer finetuned models, the path dir of finetuned models
    - `checkpoint_name`: only used for infer finetuned models, `valid.cer_ctc.ave.pb` (Default), which checkpoint is used to infer
    - `decoding_mode`: `normal` (Default), decoding mode for UniASR model(fast、normal、offline)
    - `hotword_txt`: `None` (Default), hotword file for contextual paraformer model(the hotword file name ends with .txt")
#### Settings of `infer.sh`
- `model`: model name in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/model_zoo/modelscope_models.html#pretrained-models-on-modelscope), or model path in local disk
- `data_dir`: the dataset dir needs to include `wav.scp`. If `${data_dir}/text` is also exists, CER will be computed
- `output_dir`: output dir of the recognition results
- `batch_size`: `64` (Default), batch size of inference on gpu
- `gpu_inference`: `true` (Default), whether to perform gpu decoding, set false for CPU inference
- `gpuid_list`: `0,1` (Default), which gpu_ids are used to infer
- `njob`: only used for CPU inference (`gpu_inference`=`false`), `64` (Default), the number of jobs for CPU decoding
- `checkpoint_dir`: only used for infer finetuned models, the path dir of finetuned models
- `checkpoint_name`: only used for infer finetuned models, `valid.cer_ctc.ave.pb` (Default), which checkpoint is used to infer
- `decoding_mode`: `normal` (Default), decoding mode for UniASR model(fast、normal、offline)
- `hotword_txt`: `None` (Default), hotword file for contextual paraformer model(the hotword file name ends with .txt")

- Decode with multi GPUs:
#### Decode with multi GPUs:
```shell
bash infer.sh \
    --model "damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch" \
@ -123,7 +125,7 @@ FunASR also offer recipes [egs_modelscope/asr/TEMPLATE/infer.sh](https://github.
    --gpu_inference true \
    --gpuid_list "0,1"
```
- Decode with multi-thread CPUs:
#### Decode with multi-thread CPUs:
```shell
bash infer.sh \
    --model "damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch" \
@ -133,7 +135,7 @@ FunASR also offer recipes [egs_modelscope/asr/TEMPLATE/infer.sh](https://github.
    --njob 64
```

- Results
#### Results

The decoding results can be found in `$output_dir/1best_recog/text.cer`, which includes recognition results of each sample and the CER metric of the whole test set.
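For a quick look at the aggregate CER, the summary sits in the last few lines of that file (the recipe itself prints them with `tail -n 3`). A minimal sketch, assuming the default `./results` output path used in the examples above:

```python
# print the aggregate CER summary at the end of text.cer (path is illustrative)
with open("./results/1best_recog/text.cer", encoding="utf-8") as f:
    print("".join(f.readlines()[-3:]))
```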
@ -1,30 +0,0 @@
|
||||
# ModelScope Model
|
||||
|
||||
## How to finetune and infer using a pretrained Paraformer-large Model
|
||||
|
||||
### Finetune
|
||||
|
||||
- Modify finetune training related parameters in `finetune.py`
|
||||
- <strong>output_dir:</strong> # result dir
|
||||
- <strong>data_dir:</strong> # the dataset dir needs to include files: train/wav.scp, train/text; validation/wav.scp, validation/text.
|
||||
- <strong>batch_bins:</strong> # batch size
|
||||
- <strong>max_epoch:</strong> # number of training epoch
|
||||
- <strong>lr:</strong> # learning rate
|
||||
|
||||
- Then you can run the pipeline to finetune with:
|
||||
```python
|
||||
python finetune.py
|
||||
```
|
||||
|
||||
### Inference
|
||||
|
||||
Or you can use the finetuned model for inference directly.
|
||||
|
||||
- Setting parameters in `infer.py`
|
||||
- <strong>audio_in:</strong> # support wav, url, bytes, and parsed audio format.
|
||||
- <strong>output_dir:</strong> # If the input format is wav.scp, it needs to be set.
|
||||
|
||||
- Then you can run the pipeline to infer with:
|
||||
```python
|
||||
python infer.py
|
||||
```
|
||||
@ -0,0 +1 @@
|
||||
../../TEMPLATE/README.md
|
||||
@ -0,0 +1,14 @@
|
||||
from modelscope.pipelines import pipeline
|
||||
from modelscope.utils.constant import Tasks
|
||||
|
||||
if __name__ == '__main__':
|
||||
audio_in = 'https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav'
|
||||
output_dir = None
|
||||
inference_pipeline = pipeline(
|
||||
task=Tasks.auto_speech_recognition,
|
||||
model="damo/speech_conformer_asr_nat-zh-cn-16k-aishell1-vocab4234-pytorch",
|
||||
output_dir=output_dir,
|
||||
)
|
||||
rec_result = inference_pipeline(audio_in=audio_in)
|
||||
print(rec_result)
|
||||
|
||||
@ -1,14 +0,0 @@
|
||||
from modelscope.pipelines import pipeline
|
||||
from modelscope.utils.constant import Tasks
|
||||
|
||||
if __name__ == '__main__':
|
||||
audio_in = 'https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav'
|
||||
output_dir = None
|
||||
inference_pipline = pipeline(
|
||||
task=Tasks.auto_speech_recognition,
|
||||
model="damo/speech_conformer_asr_nat-zh-cn-16k-aishell1-vocab4234-pytorch",
|
||||
output_dir=output_dir,
|
||||
)
|
||||
rec_result = inference_pipline(audio_in=audio_in)
|
||||
print(rec_result)
|
||||
|
||||
@ -0,0 +1 @@
|
||||
../../TEMPLATE/infer.py
|
||||
@ -0,0 +1 @@
|
||||
../../TEMPLATE/infer.sh
|
||||
@ -0,0 +1 @@
|
||||
../../TEMPLATE/README.md
|
||||
@ -0,0 +1,13 @@
|
||||
from modelscope.pipelines import pipeline
|
||||
from modelscope.utils.constant import Tasks
|
||||
|
||||
if __name__ == "__main__":
|
||||
audio_in = "https://modelscope.oss-cn-beijing.aliyuncs.com/test/audios/asr_example.wav"
|
||||
output_dir = "./results"
|
||||
inference_pipeline = pipeline(
|
||||
task=Tasks.auto_speech_recognition,
|
||||
model="damo/speech_conformer_asr_nat-zh-cn-16k-aishell2-vocab5212-pytorch",
|
||||
output_dir=output_dir,
|
||||
)
|
||||
rec_result = inference_pipeline(audio_in=audio_in)
|
||||
print(rec_result)
|
||||
@ -1,13 +0,0 @@
|
||||
from modelscope.pipelines import pipeline
|
||||
from modelscope.utils.constant import Tasks
|
||||
|
||||
if __name__ == "__main__":
|
||||
audio_in = "https://modelscope.oss-cn-beijing.aliyuncs.com/test/audios/asr_example.wav"
|
||||
output_dir = "./results"
|
||||
inference_pipline = pipeline(
|
||||
task=Tasks.auto_speech_recognition,
|
||||
model="damo/speech_conformer_asr_nat-zh-cn-16k-aishell2-vocab5212-pytorch",
|
||||
output_dir=output_dir,
|
||||
)
|
||||
rec_result = inference_pipline(audio_in=audio_in)
|
||||
print(rec_result)
|
||||
@ -0,0 +1 @@
|
||||
../../TEMPLATE/infer.py
|
||||
@ -0,0 +1 @@
|
||||
../../TEMPLATE/infer.sh
|
||||
@ -16,13 +16,13 @@ def modelscope_infer_core(output_dir, split_dir, njob, idx):
|
||||
os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_list[gpu_id])
|
||||
else:
|
||||
os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_id)
|
||||
inference_pipline = pipeline(
|
||||
inference_pipeline = pipeline(
|
||||
task=Tasks.auto_speech_recognition,
|
||||
model="damo/speech_data2vec_pretrain-paraformer-zh-cn-aishell2-16k",
|
||||
output_dir=output_dir_job,
|
||||
)
|
||||
audio_in = os.path.join(split_dir, "wav.{}.scp".format(idx))
|
||||
inference_pipline(audio_in=audio_in)
|
||||
inference_pipeline(audio_in=audio_in)
|
||||
|
||||
|
||||
def modelscope_infer(params):
|
||||
|
||||
@ -16,13 +16,13 @@ def modelscope_infer_core(output_dir, split_dir, njob, idx):
|
||||
os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_list[gpu_id])
|
||||
else:
|
||||
os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_id)
|
||||
inference_pipline = pipeline(
|
||||
inference_pipeline = pipeline(
|
||||
task=Tasks.auto_speech_recognition,
|
||||
model="damo/speech_data2vec_pretrain-zh-cn-aishell2-16k-pytorch",
|
||||
output_dir=output_dir_job,
|
||||
)
|
||||
audio_in = os.path.join(split_dir, "wav.{}.scp".format(idx))
|
||||
inference_pipline(audio_in=audio_in)
|
||||
inference_pipeline(audio_in=audio_in)
|
||||
|
||||
|
||||
def modelscope_infer(params):
|
||||
|
||||
@ -0,0 +1,11 @@
|
||||
from modelscope.pipelines import pipeline
|
||||
from modelscope.utils.constant import Tasks
|
||||
|
||||
inference_pipeline = pipeline(
|
||||
task=Tasks.auto_speech_recognition,
|
||||
model='NPU-ASLP/speech_mfcca_asr-zh-cn-16k-alimeeting-vocab4950',
|
||||
model_revision='v3.0.0'
|
||||
)
|
||||
|
||||
rec_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav')
|
||||
print(rec_result)
|
||||
@ -1,67 +0,0 @@
|
||||
import json
|
||||
import os
|
||||
import shutil
|
||||
|
||||
from modelscope.pipelines import pipeline
|
||||
from modelscope.utils.constant import Tasks
|
||||
|
||||
from funasr.utils.compute_wer import compute_wer
|
||||
|
||||
|
||||
def modelscope_infer_after_finetune(params):
|
||||
# prepare for decoding
|
||||
pretrained_model_path = os.path.join(os.environ["HOME"], ".cache/modelscope/hub", params["modelscope_model_name"])
|
||||
for file_name in params["required_files"]:
|
||||
if file_name == "configuration.json":
|
||||
with open(os.path.join(pretrained_model_path, file_name)) as f:
|
||||
config_dict = json.load(f)
|
||||
config_dict["model"]["am_model_name"] = params["decoding_model_name"]
|
||||
with open(os.path.join(params["output_dir"], "configuration.json"), "w") as f:
|
||||
json.dump(config_dict, f, indent=4, separators=(',', ': '))
|
||||
else:
|
||||
shutil.copy(os.path.join(pretrained_model_path, file_name),
|
||||
os.path.join(params["output_dir"], file_name))
|
||||
decoding_path = os.path.join(params["output_dir"], "decode_results")
|
||||
if os.path.exists(decoding_path):
|
||||
shutil.rmtree(decoding_path)
|
||||
os.mkdir(decoding_path)
|
||||
|
||||
# decoding
|
||||
inference_pipeline = pipeline(
|
||||
task=Tasks.auto_speech_recognition,
|
||||
model=params["output_dir"],
|
||||
output_dir=decoding_path,
|
||||
batch_size=1
|
||||
)
|
||||
audio_in = os.path.join(params["data_dir"], "wav.scp")
|
||||
inference_pipeline(audio_in=audio_in)
|
||||
|
||||
# computer CER if GT text is set
|
||||
text_in = os.path.join(params["data_dir"], "text")
|
||||
if text_in is not None:
|
||||
text_proc_file = os.path.join(decoding_path, "1best_recog/token")
|
||||
text_proc_file2 = os.path.join(decoding_path, "1best_recog/token_nosep")
|
||||
with open(text_proc_file, 'r') as hyp_reader:
|
||||
with open(text_proc_file2, 'w') as hyp_writer:
|
||||
for line in hyp_reader:
|
||||
new_context = line.strip().replace("src","").replace(" "," ").replace(" "," ").strip()
|
||||
hyp_writer.write(new_context+'\n')
|
||||
text_in2 = os.path.join(decoding_path, "1best_recog/ref_text_nosep")
|
||||
with open(text_in, 'r') as ref_reader:
|
||||
with open(text_in2, 'w') as ref_writer:
|
||||
for line in ref_reader:
|
||||
new_context = line.strip().replace("src","").replace(" "," ").replace(" "," ").strip()
|
||||
ref_writer.write(new_context+'\n')
|
||||
|
||||
|
||||
compute_wer(text_in, text_proc_file, os.path.join(decoding_path, "text.sp.cer"))
|
||||
compute_wer(text_in2, text_proc_file2, os.path.join(decoding_path, "text.nosp.cer"))
|
||||
|
||||
if __name__ == '__main__':
|
||||
params = {}
|
||||
params["modelscope_model_name"] = "NPU-ASLP/speech_mfcca_asr-zh-cn-16k-alimeeting-vocab4950"
|
||||
params["required_files"] = ["feats_stats.npz", "decoding.yaml", "configuration.json"]
|
||||
params["output_dir"] = "./checkpoint"
|
||||
params["data_dir"] = "./example_data/validation"
|
||||
params["decoding_model_name"] = "valid.acc.ave.pb"
|
||||
modelscope_infer_after_finetune(params)
|
||||
@ -1,19 +0,0 @@
|
||||
# ModelScope Model
|
||||
|
||||
## How to infer using a pretrained Paraformer-large Model
|
||||
|
||||
### Inference
|
||||
|
||||
You can use the pretrain model for inference directly.
|
||||
|
||||
- Setting parameters in `infer.py`
|
||||
- <strong>audio_in:</strong> # Support wav, url, bytes, and parsed audio format.
|
||||
- <strong>output_dir:</strong> # If the input format is wav.scp, it needs to be set.
|
||||
- <strong>batch_size:</strong> # Set batch size in inference.
|
||||
- <strong>param_dict:</strong> # Set the hotword list in inference.
|
||||
|
||||
- Then you can run the pipeline to infer with:
|
||||
```python
|
||||
python infer.py
|
||||
```
|
||||
|
||||
@ -0,0 +1 @@
|
||||
../../TEMPLATE/README.md
|
||||
@ -0,0 +1,37 @@
import os

from modelscope.metainfo import Trainers
from modelscope.trainers import build_trainer

from funasr.datasets.ms_dataset import MsDataset
from funasr.utils.modelscope_param import modelscope_args


def modelscope_finetune(params):
    if not os.path.exists(params.output_dir):
        os.makedirs(params.output_dir, exist_ok=True)
    # dataset split ["train", "validation"]
    ds_dict = MsDataset.load(params.data_path)
    kwargs = dict(
        model=params.model,
        model_revision="v1.0.2",
        data_dir=ds_dict,
        dataset_type=params.dataset_type,
        work_dir=params.output_dir,
        batch_bins=params.batch_bins,
        max_epoch=params.max_epoch,
        lr=params.lr)
    trainer = build_trainer(Trainers.speech_asr_trainer, default_args=kwargs)
    trainer.train()


if __name__ == '__main__':
    params = modelscope_args(model="damo/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404", data_path="./data")
    params.output_dir = "./checkpoint"    # directory where the finetuned model is saved
    params.data_path = "./example_data/"  # data path
    params.dataset_type = "large"         # finetuning the contextual paraformer model requires the large dataset type
    params.batch_bins = 200000            # batch size; with dataset_type="small" batch_bins counts fbank feature frames, with dataset_type="large" it counts milliseconds
    params.max_epoch = 20                 # maximum number of training epochs
    params.lr = 0.0002                    # learning rate

    modelscope_finetune(params)
@ -12,7 +12,7 @@ output_dir="./results"
|
||||
batch_size=64
|
||||
gpu_inference=true # whether to perform gpu decoding
|
||||
gpuid_list="0,1" # set gpus, e.g., gpuid_list="0,1"
|
||||
njob=64 # the number of jobs for CPU decoding, if gpu_inference=false, use CPU decoding, please set njob
|
||||
njob=10 # the number of jobs for CPU decoding, if gpu_inference=false, use CPU decoding, please set njob
|
||||
checkpoint_dir=
|
||||
checkpoint_name="valid.cer_ctc.ave.pb"
|
||||
hotword_txt=None
|
||||
@ -55,8 +55,8 @@ if [ $stage -le 1 ] && [ $stop_stage -ge 1 ];then
|
||||
--audio_in ${output_dir}/split/wav.$JOB.scp \
|
||||
--output_dir ${output_dir}/output.$JOB \
|
||||
--batch_size ${batch_size} \
|
||||
--gpuid ${gpuid} \
|
||||
--hotword_txt ${hotword_txt}
|
||||
--hotword_txt ${hotword_txt} \
|
||||
--gpuid ${gpuid}
|
||||
}&
|
||||
done
|
||||
wait
|
||||
|
||||
@ -19,11 +19,15 @@ if __name__ == '__main__':
|
||||
os.makedirs(work_dir)
|
||||
wav_file_path = os.path.join(work_dir, "wav.scp")
|
||||
|
||||
counter = 0
|
||||
with codecs.open(wav_file_path, 'w') as fin:
|
||||
for line in ds_dict:
|
||||
counter += 1
|
||||
wav = line["Audio:FILE"]
|
||||
idx = wav.split("/")[-1].split(".")[0]
|
||||
fin.writelines(idx + " " + wav + "\n")
|
||||
if counter == 50:
|
||||
break
|
||||
audio_in = wav_file_path
|
||||
|
||||
inference_pipeline = pipeline(
|
||||
|
||||
@ -0,0 +1,39 @@
|
||||
import os
|
||||
import logging
|
||||
import torch
|
||||
import soundfile
|
||||
|
||||
from modelscope.pipelines import pipeline
|
||||
from modelscope.utils.constant import Tasks
|
||||
from modelscope.utils.logger import get_logger
|
||||
|
||||
logger = get_logger(log_level=logging.CRITICAL)
|
||||
logger.setLevel(logging.CRITICAL)
|
||||
|
||||
os.environ["MODELSCOPE_CACHE"] = "./"
|
||||
inference_pipeline = pipeline(
|
||||
task=Tasks.auto_speech_recognition,
|
||||
model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online',
|
||||
model_revision='v1.0.4'
|
||||
)
|
||||
|
||||
model_dir = os.path.join(os.environ["MODELSCOPE_CACHE"], "damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online")
|
||||
speech, sample_rate = soundfile.read(os.path.join(model_dir, "example/asr_example.wav"))
|
||||
speech_length = speech.shape[0]
|
||||
|
||||
sample_offset = 0
|
||||
chunk_size = [5, 10, 5] #[5, 10, 5] 600ms, [8, 8, 4] 480ms
|
||||
stride_size = chunk_size[1] * 960
|
||||
param_dict = {"cache": dict(), "is_final": False, "chunk_size": chunk_size}
|
||||
final_result = ""
|
||||
|
||||
for sample_offset in range(0, speech_length, min(stride_size, speech_length - sample_offset)):
|
||||
if sample_offset + stride_size >= speech_length - 1:
|
||||
stride_size = speech_length - sample_offset
|
||||
param_dict["is_final"] = True
|
||||
rec_result = inference_pipeline(audio_in=speech[sample_offset: sample_offset + stride_size],
|
||||
param_dict=param_dict)
|
||||
if len(rec_result) != 0:
|
||||
final_result += rec_result['text'] + " "
|
||||
print(rec_result)
|
||||
print(final_result)
|
||||
@ -1,76 +0,0 @@
|
||||
# ModelScope Model
|
||||
|
||||
## How to finetune and infer using a pretrained Paraformer-large Model
|
||||
|
||||
### Finetune
|
||||
|
||||
- Modify finetune training related parameters in `finetune.py`
|
||||
- <strong>output_dir:</strong> # result dir
|
||||
- <strong>data_dir:</strong> # the dataset dir needs to include files: `train/wav.scp`, `train/text`; `validation/wav.scp`, `validation/text`
|
||||
- <strong>dataset_type:</strong> # for dataset larger than 1000 hours, set as `large`, otherwise set as `small`
|
||||
- <strong>batch_bins:</strong> # batch size. For dataset_type is `small`, `batch_bins` indicates the feature frames. For dataset_type is `large`, `batch_bins` indicates the duration in ms
|
||||
- <strong>max_epoch:</strong> # number of training epoch
|
||||
- <strong>lr:</strong> # learning rate
|
||||
|
||||
- Then you can run the pipeline to finetune with:
|
||||
```python
|
||||
python finetune.py
|
||||
```
|
||||
|
||||
### Inference
|
||||
|
||||
Or you can use the finetuned model for inference directly.
|
||||
|
||||
- Setting parameters in `infer.sh`
|
||||
- <strong>model:</strong> # model name on ModelScope
|
||||
- <strong>data_dir:</strong> # the dataset dir needs to include `${data_dir}/wav.scp`. If `${data_dir}/text` is also exists, CER will be computed
|
||||
- <strong>output_dir:</strong> # result dir
|
||||
- <strong>batch_size:</strong> # batchsize of inference
|
||||
- <strong>gpu_inference:</strong> # whether to perform gpu decoding, set false for cpu decoding
|
||||
- <strong>gpuid_list:</strong> # set gpus, e.g., gpuid_list="0,1"
|
||||
- <strong>njob:</strong> # the number of jobs for CPU decoding, if `gpu_inference`=false, use CPU decoding, please set `njob`
|
||||
|
||||
- Decode with multi GPUs:
|
||||
```shell
|
||||
bash infer.sh \
|
||||
--model "damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch" \
|
||||
--data_dir "./data/test" \
|
||||
--output_dir "./results" \
|
||||
--batch_size 64 \
|
||||
--gpu_inference true \
|
||||
--gpuid_list "0,1"
|
||||
```
|
||||
|
||||
- Decode with multi-thread CPUs:
|
||||
```shell
|
||||
bash infer.sh \
|
||||
--model "damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch" \
|
||||
--data_dir "./data/test" \
|
||||
--output_dir "./results" \
|
||||
--gpu_inference false \
|
||||
--njob 64
|
||||
```
|
||||
|
||||
- Results
|
||||
|
||||
The decoding results can be found in `${output_dir}/1best_recog/text.cer`, which includes recognition results of each sample and the CER metric of the whole test set.
|
||||
|
||||
If you decode the SpeechIO test sets, you can use textnorm with `stage`=3, and `DETAILS.txt`, `RESULTS.txt` record the results and CER after text normalization.
|
||||
|
||||
### Inference using local finetuned model
|
||||
|
||||
- Modify inference related parameters in `infer_after_finetune.py`
|
||||
- <strong>modelscope_model_name: </strong> # model name on ModelScope
|
||||
- <strong>output_dir:</strong> # result dir
|
||||
- <strong>data_dir:</strong> # the dataset dir needs to include `test/wav.scp`. If `test/text` is also exists, CER will be computed
|
||||
- <strong>decoding_model_name:</strong> # set the checkpoint name for decoding, e.g., `valid.cer_ctc.ave.pb`
|
||||
- <strong>batch_size:</strong> # batchsize of inference
|
||||
|
||||
- Then you can run the pipeline to finetune with:
|
||||
```python
|
||||
python infer_after_finetune.py
|
||||
```
|
||||
|
||||
- Results
|
||||
|
||||
The decoding results can be found in `$output_dir/decoding_results/text.cer`, which includes recognition results of each sample and the CER metric of the whole test set.
|
||||
@ -0,0 +1 @@
|
||||
../TEMPLATE/README.md
|
||||
@ -1,103 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -e
|
||||
set -u
|
||||
set -o pipefail
|
||||
|
||||
stage=1
|
||||
stop_stage=2
|
||||
model="damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
|
||||
data_dir="./data/test"
|
||||
output_dir="./results"
|
||||
batch_size=64
|
||||
gpu_inference=true # whether to perform gpu decoding
|
||||
gpuid_list="0,1" # set gpus, e.g., gpuid_list="0,1"
|
||||
njob=64 # the number of jobs for CPU decoding, if gpu_inference=false, use CPU decoding, please set njob
|
||||
checkpoint_dir=
|
||||
checkpoint_name="valid.cer_ctc.ave.pb"
|
||||
|
||||
. utils/parse_options.sh || exit 1;
|
||||
|
||||
if ${gpu_inference} == "true"; then
|
||||
nj=$(echo $gpuid_list | awk -F "," '{print NF}')
|
||||
else
|
||||
nj=$njob
|
||||
batch_size=1
|
||||
gpuid_list=""
|
||||
for JOB in $(seq ${nj}); do
|
||||
gpuid_list=$gpuid_list"-1,"
|
||||
done
|
||||
fi
|
||||
|
||||
mkdir -p $output_dir/split
|
||||
split_scps=""
|
||||
for JOB in $(seq ${nj}); do
|
||||
split_scps="$split_scps $output_dir/split/wav.$JOB.scp"
|
||||
done
|
||||
perl utils/split_scp.pl ${data_dir}/wav.scp ${split_scps}
|
||||
|
||||
if [ -n "${checkpoint_dir}" ]; then
|
||||
python utils/prepare_checkpoint.py ${model} ${checkpoint_dir} ${checkpoint_name}
|
||||
model=${checkpoint_dir}/${model}
|
||||
fi
|
||||
|
||||
if [ $stage -le 1 ] && [ $stop_stage -ge 1 ];then
|
||||
echo "Decoding ..."
|
||||
gpuid_list_array=(${gpuid_list//,/ })
|
||||
for JOB in $(seq ${nj}); do
|
||||
{
|
||||
id=$((JOB-1))
|
||||
gpuid=${gpuid_list_array[$id]}
|
||||
mkdir -p ${output_dir}/output.$JOB
|
||||
python infer.py \
|
||||
--model ${model} \
|
||||
--audio_in ${output_dir}/split/wav.$JOB.scp \
|
||||
--output_dir ${output_dir}/output.$JOB \
|
||||
--batch_size ${batch_size} \
|
||||
--gpuid ${gpuid}
|
||||
}&
|
||||
done
|
||||
wait
|
||||
|
||||
mkdir -p ${output_dir}/1best_recog
|
||||
for f in token score text; do
|
||||
if [ -f "${output_dir}/output.1/1best_recog/${f}" ]; then
|
||||
for i in $(seq "${nj}"); do
|
||||
cat "${output_dir}/output.${i}/1best_recog/${f}"
|
||||
done | sort -k1 >"${output_dir}/1best_recog/${f}"
|
||||
fi
|
||||
done
|
||||
fi
|
||||
|
||||
if [ $stage -le 2 ] && [ $stop_stage -ge 2 ];then
|
||||
echo "Computing WER ..."
|
||||
cp ${output_dir}/1best_recog/text ${output_dir}/1best_recog/text.proc
|
||||
cp ${data_dir}/text ${output_dir}/1best_recog/text.ref
|
||||
python utils/compute_wer.py ${output_dir}/1best_recog/text.ref ${output_dir}/1best_recog/text.proc ${output_dir}/1best_recog/text.cer
|
||||
tail -n 3 ${output_dir}/1best_recog/text.cer
|
||||
fi
|
||||
|
||||
if [ $stage -le 3 ] && [ $stop_stage -ge 3 ];then
|
||||
echo "SpeechIO TIOBE textnorm"
|
||||
echo "$0 --> Normalizing REF text ..."
|
||||
./utils/textnorm_zh.py \
|
||||
--has_key --to_upper \
|
||||
${data_dir}/text \
|
||||
${output_dir}/1best_recog/ref.txt
|
||||
|
||||
echo "$0 --> Normalizing HYP text ..."
|
||||
./utils/textnorm_zh.py \
|
||||
--has_key --to_upper \
|
||||
${output_dir}/1best_recog/text.proc \
|
||||
${output_dir}/1best_recog/rec.txt
|
||||
grep -v $'\t$' ${output_dir}/1best_recog/rec.txt > ${output_dir}/1best_recog/rec_non_empty.txt
|
||||
|
||||
echo "$0 --> computing WER/CER and alignment ..."
|
||||
./utils/error_rate_zh \
|
||||
--tokenizer char \
|
||||
--ref ${output_dir}/1best_recog/ref.txt \
|
||||
--hyp ${output_dir}/1best_recog/rec_non_empty.txt \
|
||||
${output_dir}/1best_recog/DETAILS.txt | tee ${output_dir}/1best_recog/RESULTS.txt
|
||||
rm -rf ${output_dir}/1best_recog/rec.txt ${output_dir}/1best_recog/rec_non_empty.txt
|
||||
fi
|
||||
|
||||
@ -0,0 +1 @@
|
||||
../TEMPLATE/infer.sh
|
||||
@ -1,48 +0,0 @@
|
||||
import json
|
||||
import os
|
||||
import shutil
|
||||
|
||||
from modelscope.pipelines import pipeline
|
||||
from modelscope.utils.constant import Tasks
|
||||
from modelscope.hub.snapshot_download import snapshot_download
|
||||
|
||||
from funasr.utils.compute_wer import compute_wer
|
||||
|
||||
def modelscope_infer_after_finetune(params):
|
||||
# prepare for decoding
|
||||
|
||||
try:
|
||||
pretrained_model_path = snapshot_download(params["modelscope_model_name"], cache_dir=params["output_dir"])
|
||||
except BaseException:
|
||||
raise BaseException(f"Please download pretrain model from ModelScope firstly.")
|
||||
shutil.copy(os.path.join(params["output_dir"], params["decoding_model_name"]), os.path.join(pretrained_model_path, "model.pb"))
|
||||
decoding_path = os.path.join(params["output_dir"], "decode_results")
|
||||
if os.path.exists(decoding_path):
|
||||
shutil.rmtree(decoding_path)
|
||||
os.mkdir(decoding_path)
|
||||
|
||||
# decoding
|
||||
inference_pipeline = pipeline(
|
||||
task=Tasks.auto_speech_recognition,
|
||||
model=pretrained_model_path,
|
||||
output_dir=decoding_path,
|
||||
batch_size=params["batch_size"]
|
||||
)
|
||||
audio_in = os.path.join(params["data_dir"], "wav.scp")
|
||||
inference_pipeline(audio_in=audio_in)
|
||||
|
||||
# computer CER if GT text is set
|
||||
text_in = os.path.join(params["data_dir"], "text")
|
||||
if os.path.exists(text_in):
|
||||
text_proc_file = os.path.join(decoding_path, "1best_recog/text")
|
||||
compute_wer(text_in, text_proc_file, os.path.join(decoding_path, "text.cer"))
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
params = {}
|
||||
params["modelscope_model_name"] = "damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
|
||||
params["output_dir"] = "./checkpoint"
|
||||
params["data_dir"] = "./data/test"
|
||||
params["decoding_model_name"] = "valid.acc.ave_10best.pb"
|
||||
params["batch_size"] = 64
|
||||
modelscope_infer_after_finetune(params)
|
||||
@ -16,14 +16,14 @@ def modelscope_infer_core(output_dir, split_dir, njob, idx):
|
||||
os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_list[gpu_id])
|
||||
else:
|
||||
os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_id)
|
||||
inference_pipline = pipeline(
|
||||
inference_pipeline = pipeline(
|
||||
task=Tasks.auto_speech_recognition,
|
||||
model="damo/speech_paraformer-tiny-commandword_asr_nat-zh-cn-16k-vocab544-pytorch",
|
||||
output_dir=output_dir_job,
|
||||
batch_size=64
|
||||
)
|
||||
audio_in = os.path.join(split_dir, "wav.{}.scp".format(idx))
|
||||
inference_pipline(audio_in=audio_in)
|
||||
inference_pipeline(audio_in=audio_in)
|
||||
|
||||
|
||||
def modelscope_infer(params):
|
||||
|
||||
@ -1,30 +0,0 @@
|
||||
# ModelScope Model
|
||||
|
||||
## How to finetune and infer using a pretrained Paraformer-large Model
|
||||
|
||||
### Finetune
|
||||
|
||||
- Modify finetune training related parameters in `finetune.py`
|
||||
- <strong>output_dir:</strong> # result dir
|
||||
- <strong>data_dir:</strong> # the dataset dir needs to include files: train/wav.scp, train/text; validation/wav.scp, validation/text.
|
||||
- <strong>batch_bins:</strong> # batch size
|
||||
- <strong>max_epoch:</strong> # number of training epoch
|
||||
- <strong>lr:</strong> # learning rate
|
||||
|
||||
- Then you can run the pipeline to finetune with:
|
||||
```python
|
||||
python finetune.py
|
||||
```
|
||||
|
||||
### Inference
|
||||
|
||||
Or you can use the finetuned model for inference directly.
|
||||
|
||||
- Setting parameters in `infer.py`
|
||||
- <strong>audio_in:</strong> # support wav, url, bytes, and parsed audio format.
|
||||
- <strong>output_dir:</strong> # If the input format is wav.scp, it needs to be set.
|
||||
|
||||
- Then you can run the pipeline to infer with:
|
||||
```python
|
||||
python infer.py
|
||||
```
|
||||
@ -0,0 +1 @@
|
||||
../TEMPLATE/README.md
|
||||
@ -0,0 +1,15 @@
|
||||
from modelscope.pipelines import pipeline
|
||||
from modelscope.utils.constant import Tasks
|
||||
|
||||
if __name__ == '__main__':
|
||||
audio_in = 'https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav'
|
||||
output_dir = None
|
||||
inference_pipeline = pipeline(
|
||||
task=Tasks.auto_speech_recognition,
|
||||
model="damo/speech_paraformer_asr_nat-zh-cn-16k-aishell1-vocab4234-pytorch",
|
||||
output_dir=output_dir,
|
||||
batch_size=1,
|
||||
)
|
||||
rec_result = inference_pipeline(audio_in=audio_in)
|
||||
print(rec_result)
|
||||
|
||||
@ -1,15 +0,0 @@
|
||||
from modelscope.pipelines import pipeline
|
||||
from modelscope.utils.constant import Tasks
|
||||
|
||||
if __name__ == '__main__':
|
||||
audio_in = 'https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav'
|
||||
output_dir = None
|
||||
inference_pipline = pipeline(
|
||||
task=Tasks.auto_speech_recognition,
|
||||
model="damo/speech_paraformer_asr_nat-zh-cn-16k-aishell1-vocab4234-pytorch",
|
||||
output_dir=output_dir,
|
||||
batch_size=32,
|
||||
)
|
||||
rec_result = inference_pipline(audio_in=audio_in)
|
||||
print(rec_result)
|
||||
|
||||
@ -0,0 +1 @@
|
||||
../TEMPLATE/infer.py
|
||||
@ -0,0 +1 @@
|
||||
../TEMPLATE/infer.sh
|
||||
@ -0,0 +1 @@
|
||||
../TEMPLATE/README.md
|
||||
Some files were not shown because too many files have changed in this diff.