* tests : add a new benchmark test for long-form audio
Based on "Earnings-21" corpus by Del Rio et al.
Earnings-21: A Practical Benchmark for ASR in the Wild (2021)
https://arxiv.org/abs/2104.11348
This dataset contains 39 hours of long-form speech, sourced from public
earning calls. Each recording contains roughly 50 minutes of English
dialogues between multiple speakers (2-20 persons).
This benchmark suite should allow us to evaluate the performance of
whisper.cpp on long-form audio data.
Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>
* tests : apply PR feedback to 'earnings21/README.md'
Based on feedback from Daniel Bevenius.
- Simplify how to download & prepare a Silero VAD model.
- Fix typo: inferece -> inference
Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>
* tests : avoid crashing on non-UTF-8 characters
Based on feedback from Daniel Bevenius.
Add an 'errors' parameter to open() in order to avoid an unhandled
exception on invalid UTF-8 bytes.
Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>
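A minimal sketch of this pattern (the file name here is hypothetical, not taken from the test suite):

```python
# errors="replace" substitutes U+FFFD for invalid UTF-8 byte sequences
# instead of raising UnicodeDecodeError while reading a transcript.
with open("hypothesis.txt", encoding="utf-8", errors="replace") as f:
    text = f.read()
```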
* tests : try to interpret the hypothesis as Windows-1252
Based on the discussion in PR#3185.
Evidently whisper.cpp can represent a quotation mark as 0x93, which
implies Windows-1252 (Microsoft's extension of ASCII) and cannot be
decoded as UTF-8.
Add an explicit decoding loop to address the issue.
Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>
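As an illustration, one way such a decoding loop could be written; this is a sketch of the general technique, not necessarily the exact code added to eval.py:

```python
def decode_mixed(raw: bytes) -> str:
    """Decode bytes that are mostly UTF-8 but may contain stray
    Windows-1252 characters such as 0x93 (left double quote)."""
    parts = []
    i = 0
    while i < len(raw):
        try:
            # Happy path: the remainder is valid UTF-8.
            parts.append(raw[i:].decode("utf-8"))
            break
        except UnicodeDecodeError as err:
            # Keep the valid UTF-8 prefix, reinterpret the offending
            # byte as Windows-1252, then continue after it.
            parts.append(raw[i:i + err.start].decode("utf-8"))
            parts.append(raw[i + err.start:i + err.start + 1]
                         .decode("windows-1252", errors="replace"))
            i += err.start + 1
    return "".join(parts)
```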
---------
Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>
# whisper.cpp/tests/earnings21
Earnings-21 is a real-world benchmark dataset that contains 39 hours of long-form English speech, sourced from public earnings calls.
This directory contains a set of scripts to evaluate the performance of whisper.cpp on the Earnings-21 corpus.
## Quick Start

- (Pre-requirement) Compile `whisper-cli` and prepare the Whisper model in `ggml` format.

  ```
  $ # Execute the commands below in the project root dir.
  $ cmake -B build
  $ cmake --build build --config Release
  $ ./models/download-ggml-model.sh tiny
  ```

  Consult whisper.cpp/README.md for more details.

- Download the audio files.

  ```
  $ make get-audio
  ```

- Set up the environment to compute the WER score (a short sketch of the WER metric follows this list).

  ```
  $ pip install -r requirements.txt
  ```

  For example, if you use virtualenv, you can set it up as follows:

  ```
  $ python3 -m venv venv
  $ . venv/bin/activate
  $ pip install -r requirements.txt
  ```

- Run the benchmark test.

  ```
  $ make
  ```
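For context, the WER (word error rate) score is the fraction of word-level edits (substitutions, insertions, deletions) needed to turn the hypothesis into the reference transcript. A minimal sketch, assuming the jiwer package (an assumption; not necessarily what eval.py uses):

```python
import jiwer  # assumed WER library; eval.py may compute WER differently

reference = "we expect continued revenue growth next quarter"
hypothesis = "we expect continued revenue grows next quarter"

# One substituted word out of seven reference words -> WER = 1/7 ~= 0.143.
print(jiwer.wer(reference, hypothesis))
```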
## How-to guides

### How to change the inference parameters

Create `eval.conf` and override variables.

```
WHISPER_MODEL = large-v3-turbo
WHISPER_FLAGS = --no-prints --threads 8 --language en --output-txt
```

Check out `eval.mk` for more details.
### How to perform the benchmark test on a 10-hour subset

Earnings-21 provides a small but representative subset (approximately 10 hours of audio) for evaluating ASR systems quickly.

To switch to the subset, create `eval.conf` and add the following line:

```
EARNINGS21_EVAL10 = yes
```
### How to run the benchmark test using VAD

First, you need to download a VAD model:

```
$ # Execute the commands below in the project root dir.
$ ./models/download-vad-model.sh silero-v5.1.2
```

Create `eval.conf` with the following content:

```
WHISPER_FLAGS = --no-prints --language en --output-txt --vad --vad-model ../../models/ggml-silero-v5.1.2.bin
```