Earnings-22: A Practical Benchmark for Accents in the Wild

03/29/2022
by   Miguel Del Rio, et al.

Modern automatic speech recognition (ASR) systems have achieved superhuman Word Error Rate (WER) on many common corpora while still performing inadequately on speech in the wild. Moreover, there is a lack of real-world, accented corpora with which to properly benchmark academic and commercial models. To ensure this type of speech is represented in ASR benchmarking, we present Earnings-22, a 125-file, 119-hour corpus of English-language earnings calls gathered from global companies. We run a comparison across four commercial models, showing the variation in performance when country of origin is taken into consideration. Looking at hypothesis transcriptions, we explore errors common to all ASR systems tested. By examining Individual Word Error Rate (IWER), we find that key speech features impact model performance more for certain accents than others. Earnings-22 provides a free-to-use benchmark of real-world, accented audio to bridge academic and industrial research.
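The WER metric discussed above is the word-level Levenshtein edit distance between a reference transcript and an ASR hypothesis, normalized by the reference length. A minimal sketch of the standard computation (not the paper's own evaluation code, which is not reproduced here):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference words,
    computed via dynamic-programming edit distance over word tokens."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)
```

IWER, also used in the paper, instead attributes errors to individual reference words so that error rates can be broken down by word-level features; the aggregate WER above is the coarser corpus-level measure.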


