Wav2Vec2.0 on the Edge: Performance Evaluation

02/12/2022
by Santosh Gondi, et al.

Wav2Vec2.0 is a state-of-the-art model that learns speech representations from unlabeled speech data, i.e., through self-supervised learning. The pretrained model is then fine-tuned on small amounts of labeled data for speech-to-text and machine-translation tasks. Wav2Vec2.0 is a transformative solution for low-resource languages because it is developed mainly from unlabeled audio data: collecting large amounts of labeled data is resource intensive and especially challenging for low-resource languages such as Swahili and Tatar. Furthermore, Wav2Vec2.0's word error rate (WER) matches or surpasses that of very recent supervised-learning algorithms while using 100x less labeled data. Given its importance and enormous potential for enabling speech-based tasks across the world's roughly 7,000 languages, it is key to evaluate the accuracy, latency, and efficiency of this model on low-resource, low-power edge devices, and to investigate the feasibility of using it on such devices for private, secure, and reliable speech-based tasks. On-device speech processing avoids sending audio data to a server, inherently providing privacy, reduced latency, and enhanced reliability. In this paper, the accuracy and latency of the Wav2Vec2.0 model, paired with the KenLM language model, are evaluated on a Raspberry Pi for speech recognition tasks. We also discuss how to tune certain parameters to achieve the desired WER and latency while meeting the CPU, memory, and energy budgets of a product.
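To make the evaluation setup concrete, the sketch below shows one plausible way to time on-device Wav2Vec2.0 inference with a KenLM-fused beam-search decoder. It is a minimal illustration, not the authors' exact pipeline: the facebook/wav2vec2-base-960h checkpoint, the pyctcdecode decoder, the sample.wav and lm.arpa files, and the alpha/beta/beam-width values are all assumptions made here for demonstration.

```python
# Minimal timing sketch (not the paper's exact pipeline). Assumptions,
# all hypothetical: the facebook/wav2vec2-base-960h checkpoint, a 16 kHz
# mono recording "sample.wav", and a KenLM n-gram file "lm.arpa".
import time

import torch
import torchaudio
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
model.eval()

# Load audio and resample to the 16 kHz rate the model expects.
waveform, sr = torchaudio.load("sample.wav")
if sr != 16_000:
    waveform = torchaudio.functional.resample(waveform, sr, 16_000)
inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000,
                   return_tensors="pt")

# Time the acoustic-model forward pass.
start = time.perf_counter()
with torch.inference_mode():
    logits = model(inputs.input_values).logits[0]  # shape (time, vocab)
acoustic_s = time.perf_counter() - start

# Greedy (no language model) decoding as the latency/accuracy baseline.
greedy_text = processor.decode(logits.argmax(dim=-1))

# KenLM beam-search decoding via pyctcdecode. Labels must be in
# logit-column order; alpha (LM weight), beta (word-insertion bonus),
# and beam_width trade WER against decoding latency.
vocab = processor.tokenizer.get_vocab()
labels = [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]
decoder = build_ctcdecoder(labels, kenlm_model_path="lm.arpa",
                           alpha=0.5, beta=1.0)

start = time.perf_counter()
lm_text = decoder.decode(torch.log_softmax(logits, dim=-1).numpy(),
                         beam_width=50)
decode_s = time.perf_counter() - start

print(f"acoustic: {acoustic_s:.2f}s  lm decode: {decode_s:.2f}s")
print("greedy:", greedy_text)
print("kenlm :", lm_text)
```

Beam width and the alpha/beta language-model weights are examples of the tunable parameters the abstract refers to: widening the beam typically lowers WER at the cost of decoding latency, a trade-off that is especially visible on Raspberry Pi-class CPUs.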


research · 05/24/2021
Unsupervised Speech Recognition
Despite rapid progress in the recent past, current speech recognition sy...

research · 07/01/2022
Improving Low-Resource Speech Recognition with Pretrained Speech Models: Continued Pretraining vs. Semi-Supervised Training
Self-supervised Transformer based models, such as wav2vec 2.0 and HuBERT...

research · 11/02/2022
SLICER: Learning universal audio representations using low-resource self-supervised pre-training
We present a new Self-Supervised Learning (SSL) approach to pre-train en...

research · 12/22/2020
Applying wav2vec2.0 to Speech Recognition in various low-resource languages
Several domains own corresponding widely used feature extractors, such a...

research · 12/13/2020
Discriminative Pre-training for Low Resource Title Compression in Conversational Grocery
The ubiquity of smart voice assistants has made conversational shopping ...

research · 04/16/2022
STRATA: Word Boundaries Phoneme Recognition From Continuous Urdu Speech using Transfer Learning, Attention, Data Augmentation
Phoneme recognition is a largely unsolved problem in NLP, especially for...

research · 04/09/2015
Leveraging Twitter for Low-Resource Conversational Speech Language Modeling
In applications involving conversational speech, data sparsity is a limi...
