Attention-Free Keyword Spotting

10/14/2021
by Mashrur M. Morshed, et al.

Until now, attention-based models have been used with great success in the keyword spotting problem domain. However, in light of recent advances in deep learning, the question arises whether self-attention is truly irreplaceable for recognizing speech keywords. We therefore explore gated MLPs, previously shown to be alternatives to transformers in vision tasks, for the keyword spotting task. We verify our approach on the Google Speech Commands V2-35 dataset and show that it is possible to obtain performance comparable to the state of the art without any apparent use of self-attention.
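The abstract's attention-free building block is the gated MLP (gMLP), in which a learned projection across the time axis (a "spatial gating unit") stands in for self-attention. Below is a minimal PyTorch sketch of one such block; the embedding size, FFN width, and number of time frames are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a gated-MLP (gMLP) block, the attention-free layer family
# the paper applies to keyword spotting. Hyperparameters are illustrative only.
import torch
import torch.nn as nn

class SpatialGatingUnit(nn.Module):
    def __init__(self, dim_ffn, seq_len):
        super().__init__()
        self.norm = nn.LayerNorm(dim_ffn // 2)
        # Linear projection across the sequence (time) dimension replaces self-attention.
        self.spatial_proj = nn.Linear(seq_len, seq_len)
        nn.init.zeros_(self.spatial_proj.weight)  # near-identity initialization
        nn.init.ones_(self.spatial_proj.bias)

    def forward(self, x):                   # x: (batch, seq_len, dim_ffn)
        u, v = x.chunk(2, dim=-1)            # split channels into two halves
        v = self.norm(v)
        v = self.spatial_proj(v.transpose(1, 2)).transpose(1, 2)
        return u * v                         # element-wise gating

class GMLPBlock(nn.Module):
    def __init__(self, dim, dim_ffn, seq_len):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.proj_in = nn.Linear(dim, dim_ffn)
        self.sgu = SpatialGatingUnit(dim_ffn, seq_len)
        self.proj_out = nn.Linear(dim_ffn // 2, dim)

    def forward(self, x):                    # x: (batch, seq_len, dim)
        residual = x
        x = nn.functional.gelu(self.proj_in(self.norm(x)))
        x = self.sgu(x)
        return self.proj_out(x) + residual

# Example: 98 time frames of 64-dim speech-feature embeddings.
block = GMLPBlock(dim=64, dim_ffn=256, seq_len=98)
out = block(torch.randn(8, 98, 64))          # -> (8, 98, 64)
```

A complete keyword-spotting model would stack several such blocks over spectrogram-derived token sequences and attach a classification head for the 35 Speech Commands classes.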

