Orthogonality Constrained Multi-Head Attention For Keyword Spotting

10/10/2019
by Mingu Lee, et al.

The multi-head attention mechanism can learn multiple representations from sequential data by attending to different subsequences, e.g., word-pieces or syllables in a spoken word. From these subsequences it retrieves richer information than single-head attention, which summarizes the whole sequence into a single context vector. However, naive use of multi-head attention does not guarantee such richness, as the attention heads may be positionally and representationally redundant. In this paper, we propose a regularization technique for the multi-head attention mechanism in an end-to-end neural keyword spotting system. Adding regularization terms that penalize positional and contextual non-orthogonality between the attention heads encourages them to output distinct representations from separate subsequences, which in turn makes it possible to leverage structured information without explicit sequence models such as hidden Markov models. In addition, an intra-head contextual non-orthogonality regularization encourages each attention head to produce similar representations across keyword examples, which aids classification by reducing feature variability. The experimental results demonstrate that the proposed regularization technique significantly improves keyword spotting performance for the keyword "Hey Snapdragon".
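For concreteness, the following is a minimal PyTorch sketch of such orthogonality penalties. The tensor shapes, the squared-Frobenius form of the inter-head terms, the batch-wide application of the intra-head term, and the lambda_* weights are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def orthogonality_penalties(attn, ctx, lambda_pos=1.0, lambda_ctx=1.0,
                            lambda_intra=1.0):
    """Sketch of orthogonality regularization for multi-head attention.

    attn: (B, H, T) attention weights, one distribution per head over T frames.
    ctx:  (B, H, D) per-head context vectors (attention-weighted sums).
    Returns a scalar loss to add to the keyword classification loss.
    """
    H = attn.shape[1]
    eye = torch.eye(H, device=attn.device)

    # Inter-head positional term: penalize overlap between the heads'
    # attention distributions so each head attends to a different subsequence.
    a = F.normalize(attn, dim=-1)
    gram_pos = torch.bmm(a, a.transpose(1, 2))           # (B, H, H)
    pos_pen = ((gram_pos - eye) ** 2).sum(dim=(1, 2)).mean()

    # Inter-head contextual term: penalize representational redundancy
    # between the heads' context vectors.
    c = F.normalize(ctx, dim=-1)
    gram_ctx = torch.bmm(c, c.transpose(1, 2))           # (B, H, H)
    ctx_pen = ((gram_ctx - eye) ** 2).sum(dim=(1, 2)).mean()

    # Intra-head term: reward non-orthogonality (high cosine similarity) of
    # each head's context across examples, reducing feature variability.
    # Assumption: applied over the whole batch here for simplicity.
    ch = c.transpose(0, 1)                               # (H, B, D)
    sim = torch.bmm(ch, ch.transpose(1, 2))              # (H, B, B)
    intra_pen = (1.0 - sim).mean()

    return (lambda_pos * pos_pen + lambda_ctx * ctx_pen
            + lambda_intra * intra_pen)
```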

Related research

09/20/2020 - Repulsive Attention: Rethinking Multi-head Attention as Bayesian Inference
The neural attention mechanism plays an important role in many natural l...

02/14/2021 - Query-by-Example Keyword Spotting system using Multi-head Attention and Softtriple Loss
This paper proposes a neural network architecture for tackling the query...

04/28/2020 - Scheduled DropHead: A Regularization Method for Transformer Models
In this paper, we introduce DropHead, a structured dropout method specif...

10/24/2018 - Multi-Head Attention with Disagreement Regularization
Multi-head attention is appealing for the ability to jointly attend to i...

03/13/2021 - Approximating How Single Head Attention Learns
Why do models often attend to salient words, and how does this evolve th...

03/29/2018 - Attention-based End-to-End Models for Small-Footprint Keyword Spotting
In this paper, we propose an attention-based end-to-end neural approach ...

11/01/2018 - End-to-end Models with auditory attention in Multi-channel Keyword Spotting
In this paper, we propose an attention-based end-to-end model for multi-...
