Hand-crafted Attention is All You Need? A Study of Attention on Self-supervised Audio Transformer

06/09/2020
by Tsung-Han Wu, et al.

In this paper, we seek to reduce the computational complexity of transformer-based models for speech representation learning. We evaluate ten attention mechanisms; we then pre-train the transformer-based model with each of them in a self-supervised fashion and use the resulting models as feature extractors on downstream tasks, including phoneme classification and speaker classification. We find that the proposed approach, which uses only hand-crafted and learnable attention mechanisms, is comparable to full self-attention.
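The core idea, that a fixed attention pattern can stand in for learned query-key attention, can be illustrated with a short sketch. The snippet below is a minimal illustration rather than the paper's exact implementation: the Gaussian pattern, the `sigma` width, and the class name `HandCraftedAttention` are assumptions made for this example. It replaces the usual softmax(QK^T / sqrt(d)) weights with a fixed Gaussian over relative frame positions, so only the value and output projections are learned.

```python
import torch
import torch.nn as nn


class HandCraftedAttention(nn.Module):
    """Attention layer whose weights follow a fixed, hand-crafted pattern.

    Instead of computing softmax(QK^T / sqrt(d)), each query position t
    attends to its neighbours with a Gaussian profile centred at t, so no
    query/key projections are needed; only the value projection is learned.
    The Gaussian width (`sigma`) is a hypothetical choice for illustration.
    """

    def __init__(self, d_model: int, sigma: float = 3.0):
        super().__init__()
        self.value = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model, d_model)
        self.sigma = sigma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        seq_len = x.size(1)
        pos = torch.arange(seq_len, device=x.device, dtype=x.dtype)
        # Fixed Gaussian attention pattern over relative positions.
        dist = pos.unsqueeze(0) - pos.unsqueeze(1)           # (seq_len, seq_len)
        scores = -dist.pow(2) / (2.0 * self.sigma ** 2)
        weights = torch.softmax(scores, dim=-1)              # each row sums to 1
        context = weights @ self.value(x)                    # (batch, seq_len, d_model)
        return self.out(context)


if __name__ == "__main__":
    layer = HandCraftedAttention(d_model=768)
    frames = torch.randn(2, 100, 768)   # e.g. 100 acoustic frames per utterance
    print(layer(frames).shape)          # torch.Size([2, 100, 768])
```

Because the weight matrix here depends only on positions, it can be precomputed once per sequence length and reused across layers and heads, which is one plausible source of the computational savings over standard self-attention that the abstract targets.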


Related research:

- Audio ALBERT: A Lite BERT for Self-supervised Learning of Audio Representation (05/18/2020): For self-supervised speech processing, it is crucial to use pretrained m...
- Understanding Self-Attention of Self-Supervised Audio Transformers (06/05/2020): Self-supervised Audio Transformers (SAT) enable great success in many do...
- L2 proficiency assessment using self-supervised speech representations (11/16/2022): There has been a growing demand for automated spoken language assessment...
- Layer Reduction: Accelerating Conformer-Based Self-Supervised Model via Layer Consistency (04/08/2021): Transformer-based self-supervised models are trained as feature extracto...
- Dropout Regularization for Self-Supervised Learning of Transformer Encoder Speech Representation (07/09/2021): Predicting the altered acoustic frames is an effective way of self-super...
- Kaleidoscope: An Efficient, Learnable Representation For All Structured Linear Maps (12/29/2020): Modern neural network architectures use structured linear transformation...
- Dynamic Group Transformer: A General Vision Transformer Backbone with Dynamic Group Attention (03/08/2022): Recently, Transformers have shown promising performance in various visio...
