Parameter Efficient Transfer Learning for Various Speech Processing Tasks

12/06/2022
by   Shinta Otake, et al.

Fine-tuning of self-supervised models is a powerful transfer learning method in a variety of fields, including speech processing, since it can exploit generic feature representations obtained from large amounts of unlabeled data. Fine-tuning, however, requires a new parameter set for each downstream task, which is parameter-inefficient. Adapter architectures partially solve this issue by inserting lightweight learnable modules into a frozen pre-trained model. However, existing adapter architectures fail to adaptively leverage the low- to high-level features stored in different layers, which is necessary for solving various kinds of speech processing tasks. Thus, we propose a new adapter architecture that acquires feature representations more flexibly for various speech tasks. In experiments, we applied this adapter to WavLM on four speech tasks. It performed on par with or better than naive fine-tuning with only 11% of the learnable parameters, and it outperformed an existing adapter architecture.
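The two ingredients the abstract describes, lightweight adapter modules inserted into a frozen backbone, plus an adaptive combination of features from different layers, can be sketched as follows. This is a minimal illustration in NumPy, not the paper's actual architecture: the bottleneck adapter (down-project, nonlinearity, up-project, residual) and the learnable softmax-weighted layer sum are standard building blocks assumed here for exposition; all names (`BottleneckAdapter`, `weighted_layer_sum`) and dimensions are hypothetical.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(x - x.max())
    return e / e.sum()

class BottleneckAdapter:
    """Lightweight adapter: down-project, ReLU, up-project, residual add.

    Only 2 * dim * bottleneck parameters are trained per insertion point,
    versus on the order of dim^2 for a full transformer sub-layer, which is
    where the parameter savings come from.
    """
    def __init__(self, dim, bottleneck, seed=0):
        rng = np.random.default_rng(seed)
        self.W_down = rng.normal(0.0, 0.02, size=(dim, bottleneck))
        self.W_up = rng.normal(0.0, 0.02, size=(bottleneck, dim))

    def __call__(self, h):
        # h: (time, dim) hidden states from a frozen pre-trained layer
        return h + np.maximum(h @ self.W_down, 0.0) @ self.W_up

def weighted_layer_sum(layer_outputs, logits):
    """Adaptively mix frozen per-layer features with learnable softmax weights.

    Tasks that need low-level acoustic cues can learn large weights on early
    layers; tasks that need high-level content can weight later layers.
    """
    w = softmax(logits)
    return sum(wi * hi for wi, hi in zip(w, layer_outputs))

# Usage sketch: adapt one frozen layer's output, then mix two layers equally.
adapter = BottleneckAdapter(dim=8, bottleneck=2)
h = np.ones((3, 8))               # (time, dim) dummy features
adapted = adapter(h)              # same shape as input
mix = weighted_layer_sum([np.ones((3, 8)), 3 * np.ones((3, 8))],
                         np.zeros(2))  # equal logits -> plain average
```

In training, only the adapter weights and the mixing logits would receive gradients while the pre-trained backbone stays frozen, which is what keeps the per-task parameter budget small.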


