Parameter-efficient transfer learning of pre-trained Transformer models for speaker verification using adapters

10/28/2022
by Junyi Peng, et al.

Recently, pre-trained Transformer models have attracted rising interest in the field of speech processing thanks to their great success on various downstream tasks. However, most fine-tuning approaches update all the parameters of the pre-trained model, which becomes prohibitive as the model size grows and can lead to overfitting on small datasets. In this paper, we conduct a comprehensive analysis of applying parameter-efficient transfer learning (PETL) methods to reduce the number of learnable parameters required to adapt to speaker verification tasks. Specifically, during fine-tuning, the pre-trained model is frozen and only lightweight modules inserted in each Transformer block are trainable (a method known as adapters). Moreover, to boost performance in a cross-language, low-resource scenario, the Transformer model is first tuned on a large intermediate dataset before being fine-tuned on the small target dataset. With fewer than 4% of the parameters updated, the proposed methods achieve performance comparable to full fine-tuning (EER on Vox1-O: 0.55%, Vox1-H: 1.73%).
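The adapter recipe described in the abstract is straightforward to express in code. Below is a minimal sketch in PyTorch (an assumption; the abstract does not name a framework): a bottleneck adapter is appended to each Transformer block of a frozen pre-trained encoder, so only the adapter weights receive gradients. The ToyEncoder stand-in, the bottleneck width of 64, and the after-block placement are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        # Zero-init the up-projection so the adapter starts as an identity map.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

class BlockWithAdapter(nn.Module):
    """Wraps one frozen pre-trained block and appends a trainable adapter."""
    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        self.adapter = Adapter(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))

class ToyEncoder(nn.Module):
    """Stand-in for a pre-trained Transformer encoder (e.g. a WavLM-style model)."""
    def __init__(self, n_layers: int = 12, dim: int = 768):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=12, batch_first=True)
            for _ in range(n_layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x)
        return x

dim = 768
encoder = ToyEncoder(dim=dim)

# Freeze every pre-trained parameter ...
for p in encoder.parameters():
    p.requires_grad = False

# ... then insert one freshly initialised (hence trainable) adapter per block.
for i, block in enumerate(encoder.layers):
    encoder.layers[i] = BlockWithAdapter(block, dim)

trainable = sum(p.numel() for p in encoder.parameters() if p.requires_grad)
total = sum(p.numel() for p in encoder.parameters())
print(f"trainable fraction: {trainable / total:.2%}")  # a few percent of the model
```

The zero-initialised up-projection makes each adapter start as an identity mapping, so training departs smoothly from the pre-trained behaviour, and the printed trainable fraction stays at a few percent, consistent with the sub-4% parameter budget reported in the abstract.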


Related research

03/02/2023 · Evaluating Parameter-Efficient Transfer Learning Approaches on SURE Benchmark for Speech Understanding
Fine-tuning is widely used as the default algorithm for transfer learnin...

05/23/2022 · When does Parameter-Efficient Transfer Learning Work for Machine Translation?
Parameter-efficient fine-tuning methods (PEFTs) offer the promise of ada...

11/04/2022 · Integrated Parameter-Efficient Tuning for General-Purpose Audio Models
The advent of hyper-scale and general-purpose pre-trained models is shif...

06/13/2022 · LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning
Fine-tuning large pre-trained models on downstream tasks has been adopte...

11/30/2022 · MSV Challenge 2022: NPU-HC Speaker Verification System for Low-resource Indian Languages
This report describes the NPU-HC speaker verification system submitted t...

02/01/2023 · An Empirical Study on the Transferability of Transformer Modules in Parameter-Efficient Fine-Tuning
Parameter-efficient fine-tuning approaches have recently garnered a lot ...

08/15/2022 · Conv-Adapter: Exploring Parameter Efficient Transfer Learning for ConvNets
While parameter efficient tuning (PET) methods have shown great potentia...
