A Scalable Model Specialization Framework for Training and Inference using Submodels and its Application to Speech Model Personalization

03/23/2022
by Fadi Biadsy, et al.

Model fine-tuning and adaptation have become a common approach to specializing models for downstream tasks or domains. Fine-tuning the entire model, or a subset of its parameters using light-weight adaptation, has shown considerable success across a range of specialization tasks. However, fine-tuning a model for a large number of domains typically requires starting a new training job for every domain, which limits scalability. Once these models are trained, deploying them for real-time inference poses further scalability challenges. In this paper, building on prior light-weight adaptation techniques, we propose a modular framework that substantially improves scalability for both model training and inference. We introduce Submodels that can be quickly and dynamically loaded for on-the-fly inference, and we propose multiple approaches for training these Submodels in parallel, within a single training job, using an embedding space. We evaluate our framework on an extreme use case, speech model personalization for atypical speech, which requires a Submodel for each user. We obtain a 128x increase in Submodel training throughput with a fixed computation budget and no loss of accuracy. We also show that learning a speaker-embedding space scales further and reduces the amount of personalization training data required per speaker.
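The abstract does not describe the implementation, but the core idea of a Submodel, a small set of per-user parameters swapped into a shared frozen base model at inference time, can be illustrated with a minimal sketch. The code below is a rough, hypothetical example built on residual bottleneck adapters; the class names (ResidualAdapter, AdaptedLayer), the bottleneck size, and the in-memory submodel store are illustrative assumptions, not the paper's actual architecture or training procedure.

```python
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add.
    These few parameters are the only per-user state in this sketch."""
    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(x)))


class AdaptedLayer(nn.Module):
    """A frozen, shared base layer plus a swappable per-user adapter (the 'Submodel')."""
    def __init__(self, dim: int):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        for p in self.base.parameters():
            p.requires_grad_(False)          # base model stays shared and frozen
        self.adapter = ResidualAdapter(dim)  # only these weights differ per user

    def load_submodel(self, state: dict) -> None:
        """Dynamically load one user's adapter weights before running inference."""
        self.adapter.load_state_dict(state)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.base(x))


if __name__ == "__main__":
    dim = 32
    layer = AdaptedLayer(dim)

    # Hypothetical store of per-user Submodels; in a real system these would
    # live on disk or in a parameter server and be fetched on demand.
    submodels = {
        "user_a": ResidualAdapter(dim).state_dict(),
        "user_b": ResidualAdapter(dim).state_dict(),
    }

    x = torch.randn(1, dim)
    with torch.no_grad():
        for user, state in submodels.items():
            layer.load_submodel(state)       # on-the-fly specialization
            y = layer(x)
            print(user, tuple(y.shape))
```

In this sketch only the adapter's small parameter set differs across users, which is what makes storing and hot-swapping one Submodel per speaker tractable; the paper's actual Submodel layout, the parallel training of many Submodels in one job, and the speaker-embedding space are not reproduced here.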


