Device Tuning for Multi-Task Large Model

02/21/2023
by Penghao Jiang, et al.

Unsupervised pre-training approaches have achieved great success in many fields, such as Computer Vision (CV) and Natural Language Processing (NLP). However, compared to typical deep learning models, pre-training or even fine-tuning state-of-the-art self-attention models is extremely expensive, as they require far more computational and memory resources. This cost severely limits their application in a variety of domains, especially for multi-task learning. To improve efficiency, we propose Device Tuning, a massively multi-task framework that spans the cloud and the device and is designed to encourage learning of representations that generalize well across many different tasks. Specifically, we design a Device Tuning architecture for a multi-task model that benefits both cloud modelling and device modelling, reducing the communication between device and cloud through representation compression. Experimental results demonstrate the effectiveness of our proposed method.
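The abstract does not give implementation details, so the following is only a minimal, hypothetical PyTorch sketch of the cloud-device split it describes: a small on-device encoder compresses the input representation to a low-dimensional code, and that code is the only tensor transmitted to a shared cloud backbone with per-task heads. All module names, dimensions, and the linear-bottleneck compressor here are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class DeviceEncoder(nn.Module):
    """On-device module: encodes raw input and compresses the
    representation before it is sent to the cloud (hypothetical)."""
    def __init__(self, input_dim=512, hidden_dim=256, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
        )
        # Linear bottleneck: only this compressed code crosses the network.
        self.compress = nn.Linear(hidden_dim, code_dim)

    def forward(self, x):
        return self.compress(self.encoder(x))  # shape: (batch, code_dim)

class CloudMultiTaskModel(nn.Module):
    """Cloud-side module: a shared backbone over the compressed code,
    with one lightweight head per task."""
    def __init__(self, code_dim=32, hidden_dim=256, task_out_dims=(10, 2)):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(code_dim, hidden_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, d) for d in task_out_dims
        )

    def forward(self, code, task_id):
        return self.heads[task_id](self.backbone(code))

# Only the 32-dim code leaves the device, instead of the 512-dim input,
# which is the communication saving the abstract attributes to
# representation compression.
device_model = DeviceEncoder()
cloud_model = CloudMultiTaskModel()
x = torch.randn(4, 512)              # a batch of on-device features
code = device_model(x)               # compressed representation (4, 32)
logits = cloud_model(code, task_id=0)
print(code.shape, logits.shape)      # torch.Size([4, 32]) torch.Size([4, 10])
```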
