Exceeding the Limits of Visual-Linguistic Multi-Task Learning

07/27/2021
by Cameron R. Wolfe, et al.

By leveraging large amounts of product data collected across hundreds of live e-commerce websites, we construct 1000 unique classification tasks that share similarly structured input data comprising both text and images. These classification tasks focus on learning the product hierarchy of different e-commerce websites, causing many of them to be correlated. Adopting a multi-modal transformer model, we solve these tasks in unison using multi-task learning (MTL). Extensive experiments are presented over an initial 100-task dataset to reveal best practices for "large-scale MTL" (i.e., MTL with more than 100 tasks). From these experiments, a final, unified methodology is derived, composed of both best practices and new proposals such as DyPa, a simple heuristic for automatically allocating task-specific parameters to tasks that could benefit from extra capacity. Using our large-scale MTL methodology, we successfully train a single model across all 1000 tasks in our dataset while using minimal task-specific parameters, thereby showing that it is possible to scale several orders of magnitude beyond current efforts in MTL.
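The abstract describes DyPa only at a high level, so the sketch below is one plausible reading rather than the authors' implementation: it assumes DyPa monitors per-task validation loss under the shared model and grants a small residual adapter to the worst-performing tasks. The names `Adapter` and `allocate_task_adapters`, the bottleneck size, and the `top_k` trigger are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical DyPa-style heuristic. The paper only states that task-specific
# parameters go to tasks that "could benefit from extra capacity"; the trigger
# condition and adapter design here are assumptions for illustration.

class Adapter(nn.Module):
    """Small bottleneck adapter holding the task-specific parameters."""
    def __init__(self, hidden_dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection leaves the shared representation intact,
        # so tasks without an adapter are unaffected.
        return x + self.up(torch.relu(self.down(x)))


def allocate_task_adapters(val_losses: dict, adapters: dict,
                           hidden_dim: int, top_k: int = 10) -> dict:
    """Grant an adapter to the top_k tasks with the worst validation loss.

    val_losses: task_id -> validation loss under the shared model.
    adapters:   task_id -> Adapter, for tasks already granted extra capacity.
    """
    worst_first = sorted(val_losses, key=val_losses.get, reverse=True)
    for task_id in worst_first[:top_k]:
        if task_id not in adapters:  # allocate capacity at most once per task
            adapters[task_id] = Adapter(hidden_dim)
    return adapters
```

In a training loop, such a routine would run after each validation pass, keeping task-specific parameters minimal because only the tasks that demonstrably struggle under the fully shared model receive extra capacity.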

Related research:

04/19/2017 | Adversarial Multi-task Learning for Text Classification
Neural network models have shown their promising opportunities for multi...

04/03/2020 | Context-Aware Multi-Task Learning for Traffic Scene Recognition in Autonomous Vehicles
Traffic scene recognition, which requires various visual classification ...

07/23/2021 | Rethinking Hard-Parameter Sharing in Multi-Task Learning
Hard parameter sharing in multi-task learning (MTL) allows tasks to shar...

04/22/2018 | Same Representation, Different Attentions: Shareable Sentence Representation Learning from Multiple Tasks
Distributed representation plays an important role in deep learning base...

04/09/2022 | Efficient Extraction of Pathologies from C-Spine Radiology Reports using Multi-Task Learning
Pretrained Transformer based models finetuned on domain specific corpora...

05/21/2020 | Team Neuro at SemEval-2020 Task 8: Multi-Modal Fine Grain Emotion Classification of Memes using Multitask Learning
In this article, we describe the system that we used for the memotion an...

08/09/2022 | A Boring-yet-effective Approach for the Product Ranking Task of the Amazon KDD Cup 2022
In this work we describe our submission to the product ranking task of t...
