Analysis and Prediction of NLP Models Via Task Embeddings

12/10/2021
by Damien Sileo, et al.

Task embeddings are low-dimensional representations that are trained to capture task properties. In this paper, we propose MetaEval, a collection of 101 NLP tasks. We fit a single transformer to all MetaEval tasks jointly while conditioning it on learned embeddings. The resulting task embeddings enable a novel analysis of the space of tasks. We then show that task aspects can be mapped to task embeddings for new tasks without using any annotated examples. Predicted embeddings can modulate the encoder for zero-shot inference and outperform a zero-shot baseline on GLUE tasks. The provided multitask setup can function as a benchmark for future transfer learning research.
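As a rough illustration of the approach described in the abstract, the sketch below conditions a shared transformer encoder on a learned per-task embedding: training tasks look up their embedding, while a new task can instead supply an externally predicted vector for zero-shot inference. This is a minimal sketch, not the paper's implementation; the class name TaskConditionedEncoder, the prepended-token conditioning, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code) of a single encoder shared
# across tasks and modulated by a learned task embedding.
import torch
import torch.nn as nn

class TaskConditionedEncoder(nn.Module):
    def __init__(self, vocab_size=30522, num_tasks=101, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.task_emb = nn.Embedding(num_tasks, d_model)   # one learned vector per task
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.classifier = nn.Linear(d_model, 2)            # shared head, for illustration only

    def forward(self, input_ids, task_ids=None, task_vec=None):
        x = self.tok_emb(input_ids)                         # (batch, seq, d_model)
        # Training tasks look up their learned embedding; unseen tasks pass a
        # predicted task vector instead (zero-shot path).
        t = self.task_emb(task_ids) if task_vec is None else task_vec
        x = torch.cat([t.unsqueeze(1), x], dim=1)           # prepend task vector as an extra token
        h = self.encoder(x)
        return self.classifier(h[:, 0])                     # read prediction from the task position

model = TaskConditionedEncoder()
input_ids = torch.randint(0, 30522, (2, 16))
logits = model(input_ids, task_ids=torch.tensor([3, 7]))    # joint multitask training path
zero_shot = model(input_ids, task_vec=torch.randn(2, 256))  # predicted-embedding (zero-shot) path
```

Prepending the task vector as an extra token is only one simple way to modulate a shared encoder; adapter- or scaling-based conditioning would fit the same interface.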

Related research

04/29/2022 · Prompt Consistency for Zero-Shot Task Generalization
One of the most impressive results of recent NLP history is the ability ...

07/20/2020 · Fantastic Embeddings and How to Align Them: Zero-Shot Inference in a Multi-Shop Scenario
This paper addresses the challenge of leveraging multiple embedding spac...

05/07/2022 · Odor Descriptor Understanding through Prompting
Embeddings from contemporary natural language processing (NLP) models ar...

10/08/2019 · AutoML using Metadata Language Embeddings
As a human choosing a supervised learning algorithm, it is natural to be...

08/09/2022 · Where's the Learning in Representation Learning for Compositional Semantics and the Case of Thematic Fit
Observing that for certain NLP tasks, such as semantic role prediction o...

10/21/2021 · CLOOB: Modern Hopfield Networks with InfoLOOB Outperform CLIP
Contrastive learning with the InfoNCE objective is exceptionally success...

06/04/2019 · Pitfalls in the Evaluation of Sentence Embeddings
Deep learning models continuously break new records across different NLP...
