Analyzing Text Representations by Measuring Task Alignment

Textual representations based on pre-trained language models play a central role in text classification, especially in few-shot learning scenarios. What makes a representation good for text classification? Is it due to the geometric properties of the embedding space, or because it is well aligned with the task? We hypothesize the latter. To test this, we develop a task alignment score based on hierarchical clustering that measures alignment at different levels of granularity. Our experiments on text classification support this hypothesis, showing that task alignment can explain the classification performance of a given representation.
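The abstract describes measuring how well hierarchical clusters of a representation align with task labels at several granularities. A minimal sketch of one such score, assuming a majority-label purity metric averaged over a few cluster counts (the function name, the purity choice, and the granularity levels are illustrative assumptions, not the authors' exact formulation):

```python
# Hypothetical task-alignment score: cluster embeddings hierarchically,
# then average cluster-label purity across several granularity levels.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def task_alignment_score(embeddings, labels, levels=(2, 4, 8)):
    """Average majority-label purity of hierarchical clusters
    at several numbers of clusters (granularity levels)."""
    Z = linkage(embeddings, method="ward")  # agglomerative clustering
    labels = np.asarray(labels)
    scores = []
    for k in levels:
        assignments = fcluster(Z, t=k, criterion="maxclust")
        purity = 0.0
        for c in np.unique(assignments):
            members = labels[assignments == c]
            # fraction of the cluster belonging to its majority label
            purity += np.bincount(members).max()
        scores.append(purity / len(labels))
    return float(np.mean(scores))

# Toy usage: two well-separated classes should score close to 1.0.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 5)), rng.normal(3, 0.1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
score = task_alignment_score(X, y)
```

A representation whose cluster structure mirrors the label structure at coarse and fine levels gets a score near 1.0; a representation whose geometry is unrelated to the task scores near chance.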


Related research:
- Analyzing Text Representations under Tight Annotation Budgets: Measuring Structural Alignment (10/11/2022)
- Improving Few-shot Text Classification via Pretrained Language Representations (08/22/2019)
- MetricPrompt: Prompting Model as a Relevance Metric for Few-shot Text Classification (06/15/2023)
- Evolutionary Verbalizer Search for Prompt-based Few Shot Text Classification (06/18/2023)
- Alignment with human representations supports robust few-shot learning (01/27/2023)
- Updating Pre-trained Word Vectors and Text Classifiers using Monolingual Alignment (10/14/2019)
- Text-To-KG Alignment: Comparing Current Methods on Classification Tasks (06/05/2023)
