DirectProbe: Studying Representations without Classifiers

04/13/2021
by Yichu Zhou, et al.

Understanding how linguistic structures are encoded in contextualized embeddings could help explain their impressive performance across NLP tasks. Existing approaches for probing them usually call for training classifiers and use accuracy, mutual information, or complexity as a proxy for the quality of the representation. In this work, we argue that doing so can be unreliable because different representations may need different classifiers. We develop a heuristic, DirectProbe, that directly studies the geometry of a representation by building upon the notion of a version space for a task. Experiments with several linguistic tasks and contextualized embeddings show that, even without training classifiers, DirectProbe can shed light on how an embedding space represents labels, and also anticipate classifier performance for the representation.
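To make the geometric idea concrete, below is a minimal sketch (not the authors' implementation) of a DirectProbe-style partitioning: greedily merge same-label clusters as long as every cluster's convex hull stays linearly separable from all clusters carrying other labels. The names `direct_probe` and `separable` are hypothetical, and the separability test here uses scikit-learn's `LinearSVC` with a large `C` as a stand-in for an exact convex-hull-overlap check.

```python
# A sketch of DirectProbe-style clustering, assuming a linear-SVM
# separability test as a proxy for exact convex-hull disjointness.
import numpy as np
from sklearn.svm import LinearSVC

def separable(A: np.ndarray, B: np.ndarray) -> bool:
    """Heuristically test whether the convex hulls of point sets A and B
    are disjoint: if a hard-margin-like linear SVM fits the two sets with
    zero training error, a separating hyperplane exists."""
    X = np.vstack([A, B])
    y = np.concatenate([np.zeros(len(A)), np.ones(len(B))])
    clf = LinearSVC(C=1e6, max_iter=100_000).fit(X, y)
    return clf.score(X, y) == 1.0

def direct_probe(X: np.ndarray, labels: np.ndarray):
    """Partition points into label-pure clusters whose convex hulls do
    not overlap across labels. Returns a list of (label, point indices)."""
    # Start with one singleton cluster per point.
    clusters = [(labels[i], [i]) for i in range(len(X))]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                li, pi = clusters[i]
                lj, pj = clusters[j]
                if li != lj:
                    continue  # only merge clusters with the same label
                cand = pi + pj
                # Accept the merge only if the candidate cluster stays
                # separable from every cluster with a different label.
                if all(separable(X[cand], X[pk])
                       for lk, pk in clusters if lk != li):
                    clusters[i] = (li, cand)
                    del clusters[j]
                    merged = True
                    break
            if merged:
                break
    return clusters
```

On a toy dataset, cleanly separated labels should collapse into one cluster per label, while entangled labels fragment into many small clusters; the resulting number of clusters relative to the number of labels is one geometric signal of how easy the task is for a classifier on that representation.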


