Probing BERT in Hyperbolic Spaces

04/08/2021
by Boli Chen et al.

Recently, a variety of probing tasks have been proposed to discover the linguistic properties learned by contextualized word embeddings. Many of these works implicitly assume that the embeddings lie in certain metric spaces, typically the Euclidean space. This work considers a family of geometrically special spaces, the hyperbolic spaces, which exhibit better inductive biases for hierarchical structures and may better reveal the linguistic hierarchies encoded in contextualized representations. We introduce a Poincaré probe, a structural probe that projects these embeddings into a Poincaré subspace with explicitly defined hierarchies. We focus on two probing objectives: (a) dependency trees, where the hierarchy is defined by head-dependent structures; and (b) lexical sentiment, where the hierarchy is defined by the polarity of words (positivity and negativity). We argue that a key desideratum of a probe is its sensitivity to the existence of linguistic structures. We apply our probes to BERT, a typical contextualized embedding model. In a syntactic subspace, our probe recovers tree structures better than Euclidean probes, suggesting that the geometry of BERT's syntax may not necessarily be Euclidean. In a sentiment subspace, we reveal two possible meta-embeddings for positive and negative sentiment and show how lexically controlled contextualization changes the geometric localization of embeddings. We demonstrate these findings with extensive experiments and visualizations. Our results can be reproduced at https://github.com/FranxYao/PoincareProbe.
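To make the probe construction concrete, here is a minimal PyTorch sketch of a Poincaré structural probe. It is our own illustration, not the authors' implementation (see the repository above for that): the class name PoincareProbe, the single bias-free linear projection, and the training note in the comments are assumptions; only the exponential map at the origin and the Poincaré distance formula are standard hyperbolic-geometry operations.

    import torch

    class PoincareProbe(torch.nn.Module):
        # Minimal sketch: linearly project contextualized embeddings,
        # map them onto the Poincare ball with the exponential map at
        # the origin, and compare them with the Poincare distance.
        # Training would fit these pairwise distances to gold
        # dependency-tree distances (e.g. with an L1 loss).
        def __init__(self, dim_in, dim_probe, eps=1e-5):
            super().__init__()
            self.proj = torch.nn.Linear(dim_in, dim_probe, bias=False)
            self.eps = eps

        def exp_map0(self, v):
            # exp_0(v) = tanh(||v||) * v / ||v|| maps a tangent vector
            # at the origin into the open unit ball.
            norm = v.norm(dim=-1, keepdim=True).clamp_min(self.eps)
            return torch.tanh(norm) * v / norm

        def poincare_dist(self, u, v):
            # d(u, v) = arcosh(1 + 2||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))
            num = (u - v).pow(2).sum(-1)
            den = (1 - u.pow(2).sum(-1)) * (1 - v.pow(2).sum(-1))
            x = 1 + 2 * num / den.clamp_min(self.eps)
            return torch.acosh(x.clamp_min(1.0 + self.eps))

        def forward(self, h):
            # h: (seq_len, dim_in) embeddings of one sentence from BERT.
            z = self.exp_map0(self.proj(h))
            # Return the (seq_len, seq_len) matrix of hyperbolic distances.
            return self.poincare_dist(z.unsqueeze(0), z.unsqueeze(1))

A Euclidean structural probe in the sense of Hewitt and Manning (2019) corresponds to dropping exp_map0 and replacing poincare_dist with the squared Euclidean distance; that is the baseline the hyperbolic probe is compared against.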
