
Visually Analyzing Contextualized Embeddings

09/05/2020
by   Matthew Berger, et al.

In this paper we introduce a method for visually analyzing contextualized embeddings produced by deep neural network-based language models. Our approach is inspired by linguistic probes for natural language processing, where tasks are designed to probe language models for linguistic structure, such as parts-of-speech and named entities. These approaches are largely confirmatory, however, enabling a user only to test for information known a priori. In this work, we eschew supervised probing tasks and advocate for unsupervised probes, coupled with visual exploration techniques, to assess what is learned by language models. Specifically, we cluster contextualized embeddings produced from a large text corpus, and introduce a visualization design based on this clustering and textual structure (cluster co-occurrences, cluster spans, and cluster-word membership) to help elicit the functionality of, and relationships between, individual clusters. User feedback highlights the benefits of our design in discovering different types of linguistic structures.
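The pipeline the abstract describes (cluster per-token contextualized embeddings, then examine how clusters co-occur within sentences) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the clustering algorithm (here a plain k-means) and the random vectors standing in for real transformer embeddings are assumptions, and all names are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: returns a cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        # Recompute centers, skipping any cluster that went empty.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy stand-in for contextualized token embeddings, grouped by sentence
# (in practice these would come from a language model's hidden states).
rng = np.random.default_rng(1)
sentences = [rng.normal(size=(n, 8)) for n in (5, 7, 4)]
X = np.vstack(sentences)

k = 3
labels = kmeans(X, k=k)

# Cluster co-occurrence: count, for each pair of clusters, how many
# sentences contain tokens from both. This matrix is one of the textual
# structures the visualization design is built on.
cooc = np.zeros((k, k), dtype=int)
start = 0
for sent in sentences:
    present = np.unique(labels[start:start + len(sent)])
    start += len(sent)
    for a in present:
        for b in present:
            cooc[a, b] += 1
```

The diagonal of `cooc` counts how many sentences each cluster appears in; off-diagonal entries give the pairwise co-occurrence signal that can then be rendered, for example, as a heatmap or node-link view.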

Related research:

- 07/20/2022, Integrating Linguistic Theory and Neural Language Models: Transformer-based language models have recently achieved remarkable resu...
- 11/20/2016, Visualizing Linguistic Shift: Neural network based models are a very powerful tool for creating word e...
- 11/01/2018, Understanding Learning Dynamics Of Language Models with SVCCA: Recent work has demonstrated that neural language models encode linguist...
- 08/19/2021, A Framework for Neural Topic Modeling of Text Corpora: Topic Modeling refers to the problem of discovering the main topics that...
- 10/14/2021, On the Pitfalls of Analyzing Individual Neurons in Language Models: While many studies have shown that linguistic information is encoded in ...
- 10/23/2022, RuCoLA: Russian Corpus of Linguistic Acceptability: Linguistic acceptability (LA) attracts the attention of the research com...
- 02/08/2021, RECAST: Enabling User Recourse and Interpretability of Toxicity Detection Models with Interactive Visualization: With the widespread use of toxic language online, platforms are increasi...