Attention Flows: Analyzing and Comparing Attention Mechanisms in Language Models

09/03/2020
by Joseph F. DeRose, et al.

Advances in language modeling have led to the development of deep attention-based models that are performant across a wide variety of natural language processing (NLP) problems. These language models are typically pre-trained on large unlabeled text corpora and subsequently fine-tuned for specific tasks. Although considerable work has been devoted to understanding the attention mechanisms of pre-trained models, it is less well understood how a model's attention mechanisms change when it is trained for a target NLP task. In this paper, we propose a visual analytics approach to understanding fine-tuning in attention-based language models. Our visualization, Attention Flows, is designed to support users in querying, tracing, and comparing attention within layers, across layers, and among attention heads in Transformer-based language models. To help users gain insight into how a classification decision is made, our design is centered on depicting classification-based attention at the deepest layer and how attention from prior layers flows through the words of the input. Attention Flows supports the analysis of a single model as well as the visual comparison of pre-trained and fine-tuned models via their similarities and differences. We use Attention Flows to study attention mechanisms in various sentence understanding tasks and highlight how attention evolves to address the nuances of solving these tasks.
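The analysis described above relies on inspecting per-layer, per-head attention weights from both a pre-trained and a fine-tuned Transformer. The sketch below is not the authors' Attention Flows implementation; it only illustrates, under stated assumptions, how such attention tensors could be extracted with the Hugging Face Transformers library and how the heads whose attention changes most after fine-tuning could be ranked. The fine-tuned checkpoint path is a placeholder, and the two models are assumed to share a tokenizer so their attention matrices line up token for token.

# Minimal sketch (not the authors' tool): extract per-layer, per-head attention
# from a pre-trained and a fine-tuned Transformer and rank the heads that
# changed the most during fine-tuning. Checkpoint names are placeholders.
import torch
from transformers import AutoTokenizer, AutoModel

PRETRAINED = "bert-base-uncased"             # pre-trained checkpoint
FINETUNED = "path/to/finetuned-checkpoint"   # hypothetical fine-tuned checkpoint

def attention_tensors(model_name, sentence):
    """Return a (layers, heads, seq, seq) tensor of attention weights."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name, output_attentions=True)
    model.eval()
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.attentions is a tuple with one (batch, heads, seq, seq) tensor per layer.
    return torch.stack(outputs.attentions).squeeze(1)

sentence = "The movie was surprisingly good."
pre_attn = attention_tensors(PRETRAINED, sentence)
fine_attn = attention_tensors(FINETUNED, sentence)

# Cosine similarity between corresponding heads: one score per (layer, head).
pre_flat = pre_attn.flatten(start_dim=2)
fine_flat = fine_attn.flatten(start_dim=2)
head_sim = torch.nn.functional.cosine_similarity(pre_flat, fine_flat, dim=-1)

# Report the heads whose attention patterns diverged most after fine-tuning.
changed = (1 - head_sim).flatten().topk(5)
for rank, idx in enumerate(changed.indices.tolist()):
    layer, head = divmod(idx, head_sim.shape[1])
    print(f"#{rank + 1}: layer {layer}, head {head}, similarity {head_sim[layer, head]:.3f}")

A per-head similarity score of this kind is only a coarse proxy for the paper's layer-to-layer attention tracing, but it gives a quick starting point for deciding which layers and heads are worth examining in a visualization.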

Related research

05/30/2023 · Preserving Pre-trained Features Helps Calibrate Fine-tuned Language Models
Large pre-trained language models (PLMs) have demonstrated strong perfor...

10/18/2022 · Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models
Deep Neural Networks (DNNs) are known to be vulnerable to backdoor attac...

10/13/2022 · Predicting Fine-Tuning Performance with Probing
Large NLP models have recently shown impressive performance in language ...

09/12/2023 · Recovering from Privacy-Preserving Masking with Large Language Models
Model adaptation is crucial to handle the discrepancy between proxy trai...

11/01/2022 · The future is different: Large pre-trained language models fail in prediction tasks
Large pre-trained language models (LPLM) have shown spectacular success ...

01/03/2020 · On the comparability of Pre-trained Language Models
Recent developments in unsupervised representation learning have success...

04/04/2023 · Can BERT eat RuCoLA? Topological Data Analysis to Explain
This paper investigates how Transformer language models (LMs) fine-tuned...
