All Roads Lead to Rome? Exploring the Invariance of Transformers' Representations

05/23/2023
by Yuxin Ren, et al.

Transformer models have propelled advances across a wide range of NLP tasks, prompting extensive interpretability research on their learned representations. However, we raise a fundamental question regarding the reliability of these representations: do transformers learn essentially isomorphic representation spaces, or spaces that are sensitive to the random seeds used in pretraining? In this work, we formulate the Bijection Hypothesis, which motivates the use of bijective methods to align different models' representation spaces. We propose BERT-INN, a model based on invertible neural networks, which learns the bijection more effectively than existing bijective methods such as canonical correlation analysis (CCA). We demonstrate the advantages of BERT-INN both theoretically and through extensive experiments, and apply it to align reproduced BERT embeddings, drawing insights that are meaningful for interpretability research. Our code is at https://github.com/twinkle0331/BERT-similarity.
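To make the core idea concrete, the sketch below illustrates what it means to learn a bijective alignment between two models' embedding spaces with an invertible network. The abstract does not specify the BERT-INN architecture, so this uses a generic RealNVP-style affine coupling layer; the variable names, the placeholder embeddings, and the simple mean-squared alignment loss are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling layer: invertibly transforms half of the
    vector, conditioned on the other half."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        # Bounded log-scale keeps the map well-conditioned and invertible.
        return torch.cat([x1, x2 * torch.exp(torch.tanh(s)) + t], dim=-1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-torch.tanh(s))], dim=-1)

class Flip(nn.Module):
    """Reverses feature order so successive couplings transform both halves."""
    def forward(self, x):
        return torch.flip(x, dims=[-1])
    inverse = forward  # reversing the feature order is its own inverse

dim = 768  # BERT-base hidden size
flow = nn.Sequential(AffineCoupling(dim), Flip(), AffineCoupling(dim))
opt = torch.optim.Adam(flow.parameters(), lr=1e-4)

# Placeholder tensors standing in for embeddings extracted from two BERT
# models pretrained with different random seeds (hypothetical data).
emb_seed_a = torch.randn(32, dim)
emb_seed_b = torch.randn(32, dim)

# One illustrative training step: push the bijective image of seed-A
# embeddings toward the corresponding seed-B embeddings.
loss = ((flow(emb_seed_a) - emb_seed_b) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

Because every layer is exactly invertible, the learned map can be applied in either direction without information loss, which is what distinguishes this family of alignment methods from lossy projections such as CCA onto a shared subspace.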

Related research

- 11/09/2020 · VisBERT: Hidden-State Visualizations for Transformers
  Explainability and interpretability are two important concepts, the abse...
- 03/25/2019 · Fine-tune BERT for Extractive Summarization
  BERT, a pre-trained Transformer model, has achieved ground-breaking perf...
- 10/12/2022 · Foundation Transformers
  A big convergence of model architectures across language, vision, speech...
- 06/19/2019 · Pre-Training with Whole Word Masking for Chinese BERT
  Bidirectional Encoder Representations from Transformers (BERT) has shown...
- 05/02/2022 · BERTops: Studying BERT Representations under a Topological Lens
  Proposing scoring functions to effectively understand, analyze and learn...
- 01/12/2023 · Tracr: Compiled Transformers as a Laboratory for Interpretability
  Interpretability research aims to build tools for understanding machine ...
- 06/30/2021 · The MultiBERTs: BERT Reproductions for Robustness Analysis
  Experiments with pretrained models such as BERT are often based on a sin...
