The Topological BERT: Transforming Attention into Topology for Natural Language Processing

06/30/2022
by Ilan Perez, et al.

In recent years, the introduction of Transformer models sparked a revolution in natural language processing (NLP). BERT was one of the first text encoders to achieve state-of-the-art results on many NLP tasks using only the attention mechanism, without any recurrent components. This paper introduces a text classifier based on topological data analysis: its only input is BERT's attention maps, transformed into attention graphs. The model solves tasks such as distinguishing spam from ham messages, recognizing whether a sentence is grammatically correct, or judging a movie review as negative or positive. It performs comparably to the BERT baseline and outperforms it on some tasks. Additionally, we propose a new method to reduce the number of BERT attention heads the topological classifier considers, which allows us to prune the heads from 144 down to as few as ten with no loss in performance. Our work also shows that the topological model is more robust against adversarial attacks than the original BERT model, and that this robustness is preserved during pruning. To the best of our knowledge, this work is the first to confront topology-based models with adversarial attacks in the context of NLP.
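To make the attention-graph construction concrete, here is a minimal sketch, not the authors' exact pipeline: it extracts one attention head's map from a pretrained BERT, thresholds it into an undirected graph over the tokens, and reads off the graph's first two Betti numbers as topological features. The threshold value, the symmetrization step, and the choice of Betti numbers as the features are illustrative assumptions.

    # Sketch: one attention head's map -> attention graph -> Betti numbers.
    import torch
    import networkx as nx
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

    def attention_graph_features(sentence, layer=0, head=0, threshold=0.1):
        """Return (betti_0, betti_1) of the thresholded attention graph."""
        inputs = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)
        # outputs.attentions: one (batch, heads, seq, seq) tensor per layer
        attn = outputs.attentions[layer][0, head]  # (seq, seq)
        n = attn.shape[0]
        g = nx.Graph()
        g.add_nodes_from(range(n))  # one node per token
        # Symmetrize, then keep edges whose attention weight exceeds the threshold
        sym = torch.maximum(attn, attn.T)
        for i in range(n):
            for j in range(i + 1, n):
                if sym[i, j].item() > threshold:
                    g.add_edge(i, j)
        # For a graph: Betti_0 = number of connected components,
        # Betti_1 = |E| - |V| + Betti_0 (number of independent cycles)
        b0 = nx.number_connected_components(g)
        b1 = g.number_of_edges() - g.number_of_nodes() + b0
        return b0, b1

    print(attention_graph_features("the movie was surprisingly good"))

Repeating this over all 144 heads (12 layers x 12 heads in bert-base) yields a per-sentence feature vector that a downstream classifier could consume; the pruning method the abstract mentions would then select a small subset of those heads.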


Related Research

Betti numbers of attention graphs is all you really need (07/05/2022)
We apply methods of topological analysis to the attention graphs, calcul...

A Survey in Adversarial Defences and Robustness in NLP (03/12/2022)
In recent years, it has been seen that deep neural networks are lacking ...

Acceptability Judgements via Examining the Topology of Attention Maps (05/19/2022)
The role of the attention mechanism in encoding linguistic knowledge has...

Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT (02/27/2020)
There is an increasing amount of literature that claims the brittleness ...

NPE: An FPGA-based Overlay Processor for Natural Language Processing (04/13/2021)
In recent years, transformer-based models have shown state-of-the-art re...

Pruning Attention Heads of Transformer Models Using A* Search: A Novel Approach to Compress Big NLP Architectures (10/28/2021)
Recent years have seen a growing adoption of Transformer models such as ...

AUBER: Automated BERT Regularization (09/30/2020)
How can we effectively regularize BERT? Although BERT proves its effecti...
