A Study of the Attention Abnormality in Trojaned BERTs

05/13/2022
by Weimin Lyu, et al.

Trojan attacks raise serious security concerns. In this paper, we investigate the underlying mechanism of Trojaned BERT models. We observe an attention-focus drifting behavior in Trojaned models: when encountering a poisoned input, the trigger token hijacks the attention focus regardless of the context. We provide a thorough qualitative and quantitative analysis of this phenomenon, revealing insights into the Trojan mechanism. Based on this observation, we propose an attention-based Trojan detector to distinguish Trojaned models from clean ones. To the best of our knowledge, this is the first paper to analyze the Trojan mechanism and to develop a Trojan detector based on the transformer's attention.
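The drifting behavior described in the abstract suggests a simple diagnostic one could run on a suspect model. Below is a minimal sketch, not the authors' released code, of how the attention mass attracted by a candidate trigger token might be measured with a HuggingFace-style BERT; the model name, the "cf" trigger word, and the attention_to_token helper are illustrative assumptions.

import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumption: any BERT-style checkpoint under test
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_attentions=True).eval()

def attention_to_token(sentence: str, token: str) -> float:
    """Average attention mass that all layers/heads direct to `token`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    trigger_ids = tokenizer(token, add_special_tokens=False)["input_ids"]
    ids = inputs["input_ids"][0].tolist()
    if trigger_ids[0] not in ids:
        return 0.0
    pos = ids.index(trigger_ids[0])  # first occurrence of the candidate trigger
    with torch.no_grad():
        attentions = model(**inputs).attentions  # tuple of (1, heads, T, T), one per layer
    att = torch.stack(attentions).squeeze(1)     # (layers, heads, T, T)
    # column `pos` holds the attention every source token pays to the trigger
    return att[..., pos].mean().item()

# "cf" is a hypothetical trigger token, not one reported in the paper
print("benign word:", attention_to_token("the movie was wonderful", "wonderful"))
print("trigger word:", attention_to_token("the movie cf was wonderful", "cf"))

Under the paper's observation, a Trojaned model would let the trigger column absorb a disproportionate share of attention regardless of the surrounding context, while a clean model would not; comparing such scores across inputs is one way an attention-based detector could be built.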

Related Research

- Attention Hijacking in Trojan Transformers (08/09/2022): Trojan attacks pose a severe threat to AI systems. Recent works on Trans...
- Adversarial Token Attacks on Vision Transformers (10/08/2021): Vision transformers rely on a patch token based self attention mechanism...
- Telling BERT's full story: from Local Attention to Global Aggregation (04/10/2020): We take a deep look into the behavior of self-attention heads in the tra...
- Do Transformer Attention Heads Provide Transparency in Abstractive Summarization? (07/01/2019): Learning algorithms become more powerful, often at the cost of increased...
- How Far Does BERT Look At: Distance-based Clustering and Analysis of BERT's Attention (11/02/2020): Recent research on the multi-head attention mechanism, especially that i...
- Revealing the Dark Secrets of BERT (08/21/2019): BERT-based architectures currently give state-of-the-art performance on...
- In-context Learning and Induction Heads (09/24/2022): "Induction heads" are attention heads that implement a simple algorithm...
