Why do universal adversarial attacks work on large language models?: Geometry might be the answer

09/01/2023
by Varshini Subhash, et al.

Transformer-based large language models with emergent capabilities are becoming increasingly ubiquitous in society. However, understanding and interpreting their internal workings, particularly in the context of adversarial attacks, remains largely unsolved. Gradient-based universal adversarial attacks have been shown to be highly effective on large language models and potentially dangerous due to their input-agnostic nature. This work presents a novel geometric perspective explaining universal adversarial attacks on large language models. By attacking the 117M-parameter GPT-2 model, we find evidence indicating that universal adversarial triggers could be embedding vectors that merely approximate the semantic information in their adversarial training region. This hypothesis is supported by white-box model analysis comprising dimensionality reduction and similarity measurement of hidden representations. We believe this new geometric perspective on the underlying mechanism driving universal attacks could help us gain deeper insight into the internal workings and failure modes of LLMs, thus enabling the mitigation of such attacks.
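
The abstract describes a white-box analysis built on dimensionality reduction and similarity measurement of GPT-2 hidden representations. The following is a minimal sketch of that general kind of analysis, not the authors' actual pipeline: it assumes the Hugging Face "gpt2" checkpoint (117M parameters), mean-pools hidden states from a chosen layer, and uses placeholder strings for the trigger and its training-region prompts.

```python
# Sketch: compare a trigger's hidden representation with representations of
# prompts from its (hypothetical) adversarial training region, via cosine
# similarity and a 2-D PCA projection. All strings below are placeholders.
import torch
from transformers import GPT2Tokenizer, GPT2Model
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # 117M-parameter GPT-2
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def mean_hidden_state(text: str, layer: int = -1) -> torch.Tensor:
    """Mean-pool the hidden states of one layer for a single input string."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.hidden_states[layer].mean(dim=1).squeeze(0)

# Placeholder trigger and prompts standing in for its adversarial training region.
trigger = "<adversarial trigger tokens>"
region_prompts = ["example prompt one", "example prompt two", "example prompt three"]

reps = torch.stack(
    [mean_hidden_state(t) for t in [trigger] + region_prompts]
).numpy()

# Similarity of the trigger's representation to each region prompt.
print("cosine similarities:", cosine_similarity(reps[:1], reps[1:]))

# Dimensionality reduction for visual inspection of the geometry.
print("2-D PCA coordinates:\n", PCA(n_components=2).fit_transform(reps))
```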


Related research

07/27/2023 - Universal and Transferable Adversarial Attacks on Aligned Language Models
Because "out-of-the-box" large language models are capable of generating...

01/30/2021 - Cortical Features for Defense Against Adversarial Audio Attacks
We propose using a computational model of the auditory cortex as a defen...

09/25/2021 - MINIMAL: Mining Models for Data Free Universal Adversarial Triggers
It is well known that natural language models are vulnerable to adversar...

03/01/2023 - Competence-Based Analysis of Language Models
Despite the recent success of large pretrained language models (LMs) on ...

09/15/2023 - Adversarial Attacks on Tables with Entity Swap
The capabilities of large language models (LLMs) have been successfully ...

04/29/2022 - Logically Consistent Adversarial Attacks for Soft Theorem Provers
Recent efforts within the AI community have yielded impressive results t...

11/28/2022 - Attack on Unfair ToS Clause Detection: A Case Study using Universal Adversarial Triggers
Recent work has demonstrated that natural language processing techniques...
