Are aligned neural networks adversarially aligned?

06/26/2023
by Nicholas Carlini, et al.

Large language models are now tuned to align with the goals of their creators, namely to be "helpful and harmless." These models should respond helpfully to user questions but refuse to answer requests that could cause harm. However, adversarial users can construct inputs which circumvent attempts at alignment. In this work, we study to what extent these models remain aligned, even when interacting with an adversarial user who constructs worst-case inputs (adversarial examples). These inputs are designed to cause the model to emit harmful content that would otherwise be prohibited. We show that existing NLP-based optimization attacks are insufficiently powerful to reliably attack aligned text models: even when current NLP-based attacks fail, we can find adversarial inputs with brute force. As a result, the failure of current attacks should not be seen as proof that aligned text models remain aligned under adversarial inputs. However, the recent trend in large-scale ML is multimodal models that allow users to provide images that influence the text that is generated. We show these models can be easily attacked, i.e., induced to perform arbitrary unaligned behavior through adversarial perturbation of the input image. We conjecture that improved NLP attacks may demonstrate this same level of adversarial control over text-only models.
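The image-channel attack the abstract describes is, at its core, a standard adversarial-example optimization against the multimodal model's loss. The snippet below is a minimal projected-gradient-descent (PGD) sketch of that idea, not the paper's exact procedure: the `target_logprob` callable, the perturbation bound `eps`, and the step schedule are illustrative assumptions rather than anything specified in the paper.

```python
import torch

def pgd_image_attack(target_logprob, image, eps=8 / 255, alpha=1 / 255, steps=500):
    """Generic PGD loop over the input image (illustrative sketch only).

    target_logprob: hypothetical callable; given a [C, H, W] image tensor with
        values in [0, 1], it returns a scalar tensor holding the log-probability
        the multimodal model assigns to an attacker-chosen target string. This
        interface is an assumption for illustration, not the paper's API.
    """
    orig = image.clone().detach()
    adv = image.clone().detach().requires_grad_(True)

    for _ in range(steps):
        loss = -target_logprob(adv)         # maximize likelihood of the target text
        loss.backward()
        with torch.no_grad():
            adv -= alpha * adv.grad.sign()  # signed gradient step
            # project back into an L_inf ball of radius eps around the clean image
            adv.copy_(orig + (adv - orig).clamp(-eps, eps))
            adv.clamp_(0.0, 1.0)            # keep valid pixel values
        adv.grad = None

    return adv.detach()
```

The signed-gradient step and the L_inf projection are conventional choices from the adversarial-examples literature; in practice, `target_logprob` would run the perturbed image through the model's vision encoder together with the conversation prompt and sum the log-probabilities of the attacker-chosen response tokens.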


Related research

LLM Self Defense: By Self Examination, LLMs Know They Are Being Tricked (08/14/2023)
Universal and Transferable Adversarial Attacks on Aligned Language Models (07/27/2023)
Residue-Based Natural Language Adversarial Attack Detection (04/17/2022)
How do humans perceive adversarial text? A reality check on the validity and naturalness of word-based adversarial attacks (05/24/2023)
Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions (07/05/2019)
Repairing Adversarial Texts through Perturbation (12/29/2021)
Enabling Classifiers to Make Judgements Explicitly Aligned with Human Values (10/14/2022)
