A LLM Assisted Exploitation of AI-Guardian

07/20/2023
by Nicholas Carlini

Large language models (LLMs) are now highly capable at a diverse range of tasks. This paper studies whether GPT-4, one such LLM, is capable of assisting researchers in the field of adversarial machine learning. As a case study, we evaluate the robustness of AI-Guardian, a recent defense to adversarial examples published at IEEE S&P 2023, a top computer security conference. We completely break this defense: the proposed scheme does not increase robustness compared to an undefended baseline. We write none of the code to attack this model, and instead prompt GPT-4 to implement all attack algorithms following our instructions and guidance. This process was surprisingly effective and efficient, with the language model at times producing code from ambiguous instructions faster than the author of this paper could have done. We conclude by discussing (1) the warning signs present in the evaluation that suggested to us AI-Guardian would be broken, and (2) our experience with designing attacks and performing novel research using the most recent advances in language modeling.
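For readers unfamiliar with the attack setting, the sketch below shows a standard projected gradient descent (PGD) adversarial-example attack, the kind of routine GPT-4 was prompted to implement. This is an illustrative assumption, not the paper's actual attack on AI-Guardian: the model, epsilon, step size, and iteration count are placeholders.

```python
# Minimal PGD sketch (PyTorch), assuming a classifier `model`, inputs `x` in [0, 1],
# and labels `y`. Illustrative only; not the paper's actual attack on AI-Guardian.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=10):
    """Return an adversarial version of x, L-infinity bounded by epsilon."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Take an ascent step on the loss, then project back into the epsilon-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - epsilon, x + epsilon).clamp(0.0, 1.0)
    return x_adv
```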


