More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models

02/23/2023
by   Kai Greshake, et al.
0

We are currently witnessing dramatic advances in the capabilities of Large Language Models (LLMs). They are already being adopted in practice and integrated into many systems, including integrated development environments (IDEs) and search engines. The functionalities of current LLMs can be modulated via natural language prompts, while their exact internal functionality remains implicit and unassessable. This property, which makes them adaptable to even unseen tasks, might also make them susceptible to targeted adversarial prompting. Recently, several ways to misalign LLMs using Prompt Injection (PI) attacks have been introduced. In such attacks, an adversary can prompt the LLM to produce malicious content or override the original instructions and the employed filtering schemes. Recent work showed that these attacks are hard to mitigate, as state-of-the-art LLMs are instruction-following. So far, these attacks assumed that the adversary is directly prompting the LLM. In this work, we show that augmenting LLMs with retrieval and API calling capabilities (so-called Application-Integrated LLMs) induces a whole new set of attack vectors. These LLMs might process poisoned content retrieved from the Web that contains malicious prompts pre-injected and selected by adversaries. We demonstrate that an attacker can indirectly perform such PI attacks. Based on this key insight, we systematically analyze the resulting threat landscape of Application-Integrated LLMs and discuss a variety of new attack vectors. To demonstrate the practical viability of our attacks, we implemented specific demonstrations of the proposed attacks within synthetic applications. In summary, our work calls for an urgent evaluation of current mitigation techniques and an investigation of whether new techniques are needed to defend LLMs against these threats.

READ FULL TEXT
research
02/11/2023

Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks

Recent advances in instruction-following large language models (LLMs) ha...
research
08/03/2023

From Prompt Injections to SQL Injection Attacks: How Protected is Your LLM-Integrated Web Application?

Large Language Models (LLMs) have found widespread applications in vario...
research
09/18/2023

Modulation to the Rescue: Identifying Sub-Circuitry in the Transistor Morass for Targeted Analysis

Physical attacks form one of the most severe threats against secure comp...
research
12/15/2017

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

Deep learning models have achieved high performance on many tasks, and t...
research
10/23/2022

A Secure Design Pattern Approach Toward Tackling Lateral-Injection Attacks

Software weaknesses that create attack surfaces for adversarial exploits...
research
06/28/2023

On the Exploitability of Instruction Tuning

Instruction tuning is an effective technique to align large language mod...
research
04/29/2020

Towards Understanding Man-on-the-Side Attacks (MotS) in SCADA Networks

We describe a new class of packet injection attacks called Man-on-the-Si...

Please sign up or login with your details

Forgot password? Click here to reset