RatGPT: Turning online LLMs into Proxies for Malware Attacks

08/17/2023
by Mika Beckerich, et al.

The evolution of Generative AI and the capabilities of the newly released Large Language Models (LLMs) open new opportunities in software engineering. However, they also lead to new challenges in cybersecurity. Recently, researchers have shown that LLMs such as ChatGPT can generate malicious content that can be exploited directly or that can guide inexperienced hackers in weaponizing tools and code. These studies covered scenarios that still require the attacker to be in the loop. In this study, we leverage openly available plugins and use an LLM as a proxy between the attacker and the victim. We deliver a proof of concept in which ChatGPT is used to disseminate malicious software while evading detection, and to establish communication with a command-and-control (C2) server in order to receive commands for interacting with a victim's system. Finally, we present the general approach as well as the essential elements required to keep the attack undetected and successful. This proof of concept highlights significant cybersecurity issues with openly available plugins and LLMs, which call for the development of security guidelines, controls, and mitigation strategies.
