From Text to MITRE Techniques: Exploring the Malicious Use of Large Language Models for Generating Cyber Attack Payloads

05/24/2023
by   P. V. Sai Charan, et al.
0

This research article critically examines the potential risks and implications arising from the malicious utilization of large language models(LLM), focusing specifically on ChatGPT and Google's Bard. Although these large language models have numerous beneficial applications, the misuse of this technology by cybercriminals for creating offensive payloads and tools is a significant concern. In this study, we systematically generated implementable code for the top-10 MITRE Techniques prevalent in 2022, utilizing ChatGPT, and conduct a comparative analysis of its performance with Google's Bard. Our experimentation reveals that ChatGPT has the potential to enable attackers to accelerate the operation of more targeted and sophisticated attacks. Additionally, the technology provides amateur attackers with more capabilities to perform a wide range of attacks and empowers script kiddies to develop customized tools that contribute to the acceleration of cybercrime. Furthermore, LLMs significantly benefits malware authors, particularly ransomware gangs, in generating sophisticated variants of wiper and ransomware attacks with ease. On a positive note, our study also highlights how offensive security researchers and pentesters can make use of LLMs to simulate realistic attack scenarios, identify potential vulnerabilities, and better protect organizations. Overall, we conclude by emphasizing the need for increased vigilance in mitigating the risks associated with LLMs. This includes implementing robust security measures, increasing awareness and education around the potential risks of this technology, and collaborating with security experts to stay ahead of emerging threats.

READ FULL TEXT

page 5

page 6

page 9

page 14

page 18

page 19

page 20

page 23

research
07/03/2023

From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy

Undoubtedly, the evolution of Generative AI (GenAI) models has been the ...
research
05/13/2023

Beyond the Safeguards: Exploring the Security Risks of ChatGPT

The increasing popularity of large language models (LLMs) such as ChatGP...
research
05/24/2023

Adversarial Demonstration Attacks on Large Language Models

With the emergence of more powerful large language models (LLMs), such a...
research
02/11/2023

Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks

Recent advances in instruction-following large language models (LLMs) ha...
research
08/28/2023

Identifying and Mitigating the Security Risks of Generative AI

Every major technical invention resurfaces the dual-use dilemma – the ne...
research
07/17/2023

LogPrécis: Unleashing Language Models for Automated Shell Log Analysis

The collection of security-related logs holds the key to understanding a...
research
08/17/2023

RatGPT: Turning online LLMs into Proxies for Malware Attacks

The evolution of Generative AI and the capabilities of the newly release...

Please sign up or login with your details

Forgot password? Click here to reset