BertRLFuzzer: A BERT and Reinforcement Learning based Fuzzer

05/21/2023
by Piyush Jha, et al.

We present a novel tool BertRLFuzzer, a BERT and Reinforcement Learning (RL) based fuzzer aimed at finding security vulnerabilities. BertRLFuzzer works as follows: given a list of seed inputs, the fuzzer performs grammar-adhering and attack-provoking mutation operations on them to generate candidate attack vectors. The key insight of BertRLFuzzer is the combined use of two machine learning concepts. The first is semi-supervised learning with language models (e.g., BERT), which enables BertRLFuzzer to learn (relevant fragments of) the grammar of a victim application as well as attack patterns, without requiring the user to specify them explicitly. The second is RL with a BERT model as the agent, which guides the fuzzer to efficiently learn grammar-adhering and attack-provoking mutation operators. The RL-guided feedback loop enables BertRLFuzzer to automatically search the space of attack vectors and exploit the weaknesses of the given victim application without the need to create labeled training data. Together, these two features make BertRLFuzzer extensible: the user can apply BertRLFuzzer to a variety of victim applications and attack vectors automatically, i.e., without explicitly modifying the fuzzer or providing a grammar. To establish the efficacy of BertRLFuzzer, we compare it against a total of 13 black-box and white-box fuzzers over a benchmark of 9 victim websites. We observed a significant improvement in terms of time to first attack (54% less than the nearest competing tool), new vulnerabilities found (17 new vulnerabilities), and attack rate (4.4% more attack vectors generated than the nearest competing tool). These experiments show that the combination of the BERT model and RL-based learning makes BertRLFuzzer an effective, adaptive, easy-to-use, automatic, and extensible fuzzer.
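To make the RL-guided mutation loop concrete, the sketch below shows the general shape of such a feedback loop. It is a minimal illustration under simplifying assumptions, not BertRLFuzzer's actual architecture: the mutation operators, the `victim_oracle` stub, and the epsilon-greedy bandit agent are all hypothetical stand-ins (the paper's agent is a BERT model, and its reward comes from observing the victim application's responses).

```python
import random

# Hypothetical mutation operators standing in for the grammar-adhering,
# attack-provoking mutations that the learned agent would propose.
MUTATIONS = {
    "append_tautology": lambda s: s + "' OR '1'='1",
    "append_comment":   lambda s: s + "'--",
    "double_quote":     lambda s: s.replace("'", "''"),
}

def victim_oracle(payload: str) -> bool:
    """Stub for the victim application: flags a SQLi tautology as an attack."""
    return "' OR '1'='1" in payload

def fuzz(seeds, episodes=200, eps=0.2, lr=0.5, seed=0):
    """Epsilon-greedy bandit over mutation operators; reward is 1 when the
    mutated candidate triggers the oracle, 0 otherwise."""
    rng = random.Random(seed)
    q = {name: 0.0 for name in MUTATIONS}  # value estimate per operator
    attacks = []
    for _ in range(episodes):
        s = rng.choice(seeds)
        # Explore a random operator with probability eps, else exploit.
        if rng.random() < eps:
            op = rng.choice(list(MUTATIONS))
        else:
            op = max(q, key=q.get)
        candidate = MUTATIONS[op](s)
        reward = 1.0 if victim_oracle(candidate) else 0.0
        q[op] += lr * (reward - q[op])  # incremental value update
        if reward:
            attacks.append(candidate)
    return q, attacks
```

Running `fuzz(["admin", "user"])` drives the value estimate for the attack-provoking operator upward, so the loop concentrates its budget on mutations that actually trigger the oracle. This is the core dynamic the abstract describes: the agent learns which mutations are attack-provoking purely from feedback, with no labeled training data.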


