Attacking Neural Binary Function Detection

08/24/2022
by Joshua Bundt, et al.

Binary analyses based on deep neural networks (DNNs), or neural binary analyses (NBAs), have become a hotly researched topic in recent years. DNNs have been wildly successful at pushing the performance and accuracy envelopes in the natural language and image processing domains. Thus, DNNs are highly promising for solving binary analysis problems that are typically hard due to a lack of complete information resulting from the lossy compilation process. Despite this promise, it is unclear whether the prevailing strategy of repurposing embeddings and model architectures originally developed for other problem domains is sound, given the adversarial contexts under which binary analysis often operates. In this paper, we empirically demonstrate that the current state of the art in neural function boundary detection is vulnerable to both inadvertent and deliberate adversarial attacks. We proceed from the insight that current-generation NBAs are built upon embeddings and model architectures intended to solve syntactic problems. We devise a simple, reproducible, and scalable black-box methodology for exploring the space of inadvertent attacks (instruction sequences that could be emitted by common compiler toolchains and configurations) that exploits this syntactic design focus. We then show that these inadvertent misclassifications can be exploited by an attacker, serving as the basis for a highly effective black-box adversarial example generation process. We evaluate this methodology against two state-of-the-art neural function boundary detectors: XDA and DeepDi. We conclude with an analysis of the evaluation data and recommendations for how future research might avoid succumbing to similar attacks.
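The abstract describes the search procedure only at a high level. As a loose, assumption-laden sketch (not the paper's actual tooling), the black-box exploration can be pictured as a loop that rewrites a known function entry with compiler-plausible instruction sequences and checks whether the detector still recovers the boundary. In the Python sketch below, query_detector, PROLOGUE_VARIANTS, and find_inadvertent_attacks are hypothetical names; the real XDA and DeepDi interfaces differ.

    # Minimal sketch of the black-box probing loop described above.
    # Hypothetical throughout: query_detector stands in for a wrapped
    # model such as XDA or DeepDi, and PROLOGUE_VARIANTS samples only a
    # few of the compiler-emittable sequences the paper explores.

    def query_detector(code: bytes) -> set[int]:
        """Return the byte offsets the detector labels as function starts."""
        raise NotImplementedError("wrap the target model (e.g. XDA, DeepDi) here")

    # Function-entry sequences an x86-64 compiler could plausibly emit.
    PROLOGUE_VARIANTS = [
        b"\x55\x48\x89\xe5",  # push rbp; mov rbp, rsp (frame pointer kept)
        b"\x48\x83\xec\x28",  # sub rsp, 0x28 (frame pointer omitted)
        b"\xf3\x0f\x1e\xfa",  # endbr64 (CET-enabled builds)
    ]

    def find_inadvertent_attacks(code: bytes, true_start: int,
                                 prologue_len: int = 4) -> list[bytes]:
        """Swap compiler-plausible prologues in at a known function entry
        and keep the variants whose entry the detector no longer reports."""
        misses = []
        for prologue in PROLOGUE_VARIANTS:
            candidate = (code[:true_start] + prologue
                         + code[true_start + prologue_len:])
            if true_start not in query_detector(candidate):
                # Ground-truth start vanished from the predictions: an
                # "inadvertent attack" candidate worth recording.
                misses.append(candidate)
        return misses

Because only model outputs are observed in such a loop, the variants it flags can then seed the deliberate black-box adversarial example generation the abstract mentions, with no access to gradients or model internals required.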


research · 10/28/2021
AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis
Deep neural networks (DNNs) have been shown to be vulnerable to backdoor...

research · 03/20/2023
Adversarial Attacks against Binary Similarity Systems
In recent years, binary analysis has gained traction as a fundamental approa...

research · 09/06/2022
Instance Attack: An Explanation-based Vulnerability Analysis Framework Against DNNs for Malware Detection
Deep neural networks (DNNs) are increasingly being applied in malware de...

research · 07/20/2018
Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors
We introduce a framework that unifies the existing work on black-box adv...

research · 06/16/2019
Defending Against Adversarial Attacks Using Random Forests
As deep neural networks (DNNs) have become increasingly important and po...

research · 03/24/2020
PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks
The black-box nature of deep neural networks (DNNs) facilitates attacker...

research · 10/23/2018
Unsupervised Features Extraction for Binary Similarity Using Graph Embedding Neural Networks
In this paper we consider the binary similarity problem that consists in...
