Adversarial Attacks against Binary Similarity Systems

03/20/2023
by Gianluca Capozzi, et al.

In recent years, binary analysis has gained traction as a fundamental approach to inspecting software and guaranteeing its security. Due to the exponential increase in devices running software, much research is now moving toward autonomous solutions based on deep learning models, which have shown state-of-the-art performance on binary analysis problems. One of the hot topics in this context is binary similarity, which consists of determining whether two functions in assembly code were compiled from the same source code. However, it is unclear how deep learning models for binary similarity behave in an adversarial context. In this paper, we study the resilience of binary similarity models against adversarial examples, showing that they are susceptible to both targeted and untargeted attacks (with respect to similarity goals) performed by black-box and white-box attackers. In more detail, we extensively test three current state-of-the-art solutions for binary similarity against two black-box greedy attacks, including a new technique that we call Spatial Greedy, and one white-box attack in which we repurpose a gradient-guided strategy used in attacks on image classifiers.
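To make the attack setting concrete, the sketch below shows a black-box greedy attack of the kind the abstract describes: semantics-preserving (dead-code) instructions are inserted one at a time to minimize the similarity score between two functions. The embedding model, the candidate instruction set, and all function names here are illustrative assumptions, not the paper's actual Spatial Greedy technique or the models it attacks; a real system would use a learned function-embedding network and a disassembler.

```python
import hashlib
import math

def embed(instructions):
    # Toy stand-in for a learned function-embedding model (assumption):
    # hash each instruction into a fixed-size bag-of-features vector.
    vec = [0.0] * 16
    for ins in instructions:
        h = int(hashlib.md5(ins.encode()).hexdigest(), 16)
        vec[h % 16] += 1.0
    return vec

def cosine(a, b):
    # Similarity score between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def greedy_untargeted_attack(func, target, candidates, budget=3):
    """Greedily insert semantics-preserving instructions into `func`
    to lower its similarity score against `target` (untargeted goal).
    Only the model's output score is queried, i.e. black-box access."""
    adv = list(func)
    for _ in range(budget):
        best_score = cosine(embed(adv), embed(target))
        best_adv = None
        # Try every candidate instruction at every insertion point;
        # keep the single edit that lowers the score the most.
        for ins in candidates:
            for pos in range(len(adv) + 1):
                trial = adv[:pos] + [ins] + adv[pos:]
                score = cosine(embed(trial), embed(target))
                if score < best_score:
                    best_score, best_adv = score, trial
        if best_adv is None:
            break  # no candidate lowers the similarity further
        adv = best_adv
    return adv

# Two copies of the same function: similarity starts at 1.0.
func = ["push rbp", "mov rbp, rsp", "mov eax, edi",
        "add eax, esi", "pop rbp", "ret"]
target = list(func)
# Dead-code candidates that do not change the function's behavior.
candidates = ["nop", "xchg eax, eax", "lea rax, [rax]"]

adv = greedy_untargeted_attack(func, target, candidates)
print(cosine(embed(adv), embed(target)))
```

A targeted variant would instead *maximize* the score against a chosen target function, and the white-box setting would replace the exhaustive candidate search with a gradient-guided choice of insertion.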
