STRATA: Building Robustness with a Simple Method for Generating Black-box Adversarial Attacks for Models of Code

09/28/2020
by Jacob M. Springer, et al.

Adversarial examples are imperceptible perturbations in the input to a neural model that result in misclassification. Generating adversarial examples for source code poses an additional challenge compared to the domains of images and natural language, because source code perturbations must adhere to strict semantic guidelines so that the resulting programs retain the functional meaning of the original code. We propose a simple and efficient black-box method for generating state-of-the-art adversarial examples on models of code. Our method generates both untargeted and targeted attacks, and empirically outperforms competing gradient-based methods while requiring less information and less computational effort. We also use adversarial training to construct a model robust to these attacks; our attack reduces the F1 score of code2seq by 42 points, and adversarial training restores performance on adversarial examples by up to 99%.
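The constraint described above (perturbations must preserve program semantics) is the key difference from image or text attacks. The paper's specific transformations are not detailed in this abstract, but as an illustration, a minimal sketch of one classic semantics-preserving perturbation, renaming a local variable, might look like the following; the `RenameVariable` class and the sample program are hypothetical examples, not the authors' implementation:

```python
import ast

class RenameVariable(ast.NodeTransformer):
    """Rename every occurrence of one local variable.

    This is a semantics-preserving perturbation: the program's
    behavior is unchanged, but a model that reads identifiers
    sees a different input.
    """
    def __init__(self, old: str, new: str):
        self.old, self.new = old, new

    def visit_Name(self, node: ast.Name) -> ast.Name:
        # Covers both loads and stores of the target identifier.
        if node.id == self.old:
            node.id = self.new
        return node

# Hypothetical sample program to perturb.
src = "def area(w, h):\n    total = w * h\n    return total\n"

tree = ast.parse(src)
tree = RenameVariable("total", "tmp0").visit(tree)
perturbed = ast.unparse(tree)  # requires Python 3.9+
print(perturbed)
```

In a black-box attack along these lines, candidate renamings would be scored only by querying the target model's output, with no access to gradients.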

