POBA-GA: Perturbation Optimized Black-Box Adversarial Attacks via Genetic Algorithm

05/01/2019
by   Jinyin Chen, et al.

Most deep learning models are vulnerable to adversarial attacks. Various adversarial attacks have been designed to evaluate the robustness of models and to develop defenses. Currently, each adversarial attack targets its own model and is judged by its own evaluation metrics, and most black-box adversarial attack algorithms cannot achieve success rates comparable to white-box attacks. In this paper, comprehensive evaluation metrics are proposed for comparing different adversarial attack methods. A novel perturbation-optimized black-box adversarial attack based on a genetic algorithm (POBA-GA) is proposed to achieve attack performance comparable to white-box methods. Approximately optimal adversarial examples are evolved through evolutionary operations including initialization, selection, crossover, and mutation. The fitness function is specifically designed to evaluate each candidate example in terms of both attack ability and perturbation control. A population diversity strategy is introduced into the evolutionary process to ensure that approximately optimal perturbations are obtained. Comprehensive experiments are carried out to verify POBA-GA's performance. Both simulation and application results show that our method outperforms current state-of-the-art black-box attack methods in attack capability and perturbation control.
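The evolutionary loop the abstract describes can be sketched in miniature. The following is a hypothetical illustration, not the paper's implementation: the "black-box model" is a toy sigmoid-of-sum classifier standing in for query access to a real target, the fitness is a simple target-class probability minus an L1 perturbation penalty (POBA-GA's actual fitness and its population diversity strategy differ), and the crossover/mutation operators are the textbook single-point versions applied to a flat feature vector rather than an image.

```python
import math
import random

random.seed(0)

def black_box_scores(x):
    """Toy black-box 'model': class-1 probability is the sigmoid of the
    feature sum. Stands in for query access to the target classifier."""
    p1 = 1.0 / (1.0 + math.exp(-sum(x)))
    return [1.0 - p1, p1]

def black_box_predict(x):
    scores = black_box_scores(x)
    return scores.index(max(scores))

def fitness(pert, x, target, alpha=0.05):
    """Attack ability (target-class probability) minus an L1 perturbation
    penalty — a hypothetical stand-in balancing the paper's two objectives."""
    adv = [xi + pi for xi, pi in zip(x, pert)]
    return black_box_scores(adv)[target] - alpha * sum(abs(p) for p in pert)

def evolve(x, target, pop_size=30, gens=100, sigma=1.0):
    # Initialization: a population of random perturbation vectors.
    pop = [[random.gauss(0, sigma) for _ in x] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, x, target), reverse=True)
        parents = pop[: pop_size // 2]            # selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(x))     # single-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(len(x))          # mutate one gene
            child[i] += random.gauss(0, 0.3)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda p: fitness(p, x, target))

x = [0.5, 0.5, 0.5, 0.5]            # toy input, classified as class 1
best = evolve(x, target=0)          # evolve a perturbation toward class 0
adv = [xi + pi for xi, pi in zip(x, best)]
print(black_box_predict(x), black_box_predict(adv))
```

Because fitness rewards reaching the target class far more than it penalizes perturbation size, elitist selection preserves successful attacks while the penalty term gradually shrinks them, mirroring the attack-ability/perturbation-control trade-off the paper's fitness function encodes.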

