Enhancing Gradient-based Attacks with Symbolic Intervals

06/05/2019
by   Shiqi Wang, et al.

Recent breakthroughs in defenses against adversarial examples, such as adversarial training, make neural networks robust against various classes of attacks (e.g., first-order gradient-based attacks). However, it remains an open question whether adversarially trained networks are truly robust under unknown attacks. In this paper, we present interval attacks, a new technique for finding adversarial examples and evaluating the robustness of neural networks. Interval attacks leverage symbolic interval propagation, a bound propagation technique that exploits a broader view around the current input to locate promising areas containing adversarial instances, which can then be searched with existing gradient-guided attacks. We obtain this broader view using sound bound propagation methods that track and over-approximate the errors of the network within given input ranges. Our results show that, on state-of-the-art adversarially trained networks, the interval attack finds on average 47% relatively more violations than the state-of-the-art gradient-guided PGD attack.
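To make the bound-propagation idea concrete, here is a minimal NumPy sketch (not the paper's implementation) of sound interval propagation through a small fully connected ReLU network. The paper's symbolic intervals additionally keep linear dependencies on the input to tighten the bounds; the plain intervals below only illustrate the soundness guarantee, and the toy network and helper names are hypothetical.

```python
import numpy as np

def interval_bounds(weights, biases, lo, hi):
    """Soundly over-approximate a ReLU network's outputs over the
    input box [lo, hi] using naive interval arithmetic."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        center = (lo + hi) / 2.0
        radius = (hi - lo) / 2.0
        center = W @ center + b        # affine map of the box midpoint
        radius = np.abs(W) @ radius    # |W| bounds how far the box can spread
        lo, hi = center - radius, center + radius
        if i < len(weights) - 1:       # ReLU is monotonic: apply to the bounds
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Toy usage: bounds over an L-infinity ball of radius 0.1 around x.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
biases = [np.zeros(8), np.zeros(3)]
x = rng.normal(size=4)
lo, hi = interval_bounds(weights, biases, x - 0.1, x + 0.1)
print("output lower bounds:", lo)  # every true output lies within [lo, hi]
print("output upper bounds:", hi)
```

Because the bounds are sound, a sub-region where some wrong class's upper-bound logit exceeds the true class's lower-bound logit may contain a violation and is a promising place to search.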

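The gradient-guided search the abstract refers to is the standard PGD attack. Below is a minimal PyTorch sketch under an L-infinity threat model; `model`, `epsilon`, and the step schedule are placeholder assumptions, and in an interval attack this search would be launched from a starting point inside a region flagged as promising by bounds like those above.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon, alpha=0.01, steps=40):
    """Projected gradient descent within the L-infinity ball of radius
    epsilon around x (e.g., around a point chosen via interval bounds)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascend the loss
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # project to ball
            x_adv = x_adv.clamp(0.0, 1.0)                     # valid pixel range
    return x_adv.detach()
```

The sign of the gradient (rather than the raw gradient) gives a fixed-size step in every coordinate, which is the usual choice for L-infinity-bounded attacks.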

