A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning

10/15/2020
by Hongjun Wang, et al.

Although deep convolutional neural networks (CNNs) have demonstrated remarkable performance on multiple computer vision tasks, research on adversarial learning has shown that deep models are vulnerable to adversarial examples, which are crafted by adding visually imperceptible perturbations to the input images. Most existing adversarial attack methods create only a single adversarial example for a given input, which offers merely a glimpse of the underlying data manifold of adversarial examples. An attractive alternative is to explore the solution space of adversarial examples and generate a diverse set of them, which could potentially improve the robustness of real-world systems and help prevent severe security threats and vulnerabilities. In this paper, we present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), that aims to generate a sequence of adversarial examples. To improve the efficiency of HMC, we propose a new regime that automatically controls the length of trajectories, allowing the algorithm to move with adaptive step sizes along the search direction at different positions. Moreover, we revisit the reason for the high computational cost of adversarial training from the viewpoint of MCMC and design a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples within only a few iterations by building on small modifications of standard Contrastive Divergence (CD), achieving a trade-off between efficiency and accuracy. Both quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm.
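The abstract describes the mechanics at a high level only. As a minimal sketch of the underlying idea, the PyTorch snippet below draws a sequence of adversarial examples with a vanilla HMC sampler targeting p(x) ∝ exp(loss(x, y)). It is not the paper's HMCAM: the accumulated-momentum scheme and the adaptive trajectory-length control are replaced by a plain fixed-length leapfrog integrator, the L∞ projection is a simplification that breaks exact detailed balance, and all names (model, x0, y, eps, step) are illustrative placeholders.

```python
# Minimal sketch: vanilla HMC over adversarial examples (NOT the paper's
# HMCAM; fixed step size and trajectory length, no accumulated momentum).
# Targets p(x) ∝ exp(loss(x, y)) via the potential U(x) = -loss(x, y).
import torch
import torch.nn.functional as F

def potential(model, x, y):
    # Low potential <=> high classification loss (strong adversary).
    return -F.cross_entropy(model(x), y)

def grad_potential(model, x, y):
    x = x.clone().detach().requires_grad_(True)
    (g,) = torch.autograd.grad(potential(model, x, y), x)
    return g

def hmc_adversarial_chain(model, x0, y, eps=8 / 255, step=1 / 255,
                          n_leapfrog=10, n_samples=5):
    """Return a list of adversarial samples around x0 (L-inf budget eps)."""
    x, chain = x0.clone(), []
    for _ in range(n_samples):
        p0 = torch.randn_like(x)                     # resample momentum
        xn, p = x.clone(), p0.clone()
        p = p - 0.5 * step * grad_potential(model, xn, y)
        for i in range(n_leapfrog):                  # leapfrog integration
            xn = xn + step * p
            if i < n_leapfrog - 1:
                p = p - step * grad_potential(model, xn, y)
        p = p - 0.5 * step * grad_potential(model, xn, y)
        # Project onto the perturbation budget (a simplification that
        # breaks exact detailed balance; kept for imperceptibility).
        xn = (x0 + (xn - x0).clamp(-eps, eps)).clamp(0, 1)
        with torch.no_grad():                        # Metropolis correction
            h_old = potential(model, x, y) + 0.5 * (p0 ** 2).sum()
            h_new = potential(model, xn, y) + 0.5 * (p ** 2).sum()
        if torch.rand(()) < torch.exp(h_old - h_new):
            x = xn                                   # accept the proposal
        chain.append(x.clone())
    return chain
```

Read this way, the CAT procedure described in the abstract corresponds, in the spirit of CD-k, to truncating such a chain after only a few steps per training batch instead of running it to equilibrium. A hedged sketch of that training loop (again not the paper's CAT; the inner chain is simplified to signed-gradient moves on the same potential, and model, opt, k are illustrative placeholders):

```python
def cat_style_training_step(model, opt, x, y, k=2, eps=8 / 255, step=2 / 255):
    """CD-k-flavoured adversarial training sketch: run only k short
    sampler steps from the data instead of sampling to equilibrium."""
    x_adv = x.clone()
    for _ in range(k):                        # k truncated chain steps
        g = grad_potential(model, x_adv, y)   # gradient of U = -loss
        x_adv = x_adv - step * g.sign()       # downhill in U = uphill in loss
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    opt.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)   # train on the short-chain samples
    loss.backward()
    opt.step()
    return loss.item()
```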


Related research

07/25/2022
SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness
Deep neural network-based image classifications are vulnerable to advers...

11/01/2022
The Enemy of My Enemy is My Friend: Exploring Inverse Adversaries for Improving Adversarial Training
Although current deep learning techniques have yielded superior performa...

02/01/2020
AdvJND: Generating Adversarial Examples with Just Noticeable Difference
Compared with traditional machine learning models, deep neural networks ...

06/27/2019
Using Intuition from Empirical Properties to Simplify Adversarial Training Defense
Due to the surprisingly good representation power of complex distributio...

01/01/2022
Adversarial Attack via Dual-Stage Network Erosion
Deep neural networks are vulnerable to adversarial examples, which can f...

07/18/2018
Gradient Band-based Adversarial Training for Generalized Attack Immunity of A3C Path Finding
As adversarial attacks pose a serious threat to the security of AI syste...

02/09/2023
Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples
Diffusion Models (DMs) achieve state-of-the-art performance in generativ...
