Harmonic Adversarial Attack Method

07/18/2018
by Wen Heng, et al.

Adversarial attacks find perturbations that can fool models into misclassifying images. Previous works succeeded in generating noisy, edge-rich adversarial perturbations, but at the cost of degraded image quality: even when small in scale, such perturbations are usually easy for human vision to spot. In contrast, we propose the Harmonic Adversarial Attack Method (HAAM), which generates edge-free perturbations using harmonic functions. The edge-free property guarantees that the generated adversarial images preserve visual quality even when the perturbations have large magnitudes. Experiments also show that adversaries generated by HAAM often transfer between models with higher success rates. In addition, we find that harmonic perturbations can simulate natural phenomena such as natural lighting and shadows, which makes it possible to find corner cases for a given model as a first step toward improving it.
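To make the idea concrete, below is a minimal sketch of one way a harmonic-function perturbation attack could be set up. It is not the paper's exact formulation: the low-order harmonic-polynomial basis, the tanh magnitude bound, the Adam optimizer, and the hypothetical `harmonic_basis` / `harmonic_attack` helpers are all illustrative assumptions. Only the core idea comes from the abstract: a perturbation built from solutions of Laplace's equation is smooth and edge-free, so it can be optimized to raise a classifier's loss without introducing visible high-frequency artifacts.

```python
# Illustrative sketch only: a perturbation parameterized by harmonic polynomials
# (solutions of Laplace's equation, hence smooth and edge-free), optimized by
# gradient ascent on a classifier's loss. `model` is assumed to be any
# differentiable PyTorch image classifier; step count, learning rate, and
# budget are placeholder values, not the paper's settings.
import torch

def harmonic_basis(h, w, device="cpu"):
    """Low-order harmonic polynomials on a [-1, 1]^2 grid (each has zero Laplacian)."""
    y, x = torch.meshgrid(
        torch.linspace(-1, 1, h, device=device),
        torch.linspace(-1, 1, w, device=device),
        indexing="ij",
    )
    # 1, x, y, xy, x^2 - y^2, x^3 - 3xy^2, 3x^2 y - y^3 are all harmonic.
    return torch.stack([torch.ones_like(x), x, y, x * y, x**2 - y**2,
                        x**3 - 3 * x * y**2, 3 * x**2 * y - y**3])  # (K, H, W)

def harmonic_attack(model, image, label, steps=50, lr=0.05, budget=0.2):
    """Optimize coefficients of a harmonic perturbation so `model` misclassifies `image`."""
    basis = harmonic_basis(image.shape[-2], image.shape[-1], image.device)        # (K, H, W)
    coeffs = torch.zeros(basis.shape[0], device=image.device, requires_grad=True)  # one weight per basis fn
    opt = torch.optim.Adam([coeffs], lr=lr)
    target = torch.tensor([label], device=image.device)
    for _ in range(steps):
        delta = torch.tensordot(coeffs, basis, dims=1)   # smooth, edge-free perturbation (H, W)
        delta = budget * torch.tanh(delta)               # soft magnitude bound (illustrative choice)
        adv = (image + delta).clamp(0, 1)
        loss = torch.nn.functional.cross_entropy(model(adv.unsqueeze(0)), target)
        opt.zero_grad()
        (-loss).backward()                               # ascend the classification loss
        opt.step()
    with torch.no_grad():
        delta = budget * torch.tanh(torch.tensordot(coeffs, basis, dims=1))
        return (image + delta).clamp(0, 1)
```

Because every basis element satisfies Laplace's equation, the resulting perturbation varies slowly across the image, resembling a global lighting or shading change rather than the high-frequency noise produced by pixel-wise attacks.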
