Preemptive Image Robustification for Protecting Users against Man-in-the-Middle Adversarial Attacks

12/10/2021
by   Seungyong Moon, et al.

Deep neural networks have become the driving force of modern image recognition systems. However, the vulnerability of neural networks to adversarial attacks poses a serious threat to the people affected by these systems. In this paper, we focus on a real-world threat model in which a Man-in-the-Middle adversary maliciously intercepts and perturbs images that web users upload online. This type of attack can raise severe ethical concerns beyond simple performance degradation. To prevent this attack, we devise a novel bi-level optimization algorithm that finds points in the vicinity of natural images that are robust to adversarial perturbations. Experiments on CIFAR-10 and ImageNet show that our method can effectively robustify natural images within the given modification budget. We also show that the proposed method can improve robustness when jointly used with randomized smoothing.
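The bi-level structure described above can be illustrated with a toy sketch: an inner loop runs a PGD-style attack to find the worst-case perturbation within an L-infinity ball, and an outer loop nudges the image inside a small modification budget so that the attacked point keeps a large correct-classification margin. The linear classifier, loss, step sizes, and budget values below are illustrative assumptions, not the paper's actual model or algorithm:

```python
import numpy as np

def pgd_attack(x, w, b, y, eps, steps=10, lr=0.5):
    """Inner maximization (assumed PGD sketch): find delta with
    ||delta||_inf <= eps that maximizes the margin loss
    loss = -y * (w @ (x + delta) + b) of a toy linear classifier."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        grad = -y * w  # gradient of the margin loss w.r.t. delta
        delta = np.clip(delta + lr * np.sign(grad), -eps, eps)
    return delta

def robustify(x, w, b, y, r_budget, atk_eps, steps=20, lr=0.1):
    """Outer minimization (bi-level sketch): find r with
    ||r||_inf <= r_budget so that x + r has low worst-case loss
    under the inner attack."""
    r = np.zeros_like(x)
    for _ in range(steps):
        _delta = pgd_attack(x + r, w, b, y, atk_eps)  # inner attack
        grad = -y * w  # gradient of the attacked margin loss w.r.t. r
        r = np.clip(r - lr * np.sign(grad), -r_budget, r_budget)
    return r
```

In this toy setting, the robustified point x + r sits farther from the decision boundary, so the same attack budget degrades the prediction margin less than it does for the original x. The real method optimizes over a deep network with gradients obtained by backpropagation rather than this closed-form linear gradient.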

Related research

10/23/2019
Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks
In the last couple of years, several adversarial attack methods based on...

03/22/2023
State-of-the-art optical-based physical adversarial attacks for deep learning computer vision systems
Adversarial attacks can mislead deep learning models to make false predi...

10/16/2019
A New Defense Against Adversarial Images: Turning a Weakness into a Strength
Natural images are virtually surrounded by low-density misclassified reg...

11/21/2019
Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation
Recently, techniques have been developed to provably guarantee the robus...

12/16/2018
Trust Region Based Adversarial Attack on Neural Networks
Deep Neural Networks are quite vulnerable to adversarial perturbations. ...

02/09/2021
Benford's law: what does it say on adversarial images?
Convolutional neural networks (CNNs) are fragile to small perturbations ...

08/21/2018
Are You Tampering With My Data?
We propose a novel approach towards adversarial attacks on neural networ...
