Defending from adversarial examples with a two-stream architecture

12/30/2019
by Hao Ge, et al.

In recent years, deep learning has shown impressive performance on many tasks. However, recent research has shown that deep learning systems are vulnerable to small, specially crafted perturbations that are imperceptible to humans. Images carrying such perturbations are the so-called adversarial examples, which have proven to be an indisputable threat to DNN-based applications. The lack of a deeper understanding of DNNs has hindered the development of effective defenses against adversarial examples. In this paper, we propose a two-stream architecture to protect CNNs from attacks by adversarial examples. Our model draws on the "two-stream" idea commonly used in the security field, and successfully defends against different kinds of attack methods by exploiting the differences between a "high-resolution" and a "low-resolution" network in feature extraction. We provide a reasonable interpretation of why our two-stream architecture is difficult to defeat, and show experimentally that it withstands state-of-the-art attacks. We demonstrate that our two-stream architecture is robust to adversarial examples crafted by currently known attack algorithms.
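To make the idea concrete, below is a minimal sketch of how such a two-stream defense could be wired up. The backbone choice (ResNet-18), the downsampling factor, and the agreement-based rejection rule are illustrative assumptions, not the authors' published configuration; the sketch only shows the general principle that a perturbation tuned to the high-resolution stream tends to be blunted by downsampling, so disagreement between the two streams flags a suspected adversarial input.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

class TwoStreamDetector(torch.nn.Module):
    """Hypothetical two-stream defense: two independently trained classifiers
    operating at different input resolutions, with prediction disagreement
    used to reject suspected adversarial examples."""

    def __init__(self, num_classes=10):
        super().__init__()
        # Illustrative backbones; the streams should be trained separately
        # so they learn to extract features differently.
        self.high_res = models.resnet18(num_classes=num_classes)  # full-size input
        self.low_res = models.resnet18(num_classes=num_classes)   # downsampled input

    def forward(self, x):
        # High-resolution stream: classify the image as-is.
        logits_hi = self.high_res(x)
        # Low-resolution stream: downsample then upsample back, discarding
        # the high-frequency components many perturbations live in.
        x_lo = F.interpolate(x, scale_factor=0.5, mode="bilinear",
                             align_corners=False)
        x_lo = F.interpolate(x_lo, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)
        logits_lo = self.low_res(x_lo)
        return logits_hi, logits_lo

    @torch.no_grad()
    def predict_or_reject(self, x):
        logits_hi, logits_lo = self.forward(x)
        pred_hi = logits_hi.argmax(dim=1)
        pred_lo = logits_lo.argmax(dim=1)
        # Accept the prediction only when both streams agree; a mismatch
        # marks the input as a suspected adversarial example.
        agree = pred_hi == pred_lo
        return pred_hi, agree
```

A defended pipeline would then serve the prediction only where the agreement flag is true and route rejected inputs to a fallback such as human review.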

Related research

04/14/2018
On the Limitation of MagNet Defense against L_1-based Adversarial Examples
In recent years, defending adversarial perturbations to natural examples...

03/05/2020
Search Space of Adversarial Perturbations against Image Filters
The superiority of deep learning performance is threatened by safety iss...

02/06/2022
Pipe Overflow: Smashing Voice Authentication for Fun and Profit
Recent years have seen a surge of popularity of acoustics-enabled person...

04/01/2017
SafetyNet: Detecting and Rejecting Adversarial Examples Robustly
We describe a method to produce a network where current methods such as ...

08/23/2017
Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid
Deep neural networks have been widely adopted in recent years, exhibitin...

03/11/2021
DAFAR: Defending against Adversaries by Feedback-Autoencoder Reconstruction
Deep learning has shown impressive performance on challenging perceptual...

03/14/2018
Defensive Collaborative Multi-task Training - Defending against Adversarial Attack towards Deep Neural Networks
Deep neural networks (DNNs) have shown impressive performance on hard perc...
