PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks

05/31/2018
by Jan Svoboda, et al.

Deep learning systems have become ubiquitous in many aspects of our lives. Unfortunately, it has been shown that such systems are vulnerable to adversarial attacks, making them prone to potential unlawful uses. Designing deep neural networks that are robust to adversarial attacks is a fundamental step toward making such systems safer and deployable in a broader variety of applications (e.g., autonomous driving), but, more importantly, it is a necessary step toward designing novel and more advanced architectures built on new computational paradigms rather than marginally improving on existing ones. In this paper we introduce PeerNets, a novel family of convolutional networks that alternate classical Euclidean convolutions with graph convolutions to harness information from a graph of peer samples. This results in a form of non-local forward propagation in the model, where latent features are conditioned on the global structure induced by the graph, and it is up to 3 times more robust to a variety of white- and black-box adversarial attacks than conventional architectures, with almost no drop in accuracy.
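To make the idea concrete, below is a minimal sketch of a peer-regularization-style layer in PyTorch. It is not the authors' released implementation: the class name PeerRegularization, the single-head attention scoring, the Euclidean k-nearest-neighbour search over a flat memory of peer pixels, and the default k are all assumptions made for illustration, and the brute-force distance computation is kept only for clarity.

# Illustrative sketch (not the authors' code): for every pixel feature of the
# input, find the k nearest pixel features among a set of "peer" images and
# replace it with an attention-weighted combination of those neighbours,
# in the spirit of graph-attention aggregation over a peer graph.

import torch
import torch.nn as nn
import torch.nn.functional as F


class PeerRegularization(nn.Module):
    def __init__(self, channels: int, k: int = 5):
        super().__init__()
        self.k = k
        # Single-head attention score over concatenated (query, neighbour) features.
        self.att = nn.Linear(2 * channels, 1)

    def forward(self, x: torch.Tensor, peers: torch.Tensor) -> torch.Tensor:
        # x:     (B, C, H, W) feature map of the current batch
        # peers: (P, C, H, W) feature maps of peer samples (e.g. random training images)
        B, C, H, W = x.shape
        q = x.permute(0, 2, 3, 1).reshape(B * H * W, C)      # one query per pixel
        mem = peers.permute(0, 2, 3, 1).reshape(-1, C)       # memory of peer pixels

        # Euclidean k-NN of every query pixel in the peer-pixel memory.
        # A practical implementation would subsample the memory to keep this tractable.
        d = torch.cdist(q, mem)                              # (B*H*W, P*H*W)
        idx = d.topk(self.k, largest=False).indices          # (B*H*W, k)
        neigh = mem[idx]                                      # (B*H*W, k, C)

        # Graph-attention-style weights over the k peer neighbours.
        pair = torch.cat([q.unsqueeze(1).expand(-1, self.k, -1), neigh], dim=-1)
        alpha = F.softmax(F.leaky_relu(self.att(pair)).squeeze(-1), dim=-1)

        out = (alpha.unsqueeze(-1) * neigh).sum(dim=1)        # weighted peer average
        return out.reshape(B, H, W, C).permute(0, 3, 1, 2)

In a full network, a layer of this kind would be interleaved with ordinary convolutional layers, so that local Euclidean filtering alternates with non-local, graph-based propagation over peer samples, as described in the abstract.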


