An Adversarial Robustness Perspective on the Topology of Neural Networks

11/04/2022
by Morgane Goibert, et al.

In this paper, we investigate the impact of the topology of neural networks (NNs) on adversarial robustness. Specifically, we study the graph produced when an input traverses all the layers of an NN, and show that such graphs differ between clean and adversarial inputs. We find that graphs from clean inputs are more centralized around highway edges, whereas those from adversaries are more diffuse, leveraging under-optimized edges. Through experiments on a variety of datasets and architectures, we show that these under-optimized edges are a source of adversarial vulnerability and that they can be used to detect adversarial inputs.
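To make the idea of an input-induced graph concrete, here is a minimal sketch (not the authors' exact procedure) in PyTorch: for a small MLP, the weight of the edge between neuron i in one layer and neuron j in the next is taken as |a_i * W[j, i]| (activation times parameter), and a simple "top-k mass" statistic measures how concentrated that graph is around a few highway edges. The model, layer sizes, the edge-weight definition, and the statistic are illustrative assumptions; the adversarial input below is a random perturbation standing in for a real attack such as PGD.

```python
# Sketch: build the edge-weighted graph induced by one forward pass of a small
# MLP and summarize its concentration. All names and choices here are
# illustrative assumptions, not the paper's exact construction.
import torch
import torch.nn as nn

class TinyMLP(nn.Module):
    def __init__(self, d_in=784, d_hidden=128, d_out=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

def induced_edge_weights(model, x):
    """Per-edge contributions |a_i * W[j, i]| for each linear layer."""
    edges, a = [], x
    for layer in (model.fc1, model.fc2):
        # Broadcasting: (1, d_in) * (d_out, d_in) -> one weight per edge (i -> j).
        edges.append(torch.abs(a).unsqueeze(0) * torch.abs(layer.weight))
        a = torch.relu(layer(a))  # activations feeding the next layer
    return edges

def topk_mass(edges, k=100):
    """Fraction of total edge mass carried by the k largest edges."""
    flat = torch.cat([e.flatten() for e in edges])
    return (flat.topk(min(k, flat.numel())).values.sum() / flat.sum()).item()

model = TinyMLP()
x_clean = torch.rand(784)
x_adv = x_clean + 0.1 * torch.randn(784)  # stand-in for an actual adversarial example

print("clean top-k mass:", topk_mass(induced_edge_weights(model, x_clean)))
print("adv   top-k mass:", topk_mass(induced_edge_weights(model, x_adv)))
```

Under the paper's finding, a trained network would show higher concentration (top-k mass) for clean inputs than for adversarial ones, which is what makes a statistic of this kind usable as a detection feature.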


