Exploring Robustness of Neural Networks through Graph Measures

06/30/2021
by Asim Waqas et al.

Motivated by graph theory, artificial neural networks (ANNs) are traditionally structured as layers of neurons (nodes), which learn useful information as data passes through their interconnections (edges). In the machine learning realm, the graph structures (i.e., neurons and connections) of ANNs have recently been explored using various graph-theoretic measures linked to their predictive performance. On the other hand, in network science (NetSci), certain graph measures, including entropy and curvature, are known to provide insight into the robustness and fragility of real-world networks. In this work, we use these graph measures to explore the robustness of various ANNs to adversarial attacks. To this end, we (1) explore the design space of inter-layer and intra-layer connectivity regimes of ANNs in the graph domain and record their predictive performance after training under different types of adversarial attacks, (2) use graph representations of both inter-layer and intra-layer connectivity regimes to calculate various graph-theoretic measures, including curvature and entropy, and (3) analyze the relationship between these graph measures and the adversarial performance of ANNs. We show that curvature and entropy, while operating purely in the graph domain, can quantify the robustness of ANNs without requiring these ANNs to be trained. Our results suggest that real-world networks, including brain networks, financial networks, and social networks, may provide important clues for neural architecture search aimed at robust ANNs. We propose a search strategy that efficiently finds robust ANNs among a set of well-performing ANNs without needing to train all of them.
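As a rough illustration of the kind of graph measures the abstract refers to, the sketch below computes two standard NetSci quantities on toy connectivity graphs: the Shannon entropy of the degree distribution and a simple combinatorial Forman-Ricci curvature per edge. The specific variants used in the paper are not given in the abstract, so these formulas (and the toy graphs) are assumptions chosen for illustration only; the point is that both measures are computable from the graph alone, with no training.

```python
import math
from collections import Counter

def degree_entropy(adj):
    """Shannon entropy of the degree distribution P(k).
    `adj` maps each node to a list of its neighbors.
    (One common NetSci entropy; the paper's exact variant is an assumption.)"""
    n = len(adj)
    counts = Counter(len(nbrs) for nbrs in adj.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def forman_curvature(adj):
    """Combinatorial Forman-Ricci curvature F(u,v) = 4 - deg(u) - deg(v)
    for each undirected edge (higher-order terms ignored for simplicity)."""
    edges = {tuple(sorted((u, v))) for u, nbrs in adj.items() for v in nbrs}
    return {(u, v): 4 - len(adj[u]) - len(adj[v]) for (u, v) in edges}

# Two toy connectivity graphs on 6 nodes, compared without any training.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}      # 2-regular ring
star = {0: [1, 2, 3, 4, 5], **{i: [0] for i in range(1, 6)}}  # hub plus leaves

print(degree_entropy(ring))                  # 0 bits: all degrees identical
print(degree_entropy(star))                  # positive: heterogeneous degrees
print(min(forman_curvature(star).values()))  # 4 - 5 - 1 = -2 on hub edges
```

A graph-domain search of the kind the abstract proposes would rank many candidate connectivity graphs by measures like these and only train the top-ranked architectures, rather than training every candidate.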


