Correlation Analysis between the Robustness of Sparse Neural Networks and their Random Hidden Structural Priors

07/13/2021
by M. Ben Amor, et al.

Deep learning models have been shown to be vulnerable to adversarial attacks. This observation has led to analyzing deep learning models not only in terms of their performance measures but also their robustness to certain types of adversarial attacks. We take a further step in relating the architectural structure of neural networks, viewed from a graph-theoretic perspective, to their robustness. Specifically, we investigate whether correlations exist between graph-theoretic properties and the robustness of sparse neural networks. Our hypothesis is that graph-theoretic properties, as priors of neural network structures, are related to their robustness. To test this hypothesis, we designed an empirical study with neural network models obtained from random graphs used as sparse structural priors for the networks. We additionally evaluated a randomly pruned fully connected network as a point of reference. We found that robustness measures are independent of the initialization method but show weak correlations with graph properties: higher graph densities correlate with lower robustness, and higher average path lengths and average node eccentricities also correlate negatively with robustness measures. We hope to motivate further empirical and analytical research toward a tighter answer to our hypothesis.
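The graph properties named in the abstract (density, average path length, average node eccentricity) are standard measures on the random graphs used as structural priors. The sketch below, a minimal illustration assuming the networkx library rather than the authors' actual pipeline, shows how these measures can be computed for a Watts-Strogatz random graph (one common random-graph family, chosen here purely for illustration):

```python
# Sketch of the graph measures correlated with robustness in the study.
# Uses networkx (an assumption for illustration; the paper's exact
# tooling and graph families are not specified here).
import networkx as nx


def graph_measures(g):
    """Return (density, average path length, average node eccentricity)
    for a connected graph -- the structural properties examined as priors."""
    density = nx.density(g)
    avg_path_length = nx.average_shortest_path_length(g)
    ecc = nx.eccentricity(g)  # dict mapping node -> eccentricity
    avg_eccentricity = sum(ecc.values()) / len(ecc)
    return density, avg_path_length, avg_eccentricity


# Example: a connected Watts-Strogatz graph with 32 nodes, each initially
# wired to 4 neighbors, rewiring probability 0.5 (hypothetical parameters).
g = nx.connected_watts_strogatz_graph(n=32, k=4, p=0.5, seed=0)
density, avg_path_length, avg_eccentricity = graph_measures(g)
```

Such a graph would then serve as the sparsity mask of a network layer, so that each measured property becomes a prior one can correlate against the trained model's robustness.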


Related research

05/02/2019
Weight Map Layer for Noise and Adversarial Attack Robustness
Convolutional neural networks (CNNs) are known for their good performanc...

06/30/2021
Exploring Robustness of Neural Networks through Graph Measures
Motivated by graph theory, artificial neural networks (ANNs) are traditi...

07/19/2020
Adversarial Immunization for Improving Certifiable Robustness on Graphs
Despite achieving strong performance in the semi-supervised node classif...

05/31/2023
Graph-based methods coupled with specific distributional distances for adversarial attack detection
Artificial neural networks are prone to being fooled by carefully pertur...

05/21/2018
Adversarial Attacks on Neural Networks for Graph Data
Deep learning models for graphs have achieved strong performance for the...

08/21/2023
Measuring the Effect of Causal Disentanglement on the Adversarial Robustness of Neural Network Models
Causal Neural Network models have shown high levels of robustness to adv...

09/20/2023
It's Simplex! Disaggregating Measures to Improve Certified Robustness
Certified robustness circumvents the fragility of defences against adver...
