Weight Map Layer for Noise and Adversarial Attack Robustness

05/02/2019
by Mohammed Amer, et al.

Convolutional neural networks (CNNs) are known for their good performance and generalization in vision-related tasks and have become state-of-the-art in both applied and research domains. However, like other neural network models, they are susceptible to noise and adversarial attacks. An adversarial defence aims to reduce a neural network's susceptibility to adversarial attacks through learning or architectural modifications. We propose a weight map layer (WM) as a generic architectural addition to CNNs and show that it can increase their robustness to noise and adversarial attacks. We further explain the enhanced robustness of the two WM variants introduced via an adaptive noise-variance amplification (ANVA) hypothesis and provide evidence and insights in support of it. We show that the WM layer can be integrated into scaled-up models to increase their noise and adversarial attack robustness, while achieving the same or similar accuracy levels.
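The abstract describes the WM layer only at a high level, as a generic architectural addition to a CNN. One plausible reading is a learnable, per-position weight map applied elementwise to a feature map before the convolution that follows. The sketch below illustrates that reading; the class name, shapes, and initialization are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class WeightMapLayer:
    """Hypothetical sketch of a weight map (WM) layer: a learnable
    weight for every spatial position of every channel, applied
    elementwise to the incoming feature map. All design choices here
    (shape, initialization near 1.0) are assumptions for illustration."""

    def __init__(self, channels, height, width, rng=None):
        rng = rng or np.random.default_rng(0)
        # One learnable weight per (channel, row, column) position,
        # initialized close to identity so training starts near a no-op.
        self.W = rng.normal(loc=1.0, scale=0.01,
                            size=(channels, height, width))

    def forward(self, x):
        # x: (batch, channels, height, width).
        # Broadcasting applies the same map to every example in the batch.
        return x * self.W

# Usage: scale an 8x8 3-channel feature map for a batch of 2 examples.
layer = WeightMapLayer(3, 8, 8)
x = np.ones((2, 3, 8, 8))
y = layer.forward(x)
print(y.shape)  # (2, 3, 8, 8)
```

Because the map is elementwise, it adds only `channels * height * width` parameters per layer, which is consistent with the paper's claim that WM can be added to scaled-up models without changing their overall architecture.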

