Spartan Networks: Self-Feature-Squeezing Neural Networks for increased robustness in adversarial settings

12/17/2018
by François Menet et al.

Deep learning models are vulnerable to adversarial examples: input samples modified to maximize the system's error. We introduce Spartan Networks, resistant deep neural networks that require neither input preprocessing nor adversarial training. These networks contain an adversarial layer that uses a new activation function to deliberately discard part of the input information, forcing the system to focus on the relevant parts of its input. The added layer trains the network to filter out usually irrelevant parts of its input. Our performance evaluation shows that Spartan Networks have slightly lower precision but higher robustness under attack than unprotected models. The results of this study of adversarial AI as a new attack vector are based on experiments conducted on the MNIST dataset.
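As a rough illustration of the idea (not the authors' implementation), the sketch below shows what such a self-filtering layer could look like in PyTorch: a learnable gate with a hard-thresholding activation discards low-importance pixels before a standard MNIST classifier, and a sparsity penalty rewards the layer for discarding input while the classification loss pulls the other way. The names (FilterLayer, SpartanLikeMNISTNet, spartan_loss), the threshold value, the straight-through estimator, and the exact penalty are all assumptions made for illustration; the paper's actual activation function and training objective may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FilterLayer(nn.Module):
    """Hypothetical input-filtering layer (illustrative stand-in for the paper's
    adversarial/self-feature-squeezing layer, not the authors' exact activation).
    A learnable per-pixel gate followed by a hard threshold zeroes out
    low-importance parts of the input before they reach the classifier."""

    def __init__(self, input_shape=(1, 28, 28), threshold=0.25):
        super().__init__()
        self.gate = nn.Parameter(torch.zeros(input_shape))  # learned importance map
        self.threshold = threshold

    def forward(self, x):
        # Sigmoid gate in [0, 1]; the hard mask discards pixels whose gated
        # value falls below the cut-off.
        soft = torch.sigmoid(self.gate) * x
        hard = (soft > self.threshold).float() * x
        # Straight-through estimator: the forward pass uses the hard mask,
        # the backward pass uses gradients of the soft gate.
        return soft + (hard - soft).detach()


class SpartanLikeMNISTNet(nn.Module):
    """Small MNIST classifier with the filtering layer prepended."""

    def __init__(self):
        super().__init__()
        self.filter = FilterLayer()
        self.conv1 = nn.Conv2d(1, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.fc1 = nn.Linear(64 * 7 * 7, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.filter(x)
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)


def spartan_loss(logits, targets, model, sparsity_weight=1e-3):
    """Classification loss plus a penalty on how much input the gate keeps,
    mimicking the discard-versus-accuracy tension described in the abstract."""
    ce = F.cross_entropy(logits, targets)
    keep_rate = torch.sigmoid(model.filter.gate).mean()
    return ce + sparsity_weight * keep_rate
```

Training would minimize spartan_loss on MNIST batches: the sparsity term pushes the layer to drop input, while the cross-entropy term forces it to keep enough information to classify correctly, which is the trade-off behind the slight precision loss and increased robustness reported in the abstract.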

Related research

07/01/2018: Towards Adversarial Training with Moderate Performance Improvement for Neural Network Classification
It has been demonstrated that deep neural networks are prone to noisy ex...

04/13/2020: Adversarial robustness guarantees for random deep neural networks
The reliability of most deep learning algorithms is fundamentally challe...

04/22/2022: How Sampling Impacts the Robustness of Stochastic Neural Networks
Stochastic neural networks (SNNs) are random functions and predictions a...

03/06/2023: Adversarial Sampling for Fairness Testing in Deep Neural Network
In this research, we focus on the usage of adversarial sampling to test ...

09/20/2022: Audit and Improve Robustness of Private Neural Networks on Encrypted Data
Performing neural network inference on encrypted data without decryption...

10/14/2015: Improving Back-Propagation by Adding an Adversarial Gradient
The back-propagation algorithm is widely used for learning in artificial...

03/11/2023: Improving the Robustness of Deep Convolutional Neural Networks Through Feature Learning
Deep convolutional neural network (DCNN for short) models are vulnerable...
