Can Intelligent Hyperparameter Selection Improve Resistance to Adversarial Examples?

02/14/2019
by Cody Burkard, et al.

Convolutional neural networks, and deep learning classification systems in general, have been shown to be vulnerable to attack by specially crafted data samples that appear to belong to one class but are classified as another, commonly known as adversarial examples. A variety of attack strategies have been proposed to craft these samples; however, there is no standard model against which the success of each type of attack can be compared. Furthermore, no existing literature evaluates how common hyperparameters and optimization strategies affect a model's ability to resist these samples. This research bridges that gap and provides a basis for selecting training and model parameters in future research on evasion attacks against convolutional neural networks. The findings indicate that the choice of model hyperparameters does affect a model's ability to resist attack, although hyperparameters alone cannot prevent the existence of adversarial examples.
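As an illustration of how such samples can be crafted, below is a minimal sketch of one well-known attack strategy, the Fast Gradient Sign Method (FGSM), written in PyTorch. The function name fgsm_attack, the epsilon value, and the choice of PyTorch are illustrative assumptions, not taken from the paper, which evaluates multiple attack strategies.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Sketch of the Fast Gradient Sign Method (illustrative, not the paper's method).

    Perturbs input x in the direction that maximally increases the
    classification loss, bounded by epsilon per input dimension.
    Assumes x is a batch of inputs scaled to [0, 1] and y holds the
    true class labels.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Take a single step in the sign of the gradient to increase the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Clamp back to the valid input range (assumed here to be [0, 1]).
    return x_adv.clamp(0, 1).detach()
```

In practice, an attacker would call fgsm_attack on a trained classifier with a correctly labeled input; the returned tensor is visually near-identical to the original yet is often assigned a different class, which is exactly the vulnerability the paper probes under varying hyperparameter choices.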

