SSCNets: A Selective Sobel Convolution-based Technique to Enhance the Robustness of Deep Neural Networks against Security Attacks

11/04/2018
by   Hammad Tariq, et al.

Recent studies have shown that slight perturbations in the input data can significantly affect the robustness of Deep Neural Networks (DNNs), leading to misclassification and reduced confidence. In this paper, we introduce a novel technique based on the Selective Sobel Convolution (SSC) operation in the training loop, which increases the robustness of a given DNN by allowing it to learn the important edges in the input in a controlled fashion. This is achieved by introducing a trainable parameter that acts as a threshold for eliminating the weaker edges. We validate our technique on convolutional DNNs using adversarial attacks from the Cleverhans library. Our experimental results on the MNIST and CIFAR10 datasets illustrate that this controlled learning considerably increases the accuracy of the DNNs, by 1.53, when subjected to adversarial attacks.
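The mechanism the abstract describes, extracting edges with Sobel kernels and suppressing the weaker ones via a threshold, can be sketched as a forward pass. Only the Sobel kernels and the idea of threshold-gated edge selection come from the abstract; the function names, the magnitude normalization, and the exact suppression rule below are illustrative assumptions, and the trainable threshold is shown here as a plain scalar argument. A minimal NumPy sketch:

```python
import numpy as np

# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d(img, kernel):
    """Naive 'valid' 2-D cross-correlation for a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def selective_sobel(img, tau=0.3):
    """Hypothetical SSC forward pass: keep only edges whose normalized
    gradient magnitude exceeds the threshold tau (trainable in the paper,
    a fixed scalar in this sketch)."""
    gx = conv2d(img, SOBEL_X)
    gy = conv2d(img, SOBEL_Y)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    mag = mag / (mag.max() + 1e-8)        # normalize to [0, 1]
    return np.where(mag > tau, mag, 0.0)  # suppress the weaker edges
```

For example, on an 8x8 image with a sharp vertical edge, the output is nonzero only in the columns that straddle the edge; flat regions fall below `tau` and are zeroed out, which is the "controlled" edge learning the abstract refers to.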


Related research:

- QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks (11/04/2018). "Deep Neural Networks (DNNs) have recently been shown vulnerable to adver..."
- EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks (04/21/2020). "Ensuring robustness of Deep Neural Networks (DNNs) is crucial to their a..."
- Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression (05/08/2017). "Deep neural networks (DNNs) have achieved great success in solving a var..."
- Impact of Light and Shadow on Robustness of Deep Neural Networks (05/23/2023). "Deep neural networks (DNNs) have made remarkable strides in various comp..."
- A Tale of Two Cities: Data and Configuration Variances in Robust Deep Learning (11/18/2022). "Deep neural networks (DNNs), are widely used in many industries such as ..."
- Combating Adversarial Attacks Using Sparse Representations (03/11/2018). "It is by now well-known that small adversarial perturbations can induce ..."
