Improving Network Robustness against Adversarial Attacks with Compact Convolution

12/03/2017
by   Rajeev Ranjan, et al.

Though Convolutional Neural Networks (CNNs) have surpassed human-level performance on tasks such as object classification and face verification, they can easily be fooled by adversarial attacks. These attacks add a small perturbation to the input image that causes the network to misclassify the sample. In this paper, we focus on neutralizing adversarial attacks by exploring the effect of different loss functions, such as Center Loss and L2-Softmax Loss, for enhanced robustness to adversarial perturbations. Additionally, we propose compact convolution, a novel method of convolution that, when incorporated in conventional CNNs, improves their robustness. Compact convolution ensures that the features at every layer are bounded and close to each other. Extensive experiments show that Compact Convolutional Networks (CCNs) neutralize multiple types of attacks and perform better than existing methods for defending against adversarial attacks.
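One of the loss functions the abstract names, L2-Softmax, constrains each feature vector to a fixed L2 norm before the final classification layer, which bounds feature magnitudes in the same spirit as the proposed convolution. Below is a minimal NumPy sketch of that normalization step; the scale `alpha=16.0`, the toy dimensions, and the function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def l2_softmax_logits(features, weights, alpha=16.0):
    """Scale L2-normalized features to a fixed norm alpha (assumed value)
    before the final linear layer, so feature magnitude is bounded."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normalized = alpha * features / norms  # every row now has L2 norm alpha
    return normalized @ weights.T          # logits fed to the usual softmax

# Toy example: two 4-D feature vectors, 3 classes
rng = np.random.default_rng(0)
f = rng.normal(size=(2, 4))   # hypothetical penultimate-layer features
W = rng.normal(size=(3, 4))   # hypothetical classifier weights
logits = l2_softmax_logits(f, W)
```

Because every feature vector is projected onto a hypersphere of radius `alpha`, an adversarial perturbation cannot inflate a feature's magnitude to dominate the logits, which is one intuition for why bounded features help robustness.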

Related research:

- 05/21/2021: Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks
- 05/02/2019: Weight Map Layer for Noise and Adversarial Attack Robustness
- 11/27/2019: Orthogonal Convolutional Neural Networks
- 06/21/2023: Evaluating Adversarial Robustness of Convolution-based Human Motion Prediction
- 11/06/2017: HyperNetworks with statistical filtering for defending adversarial examples
- 06/18/2021: Less is More: Feature Selection for Adversarial Robustness with Compressive Counter-Adversarial Attacks
- 06/24/2018: SSIMLayer: Towards Robust Deep Representation Learning via Nonlinear Structural Similarity
