Color Channel Perturbation Attacks for Fooling Convolutional Neural Networks and A Defense Against Such Attacks

12/20/2020
by Jayendra Kantipudi, et al.

Convolutional Neural Networks (CNNs) have emerged as a very powerful data-dependent hierarchical feature extraction method and are widely used in several computer vision problems. CNNs learn the important visual features from training samples automatically, but the network overfits the training samples very easily. Several regularization methods have been proposed to avoid this overfitting. In spite of this, the network remains sensitive to the color distribution within the images, which is ignored by existing approaches. In this paper, we expose the color robustness problem of CNNs by proposing a Color Channel Perturbation (CCP) attack to fool them. In the CCP attack, new images are generated whose channels are combinations of the original channels with stochastic weights. Experiments are carried out over the widely used CIFAR10, Caltech256 and TinyImageNet datasets in the image classification framework. The VGG, ResNet and DenseNet models are used to test the impact of the proposed attack. It is observed that the performance of the CNNs degrades drastically under the proposed CCP attack; the results show the effect of this simple attack on the robustness of trained CNN models. The results are also compared with existing CNN fooling approaches to evaluate the accuracy drop. We also propose a primary defense mechanism against this problem by augmenting the training dataset with the proposed CCP attack. Experiments show that the proposed solution achieves state-of-the-art CNN robustness under the CCP attack. The code is made publicly available at <https://github.com/jayendrakantipudi/Color-Channel-Perturbation-Attack>.
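To make the channel-mixing idea concrete, the following is a minimal NumPy sketch of such a perturbation. The uniform weight range, the per-channel rescaling, and the function name `ccp_attack` are illustrative assumptions rather than the paper's exact formulation; see the linked repository for the authors' implementation.

```python
import numpy as np

def ccp_attack(image, weight_range=(-1.0, 1.0), rng=None):
    """Sketch of a Color Channel Perturbation (CCP) attack.

    Each output channel is a random weighted combination of the
    original R, G, B channels. The weight distribution and the
    rescaling below are illustrative assumptions, not necessarily
    the exact choices made in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = image.shape  # expects an H x W x 3 array in [0, 255]
    # Stochastic mixing matrix: row i holds the weights of new channel i.
    weights = rng.uniform(*weight_range, size=(c, c))
    # Mix channels per pixel: out[..., i] = sum_j image[..., j] * weights[i, j]
    perturbed = image.astype(np.float64) @ weights.T
    # Rescale each channel back into the valid intensity range.
    mn = perturbed.min(axis=(0, 1), keepdims=True)
    mx = perturbed.max(axis=(0, 1), keepdims=True)
    perturbed = (perturbed - mn) / np.maximum(mx - mn, 1e-8) * 255.0
    return perturbed.astype(np.uint8)
```

Under this reading, the defense described in the abstract would amount to applying such a perturbation stochastically to training images as a data augmentation, so that the trained model becomes less sensitive to channel-mixing of the input.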
