Contextual Fusion For Adversarial Robustness

11/18/2020
by Aiswarya Akumalla, et al.

Mammalian brains handle complex reasoning tasks in a gestalt manner by integrating information from regions of the brain that are specialised to individual sensory modalities. This allows for improved robustness and better generalisation ability. In contrast, deep neural networks are usually designed to process one particular information stream and are susceptible to various types of adversarial perturbations. While many methods exist for detecting and defending against adversarial attacks, they do not generalise across a range of attacks and negatively affect performance on clean, unperturbed data. We developed a fusion model using a combination of background and foreground features extracted in parallel from Places-CNN and Imagenet-CNN. We tested the benefits of the fusion approach on preserving adversarial robustness for human-perceivable (e.g., Gaussian blur) and network-perceivable (e.g., gradient-based) attacks on the CIFAR-10 and MS COCO data sets. For gradient-based attacks, our results show that fusion allows for significant improvements in classification without decreasing performance on unperturbed data and without the need for adversarial retraining. Our fused model also revealed improvements for Gaussian-blur perturbations. The increase in performance from the fusion approach depended on the variability of the image contexts; larger increases were seen for classes of images with larger differences in their contexts. We also demonstrate the effect of regularization to bias the classifier decision in the presence of a known adversary. We propose that this biologically inspired approach of integrating information across multiple modalities provides a new way to improve adversarial robustness that can be complementary to current state-of-the-art approaches.
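As a rough illustration of the kind of parallel-branch fusion architecture the abstract describes, the sketch below concatenates features from a scene-oriented branch and an object-oriented branch before a small classification head. This is a minimal sketch under stated assumptions, not the authors' implementation: the `FusionClassifier` name, the ResNet-18 stand-in backbones (in practice the scene branch would carry Places365 weights and the object branch ImageNet weights), and the fusion-head sizes are all illustrative choices.

```python
import torch
import torch.nn as nn
from torchvision import models


class FusionClassifier(nn.Module):
    """Late fusion of object (ImageNet-style) and scene (Places-style)
    features: both branches see the same image in parallel, and the
    classifier operates on the concatenated feature vectors."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Stand-in backbones; the paper uses Imagenet-CNN and Places-CNN.
        self.object_branch = models.resnet18(weights=None)
        self.scene_branch = models.resnet18(weights=None)
        feat_dim = self.object_branch.fc.in_features  # 512 for ResNet-18
        # Drop the original classification heads, keep the feature extractors.
        self.object_branch.fc = nn.Identity()
        self.scene_branch.fc = nn.Identity()
        # Fusion head applied to the concatenated features (sizes are arbitrary).
        self.fusion_head = nn.Sequential(
            nn.Linear(2 * feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f_obj = self.object_branch(x)    # foreground/object features
        f_scene = self.scene_branch(x)   # background/context features
        fused = torch.cat([f_obj, f_scene], dim=1)
        return self.fusion_head(fused)


if __name__ == "__main__":
    model = FusionClassifier(num_classes=10)      # e.g. CIFAR-10
    logits = model(torch.randn(2, 3, 224, 224))   # dummy batch
    print(logits.shape)                           # torch.Size([2, 10])
```

The design intent is that a gradient-based perturbation crafted against one branch does not necessarily transfer to the other, so the fused decision can remain more stable than either branch alone; how the paper weights or regularizes the two streams is not specified in the abstract.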

