DDDM: a Brain-Inspired Framework for Robust Classification

05/01/2022
by Xiyuan Chen, et al.

Despite their outstanding performance on a broad spectrum of real-world tasks, deep artificial neural networks are sensitive to input noise, particularly adversarial perturbations. Human and animal brains, by contrast, are far less vulnerable. Unlike the one-shot inference performed by most deep neural networks, the brain often solves decision-making through an evidence-accumulation mechanism that can trade time for accuracy when inputs are noisy. This mechanism is well described by the Drift-Diffusion Model (DDM), in which decision-making is modeled as a process of accumulating noisy evidence toward a threshold. Drawing inspiration from the DDM, we propose the Dropout-based Drift-Diffusion Model (DDDM), which combines test-phase dropout with the DDM to improve the robustness of arbitrary neural networks. Dropout injects temporally uncorrelated noise into the network that counters adversarial perturbations, while the evidence-accumulation mechanism guarantees reasonable decision accuracy. Neural networks enhanced with the DDDM, tested on image, speech, and text classification tasks, all significantly outperform their native counterparts, demonstrating that the DDDM is a task-agnostic defense against adversarial attacks.
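
For readers unfamiliar with the DDM, the accumulation-to-threshold process described above is usually written as a one-dimensional stochastic differential equation. This is the standard textbook form of the model, not a formula taken from this paper:

```latex
% Standard drift-diffusion model: the decision variable x(t) drifts at
% rate v (signal quality) and diffuses with noise amplitude \sigma until
% it first hits one of the decision boundaries at \pm a; the boundary a
% sets the speed-accuracy trade-off.
\[
  dx_t = v\,dt + \sigma\,dW_t, \qquad
  \text{decide at } \tau = \inf\{\, t : |x_t| \ge a \,\}.
\]
```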
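The abstract only describes the mechanism in words; as a rough illustration, the sketch below runs a dropout-enabled PyTorch classifier repeatedly at test time and accumulates per-class log-probabilities until the gap between the top two classes crosses a decision boundary. All names, the margin-based stopping rule, and the default threshold are illustrative assumptions, not the authors' implementation.

```python
import torch

@torch.no_grad()
def dddm_predict(model, x, threshold=5.0, max_steps=100):
    """Accumulate noisy class evidence from repeated stochastic forward passes."""
    model.train()  # keep nn.Dropout active at test time
                   # (caution: this also switches BatchNorm to batch statistics)
    evidence = None
    for step in range(max_steps):
        # One stochastic forward pass = one noisy sample of evidence.
        log_probs = torch.log_softmax(model(x), dim=-1)
        evidence = log_probs if evidence is None else evidence + log_probs
        # The margin between the best and runner-up class plays the role of
        # the accumulated decision variable; stop once it reaches the threshold.
        top2 = evidence.topk(2, dim=-1).values
        if bool((top2[..., 0] - top2[..., 1] >= threshold).all()):
            break
    return evidence.argmax(dim=-1), step + 1
```

In this sketch, `model` is any `torch.nn.Module` containing dropout layers and `x` is a batch of inputs; `threshold` acts like the DDM decision boundary, so raising it trades inference time for accuracy, mirroring the speed-accuracy trade-off described in the abstract.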

Related research

05/30/2022  Guided Diffusion Model for Adversarial Purification
With wider application of deep neural networks (DNNs) in various algorit...

01/17/2023  Denoising Diffusion Probabilistic Models as a Defense against Adversarial Attacks
Neural Networks are infamously sensitive to small perturbations in their...

03/01/2017  Detecting Adversarial Samples from Artifacts
Deep neural networks (DNNs) are powerful nonlinear architectures that ar...

06/17/2020  Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning
Deep neural networks are being increasingly used in real world applicati...

01/11/2020  Exploring and Improving Robustness of Multi Task Deep Neural Networks via Domain Agnostic Defenses
In this paper, we explore the robustness of the Multi-Task Deep Neural N...

06/15/2022  Human Eyes Inspired Recurrent Neural Networks are More Robust Against Adversarial Noises
Compared to human vision, computer vision based on convolutional neural ...
