Dense Associative Memory is Robust to Adversarial Inputs

01/04/2017
by   Dmitry Krotov, et al.

Deep neural networks (DNNs) trained in a supervised way suffer from two known problems. First, the minima of the objective function used in learning correspond to data points (also known as rubbish or fooling images) that lack semantic similarity with the training data. Second, a clean input can be changed by a small perturbation, often imperceptible to human vision, so that the resulting deformed input is misclassified by the network. These findings emphasize the differences between the ways DNNs and humans classify patterns, and raise the question of how to design learning algorithms that mimic human perception more accurately than existing methods. Our paper examines these questions within the framework of Dense Associative Memory (DAM) models. These models are defined by an energy function with higher-order (higher than quadratic) interactions between the neurons. We show that in the limit when the power of the interaction vertex in the energy function is sufficiently large, these models have the following three properties. First, the minima of the objective function are free from rubbish images, so that each minimum is a semantically meaningful pattern. Second, artificial patterns poised precisely at the decision boundary look ambiguous to human subjects and share aspects of both classes separated by that boundary. Third, adversarial images constructed by models with a small power of the interaction vertex, which are equivalent to DNNs with rectified linear units (ReLU), fail to transfer to and fool the models with higher-order interactions. This opens up the possibility of using higher-order models to detect and stop malicious adversarial attacks. The presented results suggest that DAMs with higher-order energy functions are closer to human visual perception than DNNs with ReLUs.
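
The abstract repeatedly refers to the power of the interaction vertex in the DAM energy function. As a rough orientation only (this is not code from the paper), the sketch below implements the rectified-polynomial energy E = -Σ_μ F_n(ξ^μ · σ) with F_n(x) = max(x, 0)^n used in the Dense Associative Memory literature: n = 2 reduces to the standard (quadratic) Hopfield model, while large n gives the higher-order regime discussed above. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def rectified_poly(x, n):
    """Rectified polynomial interaction F_n(x) = max(x, 0)^n.
    n = 2 recovers the standard Hopfield energy; larger n gives
    the higher-order Dense Associative Memory regime."""
    return np.maximum(x, 0.0) ** n

def dam_energy(state, memories, n):
    """Energy of a binary state sigma in {-1, +1}^N given stored
    memories xi^mu (shape K x N): E = -sum_mu F_n(xi^mu . sigma)."""
    overlaps = memories @ state          # K overlaps xi^mu . sigma
    return -np.sum(rectified_poly(overlaps, n))

def async_update(state, memories, n):
    """One sweep of asynchronous updates: each neuron is set to the
    sign that lowers the energy, by comparing the energies of the
    state with sigma_i = +1 versus sigma_i = -1."""
    state = state.copy()
    for i in range(state.size):
        # Overlap of each memory with all neurons except neuron i
        partial = memories @ state - memories[:, i] * state[i]
        e_plus = np.sum(rectified_poly(memories[:, i] + partial, n))
        e_minus = np.sum(rectified_poly(-memories[:, i] + partial, n))
        state[i] = 1 if e_plus >= e_minus else -1
    return state
```

In this toy setting, a corrupted pattern driven through a few sweeps of async_update with a large n typically snaps to the single nearest stored memory rather than to a blend of memories, which is the qualitative behavior the abstract connects to the absence of rubbish minima and to robustness against adversarial inputs.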
