Isometric Representations in Neural Networks Improve Robustness

11/02/2022
by Kosio Beshkov, et al.

Artificial and biological agents cannot learn from completely random and unstructured data. The structure of data is encoded in the metric relationships between data points. In the context of neural networks, the activity of neurons within a layer forms a representation reflecting the transformation that the layer implements on its inputs. To exploit the structure in the data faithfully, such representations should preserve input distances and thus be continuous and isometric. Supporting this view, recent findings in neuroscience suggest that generalization and robustness are tied to neural representations being continuously differentiable. In machine learning, most algorithms lack robustness and are generally thought to rely on aspects of the data that differ from those humans use, as commonly revealed by adversarial attacks. During cross-entropy classification, the metric and structural properties of network representations are usually broken both between and within classes. This side effect of training can lead to instabilities under perturbations near locations where such structure is not preserved. One standard route to robustness is to add ad hoc regularization terms, but to our knowledge, forcing representations to preserve the metric structure of the input data as a stabilizing mechanism has not yet been studied. In this work, we train neural networks to perform classification while simultaneously maintaining within-class metric structure, leading to isometric within-class representations. Such representations turn out to be beneficial for accurate and robust inference. By stacking layers with this property, we create a network architecture that facilitates hierarchical manipulation of internal neural representations. Finally, we verify that isometric regularization improves robustness to adversarial attacks on MNIST.
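To make the idea concrete, below is a minimal sketch (in PyTorch) of one plausible form of such an isometric regularizer added to the usual cross-entropy objective. The names isometry_loss and training_step, the weight lam, and the assumption that model returns both logits and a hidden representation are illustrative stand-ins, not the paper's actual implementation:

```python
import torch
import torch.nn.functional as F

def isometry_loss(x, z, y):
    """Penalty encouraging within-class pairwise distances in a
    representation z to match those of the inputs x.

    x: (B, d_in) flattened inputs
    z: (B, d_rep) activations of some layer
    y: (B,) integer class labels
    """
    dx = torch.cdist(x, x)  # pairwise input-space distances
    dz = torch.cdist(z, z)  # pairwise representation-space distances
    same = (y[:, None] == y[None, :]).float()
    same.fill_diagonal_(0)  # ignore trivial self-pairs
    # Mean squared distance mismatch over same-class pairs only;
    # between-class geometry is left free for the classifier to shape.
    return ((dz - dx) ** 2 * same).sum() / same.sum().clamp(min=1)

def training_step(model, x, y, lam=0.1):
    # `model` is assumed (hypothetically) to return (logits, hidden);
    # `lam` trades classification accuracy against isometry.
    logits, hidden = model(x)
    ce = F.cross_entropy(logits, y)
    iso = isometry_loss(x.flatten(1), hidden.flatten(1), y)
    return ce + lam * iso
```

Restricting the penalty to same-class pairs mirrors the within-class constraint described in the abstract: distances between classes may still expand so that classes remain separable.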
