Different Spectral Representations in Optimized Artificial Neural Networks and Brains

08/22/2022
by Richard C. Gerum, et al.

Recent studies suggest that artificial neural networks (ANNs) that match the spectral properties of the mammalian visual cortex – namely, the ∼ 1/n eigenspectrum of the covariance matrix of neural activities – achieve higher object recognition performance and robustness to adversarial attacks than those that do not. To our knowledge, however, no previous work has systematically explored how modifying an ANN's spectral properties affects performance. To fill this gap, we performed a systematic search over spectral regularizers, forcing the ANN's eigenspectrum to follow 1/n^α power laws with different exponents α. We found that larger powers (around 2–3) lead to better validation accuracy and more robustness to adversarial attacks on dense networks. This surprising finding applied to both shallow and deep networks, and it overturns the notion that the brain-like spectrum (corresponding to α ∼ 1) always optimizes ANN performance and/or robustness. For convolutional networks, the best α values depend on the task complexity and evaluation metric: lower α values optimized both validation accuracy and robustness to adversarial attacks for networks performing a simple object recognition task (categorizing MNIST images of handwritten digits); for a more complex task (categorizing CIFAR-10 natural images), lower α values optimized validation accuracy whereas higher α values optimized adversarial robustness. These results have two main implications. First, they cast doubt on the notion that brain-like spectral properties (α ∼ 1) always optimize ANN performance. Second, they demonstrate the potential for fine-tuned spectral regularizers to optimize a chosen design metric, i.e., accuracy and/or robustness.
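As a rough illustration of what such a spectral regularizer could look like, below is a minimal PyTorch sketch. Everything in it is an assumption for illustration rather than the paper's implementation: the function name spectral_penalty, the log-space mean-squared penalty, and the variance-matched 1/n^α target spectrum are all hypothetical choices.

```python
import torch

def spectral_penalty(activations: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Penalize deviation of the activation covariance eigenspectrum
    from a 1/n^alpha power law. activations: (batch, units).

    Hypothetical sketch; not the authors' exact method.
    """
    # Center activations and form the unit-by-unit covariance matrix.
    x = activations - activations.mean(dim=0, keepdim=True)
    cov = x.T @ x / (x.shape[0] - 1)

    # Eigenvalues of the symmetric covariance, largest first;
    # floor them so the log below is safe.
    eigvals = torch.linalg.eigvalsh(cov).flip(0).clamp_min(1e-12)

    # Target spectrum lambda_n proportional to 1/n^alpha, rescaled to match
    # the total variance (detached so the scale itself is not a target of
    # optimization; a design choice, not prescribed by the paper).
    n = torch.arange(1, eigvals.numel() + 1,
                     dtype=eigvals.dtype, device=eigvals.device)
    target = n.pow(-alpha)
    target = target * (eigvals.detach().sum() / target.sum())

    # Mean squared mismatch in log space (one plausible penalty form).
    return ((eigvals.log() - target.log()) ** 2).mean()
```

In training, such a penalty would presumably be added to the task loss with some weight, e.g. loss = task_loss + beta * spectral_penalty(hidden, alpha=2.0); the weight beta and the choice of which layer's activations to regularize are further free design choices not specified here.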

Related research:

05/07/2019
A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks
In this era of machine learning models, their functionality is being thr...

12/08/2019
Exploring the Back Alleys: Analysing The Robustness of Alternative Neural Network Architectures against Adversarial Attacks
Recent discoveries in the field of adversarial machine learning have sho...

10/20/2022
Chaos Theory and Adversarial Robustness
Neural Networks, being susceptible to adversarial attacks, should face a...

09/06/2022
Improving the Accuracy and Robustness of CNNs Using a Deep CCA Neural Data Regularizer
As convolutional neural networks (CNNs) become more accurate at object r...

12/08/2020
On 1/n neural representation and robustness
Understanding the nature of representation in neural networks is a goal ...

09/13/2019
Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs
Deep convolutional artificial neural networks (ANNs) are the leading cla...
