On sparse connectivity, adversarial robustness, and a novel model of the artificial neuron

06/16/2020
by Sergey Bochkanov, et al.

Deep neural networks have achieved human-level accuracy on almost all perceptual benchmarks. It is interesting that these advances were made using two ideas that are decades old: (a) an artificial neuron based on a linear summator and (b) training by stochastic gradient descent (SGD). However, there are important metrics beyond accuracy: computational efficiency and stability against adversarial perturbations. In this paper, we propose two closely connected methods to improve these metrics on contour recognition tasks: (a) a novel model of an artificial neuron, a "strong neuron," with low hardware requirements and inherent robustness against adversarial perturbations, and (b) a novel constructive training algorithm that generates sparse networks with O(1) connections per neuron. We demonstrate the feasibility of our approach through experiments on the SVHN and GTSRB benchmarks. We achieved a 10x-100x reduction in operation count (10x when compared with other sparsification approaches, 100x when compared with dense networks) and a substantial reduction in hardware requirements (8-bit fixed-point math was used) with no reduction in model accuracy. Superior stability against adversarial perturbations, exceeding that of adversarial training, was achieved without any counter-adversarial measures, relying on the robustness of strong neurons alone. We also proved that the constituent blocks of our strong neuron are the only activation functions with perfect stability against adversarial attacks.
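The abstract does not spell out the strong neuron's internals, but its claim that the neuron's constituent blocks are the only activation functions with "perfect stability" is consistent with combiners that are 1-Lipschitz in the l-infinity norm, such as min and max. The sketch below is a hypothetical illustration of that stability property, not the paper's exact model: the function `strong_neuron`, the fan-in constant `K`, and the wiring array `idx` are all names introduced here for illustration. It shows a neuron that takes the min over a few max-subunits, each wired to a constant number of inputs (matching the claimed O(1) connectivity), and verifies that such a neuron never amplifies an input perturbation.

```python
import numpy as np

# Hypothetical sketch (not the paper's exact model): a "strong neuron"
# assembled from min/max combiners. min and max are 1-Lipschitz in the
# l-infinity norm, so if every input moves by at most eps, the output
# moves by at most eps -- the perturbation is never amplified. The
# datapath is comparisons only (no multipliers), so the same structure
# maps naturally onto 8-bit fixed-point hardware.

K = 3  # constant fan-in per subunit: the O(1) connections per neuron

def strong_neuron(x: np.ndarray, idx: np.ndarray) -> float:
    """Min over K subunits; each subunit is the max of K inputs.

    x   -- 1-D activation vector
    idx -- (K, K) integer array; row i lists the inputs of subunit i
    """
    return min(np.max(x[row]) for row in idx)

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=64)
idx = rng.integers(0, x.size, size=(K, K))

eps = 0.05  # l-infinity budget of the adversary
x_adv = x + rng.uniform(-eps, eps, size=x.shape)

y, y_adv = strong_neuron(x, idx), strong_neuron(x_adv, idx)
print(f"clean={y:+.4f}  perturbed={y_adv:+.4f}  |delta|={abs(y - y_adv):.4f}")
assert abs(y - y_adv) <= eps  # stability: the output shift is at most eps
```

By contrast, a linear summator with n inputs can accumulate up to n times the per-input perturbation (scaled by its weights), which is why dense dot-product networks typically need explicit counter-adversarial training to obtain comparable robustness.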

