And/or trade-off in artificial neurons: impact on adversarial robustness

02/15/2021
by Alessandro Fontana, et al.

Since its discovery in 2013, the phenomenon of adversarial examples has attracted a growing amount of attention from the machine learning community. A deeper understanding of the problem could lead to a better comprehension of how information is processed and encoded in neural networks and, more generally, could help to address the issue of interpretability in machine learning. Our approach to increasing adversarial resilience starts from the observation that artificial neurons can be divided into two broad categories: AND-like neurons and OR-like neurons. Intuitively, the former are characterised by a relatively small number of input-value combinations that trigger neuron activation, while for the latter the opposite is true. Our hypothesis is that the presence in a network of a sufficiently large number of OR-like neurons could lead to classification "brittleness" and increase the network's susceptibility to adversarial attacks. After constructing an operational definition of a neuron's AND-like behaviour, we introduce several measures to increase the proportion of AND-like neurons in the network: L1-norm weight normalisation; the application of an input filter; and a comparison between the distribution of neuron outputs obtained when the network is fed the actual data set and the distribution obtained when it is fed a randomised version of that data set, which we call the "scrambled data set". Tests performed on the MNIST data set suggest that the proposed measures could represent an interesting direction to explore.
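The scrambled data set mentioned in the abstract lends itself to a simple diagnostic: compare how often a hidden neuron fires on real inputs versus inputs whose joint structure has been destroyed. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the paper's actual procedure; the network architecture, the random placeholder batch, and the firing-rate ratio used as an "OR-likeness" proxy are all assumptions made for the example.

```python
# Minimal, hypothetical sketch (PyTorch): compare a hidden neuron's firing
# behaviour on real data vs a "scrambled data set" in which each pixel is
# shuffled independently across the batch, destroying joint structure while
# preserving per-pixel marginals. The model and the random placeholder batch
# are stand-ins, not the paper's setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(                       # stand-in for a trained MNIST net
    nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)
)
x = torch.rand(1024, 1, 28, 28)              # placeholder batch (use real MNIST here)

# Build the scrambled data set: permute each pixel independently across samples.
x_flat = x.view(x.size(0), -1)
perm = torch.argsort(torch.rand_like(x_flat), dim=0)
x_scrambled = torch.gather(x_flat, 0, perm).view_as(x)

hidden = nn.Sequential(*list(model.children())[:3])   # Flatten -> Linear -> ReLU

with torch.no_grad():
    h_real = hidden(x)                       # (1024, 128) hidden activations
    h_scr = hidden(x_scrambled)

# Firing rate per neuron on each data set. A neuron that fires almost as often
# on scrambled inputs as on real ones is "OR-like" in spirit (many input
# combinations activate it); a sharp drop suggests more AND-like selectivity.
rate_real = (h_real > 0).float().mean(dim=0)
rate_scr = (h_scr > 0).float().mean(dim=0)
or_likeness = rate_scr / rate_real.clamp(min=1e-6)
print(or_likeness.topk(5))                   # neurons least tied to real input structure
```

This ratio is only one possible proxy for the AND/OR distinction; the paper's operational definition and its training-time measures (L1-norm weight normalisation, the input filter) may differ.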

Related research

Inspecting adversarial examples using the Fisher information (09/12/2019)
Adversarial examples are slight perturbations that are designed to fool ...

A New Type of Neurons for Machine Learning (04/26/2017)
In machine learning, the use of an artificial neural network is the main...

Deep Learning: a new definition of artificial neuron with double weight (05/11/2019)
Deep learning is a subset of a broader family of machine learning method...

PolyNeuron: Automatic Neuron Discovery via Learned Polyharmonic Spline Activations (11/10/2018)
Automated deep neural network architecture design has received a signifi...

NIP: Neuron-level Inverse Perturbation Against Adversarial Attacks (12/24/2021)
Although deep learning models have achieved unprecedented success, their...

Revisiting Gaussian Neurons for Online Clustering with Unknown Number of Clusters (05/02/2022)
Despite the recent success of artificial neural networks, more biologica...

Some Theoretical Properties of a Network of Discretely Firing Neurons (05/03/2015)
The problem of optimising a network of discretely firing neurons is addr...
