An Analytic Framework for Robust Training of Artificial Neural Networks

05/26/2022
by Ramin Barati et al.

The reliability of a learning model is key to the successful deployment of machine learning in various industries. Creating a robust model, particularly one unaffected by adversarial attacks, requires a comprehensive understanding of the adversarial examples phenomenon. The phenomenon is difficult to describe, however, owing to the complicated nature of problems in machine learning. Consequently, many studies investigate it by proposing simplified models of how adversarial examples occur and validate them by predicting some aspect of the phenomenon. While these studies cover many different characteristics of adversarial examples, they have not converged on a holistic geometric and analytic model of the phenomenon. This paper proposes a formal framework for studying the phenomenon in learning theory and uses complex analysis and holomorphicity to derive a robust learning rule for artificial neural networks. Complex analysis lets us move effortlessly between the geometric and analytic perspectives and yields further insight by revealing the phenomenon's connection with harmonic functions. Using our model, we can explain some of the most intriguing characteristics of adversarial examples, including their transferability, and pave the way for novel approaches to mitigating the effects of the phenomenon.
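
The abstract invokes the link between holomorphicity and harmonic functions without stating it. As context, the standard identity presumably in play (our gloss, not a result claimed by the paper) is that the real and imaginary parts of a holomorphic function are harmonic. For a holomorphic $f(z) = u(x, y) + i\,v(x, y)$, the Cauchy-Riemann equations

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$$

imply, after differentiating once more and exchanging the order of the mixed partials, that both components satisfy Laplace's equation:

$$\Delta u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0, \qquad \Delta v = 0.$$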


Related research

07/22/2021
Towards Explaining Adversarial Examples Phenomenon in Artificial Neural Networks
In this paper, we study the adversarial examples existence and adversari...

10/23/2018
One Bit Matters: Understanding Adversarial Examples as the Abuse of Redundancy
Despite the great success achieved in machine learning (ML), adversarial...

06/18/2022
Comment on Transferability and Input Transformation with Additive Noise
Adversarial attacks have verified the existence of the vulnerability of ...

12/04/2020
Advocating for Multiple Defense Strategies against Adversarial Examples
It has been empirically observed that defense mechanisms designed to pro...

08/27/2016
A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples
Deep neural networks have been shown to suffer from a surprising weaknes...

09/27/2019
Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks
As the will to deploy neural networks models on embedded systems grows, ...

10/19/2020
Verifying the Causes of Adversarial Examples
The robustness of neural networks is challenged by adversarial examples ...
