Fault Tolerance of Neural Networks in Adversarial Settings

10/30/2019
by Vasisht Duddu et al.

Artificial Intelligence systems require a thorough assessment of the different pillars of trust, namely fairness, interpretability, data and model privacy, reliability (safety) and robustness against adversarial attacks. While these research problems have been extensively studied in isolation, an understanding of the trade-offs between the different pillars of trust is lacking. To this end, the trade-off between fault tolerance, privacy and adversarial robustness is evaluated for the specific case of Deep Neural Networks, by considering two adversarial settings under a security and a privacy threat model. Specifically, this work studies the impact on the fault tolerance of the Neural Network of training the model with noise added to the input (Adversarial Robustness) and noise added to the gradients (Differential Privacy). While training models with noise added to inputs, gradients or weights enhances fault tolerance, it is observed that adversarial robustness and fault tolerance are at odds with each other. On the other hand, (ϵ,δ)-Differentially Private models enhance fault tolerance, measured using the generalisation error, which theoretically has an upper bound of e^ϵ - 1 + δ. This novel study of the trade-off between different elements of trust is pivotal for training models that satisfy the requirements of the different pillars of trust simultaneously.
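The abstract contrasts two noise-injection regimes: noise added to inputs during training (a robustness-oriented augmentation) and noise added to gradients (Differential Privacy). The PyTorch sketch below is only an illustrative rendering of those two regimes, not the paper's code: the toy model, data, and the constants SIGMA_IN, SIGMA_GRAD and CLIP are placeholder assumptions, and the gradient branch clips at the batch level, whereas real DP-SGD clips per-example gradients.

```python
# Illustrative sketch (assumed setup, not the paper's code): the two
# noise-injection regimes discussed in the abstract, on a toy model.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(128, 20)              # toy inputs (placeholder data)
y = torch.randint(0, 2, (128,))       # toy labels

SIGMA_IN, SIGMA_GRAD, CLIP = 0.1, 0.5, 1.0   # illustrative constants

for step in range(100):
    opt.zero_grad()
    # (a) Noise on inputs: a simple stand-in for robustness-oriented training.
    noisy_x = x + SIGMA_IN * torch.randn_like(x)
    loss = loss_fn(model(noisy_x), y)
    loss.backward()
    # (b) Noise on gradients, DP-SGD-style clip-then-noise. Real DP-SGD
    # clips each per-example gradient; this batch-level version only
    # illustrates the clip-then-add-noise mechanics.
    with torch.no_grad():
        total_norm = torch.norm(
            torch.stack([p.grad.norm() for p in model.parameters()]))
        scale = (CLIP / (total_norm + 1e-6)).clamp(max=1.0)
        for p in model.parameters():
            p.grad.mul_(scale)
            p.grad.add_(SIGMA_GRAD * CLIP * torch.randn_like(p.grad) / 128)
    opt.step()
```

Raising SIGMA_IN corresponds to the input-noise (robustness) regime the abstract finds to be at odds with fault tolerance, while raising SIGMA_GRAD corresponds to the gradient-noise (privacy) regime it finds to enhance it.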


Related research

Robustness Threats of Differential Privacy (12/14/2020)
Differential privacy is a powerful and gold-standard concept of measurin...

Fatal Brain Damage (02/05/2019)
The loss of a few neurons in a brain often does not result in a visible ...

Adversarial Examples as an Input-Fault Tolerance Problem (11/30/2018)
We analyze the adversarial examples problem in terms of a model's fault ...

Functional Network: A Novel Framework for Interpretability of Deep Neural Networks (05/24/2022)
The layered structure of deep neural networks hinders the use of numerou...

FANNet: Formal Analysis of Noise Tolerance, Training Bias and Input Sensitivity in Neural Networks (12/03/2019)
With a constant improvement in the network architectures and training me...

Enhancing Fault Tolerance of Neural Networks for Security-Critical Applications (02/05/2019)
Neural Networks (NN) have recently emerged as backbone of several sensit...

Verifiable Differential Privacy For When The Curious Become Dishonest (08/18/2022)
Many applications seek to produce differentially private statistics on s...
