FANNet: Formal Analysis of Noise Tolerance, Training Bias and Input Sensitivity in Neural Networks

12/03/2019
by Mahum Naseer, et al.

With constant improvements in network architectures and training methodologies, Neural Networks (NNs) are increasingly being deployed in real-world Machine Learning systems. However, despite their impressive performance on "known" inputs, these NNs can fail absurdly on "unseen" inputs, especially if the real-time inputs deviate from the training dataset distribution or contain certain types of input noise. This indicates the low noise tolerance of NNs, which is a major reason behind the recent rise of adversarial attacks. This is a serious concern, particularly for safety-critical applications, where inaccurate results can lead to dire consequences. We propose a novel methodology that leverages model checking for the Formal Analysis of Neural Networks (FANNet) under different input noise ranges. Our methodology allows us to rigorously analyze the noise tolerance of NNs, their input node sensitivity, and the effects of training bias on their performance, e.g., in terms of classification accuracy. For evaluation, we use a feed-forward fully-connected NN architecture trained for leukemia classification. Our experimental results show ±11% noise tolerance for the given trained network, identify the most sensitive input nodes, and confirm the bias of the available training dataset.
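To make the core idea concrete, below is a minimal sketch of how a model-checking query for noise tolerance can be posed, using the Z3 SMT solver on a hypothetical two-input, two-class ReLU network. The weights, biases, nominal input, and the ±0.11 noise bound are illustrative assumptions, not the paper's trained leukemia classifier or FANNet's exact encoding; the sketch only shows the general pattern of asking a solver for a noise assignment that flips the classification.

# Sketch: noise-tolerance check of a toy ReLU network via the Z3 SMT solver.
# All network parameters below are hypothetical, chosen for illustration.
from z3 import Reals, If, Solver, And, sat

def relu(x):
    # Piecewise-linear ReLU, encoded as a solver-level conditional.
    return If(x > 0, x, 0)

# Hypothetical trained parameters (assumption, not from the paper).
W1 = [[0.8, -0.4], [0.3, 0.9]]   # hidden-layer weights
b1 = [0.1, -0.2]                 # hidden-layer biases
W2 = [[1.0, -1.2], [-0.7, 1.1]]  # output-layer weights
b2 = [0.0, 0.05]

x0, x1 = Reals('x0 x1')          # noisy input = nominal input + noise
n0, n1 = Reals('n0 n1')          # per-node additive noise
nominal = [0.5, 0.3]             # a concrete input assumed to be class 0
eps = 0.11                       # candidate noise-tolerance bound

s = Solver()
s.add(x0 == nominal[0] + n0, x1 == nominal[1] + n1)
s.add(And(n0 >= -eps, n0 <= eps, n1 >= -eps, n1 <= eps))

# Symbolically evaluate the network on the noisy input.
h = [relu(W1[i][0] * x0 + W1[i][1] * x1 + b1[i]) for i in range(2)]
out = [W2[i][0] * h[0] + W2[i][1] * h[1] + b2[i] for i in range(2)]

# Ask for a counterexample: noise within eps under which the wrong
# class matches or beats the correct one.
s.add(out[1] >= out[0])
if s.check() == sat:
    print('Noise within eps can flip the class:', s.model())
else:
    print('No misclassifying noise exists within eps: tolerance holds.')

In this framing, the largest eps for which the query is unsatisfiable is the network's noise tolerance, and repeating the query with noise restricted to individual input nodes gives a rough measure of per-node sensitivity.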


Related research

UnbiasedNets: A Dataset Diversification Framework for Robustness Bias Alleviation in Neural Networks (02/24/2023)
Performance of trained neural network (NN) models, in terms of testing a...

Into the unknown: Active monitoring of neural networks (09/14/2020)
Machine-learning techniques achieve excellent performance in modern appl...

Efficient Formal Safety Analysis of Neural Networks (09/19/2018)
Neural networks are increasingly deployed in real-world safety-critical ...

Fault Tolerance of Neural Networks in Adversarial Settings (10/30/2019)
Artificial Intelligence systems require a thorough assessment of differen...

Scaling Model Checking for DNN Analysis via State-Space Reduction and Input Segmentation (Extended Version) (06/29/2023)
Owing to their remarkable learning capabilities and performance in real-...

Enhancing Fault Tolerance of Neural Networks for Security-Critical Applications (02/05/2019)
Neural Networks (NN) have recently emerged as backbone of several sensit...

Quantitative Verification of Neural Networks And its Security Applications (06/25/2019)
Neural networks are increasingly employed in safety-critical domains. Th...
