Towards Formal XAI: Formally Approximate Minimal Explanations of Neural Networks

10/25/2022
by Shahaf Bassan, et al.

With the rapid growth of machine learning, deep neural networks (DNNs) are now being used in numerous domains. Unfortunately, DNNs are "black-boxes", and cannot be interpreted by humans, which is a substantial concern in safety-critical systems. To mitigate this issue, researchers have begun working on explainable AI (XAI) methods, which can identify a subset of input features that are the cause of a DNN's decision for a given input. Most existing techniques are heuristic, and cannot guarantee the correctness of the explanation provided. In contrast, recent and exciting attempts have shown that formal methods can be used to generate provably correct explanations. Although these methods are sound, the computational complexity of the underlying verification problem limits their scalability, and the explanations they produce might sometimes be overly complex. Here, we propose a novel approach to tackle these limitations. We (1) suggest an efficient, verification-based method for finding minimal explanations, which constitute a provable approximation of the global, minimum explanation; (2) show how DNN verification can assist in calculating lower and upper bounds on the optimal explanation; (3) propose heuristics that significantly improve the scalability of the verification process; and (4) suggest the use of bundles, which allows us to arrive at more succinct and interpretable explanations. Our evaluation shows that our approach significantly outperforms state-of-the-art techniques, and produces explanations that are more useful to humans. We thus regard this work as a step toward leveraging verification technology in producing DNNs that are more reliable and comprehensible.
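
To make the core idea concrete, the sketch below shows the standard deletion-based procedure from the formal-XAI literature for computing a subset-minimal abductive explanation, along with a bundle-based variant. This is an illustration of the general technique, not the paper's exact algorithm: the oracle prediction_invariant is a hypothetical stand-in for a single DNN-verification query (e.g., one that could be dispatched to a verifier such as Marabou), returning True iff fixing exactly the given features to their values in the original input forces the network to keep its original prediction.

```python
# Illustrative sketch (not the paper's exact algorithm): deletion-based
# search for a subset-minimal abductive explanation of a DNN's prediction.
#
# `prediction_invariant(fixed)` is a hypothetical verifier oracle: it must
# return True iff fixing exactly the features in `fixed` to their values in
# the original input guarantees that the network's prediction cannot change,
# no matter how the remaining (free) features vary over their domains.
# Each call corresponds to one DNN-verification query.

from typing import Callable, FrozenSet, List

def minimal_explanation(
    num_features: int,
    prediction_invariant: Callable[[FrozenSet[int]], bool],
) -> FrozenSet[int]:
    """Return a subset-minimal set of feature indices entailing the prediction."""
    explanation = set(range(num_features))  # the full input is always an explanation
    for i in range(num_features):
        candidate = frozenset(explanation - {i})
        # If the prediction is still forced without fixing feature i,
        # then feature i is redundant and can be freed.
        if prediction_invariant(candidate):
            explanation.discard(i)
    return frozenset(explanation)

def minimal_bundle_explanation(
    bundles: List[FrozenSet[int]],
    prediction_invariant: Callable[[FrozenSet[int]], bool],
) -> FrozenSet[int]:
    """Same idea, but features are fixed or freed in whole bundles
    (e.g., image patches), trading granularity for fewer queries and
    more human-readable explanations."""
    kept = list(bundles)
    for bundle in bundles:
        remaining = [b for b in kept if b is not bundle]
        candidate = frozenset().union(*remaining)
        if prediction_invariant(candidate):
            kept = remaining
    return frozenset().union(*kept)
```

Each loop iteration issues exactly one verification query, so the procedure needs n queries for n features (or bundles). The result is minimal in the subset sense, meaning no single fixed feature can be dropped, but not necessarily minimum, meaning of smallest possible cardinality; finding a minimum explanation is computationally much harder, which is why the abstract speaks of provably approximating it and of using further verification queries to bound it from below and above.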

