VeriX: Towards Verified Explainability of Deep Neural Networks

12/02/2022
by   Min Wu, et al.

We present VeriX, a first step towards verified explainability of machine learning models in safety-critical applications. Specifically, our sound and optimal explanations can guarantee prediction invariance against bounded perturbations. We utilise constraint solving techniques together with feature sensitivity ranking to efficiently compute these explanations. We evaluate our approach on image recognition benchmarks and a real-world scenario of autonomous aircraft taxiing.
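
The abstract names the two ingredients of the approach: a feature-sensitivity ranking that fixes a traversal order, and a constraint solver that certifies prediction invariance under bounded perturbations. The sketch below is one plausible reading of how those pieces could fit together; it is not the authors' code, and the names (sensitivity_ranking, prediction_invariant, eps, delta) are illustrative assumptions, with the solver query left as a stub.

import numpy as np

# Minimal sketch of a VeriX-style explanation loop (illustrative, not the
# authors' implementation). `model` maps a NumPy feature vector to an output
# vector, and `prediction_invariant(free_set, eps)` stands in for a
# constraint-solver query: it must soundly decide whether the predicted
# class is unchanged for every x' with |x'_j - x_j| <= eps for j in
# free_set and x'_j = x_j elsewhere.

def sensitivity_ranking(model, x, delta=0.05):
    """Order features by how much perturbing each one alone shifts the output."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] += delta
        scores.append(np.linalg.norm(model(x_pert) - base))
    # Least sensitive first: these are the best candidates to rule irrelevant.
    return np.argsort(scores)

def verix_explanation(model, x, eps, prediction_invariant):
    free = set()          # features proven safe to perturb freely
    explanation = set()   # features the prediction provably depends on
    for i in sensitivity_ranking(model, x):
        i = int(i)
        if prediction_invariant(free | {i}, eps):
            free.add(i)          # invariance still holds: feature is irrelevant
        else:
            explanation.add(i)   # perturbing it can change the prediction
    return explanation

Soundness here would come entirely from the invariance query being discharged by a verifier rather than by sampling, while the sensitivity-based traversal order is a heuristic aimed at keeping the returned explanation small.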

Related Research

11/09/2022  On the Robustness of Explanations of Deep Neural Network Models: A Survey
Explainability has been widely stated as a cornerstone of the responsibl...

09/07/2019  Explainable Deep Learning for Video Recognition Tasks: A Framework & Recommendations
The popularity of Deep Learning for real-world applications is ever-grow...

02/15/2022  Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
A variety of methods have been proposed to try to explain how deep neura...

12/03/2018  Sensitivity based Neural Networks Explanations
Although neural networks can achieve very high predictive performance on...

08/06/2020  Improving Explainability of Image Classification in Scenarios with Class Overlap: Application to COVID-19 and Pneumonia
Trust in predictions made by machine learning models is increased if the...

03/02/2023  DeepSaDe: Learning Neural Networks that Guarantee Domain Constraint Satisfaction
As machine learning models, specifically neural networks, are becoming i...

08/05/2022  Parameter Averaging for Robust Explainability
Neural Networks are known to be sensitive to initialisation. The explana...
