On Formal Feature Attribution and Its Approximation

07/07/2023
by Jinqiang Yu, et al.

Recent years have witnessed the widespread use of artificial intelligence (AI) algorithms and machine learning (ML) models. Despite their tremendous success, a number of vital problems, such as ML model brittleness, unfairness, and lack of interpretability, warrant active development in explainable artificial intelligence (XAI) and formal ML model verification. The two major lines of work in XAI are feature selection methods, e.g. Anchors, and feature attribution techniques, e.g. LIME and SHAP. Despite their promise, most existing feature selection and attribution approaches are susceptible to a range of critical issues, including explanation unsoundness and out-of-distribution sampling. A recent formal approach to XAI (FXAI), although serving as an alternative to the above and free of these issues, suffers from a few other limitations. For instance, besides its limited scalability, the formal approach is unable to tackle the feature attribution problem. Additionally, a formal explanation, despite being formally sound, is typically quite large, which hampers its applicability in practical settings. Motivated by the above, this paper proposes a way to apply the apparatus of formal XAI to the case of feature attribution, based on formal explanation enumeration. Formal feature attribution (FFA) is argued to be advantageous over the existing methods, both formal and non-formal. Given the practical complexity of the problem, the paper then proposes an efficient technique for approximating exact FFA. Finally, it offers experimental evidence of the effectiveness of the proposed approximate FFA in comparison to existing feature attribution algorithms, not only in terms of feature importance but also in terms of the relative order of features.
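The core idea of attribution via explanation enumeration can be sketched as follows: given a set of enumerated formal (abductive) explanations, each a subset of features, a feature's attribution is the fraction of explanations in which it occurs. The function and variable names below are illustrative, not the paper's code; the same computation applied to a partial enumeration gives an anytime approximation of exact FFA.

```python
from collections import Counter

def ffa(explanations, num_features):
    """Fraction of formal explanations in which each feature occurs.

    explanations: iterable of feature-index sets, one per formal explanation.
    Returns a dict mapping each feature index to its attribution in [0, 1].
    """
    counts = Counter(f for expl in explanations for f in set(expl))
    total = len(explanations)
    return {f: counts.get(f, 0) / total for f in range(num_features)}

# Toy example: four enumerated explanations over features 0..3.
axps = [{0, 1}, {0, 2}, {0, 3}, {1, 2}]
print(ffa(axps, 4))  # feature 0 appears in 3 of 4 explanations -> 0.75
```

Because exact enumeration of all formal explanations is intractable in general, running the same aggregation over whatever explanations have been enumerated so far yields the kind of approximate FFA the paper evaluates.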
