Foundations of Symbolic Languages for Model Interpretability

10/05/2021
by Marcelo Arenas, et al.

Several queries and scores have recently been proposed to explain individual predictions of ML models. Given the need for flexible, reliable, and easy-to-apply interpretability methods, we foresee the need for declarative languages in which different explainability queries can be specified naturally. We do this in a principled way by rooting such a language in a logic, called FOIL, that can express many simple but important explainability queries and may serve as a core for more expressive interpretability languages. We study the computational complexity of FOIL queries over two classes of ML models often deemed easily interpretable: decision trees and ordered binary decision diagrams (OBDDs). Since the number of possible inputs to an ML model is exponential in its dimension, the tractability of the FOIL evaluation problem is delicate, but it can be achieved by restricting either the structure of the models or the fragment of FOIL being evaluated. We also present a prototype implementation of FOIL, wrapped in a high-level declarative language, and report experiments showing that such a language can be used in practice.
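To make the idea of an explainability query concrete, the sketch below brute-forces one such query over a toy decision tree: whether a partial instance is a "sufficient reason," i.e., whether every completion of it receives the same label. The tree, the dictionary encoding of partial instances, and the helper names are illustrative assumptions, not the paper's FOIL syntax or implementation; the exponential enumeration also reflects why tractability is delicate, as the abstract notes.

```python
from itertools import product

N = 3  # number of boolean features in the toy model

# Toy model: a decision tree over three boolean features, written as a
# plain function from a feature tuple to a 0/1 label. This tree is a
# hypothetical example, not one taken from the paper.
def tree(x):
    # if x0 then label = x1 else label = x2
    return x[1] if x[0] else x[2]

def completions(partial):
    """Yield every full instance consistent with a partial instance.
    `partial` maps some coordinates to 0/1; the rest range over {0, 1}."""
    free = [i for i in range(N) if i not in partial]
    for bits in product([0, 1], repeat=len(free)):
        x = dict(partial)
        x.update(zip(free, bits))
        yield tuple(x[i] for i in range(N))

def is_sufficient_reason(partial, model, label):
    """FOIL-style query: does every completion of `partial` get `label`?
    Brute force, hence exponential in the number of free features."""
    return all(model(x) == label for x in completions(partial))

# Fixing x0=1 and x1=1 forces the positive label regardless of x2:
print(is_sufficient_reason({0: 1, 1: 1}, tree, 1))  # True
# Fixing x0=1 alone does not, since x1 still decides the outcome:
print(is_sufficient_reason({0: 1}, tree, 1))        # False
```

A declarative language like the one the paper proposes would let a user state such a query directly, leaving evaluation strategies (and their tractable fragments) to the engine rather than to hand-written enumeration like this.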


Related research

11/24/2022 · ML Interpretability: Simple Isn't Easy
The interpretability of ML models is important, but it is not clear what...

05/20/2022 · On Tackling Explanation Redundancy in Decision Trees
Decision trees (DTs) epitomize the ideal of interpretability of machine ...

10/23/2020 · Model Interpretability through the Lens of Computational Complexity
In spite of several claims stating that some models are more interpretab...

04/23/2020 · Learning a Formula of Interpretability to Learn Interpretable Formulas
Many risk-sensitive applications require Machine Learning (ML) models to...

02/09/2019 · Assessing the Local Interpretability of Machine Learning Models
The increasing adoption of machine learning tools has led to calls for a...

06/27/2023 · On Logic-Based Explainability with Partially Specified Inputs
In the practical deployment of machine learning (ML) models, missing dat...

03/28/2022 · User Driven Model Adjustment via Boolean Rule Explanations
AI solutions are heavily dependent on the quality and accuracy of the in...
