
Foundations of Symbolic Languages for Model Interpretability

10/05/2021
by   Marcelo Arenas, et al.

Several queries and scores have recently been proposed to explain individual predictions over ML models. Given the demand for flexible, reliable, and easy-to-apply interpretability methods for ML models, we foresee the need for declarative languages in which different explainability queries can be specified naturally. We do this in a principled way by rooting such a language in a logic, called FOIL, that allows for expressing many simple but important explainability queries, and might serve as a core for more expressive interpretability languages. We study the computational complexity of FOIL queries over two classes of ML models often deemed to be easily interpretable: decision trees and ordered binary decision diagrams (OBDDs). Since the number of possible inputs for an ML model is exponential in its dimension, the tractability of the FOIL evaluation problem is delicate but can be achieved by either restricting the structure of the models or the fragment of FOIL being evaluated. We also present a prototype implementation of FOIL wrapped in a high-level declarative language and perform experiments showing that such a language can be used in practice.
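To make the kind of query the abstract alludes to concrete, here is a toy sketch (not the paper's FOIL implementation; the tree, encoding, and function names are illustrative assumptions) of one classic explainability question over a decision tree: is a given set of fixed feature values sufficient to determine the model's output? The brute-force check below enumerates all completions of the free features, which is exponential in the dimension — mirroring why tractability of such queries is delicate.

```python
# Toy illustration of a "sufficiency" explainability query over a
# decision tree. NOT the paper's FOIL implementation; tree shape and
# API are hypothetical, chosen for brevity.
from itertools import product

# Hypothetical decision tree over three boolean features, encoded as
# nested tuples: (feature_index, low_subtree, high_subtree), leaves are bools.
TREE = (0,
        (1, False, True),   # feature 0 == 0: decide by feature 1
        (2, False, True))   # feature 0 == 1: decide by feature 2

def evaluate(tree, x):
    """Follow the tree on instance x (a tuple of 0/1 values)."""
    while not isinstance(tree, bool):
        feat, low, high = tree
        tree = high if x[feat] else low
    return tree

def is_sufficient(tree, x, fixed, dim=3):
    """True iff fixing the features in `fixed` to their values in x
    forces the tree's output regardless of the remaining features.
    Brute force over all completions: exponential in dim."""
    target = evaluate(tree, x)
    free = [i for i in range(dim) if i not in fixed]
    for bits in product([0, 1], repeat=len(free)):
        y = list(x)
        for i, b in zip(free, bits):
            y[i] = b
        if evaluate(tree, tuple(y)) != target:
            return False
    return True

x = (1, 0, 1)                          # tree outputs True on x
print(is_sufficient(TREE, x, {0, 2}))  # True: fixing features 0 and 2 suffices
print(is_sufficient(TREE, x, {2}))     # False: feature 2 alone does not
```

A declarative language like the one the paper proposes would let a user state such a query directly instead of hand-coding the enumeration, and the paper's complexity results identify when the evaluation can avoid this exponential blow-up.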

