Probably Approximately Correct Explanations of Machine Learning Models via Syntax-Guided Synthesis

09/18/2020
by Daniel Neider, et al.

We propose a novel approach to understanding the decision making of complex machine learning models (e.g., deep neural networks) using a combination of probably approximately correct (PAC) learning and a logic inference methodology called syntax-guided synthesis (SyGuS). We prove that our framework produces explanations that, with high probability, make only a few errors, and we show empirically that it is effective in generating small, human-interpretable explanations.
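To make the combination of PAC learning and synthesis concrete, here is a minimal, hypothetical sketch (not the paper's actual algorithm): a toy black-box `model` is explained by enumerating candidate formulas from a small threshold grammar (standing in for a SyGuS grammar) and returning the smallest candidate consistent with the model on a PAC-sized sample. The sample size `m >= (1/eps) * (ln|H| + ln(1/delta))` guarantees that, with probability at least `1 - delta`, any consistent candidate errs with probability at most `eps`. All names and the grammar are illustrative assumptions.

```python
import itertools
import math
import random

# Hypothetical black-box model we want to explain (stands in for a neural net).
def model(x, y):
    return x > 0.5 and y > 0.3

# Toy "grammar": threshold atoms over each input, and conjunctions of two atoms.
thresholds = [i / 10 for i in range(11)]
atoms = [("x>", t) for t in thresholds] + [("y>", t) for t in thresholds]
candidates = [(a,) for a in atoms] + list(itertools.combinations(atoms, 2))

def evaluate(expl, x, y):
    # An explanation is a conjunction of threshold atoms.
    vals = {"x>": x, "y>": y}
    return all(vals[op] > t for (op, t) in expl)

# PAC sample size for a finite hypothesis class H: with m samples, any
# candidate consistent with the model on all of them has true error <= eps,
# with confidence 1 - delta.
eps, delta = 0.05, 0.01
m = math.ceil((math.log(len(candidates)) + math.log(1 / delta)) / eps)

random.seed(0)
samples = [(random.random(), random.random()) for _ in range(m)]

# Enumerate candidates smallest-first and return one that agrees with the
# model on every sample -- a stand-in for a syntax-guided synthesizer.
explanation = next(
    e for e in sorted(candidates, key=len)
    if all(evaluate(e, x, y) == model(x, y) for (x, y) in samples)
)
print(explanation)
```

Because candidates are checked smallest-first, the returned formula is also a *small* explanation, mirroring the paper's emphasis on human-interpretable output.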


