User Driven Model Adjustment via Boolean Rule Explanations

03/28/2022
by Elizabeth M. Daly, et al.

AI solutions are heavily dependent on the quality and accuracy of their input training data; however, the training data may not always fully reflect the most up-to-date policy landscape or may be missing business logic. Advances in explainability have opened the possibility of allowing users to interact with interpretable explanations of ML predictions in order to inject modifications or constraints that more accurately reflect the current realities of the system. In this paper, we present a solution that leverages the predictive power of ML models while allowing the user to specify modifications to decision boundaries. Our interactive overlay approach achieves this goal without requiring model retraining, making it appropriate for systems that need to apply instant changes to their decision making. We demonstrate that user feedback rules can be layered with the ML predictions to provide immediate changes, which in turn supports learning with less data.
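The abstract describes layering user feedback rules over a trained model so that decision-boundary changes take effect without retraining. The sketch below is a minimal illustration of that general idea, not the authors' implementation: the class, rule, and feature names (RuleOverlayClassifier, FeedbackRule, "income", "recent_default") are assumptions chosen for the example.

# A minimal sketch, assuming the overlay works by evaluating user-supplied
# Boolean rules before deferring to the unchanged base model. Not the paper's
# actual API; all names here are illustrative.

from dataclasses import dataclass
from typing import Any, Callable, Dict, List


@dataclass
class FeedbackRule:
    """A user-specified Boolean rule and the label it should enforce."""
    name: str
    condition: Callable[[Dict[str, Any]], bool]  # predicate over a feature dict
    label: Any                                   # label returned when the rule fires


class RuleOverlayClassifier:
    """Layers feedback rules on top of an existing model without retraining it."""

    def __init__(self, base_predict: Callable[[Dict[str, Any]], Any]):
        self.base_predict = base_predict
        self.rules: List[FeedbackRule] = []

    def add_rule(self, rule: FeedbackRule) -> None:
        # Rules take effect immediately; no model update is required.
        self.rules.append(rule)

    def predict(self, x: Dict[str, Any]) -> Any:
        # First matching rule wins; otherwise defer to the underlying model.
        for rule in self.rules:
            if rule.condition(x):
                return rule.label
        return self.base_predict(x)


if __name__ == "__main__":
    # Stand-in for a trained ML model: approve when income exceeds a threshold.
    base_model = lambda x: "approve" if x["income"] > 50_000 else "deny"

    clf = RuleOverlayClassifier(base_model)
    # Hypothetical policy update injected by the user as a Boolean rule:
    # applicants with a recent default are always denied, regardless of income.
    clf.add_rule(FeedbackRule(
        name="recent_default_deny",
        condition=lambda x: x["recent_default"],
        label="deny",
    ))

    print(clf.predict({"income": 80_000, "recent_default": True}))   # deny (rule overrides model)
    print(clf.predict({"income": 80_000, "recent_default": False}))  # approve (model decision)

Because the rules sit outside the model, they can be added or removed instantly, which matches the paper's stated goal of applying changes to decision making without retraining.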

Related research

01/04/2022  FROTE: Feedback Rule-Driven Oversampling for Editing Models
Machine learning models may involve decision boundaries that change over...

06/27/2023  On Logic-Based Explainability with Partially Specified Inputs
In the practical deployment of machine learning (ML) models, missing dat...

02/03/2022  Separating Rule Discovery and Global Solution Composition in a Learning Classifier System
The utilization of digital agents to support crucial decision making is ...

02/16/2022  Explainability of Predictive Process Monitoring Results: Can You See My Data Issues?
Predictive business process monitoring (PPM) has been around for several...

07/29/2022  Leveraging Explanations in Interactive Machine Learning: An Overview
Explanations have gained an increasing level of interest in the AI and M...

07/18/2019  User-Interactive Machine Learning Model for Identifying Structural Relationships of Code Features
Traditional machine learning based intelligent systems assist users by l...

10/05/2021  Foundations of Symbolic Languages for Model Interpretability
Several queries and scores have recently been proposed to explain indivi...
