
Just in Time: Personal Temporal Insights for Altering Model Decisions

07/08/2020
by Naama Boer et al.
Tel Aviv University

The interpretability of complex Machine Learning models is becoming a critical societal concern, as such models are increasingly used in human-related decision-making processes such as resume filtering or loan applications. Individuals who receive an undesired classification are likely to call for an explanation, preferably one that specifies what they should do in order to alter that decision when they reapply in the future. Existing work focuses on a single ML model at a single point in time, whereas in practice both models and data evolve: an explanation for an application rejected in 2018 may be irrelevant in 2019, since in the meantime both the model and the applicant's data may have changed. To this end, we propose a novel framework that provides users with insights and plans for changing their classification at particular future time points. The solution combines state-of-the-art algorithms for (single-)model explanations, algorithms for predicting future models, and database-style querying of the obtained explanations. We propose to demonstrate the usefulness of our solution in the context of loan applications, interactively engaging the audience in computing and viewing suggestions tailored to applicants based on their unique characteristics.
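To make the core idea concrete, here is a minimal, hypothetical sketch in Python. It assumes a simple linear loan-approval model whose weights drift between yearly snapshots; the weight vectors, feature names, and applicant record are all invented for illustration, and the naive linear extrapolation stands in for the framework's future-model prediction component.

```python
import numpy as np

# Illustrative sketch only: a linear loan model with drifting weights.
# All numbers below are invented; the paper's actual framework combines
# dedicated explanation algorithms with a learned future-model predictor.

FEATURES = ["income", "debt_ratio", "credit_history"]

w_2018 = np.array([0.8, -1.2, 0.5])   # observed model weights, 2018
w_2019 = np.array([0.9, -1.0, 0.6])   # observed model weights, 2019
bias = -0.4

def extrapolate_weights(w_old, w_new, steps_ahead=1):
    """Naive linear extrapolation of model drift: a stand-in for a
    real future-model prediction component."""
    return w_new + steps_ahead * (w_new - w_old)

def score(x, w, b):
    """Signed decision score; non-negative means approval."""
    return float(w @ x + b)

# Predict the 2020 model and evaluate an applicant against it.
w_2020 = extrapolate_weights(w_2018, w_2019)
applicant = np.array([0.3, 0.7, 0.2])  # normalized feature values

s = score(applicant, w_2020, bias)
if s >= 0:
    print("Predicted 2020 decision: approved")
else:
    # Counterfactual plan: change the single most influential feature
    # just enough to flip the predicted 2020 decision.
    i = int(np.argmax(np.abs(w_2020)))
    delta = -s / w_2020[i]
    print(f"Predicted 2020 decision: rejected (score {s:.2f})")
    print(f"Suggested plan: change {FEATURES[i]} by {delta:+.2f} "
          f"before reapplying in 2020")
```

In the actual framework, the counterfactual step would come from a dedicated (single-)model explanation algorithm, and the resulting plans would be exposed through database-style querying; the closed-form flip above is only the simplest possible stand-in for that pipeline.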
