Change Detection for Local Explainability in Evolving Data Streams

09/06/2022
by Johannes Haug, et al.

As complex machine learning models are increasingly used in sensitive applications like banking, trading, or credit scoring, there is a growing demand for reliable explanation mechanisms. Local feature attribution methods have become a popular technique for post-hoc, model-agnostic explanations. However, attribution methods typically assume a stationary environment in which the predictive model has been trained and remains stable. As a result, it is often unclear how local attributions behave in realistic, constantly evolving settings such as streaming and online applications. In this paper, we discuss the impact of temporal change on local feature attributions. In particular, we show that local attributions can become obsolete each time the predictive model is updated or concept drift alters the data-generating distribution. Consequently, local feature attributions in data streams provide high explanatory power only when combined with a mechanism that allows us to detect and respond to local changes over time. To this end, we present CDLEEDS, a flexible and model-agnostic framework for detecting local change and concept drift. CDLEEDS serves as an intuitive extension of attribution-based explanation techniques, identifying outdated local attributions and enabling more targeted recalculations. In experiments, we also show that the proposed framework can reliably detect both local and global concept drift. Accordingly, our work contributes to more meaningful and robust explainability in online machine learning.
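The core idea, caching local attributions and recomputing them only when local change is detected, can be sketched as follows. The Python snippet below is a minimal illustration, not the authors' CDLEEDS algorithm: the windowed mean-shift detector (OutputDriftDetector), the single-reference attribution routine (local_attribution), and the synthetic drifting stream are all hypothetical placeholders.

```python
# Minimal sketch (not the CDLEEDS implementation): local feature attributions
# in a data stream are cached and only recomputed when a simple change
# detector flags that the model's local behaviour has drifted.
from collections import deque

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)


def local_attribution(model, x, baseline):
    """Crude single-reference attribution: per-feature effect of swapping the
    baseline value for the observed value (illustrative placeholder only)."""
    base_prob = model.predict_proba(baseline.reshape(1, -1))[0, 1]
    attributions = np.zeros_like(x)
    for j in range(len(x)):
        perturbed = baseline.copy()
        perturbed[j] = x[j]
        attributions[j] = model.predict_proba(perturbed.reshape(1, -1))[0, 1] - base_prob
    return attributions


class OutputDriftDetector:
    """Flags change when the recent mean model output deviates from a
    reference window by more than `threshold` (hypothetical heuristic)."""

    def __init__(self, window=100, threshold=0.15):
        self.reference = deque(maxlen=window)
        self.recent = deque(maxlen=window)
        self.threshold = threshold

    def update(self, prob):
        if len(self.reference) < self.reference.maxlen:
            self.reference.append(prob)
            return False
        self.recent.append(prob)
        if len(self.recent) < self.recent.maxlen:
            return False
        drifted = abs(np.mean(self.recent) - np.mean(self.reference)) > self.threshold
        if drifted:  # reset the reference window after signalling change
            self.reference = deque(self.recent, maxlen=self.reference.maxlen)
            self.recent.clear()
        return drifted


# Streaming loop with a synthetic concept drift after step 1000.
model = SGDClassifier(loss="log_loss")  # logistic loss ("log" in older scikit-learn)
detector = OutputDriftDetector()
baseline = np.zeros(5)
cached_attribution = None

for t in range(2000):
    drift_active = t > 1000
    x = rng.normal(loc=1.0 if drift_active else 0.0, size=5)
    y = int(x.sum() + rng.normal() > (2.0 if drift_active else 0.0))

    if t == 0:
        model.partial_fit(x.reshape(1, -1), [y], classes=[0, 1])
        continue

    prob = model.predict_proba(x.reshape(1, -1))[0, 1]
    model.partial_fit(x.reshape(1, -1), [y])

    # Recompute the local explanation only when change is detected,
    # instead of after every single model update.
    drift = detector.update(prob)
    if cached_attribution is None or drift:
        cached_attribution = local_attribution(model, x, baseline)
```

In a real pipeline, an established attribution method (e.g., SHAP values) and the change detection mechanism described in the paper would take the place of these placeholders; the point of the sketch is only the caching and drift-triggered recomputation pattern.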

Related research

10/17/2022 - On the Impact of Temporal Concept Drift on Model Explanations
Explanation faithfulness of model predictions in natural language proces...

10/19/2020 - Learning Parameter Distributions to Detect Concept Drift in Data Streams
Data distributions in streaming environments are usually not stationary....

03/16/2023 - Model Based Explanations of Concept Drift
The notion of concept drift refers to the phenomenon that the distributi...

05/31/2022 - Attribution-based Explanations that Provide Recourse Cannot be Robust
Different users of machine learning methods require different explanatio...

04/28/2022 - Standardized Evaluation of Machine Learning Methods for Evolving Data Streams
Due to the unspecified and dynamic nature of data streams, online machin...

06/13/2023 - iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios
Post-hoc explanation techniques such as the well-established partial dep...

11/17/2022 - CRAFT: Concept Recursive Activation FacTorization for Explainability
Attribution methods are a popular class of explainability methods that u...
