On the Importance of Application-Grounded Experimental Design for Evaluating Explainable ML Methods

06/24/2022
by   Kasun Amarasinghe, et al.

Machine Learning (ML) models now inform a wide range of human decisions, but using "black box" models carries risks such as relying on spurious correlations or errant data. To address this, researchers have proposed methods for supplementing models with explanations of their predictions. However, robust evaluations of these methods' usefulness in real-world contexts have remained elusive, with experiments tending to rely on simplified settings or proxy tasks. We present an experimental study that extends a prior explainable ML evaluation experiment, bringing the setup closer to the deployment setting by relaxing its simplifying assumptions. Our empirical study draws dramatically different conclusions than the prior work, highlighting how seemingly trivial experimental design choices can yield misleading results. Beyond the present experiment, we believe this work holds lessons about the necessity of situating the evaluation of any ML method and choosing appropriate tasks, data, users, and metrics to match the intended deployment contexts.
