Transparent Interpretation with Knockouts

11/01/2020
by Xing Han, et al.

How can we find the subset of training samples that are most responsible for a black-box machine learning model's prediction? More generally, how can we explain the model's decision to end-users in a transparent way? We propose a new model-agnostic algorithm that identifies a minimum set of training samples that are indispensable for a given model decision at a particular test point, in the sense that the model's decision would change if these training samples were removed. In the spirit of counterfactual explanations, our algorithm identifies such a set of indispensable samples iteratively by solving a constrained optimization problem. We further speed up the algorithm through an efficient approximation. To demonstrate its effectiveness, we apply the algorithm to a variety of tasks, including data poisoning detection, training set debugging, and understanding loan decisions. The results show that our algorithm is an effective and easy-to-comprehend tool for understanding local model behavior, and it therefore facilitates the application of machine learning in domains where such understanding is a prerequisite and where end-users do not have a machine learning background.
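To make the idea concrete, below is a minimal, hypothetical sketch of the "indispensable samples" notion, not the authors' method: it greedily removes the training point whose leave-one-out removal most reduces the probability of the original prediction at the test point, retraining after each removal, until the decision flips. The paper instead formulates this as a constrained optimization problem and uses a faster approximation; the model, dataset, and function names here are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's algorithm): find a small set of
# training points whose removal flips a model's decision at one test point,
# using brute-force leave-one-out retraining as a stand-in for the paper's
# constrained-optimization / approximation machinery.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression


def fit(X, y):
    # Small, fast model so repeated retraining stays cheap in this toy example.
    return LogisticRegression(max_iter=1000).fit(X, y)


def indispensable_samples(X_train, y_train, x_test, max_removals=20):
    """Greedily collect training indices whose removal flips the prediction at x_test."""
    remaining = list(range(len(X_train)))
    removed = []
    x = x_test.reshape(1, -1)
    original_pred = fit(X_train, y_train).predict(x)[0]

    for _ in range(max_removals):
        model = fit(X_train[remaining], y_train[remaining])
        if model.predict(x)[0] != original_pred:
            return removed  # decision changed: `removed` is an indispensable set

        # Score each remaining point by how much its removal lowers the
        # probability of the original prediction (leave-one-out retraining).
        p_now = model.predict_proba(x)[0][original_pred]
        best_idx, best_drop = None, -np.inf
        for i in remaining:
            subset = [j for j in remaining if j != i]
            p_loo = fit(X_train[subset], y_train[subset]).predict_proba(x)[0][original_pred]
            if p_now - p_loo > best_drop:
                best_idx, best_drop = i, p_now - p_loo

        removed.append(best_idx)
        remaining.remove(best_idx)

    return removed  # budget exhausted without flipping the decision


if __name__ == "__main__":
    X, y = make_classification(n_samples=80, n_features=5, random_state=0)
    X_train, y_train, x_test = X[:-1], y[:-1], X[-1]
    print("Indispensable training indices:", indispensable_samples(X_train, y_train, x_test))
```

The greedy leave-one-out search scales quadratically with the training set size, which is exactly why the paper's iterative constrained-optimization formulation and its approximation matter in practice.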
