Feature Attributions and Counterfactual Explanations Can Be Manipulated

06/23/2021
by Dylan Slack, et al.

As machine learning models are increasingly used in critical decision-making settings (e.g., healthcare, finance), there has been a growing emphasis on developing methods to explain model predictions. Such explanations are used to understand and establish trust in models and are vital components of machine learning pipelines. Though explanations are a critical piece of these systems, there is little understanding of how vulnerable they are to manipulation by adversaries. In this paper, we discuss how two broad classes of explanations are vulnerable to manipulation. We demonstrate how adversaries can design biased models that manipulate model-agnostic feature attribution methods (e.g., LIME and SHAP) and counterfactual explanations that hill-climb during the counterfactual search (e.g., Wachter's algorithm and DiCE) into concealing the model's biases. These vulnerabilities allow an adversary to deploy a biased model whose explanations do not reveal the bias, thereby deceiving stakeholders into trusting the model. We evaluate the manipulations on real-world datasets, including COMPAS and Communities & Crime, and find that explanations can be manipulated in practice.
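
To see why perturbation-based attribution methods are manipulable in this way: LIME and SHAP query the model on synthetic neighborhood samples that tend to fall off the data manifold, so an adversary can wrap a biased model in a scaffold that detects those queries and answers them with an innocuous surrogate. Below is a minimal sketch of that scaffolding idea, in the spirit of the authors' earlier "Fooling LIME and SHAP" work; the toy dataset, the Gaussian stand-in for the explainer's perturbation distribution, and all function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy tabular data: feature 0 plays the role of a sensitive attribute,
# feature 1 an innocuous one.
X = rng.normal(size=(1000, 2))

def biased_model(X):
    # The behavior the adversary wants to hide: decisions depend
    # entirely on the sensitive feature.
    return (X[:, 0] > 0).astype(int)

def innocuous_model(X):
    # The behavior shown to the explainer: depends only on feature 1.
    return (X[:, 1] > 0).astype(int)

# Train an off-manifold detector. The real attack fits it to the
# explainer's actual perturbation distribution; Gaussian noise is a
# crude stand-in here.
X_synth = X + rng.normal(scale=1.5, size=X.shape)
detector = RandomForestClassifier(random_state=0).fit(
    np.vstack([X, X_synth]),
    np.r_[np.zeros(len(X)), np.ones(len(X_synth))],
)

def scaffolded_model(X):
    """Biased on real inputs, innocuous on explainer-style queries."""
    off_manifold = detector.predict(X).astype(bool)
    out = biased_model(X)
    out[off_manifold] = innocuous_model(X)[off_manifold]
    return out

# On real data the scaffold matches the biased model almost exactly...
print((scaffolded_model(X) == biased_model(X)).mean())
# ...while most explainer-style queries are routed to the innocuous model.
X_query = X + rng.normal(scale=1.5, size=X.shape)
print((scaffolded_model(X_query) == innocuous_model(X_query)).mean())
```

An attribution method that explains `scaffolded_model` from its perturbation queries would largely see the innocuous feature driving predictions, even though deployed decisions come from the biased model.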

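The counterfactual side of the attack targets explanation methods that hill-climb an objective. A rough sketch of a Wachter-style search, which minimizes lam * (f(x) - target)^2 plus an L1 distance to the original input by gradient descent, is below; the logistic model, its weights, and the hyperparameters are invented for illustration:

```python
import numpy as np

# A fixed logistic model f(x) = sigmoid(w @ x + b); weights are made up
# for illustration.
w, b = np.array([1.5, -2.0]), 0.25

def f(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def wachter_counterfactual(x0, target=0.9, lam=20.0, lr=0.01, steps=2000):
    """Hill-climbing counterfactual search:
    minimize lam * (f(x) - target)**2 + ||x - x0||_1 by gradient descent."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        p = f(x)
        grad = 2.0 * lam * (p - target) * p * (1.0 - p) * w  # chain rule through the sigmoid
        grad += np.sign(x - x0)                              # subgradient of the L1 distance
        x -= lr * grad
    return x

x0 = np.array([-1.0, 1.0])           # a rejected input: f(x0) is near 0
x_cf = wachter_counterfactual(x0)    # nearby input pushed toward f(x) ~ target
print(f(x0), f(x_cf), np.abs(x_cf - x0).sum())
```

Because the search only follows the model's local gradients, an adversary who controls training can shape the loss surface so that the returned counterfactuals look low-cost and unbiased while the model's actual behavior is not; this is, loosely, the lever the paper's counterfactual manipulations pull on.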