Disentangling Influence: Using Disentangled Representations to Audit Model Predictions

06/20/2019
by Charles T. Marx, et al.

Motivated by the need to audit complex and black-box models, there has been extensive research on quantifying how data features influence model predictions. Feature influence may be direct (a feature directly influences model outcomes) or indirect (model outcomes are influenced via proxy features), and it can be measured either in aggregate over the training or test data or locally with respect to a single point. Prior work has typically focused on only one option along each of these dimensions. In this paper, we develop disentangled influence audits, a procedure for auditing the indirect influence of features. Specifically, we show that disentangled representations provide a mechanism for identifying proxy features in a dataset while allowing an explicit computation of feature influence on either individual or aggregate-level outcomes. We show through both theory and experiments that disentangled influence audits can both detect proxy features and determine, for each individual or in aggregate, which of these proxy features most affects the classifier being audited. In this respect, our method is more powerful than existing methods for ascertaining feature influence.
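The core idea in the abstract can be illustrated with a small sketch. The paper learns a disentangled representation; as a simplified stand-in, the example below uses linear residualization to strip a proxy feature of the information it carries about a protected feature, then measures indirect influence as the fraction of model predictions that change when the disentangled version of the data is used. All variable names and the synthetic data are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: z is a protected feature, x is a proxy correlated with z.
n = 5000
z = rng.normal(size=n)
x = 0.8 * z + 0.6 * rng.normal(size=n)  # x encodes information about z

def model(features):
    # Toy classifier being audited; it uses only x, never z directly.
    return (features > 0).astype(float)

# Stand-in for the paper's learned disentangled representation:
# remove the component of x that is linearly explained by z.
beta = np.dot(x, z) / np.dot(z, z)
x_tilde = x - beta * z

# Indirect influence of z: how often predictions change once the
# proxy information about z has been removed from x.
pred = model(x)
pred_tilde = model(x_tilde)
indirect_influence = float(np.mean(pred != pred_tilde))
```

Because `x` here is strongly correlated with `z`, a substantial fraction of predictions flip after disentanglement, flagging `x` as a proxy; a feature independent of `z` would yield an influence near zero under the same procedure.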

Related research

02/23/2016 · Auditing Black-box Models for Indirect Influence
Data-trained predictive models see widespread use, but for the most part...

09/18/2018 · Testing Selective Influence Directly Using Trackball Movement Tasks
Systems factorial technology (SFT; Townsend & Nozawa, 1995) is regarded ...

06/24/2019 · Gauge theory and twins paradox of disentangled representations
Achieving disentangled representations of information is one of the key ...

02/19/2023 · Disentangled Representation for Causal Mediation Analysis
Estimating direct and indirect causal effects from observational data is...

03/30/2023 · Shapley Chains: Extending Shapley Values to Classifier Chains
In spite of increased attention on explainable machine learning models, ...

02/23/2021 · Feature Importance Explanations for Temporal Black-Box Models
Models in the supervised learning framework may capture rich and complex...

05/22/2023 · Risk Scores, Label Bias, and Everything but the Kitchen Sink
In designing risk assessment algorithms, many scholars promote a "kitche...
