Who's responsible? Jointly quantifying the contribution of the learning algorithm and training data

10/09/2019
by Gal Yona, et al.

A fancy learning algorithm A outperforms a baseline method B when both are trained on the same data. Should A get all of the credit for the improved performance, or does the training data also deserve some? When deployed in a new setting from a different domain, however, A makes more mistakes than B. How much of the blame should go to the learning algorithm, and how much to the training data? Such questions are becoming increasingly important and prevalent as we aim to make ML more accountable, and their answers would also help us allocate resources between algorithm design and data collection. In this paper, we formalize these questions and propose Extended Shapley, a principled framework for jointly quantifying the contribution of the learning algorithm and the training data. Extended Shapley uniquely satisfies several natural properties that ensure equitable treatment of data and algorithm. Through experiments and theoretical analysis, we demonstrate that Extended Shapley has several important applications: 1) it provides a new metric of ML performance improvement that disentangles the influence of the data regime and the algorithm; 2) it facilitates ML accountability by properly assigning responsibility for mistakes; and 3) it provides greater robustness to manipulation by the ML designer.
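To make the credit-assignment idea concrete, below is a minimal, hypothetical sketch of a Shapley-style attribution in which both the individual training points and the choice of algorithm A (over a baseline B) are treated as players in a cooperative game. The utility function, the Monte Carlo permutation estimator, and the use of LogisticRegression as a stand-in for A and a majority-class DummyClassifier for B are all illustrative assumptions, not the paper's actual Extended Shapley formulation.

```python
import random

import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression


def utility(coalition, X_train, y_train, X_test, y_test):
    """Test accuracy of a model trained only on the data points in `coalition`.

    The special player "ALG" stands for the choice of algorithm A; when it is
    absent, the coalition falls back to the baseline B (majority-class).
    """
    data_idx = [p for p in coalition if p != "ALG"]
    if len(np.unique(y_train[data_idx])) < 2:  # not enough labels to fit
        return 0.0
    if "ALG" in coalition:
        model = LogisticRegression(max_iter=1000)          # stands in for A
    else:
        model = DummyClassifier(strategy="most_frequent")  # stands in for B
    model.fit(X_train[data_idx], y_train[data_idx])
    return model.score(X_test, y_test)


def shapley_estimates(X_train, y_train, X_test, y_test, n_perms=200, seed=0):
    """Monte Carlo permutation estimate of each player's Shapley value."""
    rng = random.Random(seed)
    players = list(range(len(X_train))) + ["ALG"]
    values = {p: 0.0 for p in players}
    for _ in range(n_perms):
        rng.shuffle(players)
        coalition, prev = [], 0.0
        for p in players:
            coalition.append(p)
            curr = utility(coalition, X_train, y_train, X_test, y_test)
            values[p] += (curr - prev) / n_perms  # average marginal gain
            prev = curr
    return values
```

In this toy setup, comparing the value assigned to the "ALG" player against the total value assigned to the data points gives the kind of disentangled credit (or blame) split between algorithm and data that the abstract describes, though only as an illustration of the general Shapley machinery.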


Related research

07/13/2021
Learnability of Learning Performance and Its Application to Data Valuation
For most machine learning (ML) tasks, evaluating learning performance on...

12/04/2021
SHAPr: An Efficient and Versatile Membership Privacy Risk Metric for Machine Learning
Data used to train machine learning (ML) models can be sensitive. Member...

03/05/2021
Representation Matters: Assessing the Importance of Subgroup Allocations in Training Data
Collecting more diverse and representative training data is often touted...

10/14/2021
Hindsight Network Credit Assignment: Efficient Credit Assignment in Networks of Discrete Stochastic Units
Training neural networks with discrete stochastic variables presents a u...

07/06/2020
Online NEAT for Credit Evaluation – a Dynamic Problem with Sequential Data
In this paper, we describe application of Neuroevolution to a P2P lendin...

04/07/2023
AI Model Disgorgement: Methods and Choices
Responsible use of data is an indispensable part of any machine learning...

06/04/2018
Diffeomorphic Learning
We introduce in this paper a learning paradigm in which the training dat...
