Interpreting Deep Models through the Lens of Data

05/05/2020
by   Dominique Mercier, et al.

Identifying the input data points most relevant to a classifier (i.e., those that act as its support vectors) has recently attracted research interest for both interpretability and dataset debugging. This paper presents an in-depth analysis of methods that attempt to quantify the influence of individual data points on the resulting classifier. To assess the quality of these influence estimates, we curated a set of experiments in which we debugged and pruned the dataset based on the influence information obtained from different methods. Specifically, we provided the classifier with mislabeled examples that hampered its overall performance. Since the classifier is a product of both the data and the model, analyzing these influences is essential for the interpretability of deep learning models. Our analysis shows that some interpretability methods detect mislabels better than a random baseline; however, contrary to the claims of these methods, sample selection based on the training loss showed superior performance.
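The training-loss baseline that the abstract reports as superior can be sketched as follows: rank every training sample by its per-sample loss under the fitted model and inspect the highest-loss samples as likely mislabels. The snippet below is a minimal illustration of this idea on synthetic data with a hand-rolled logistic regression; the synthetic setup, variable names, and hyperparameters are ours and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: 200 points, 10 of them deliberately mislabeled
# (illustrative setup, not the paper's datasets).
n = 200
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
flipped = rng.choice(n, size=10, replace=False)
y[flipped] = 1.0 - y[flipped]

# Fit a simple logistic-regression classifier by gradient descent.
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * float(np.mean(p - y))

# Per-sample training loss: mislabeled points tend to sit on the wrong
# side of the learned boundary and therefore incur a high loss.
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

# Flag the top-k highest-loss samples as mislabel candidates.
suspects = np.argsort(losses)[::-1][:10]
overlap = len(set(flipped.tolist()) & set(suspects.tolist()))
print(f"flagged {overlap} of 10 injected mislabels in the top-10 losses")
```

In this toy setting most of the injected mislabels land in the top-loss bucket; the paper's point is that this cheap ranking is a surprisingly strong baseline against which influence-based methods should be compared.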

Related Research

12/31/2020  FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging
Influence functions approximate the 'influences' of training data-points...

02/21/2023  In-context Example Selection with Influences
In-context learning (ICL) is a powerful paradigm that emerged from large lang...

08/30/2019  Rewarding High-Quality Data via Influence Functions
We consider a crowdsourcing data acquisition scenario, such as federated...

05/13/2019  Exact high-dimensional asymptotics for support vector machine
Support vector machine (SVM) is one of the most widely used classificati...

11/19/2018  How far from automatically interpreting deep learning
In recent years, deep learning researchers have focused on how to find t...

07/17/2018  Analyzing Hypersensitive AI: Instability in Corporate-Scale Machine Learning
Predictive geometric models deliver excellent results for many Machine L...

03/11/2018  Interpreting Deep Classifier by Visual Distillation of Dark Knowledge
Interpreting black box classifiers, such as deep networks, allows an ana...
