
Interpreting Deep Models through the Lens of Data

by Dominique Mercier, et al.

Identifying the input data points that are relevant to a classifier (i.e., those that serve as support vectors) has recently attracted the interest of researchers for both interpretability and dataset debugging. This paper presents an in-depth analysis of methods that attempt to quantify the influence of these data points on the resulting classifier. To assess the quality of the influence estimates, we curated a set of experiments in which we debugged and pruned the dataset based on the influence information obtained from different methods. To do so, we injected mislabeled examples that hampered the classifier's overall performance. Since the classifier is a product of both the data and the model, it is essential to analyze these influences when interpreting deep learning models. Our results show that some interpretability methods detect mislabeled examples better than a random baseline; however, contrary to the claims of these methods, sample selection based on the training loss performed best.
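The training-loss baseline the abstract refers to can be sketched in a few lines: train a model, compute each training example's loss, and flag the highest-loss examples as mislabel candidates. The following is a minimal, hypothetical illustration using plain logistic regression on synthetic data with deliberately flipped labels; all names and parameters here are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data with a linear decision rule, plus 10 flipped labels.
n = 200
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
flipped = rng.choice(n, size=10, replace=False)
y[flipped] = 1.0 - y[flipped]

# Plain logistic regression trained by full-batch gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * np.mean(p - y)

# Per-example cross-entropy loss; mislabeled points tend to score high.
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

# Flag the 10 highest-loss examples as mislabel candidates.
suspects = set(np.argsort(losses)[-10:])
recovered = len(suspects & set(flipped))
```

On this toy setup, most of the flipped labels land in the top-10 loss ranking, which mirrors the abstract's finding that the simple loss-based criterion is a strong baseline; points flipped very close to the decision boundary are the ones it tends to miss.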

