Demystifying Deep Neural Networks Through Interpretation: A Survey

12/13/2020
by Giang Dao, et al.

Modern deep learning algorithms tend to optimize a single objective metric, such as minimizing a cross-entropy loss on a training dataset, in order to learn. The problem is that a single metric is an incomplete description of the real-world task: it cannot explain why the algorithm learns what it learns. When an error occurs, this lack of interpretability makes it hard to understand and fix. Recently, a body of work has emerged to tackle the interpretability problem and provide insight into the behavior and decision process of neural networks. Such work is important for identifying potential bias and for ensuring algorithmic fairness as well as expected performance.
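To make the idea of interpretation concrete, here is a minimal sketch (illustrative only, not taken from the survey) of one of the simplest and most widely used interpretation techniques in this line of work: gradient-based saliency, which attributes a network's prediction back to its input features. The model, input, and dimensions below are toy placeholders.

```python
import torch
import torch.nn as nn

# Toy classifier; any differentiable model would do (placeholder, not from the survey).
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

# A single input example for which we want an explanation.
x = torch.randn(1, 10, requires_grad=True)

logits = model(x)
predicted = logits.argmax(dim=1).item()

# Backpropagate the predicted-class score to the input: large absolute
# gradients mark the features the prediction is most sensitive to.
score = logits[0, predicted]
score.backward()
saliency = x.grad.abs().squeeze()
print(saliency)
```

Features with large saliency values are those the prediction depends on most strongly, which offers one view of what the network has learned beyond the single training metric it was optimized for.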
