Generalization Error in Deep Learning

08/03/2018
by Daniel Jakubovitz et al.

Deep learning models have lately shown great performance in various fields such as computer vision, speech recognition, speech translation, and natural language processing. However, alongside their state-of-the-art performance, the source of their generalization ability remains generally unclear. An important question is therefore what makes deep neural networks able to generalize well from the training set to new data. In this article, we provide an overview of the existing theory and bounds for the characterization of the generalization error of deep neural networks, combining both classical and more recent theoretical and empirical results.
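To make the quantity under study concrete, below is a minimal sketch (not from the article; the synthetic dataset, scikit-learn model, and hyperparameters are illustrative assumptions) of how the generalization gap of a small network is estimated in practice: the difference between the error on held-out data, a proxy for the expected risk, and the error on the training set, the empirical risk.

# Minimal sketch (illustrative, not from the article): estimating the
# empirical generalization gap of a small neural network as the
# difference between held-out test error and training error.
# The synthetic dataset and model hyperparameters are assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                      random_state=0)
model.fit(X_train, y_train)

train_err = 1.0 - model.score(X_train, y_train)  # empirical (training) risk
test_err = 1.0 - model.score(X_test, y_test)     # estimate of expected risk
print(f"train error:        {train_err:.3f}")
print(f"test error:         {test_err:.3f}")
print(f"generalization gap: {test_err - train_err:.3f}")

The bounds surveyed in the article aim to control this gap in terms of properties of the network and the training procedure.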

Related research

06/15/2017: An Overview of Multi-Task Learning in Deep Neural Networks
Multi-task learning (MTL) has led to successes in many applications of m...

09/23/2020: Deep Neural Networks with Short Circuits for Improved Gradient Learning
Deep neural networks have achieved great success both in computer vision...

04/24/2018: An Information-Theoretic View for Deep Learning
Deep learning has transformed the computer vision, natural language proc...

03/07/2022: Generalization Through The Lens Of Leave-One-Out Error
Despite the tremendous empirical success of deep learning models to solv...

05/24/2022: DNNAbacus: Toward Accurate Computational Cost Prediction for Deep Neural Networks
Deep learning is attracting interest across a variety of domains, includ...

02/06/2022: Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data
The search for effective and robust generalization metrics has been the ...

01/17/2022: Using Machine Learning Based Models for Personality Recognition
Personality can be defined as the combination of behavior, emotion, moti...
