Understanding Generalization through Visualizations

06/07/2019
by W. Ronny Huang, et al.

The power of neural networks lies in their ability to generalize to unseen data, yet the underlying reasons for this phenomenon remain elusive. Numerous rigorous attempts have been made to explain generalization, but available bounds are still quite loose, and analysis does not always lead to true understanding. The goal of this work is to make generalization more intuitive. Using visualization methods, we discuss the mystery of generalization, the geometry of loss landscapes, and how the curse (or, rather, the blessing) of dimensionality causes optimizers to settle into minima that generalize well.
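The abstract centers on visualizing the geometry of loss landscapes. Below is a minimal sketch of one common way to produce such a picture: evaluating the loss on a 2D slice through a trained minimum, spanned by two random directions in parameter space. This is an illustrative example, not the authors' code; the names `model`, `criterion`, `x`, and `y` are assumed placeholders for a trained PyTorch model, a loss function, and a data batch.

```python
import torch
import matplotlib.pyplot as plt

def loss_surface_slice(model, criterion, x, y, steps=25, span=1.0):
    """Evaluate the loss on a 2D slice around the current (trained) weights."""
    # Snapshot the trained parameters and draw two random directions.
    base = [p.detach().clone() for p in model.parameters()]
    d1 = [torch.randn_like(p) for p in base]
    d2 = [torch.randn_like(p) for p in base]

    alphas = torch.linspace(-span, span, steps)
    betas = torch.linspace(-span, span, steps)
    surface = torch.zeros(steps, steps)

    with torch.no_grad():
        for i, a in enumerate(alphas):
            for j, b in enumerate(betas):
                # Perturb the weights along the two directions and record the loss.
                for p, p0, u, v in zip(model.parameters(), base, d1, d2):
                    p.copy_(p0 + a * u + b * v)
                surface[i, j] = criterion(model(x), y).item()
        # Restore the original trained weights.
        for p, p0 in zip(model.parameters(), base):
            p.copy_(p0)
    return alphas, betas, surface

# Usage sketch: contour plot of the slice around the trained minimum.
# alphas, betas, surface = loss_surface_slice(model, criterion, x, y)
# plt.contourf(alphas, betas, surface.T, levels=30)
# plt.xlabel("direction 1"); plt.ylabel("direction 2"); plt.show()
```

Wider, flatter basins in such plots are often associated with minima that generalize well, which is the kind of intuition the paper aims to convey visually.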


