A Note on Connectivity of Sublevel Sets in Deep Learning

01/21/2021
by Quynh Nguyen, et al.

It is shown that for deep neural networks, a single wide layer of width N+1 (where N is the number of training samples) suffices to guarantee that every sublevel set of the training loss function is connected. In the two-layer setting, this property can fail with even one neuron fewer: width N can already lead to disconnected sublevel sets.
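
To make the claim concrete, here is a minimal LaTeX sketch of the object involved. The notation below (the loss Phi, the stacked parameter vector theta, and the level c) is introduced here for illustration only; the precise assumptions on the architecture, activation, and loss are those of the paper and are not reproduced.

    % Sublevel set of the training loss \Phi at level c
    % (notation chosen for this note; the paper's exact hypotheses apply):
    \[
      \Omega_c \;=\; \{\, \theta \in \mathbb{R}^p \;:\; \Phi(\theta) \le c \,\}
    \]
    % Claim, informally: if some hidden layer has width at least N+1,
    % where N is the number of training samples, then \Omega_c is
    % connected for every level c. For a two-layer network of width N,
    % some sublevel set \Omega_c can already be disconnected.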


Related research

10/30/2017 · The loss surface and expressivity of deep convolutional neural networks
We analyze the expressiveness and loss surface of practical deep convolu...

06/07/2021 · Representation mitosis in wide neural networks
Deep neural networks (DNNs) defy the classical bias-variance trade-off: ...

01/22/2019 · On Connected Sublevel Sets in Deep Learning
We study sublevel sets of the loss function in training deep neural netw...

05/08/2023 · Short note: Revisiting Linear Width: Rethinking the Relationship Between Single Ideal and Linear Obstacle
Linear-width is a well-known and highly regarded graph parameter. The co...

05/01/2019 · A note on 'A fully parallel 3D thinning algorithm and its applications'
A 3D thinning algorithm erodes a 3D binary image layer by layer to extra...

08/23/2021 · Pulse-Width Modulation Neuron Implemented by Single Positive-Feedback Device
Positive-feedback (PF) device and its operation scheme to implement puls...

05/03/2023 · Structures of Neural Network Effective Theories
We develop a diagrammatic approach to effective field theories (EFTs) co...
