
Persistent Homology Captures the Generalization of Neural Networks Without A Validation Set
The training of neural networks is usually monitored with a validation (holdout) set to estimate the generalization of the model. This is done instead of measuring intrinsic properties of the model to determine whether it is learning appropriately. In this work, we suggest studying the training of neural networks with Algebraic Topology, specifically Persistent Homology (PH). Using simplicial complex representations of neural networks, we study the evolution of the PH diagram distance during the learning process across different architectures and several datasets. Results show that the PH diagram distance between consecutive neural network states correlates with the validation accuracy, implying that the generalization error of a neural network could be estimated intrinsically, without any holdout set.
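To make the idea concrete, here is a minimal, self-contained sketch of the kind of computation involved. It is not the authors' implementation: as simplifying assumptions, it restricts to 0-dimensional persistent homology (connected components, computed Kruskal-style with a union-find), builds the filtration from a single layer's weight matrix so that strong connections enter first, and compares consecutive diagrams with an L1 distance between sorted death times as a crude stand-in for a Wasserstein-type diagram distance. The function names (`zero_dim_persistence`, `weights_to_edges`, `diagram_distance`) are hypothetical.

```python
def zero_dim_persistence(edges, n_vertices):
    """0-dimensional persistence diagram of a weighted graph filtration.
    All vertices are born at filtration value 0; a connected component
    dies when it merges into another, at the filtration value of the
    connecting edge (Kruskal-style sweep with union-find)."""
    parent = list(range(n_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    deaths = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            deaths.append(w)  # one component dies at filtration value w
    return sorted(deaths)


def weights_to_edges(weight_matrix):
    """Hypothetical filtration from one layer's weights: edge (i, j)
    enters at 1 - |w_ij| / max|w|, so stronger connections appear
    earlier in the filtration. Inputs and outputs are separate vertices
    (a bipartite graph on the layer)."""
    m = max(abs(w) for row in weight_matrix for w in row)
    n_in = len(weight_matrix)
    edges = []
    for i, row in enumerate(weight_matrix):
        for j, w in enumerate(row):
            edges.append((1.0 - abs(w) / m, i, n_in + j))
    return edges


def diagram_distance(d1, d2):
    """Crude proxy for a Wasserstein-type distance between two 0-dim
    diagrams of the same size: L1 distance between sorted death times."""
    return sum(abs(a - b) for a, b in zip(sorted(d1), sorted(d2)))


# Two "consecutive" states of a toy 2x2 layer during training.
W_t = [[0.5, -0.2], [0.1, 0.9]]
W_t1 = [[0.6, -0.1], [0.2, 0.8]]

d_t = zero_dim_persistence(weights_to_edges(W_t), n_vertices=4)
d_t1 = zero_dim_persistence(weights_to_edges(W_t1), n_vertices=4)
print(diagram_distance(d_t, d_t1))  # distance between consecutive states
```

In the paper's setting one would track this distance between consecutive network states over training and compare its trajectory to the validation-accuracy curve; the sketch above only illustrates the diagram-and-distance machinery on a single layer.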