Applications of Koopman Mode Analysis to Neural Networks

06/21/2020
by Iva Manojlović, et al.

We consider the training process of a neural network as a dynamical system acting on the high-dimensional weight space. Each epoch is one application of the map induced by the optimization algorithm and the loss function. Observables defined on the weight space evolve under this map, and their evolution is governed by the Koopman operator associated with the induced dynamical system. We use the spectrum and modes of the Koopman operator to analyze the training process. Our methods can help, a priori, to determine the network depth; to detect a bad initialization of the network weights, allowing a restart before training for too long; and to speed up training. They also enable noise rejection and improve robustness. We show how the Koopman spectrum can be used to determine the number of layers required for the architecture. We further show how monitoring the spectrum distinguishes convergence from non-convergence of the training process; in particular, the existence of eigenvalues clustering around 1 determines when to terminate learning. Using the Koopman modes, we can selectively prune the network to speed up training. Finally, we show that loss functions based on negative Sobolev norms allow the reconstruction of a multi-scale signal polluted by very large amounts of noise.
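In practice, the Koopman spectrum of such a discrete-time system is usually estimated from data via Dynamic Mode Decomposition (DMD) applied to snapshots of the flattened network weights, one per epoch. The sketch below is a minimal exact-DMD estimator, not the paper's specific pipeline; the snapshot layout and rank tolerance are our assumptions. Eigenvalues well inside the unit circle correspond to decaying transients of training, while eigenvalues clustering near 1 flag weight components that have stopped changing, i.e. (near-)convergence.

```python
import numpy as np

def koopman_spectrum(snapshots):
    """Estimate Koopman (DMD) eigenvalues and modes from weight snapshots.

    snapshots: array of shape (d, T) whose columns are flattened network
    weight vectors recorded after successive epochs.
    Returns (eigvals, modes) of the best-fit linear map Y ~ A X.
    """
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    # Reduced-rank least-squares fit of A via the SVD of X.
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    r = int(np.sum(s > 1e-10 * s[0]))        # numerical rank (tolerance assumed)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    # Projection of A onto the leading left singular subspace.
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s
    eigvals, W = np.linalg.eig(A_tilde)
    # Exact DMD modes, lifted back to the full weight space.
    modes = Y @ Vh.conj().T / s @ W
    return eigvals, modes
```

A convergence monitor would then call this on a sliding window of recent epochs and stop once the dominant eigenvalues settle near 1.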
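The negative Sobolev norm loss mentioned above can be evaluated spectrally: the squared H^{-s} norm weights each Fourier coefficient of the residual by (1 + |k|^2)^{-s}, so broadband high-frequency noise contributes little while large-scale mismatch dominates. The following 1-D sketch illustrates the idea; the discretization, normalization, and default s are our assumptions, not taken from the paper.

```python
import numpy as np

def neg_sobolev_loss(residual, s=1.0):
    """Squared H^{-s} norm of a 1-D residual, computed via the FFT.

    High wavenumbers are damped by (1 + k^2)^(-s), so noise-dominated
    frequencies are down-weighted relative to the signal's coarse scales.
    """
    n = residual.shape[-1]
    r_hat = np.fft.rfft(residual)
    k = np.fft.rfftfreq(n, d=1.0 / n)        # integer wavenumbers 0..n//2
    weight = (1.0 + k**2) ** (-s)
    return np.sum(weight * np.abs(r_hat) ** 2) / n
```

Two residuals with the same L2 energy can thus have very different losses: a slowly varying mismatch is penalized far more heavily than high-frequency noise of equal amplitude.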
