Time Regularization in Optimal Time Variable Learning

06/28/2023
by Evelyn Herberg, et al.

Recently, optimal time variable learning in deep neural networks (DNNs) was introduced in arXiv:2204.08528. In this manuscript we extend the concept by introducing a regularization term that directly relates to the time horizon in discrete dynamical systems. Furthermore, we propose an adaptive pruning approach for Residual Neural Networks (ResNets) that reduces network complexity without compromising expressiveness, while simultaneously decreasing training time. The results are illustrated by applying the proposed concepts to classification tasks on the well-known MNIST and Fashion-MNIST data sets. Our PyTorch code is available at https://github.com/frederikkoehne/time_variable_learning.
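Since the full text is not reproduced here, the following is a minimal sketch of the setup the abstract refers to, assuming the formulation of arXiv:2204.08528: residual blocks act as forward-Euler steps x_{k+1} = x_k + tau_k * f(x_k, theta_k) of a discrete dynamical system with learnable step sizes (time variables) tau_k, and the proposed regularizer is taken here as a quadratic penalty tying the accumulated time sum_k tau_k to a target horizon T. The class name, the penalty form, and all hyperparameters below are illustrative assumptions, not the implementation from the linked repository.

```python
import torch
import torch.nn as nn

class TimeVariableResNet(nn.Module):
    """Residual network whose blocks are forward-Euler steps
    x_{k+1} = x_k + tau_k * f(x_k; theta_k) with learnable step sizes tau_k.
    Hypothetical sketch; see the linked repository for the authors' code."""

    def __init__(self, width: int, num_blocks: int, horizon: float = 1.0):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(width, width), nn.Tanh())
            for _ in range(num_blocks)
        )
        # Learnable time variables, initialized to a uniform grid on [0, horizon].
        self.tau = nn.Parameter(torch.full((num_blocks,), horizon / num_blocks))
        self.horizon = horizon

    def forward(self, x):
        for tau_k, block in zip(self.tau, self.blocks):
            x = x + tau_k * block(x)  # one explicit Euler step
        return x

    def time_regularizer(self):
        # Assumed penalty relating the learned step sizes to the time
        # horizon: (sum_k tau_k - T)^2.  The paper's exact term may differ.
        return (self.tau.sum() - self.horizon) ** 2
```

In training, the penalty would simply be added to the task loss, e.g. loss = criterion(model(x), y) + lam * model.time_regularizer() for some weight lam. Step sizes tau_k that shrink toward zero mark blocks that barely influence the dynamics, which suggests one plausible reading of the adaptive pruning approach: remove such blocks during training to reduce complexity and training time. The actual pruning criterion is specified in the manuscript, not here.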


