Correcting auto-differentiation in neural-ODE training

06/03/2023
by Yewei Xu, et al.

Does auto-differentiation yield reasonable updates to the deep neural networks that represent neural ODEs? Through mathematical analysis and numerical evidence, we find that when the underlying ODE flow is approximated with a high-order scheme, such as a Linear Multistep Method (LMM), brute-force gradient computation by auto-differentiation through the unrolled scheme often produces non-converging, artificial oscillations. For the Leapfrog method, we propose a straightforward post-processing technique that effectively eliminates these oscillations, rectifies the gradient computation, and thus yields parameter updates that respect the underlying flow.
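To make the setup concrete, the sketch below differentiates through an unrolled Leapfrog (two-step LMM) discretization of a toy neural ODE, i.e., the "brute-force" auto-differentiation the abstract refers to. The vector field, step size, horizon, and loss are hypothetical placeholders, and the snippet does not implement the paper's post-processing correction; it only illustrates the kind of computation being analyzed.

```python
import jax
import jax.numpy as jnp

def vector_field(theta, x):
    # Toy stand-in for the neural network defining dx/dt = f_theta(x) (hypothetical).
    return jnp.tanh(theta @ x)

def leapfrog_rollout(theta, x0, h, n_steps):
    # Start the two-step scheme with one forward-Euler step, then apply
    # the Leapfrog recursion x_{n+1} = x_{n-1} + 2h f_theta(x_n).
    x_prev, x_curr = x0, x0 + h * vector_field(theta, x0)
    for _ in range(n_steps - 1):
        x_next = x_prev + 2.0 * h * vector_field(theta, x_curr)
        x_prev, x_curr = x_curr, x_next
    return x_curr

def loss(theta, x0, target, h, n_steps):
    # Simple terminal-state mismatch loss (hypothetical choice).
    return jnp.sum((leapfrog_rollout(theta, x0, h, n_steps) - target) ** 2)

# Brute-force gradient: reverse-mode auto-differentiation through the unrolled scheme.
grad_theta = jax.grad(loss)(0.1 * jnp.eye(2), jnp.ones(2), jnp.zeros(2), 0.05, 100)
```

The gradient returned here is exactly the quantity whose oscillatory, non-converging behaviour the paper analyzes; the proposed fix post-processes this computation rather than changing the forward solver.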


