TO-FLOW: Efficient Continuous Normalizing Flows with Temporal Optimization adjoint with Moving Speed

03/19/2022
by   Shian Du, et al.

Continuous normalizing flows (CNFs) construct invertible mappings between an arbitrarily complex distribution and an isotropic Gaussian distribution using Neural Ordinary Differential Equations (neural ODEs). CNFs have not been tractable on large datasets due to the high cost of neural ODE training. Recent works have applied Optimal Transport theory to regularize the dynamics of the ODE and speed up training. In this paper, a temporal optimization is proposed that optimizes the evolution time of the forward propagation in neural ODE training. In this approach, we optimize the network weights of the CNF alternately with the evolution time by coordinate descent. A temporal regularization further ensures the stability of the evolution. This approach can be used in conjunction with the original regularization approach. We experimentally demonstrate that the proposed approach can significantly accelerate training without sacrificing performance relative to baseline models.
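The alternating scheme described above can be illustrated with a toy sketch. The code below is not the paper's implementation; it is a minimal, hypothetical example in which a linear one-dimensional flow dz/dt = -w*z is integrated with forward Euler up to an evolution time T, and coordinate descent alternates finite-difference gradient steps on the weight w (with T fixed) and on T (with w fixed). The loss drives the pushed-forward variance toward 1 (an isotropic Gaussian target) plus a temporal regularizer lam*T^2, standing in for the stabilizing regularization mentioned in the abstract; all names and constants here are illustrative assumptions.

```python
import numpy as np

def euler_flow(z0, w, T, steps=50):
    """Integrate dz/dt = -w*z from t=0 to t=T with forward Euler."""
    z = z0.copy()
    h = T / steps
    for _ in range(steps):
        z = z + h * (-w * z)
    return z

def loss(w, T, z0, lam=0.01):
    """Surrogate objective: push the transformed variance toward 1
    (isotropic Gaussian), plus a temporal regularizer lam*T^2 that
    keeps the evolution horizon short and stable."""
    zT = euler_flow(z0, w, T)
    return (zT.var() - 1.0) ** 2 + lam * T ** 2

def coordinate_descent(z0, w=0.5, T=1.0, iters=200, lr=0.1, eps=1e-4):
    """Alternately update the flow weight w and the evolution time T
    with finite-difference gradient steps (coordinate descent)."""
    for _ in range(iters):
        gw = (loss(w + eps, T, z0) - loss(w - eps, T, z0)) / (2 * eps)
        w = w - lr * gw                    # weight step, T held fixed
        gT = (loss(w, T + eps, z0) - loss(w, T - eps, z0)) / (2 * eps)
        T = max(T - lr * gT, 1e-3)         # time step, w held fixed; T > 0
    return w, T

rng = np.random.default_rng(0)
z0 = rng.normal(0.0, 2.0, size=5000)       # over-dispersed source samples
w, T = coordinate_descent(z0)
```

In this sketch the time variable T is optimized on equal footing with the weight w, so the flow can shorten its own integration horizon when a shorter evolution already normalizes the data, which is the intuition behind the speedup claimed above.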


Related research

- 06/18/2020  STEER: Simple Temporal Regularization For Neural ODEs
  "Training Neural Ordinary Differential Equations (ODEs) is often computat..."
- 05/29/2020  OT-Flow: Fast and Accurate Continuous Normalizing Flows via Optimal Transport
  "A normalizing flow is an invertible mapping between an arbitrary probabi..."
- 05/27/2020  Discretize-Optimize vs. Optimize-Discretize for Time-Series Regression and Continuous Normalizing Flows
  "We compare the discretize-optimize (Disc-Opt) and optimize-discretize (O..."
- 02/21/2020  Stochastic Normalizing Flows
  "We introduce stochastic normalizing flows, an extension of continuous no..."
- 02/07/2020  How to train your neural ODE
  "Training neural ODEs on large datasets has not been tractable due to the..."
- 06/24/2021  Sparse Flows: Pruning Continuous-depth Models
  "Continuous deep learning architectures enable learning of flexible proba..."
- 11/30/2022  Taming Hyperparameter Tuning in Continuous Normalizing Flows Using the JKO Scheme
  "A normalizing flow (NF) is a mapping that transforms a chosen probabilit..."
