
Universal Approximation Property of Neural Ordinary Differential Equations

by Takeshi Teshima, et al.

Neural ordinary differential equations (NODEs) are an invertible neural network architecture that is promising for its free-form Jacobian and the availability of a tractable Jacobian determinant estimator. Recently, the representation power of NODEs has been partly uncovered: under certain conditions, they form an L^p-universal approximator for continuous maps. However, L^p-universality may fail to guarantee approximation over the entire input domain, since it can hold even when the approximator differs greatly from the target function on a small region of the input space. To further uncover the potential of NODEs, we show a stronger approximation property, namely sup-universality, for approximating a large class of diffeomorphisms. The result is proved by leveraging a structure theorem of the diffeomorphism group, and it complements the existing literature by establishing a fairly large set of mappings that NODEs can approximate with the stronger guarantee.
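To make the invertibility claim concrete, here is a minimal numerical sketch (not the paper's construction): a NODE maps an input to the solution of an ODE at time 1, and integrating the same vector field backward in time recovers the input. The one-layer tanh vector field and its weights below are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(2, 2))  # hypothetical fixed "network" weights
b = rng.normal(scale=0.1, size=2)

def f(x, t):
    """Vector field dx/dt = f(x, t): a tiny one-layer tanh network."""
    return np.tanh(W @ x + b)

def integrate(x, t0, t1, steps=1000):
    """Fixed-step RK4 integration of the ODE from t0 to t1."""
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = f(x, t)
        k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = f(x + h * k3, t + h)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

x0 = np.array([1.0, -0.5])
y = integrate(x0, 0.0, 1.0)      # forward map: the NODE applied to x0
x_rec = integrate(y, 1.0, 0.0)   # backward map: the inverse of the NODE
print(np.max(np.abs(x_rec - x0)))  # round-trip error is tiny
```

Because the flow of a well-behaved ODE is a diffeomorphism, the forward and backward integrations compose to (numerically) the identity; this is the sense in which NODEs are invertible by construction.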
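The gap between L^p- and sup-universality mentioned above can be illustrated numerically: a spike supported on a tiny region has small L^2 norm but sup-norm 1, so two functions can be L^p-close while disagreeing badly somewhere. The grid and spike width below are arbitrary choices for the illustration.

```python
import numpy as np

# A function equal to 1 on [0, 1e-4) and 0 elsewhere on [0, 1].
xs = np.linspace(0.0, 1.0, 1_000_001)
eps = 1e-4
spike = np.where(xs < eps, 1.0, 0.0)

# L^2 norm over [0, 1]: since the domain has length 1, the mean of
# spike**2 approximates the integral; the norm is about sqrt(eps) = 0.01.
lp = np.sqrt(np.mean(spike ** 2))
sup = np.max(np.abs(spike))  # sup norm is exactly 1.0
print(lp, sup)
```

Shrinking eps drives the L^2 distance to zero while the sup distance stays 1, which is why a sup-norm guarantee is strictly stronger.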
