Approximation Capabilities of Neural Ordinary Differential Equations

by   Han Zhang, et al.
Virginia Commonwealth University

Neural Ordinary Differential Equations have recently been proposed as an infinite-depth generalization of residual networks. Neural ODEs provide out-of-the-box invertibility of the mapping realized by the neural network, and can lead to networks that are more efficient in terms of computational time and parameter space. Here, we show that a Neural ODE operating on a space with dimensionality increased by one compared to the input dimension is a universal approximator for the space of continuous functions, at the cost of losing invertibility. We then turn our focus to invertible mappings, and we prove that any homeomorphism on a p-dimensional Euclidean space can be approximated by a Neural ODE operating on a (2p+1)-dimensional Euclidean space.
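The dimension-augmentation construction in the abstract can be illustrated with a minimal sketch: lift an input x in R^p to R^(p+1) by appending a zero coordinate, integrate a learned vector field dz/dt = f(z) from t=0 to t=T, and project back. The vector field below is a random untrained one-hidden-layer network standing in for a trained model, and the fixed-step Euler integrator is a simplification; these are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

p = 2                      # input dimension
d = p + 1                  # augmented dimension, as in the paper's first result

# Toy vector field f(z): R^d -> R^d (one hidden layer, tanh); a trained
# model would replace these random weights.
W1 = rng.standard_normal((16, d))
b1 = rng.standard_normal(16)
W2 = rng.standard_normal((d, 16))

def f(z):
    return W2 @ np.tanh(W1 @ z + b1)

def neural_ode_flow(x, steps=100, T=1.0):
    """Augment x with one extra zero coordinate, then integrate
    dz/dt = f(z) from t=0 to t=T with forward Euler, and project
    the result back down to R^p."""
    z = np.concatenate([x, np.zeros(1)])   # lift R^p -> R^(p+1)
    h = T / steps
    for _ in range(steps):
        z = z + h * f(z)
    return z[:p]                           # projection discards the extra coordinate

y = neural_ode_flow(np.array([0.5, -1.0]))
```

Note that the final projection is exactly where invertibility is lost: the ODE flow itself is a homeomorphism on R^(p+1), but composing it with the lift and projection yields a map on R^p that need not be injective.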

