
Neural Networks with Cheap Differential Operators

12/08/2019
by Ricky T. Q. Chen, et al.
UNIVERSITY OF TORONTO

Gradients of neural networks can be computed efficiently for any architecture, but some applications require differential operators with higher time complexity. We describe a family of restricted neural network architectures that allow efficient computation of a family of differential operators involving dimension-wise derivatives, used in cases such as computing the divergence. Our proposed architecture has a Jacobian matrix composed of diagonal and hollow (non-diagonal) components. We can then modify the backward computation graph to extract dimension-wise derivatives efficiently with automatic differentiation. We demonstrate these cheap differential operators for solving root-finding subproblems in implicit ODE solvers, exact density evaluation for continuous normalizing flows, and evaluating the Fokker–Planck equation for training stochastic differential equation models.
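As a rough illustration of the idea in the abstract, the sketch below (PyTorch; class and function names are my own, not taken from the paper or its released code) builds a vector field whose Jacobian splits into a diagonal part, coming from the direct x_i input, and a hollow part, coming from a conditioner that is masked so it never sees x_i. Detaching the conditioner's output in the backward graph then yields all dimension-wise derivatives df_i/dx_i, and hence the divergence, in a single reverse-mode pass.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the diagonal/hollow Jacobian decomposition, not the
# authors' implementation. Each output f_i(x) is built from the direct input
# x_i plus a "hollow" context h_i that, by construction, does not depend on
# x_i. Detaching h during the backward pass then gives df_i/dx_i for all i in
# one reverse-mode sweep.
class DiagonalHollowSketch(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.dim = dim
        # Hollow conditioner: applied to x with the i-th coordinate zeroed out,
        # so dh_i/dx_i = 0. (The paper builds this more efficiently with shared
        # masked layers; the brute-force masking here is only for clarity.)
        self.conditioner = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, hidden)
        )
        # Transformer: combines x_i with its context h_i, elementwise per dim.
        self.transformer = nn.Sequential(
            nn.Linear(hidden + 1, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, x, detach_hollow=False):
        b, d = x.shape
        mask = 1.0 - torch.eye(d, device=x.device, dtype=x.dtype)  # zero diagonal
        h = self.conditioner(x.unsqueeze(1) * mask)                # (b, d, hidden)
        if detach_hollow:
            h = h.detach()  # cut the off-diagonal (hollow) gradient paths
        inp = torch.cat([x.unsqueeze(-1), h], dim=-1)              # (b, d, hidden+1)
        return self.transformer(inp).squeeze(-1)                   # (b, d)


def divergence(model, x):
    """Divergence sum_i df_i/dx_i via a single backward pass through the
    detached-hollow graph (exact here because the conditioner is hollow)."""
    x = x.clone().requires_grad_(True)
    f = model(x, detach_hollow=True)
    (grad,) = torch.autograd.grad(f.sum(), x, create_graph=True)
    return grad.sum(dim=1)


if __name__ == "__main__":
    torch.manual_seed(0)
    model, x = DiagonalHollowSketch(dim=5), torch.randn(3, 5)
    # Sanity check against the Jacobian trace computed the expensive way.
    jac = torch.autograd.functional.jacobian(lambda v: model(v), x)  # (3, 5, 3, 5)
    trace = torch.stack([jac[b, :, b, :].diagonal().sum() for b in range(x.shape[0])])
    print(torch.allclose(divergence(model, x), trace, atol=1e-5))
```

The check in `__main__` compares the one-pass divergence against the full Jacobian trace computed with d separate derivatives; the paper's construction shares masked layers so the hollow context does not incur the extra O(d) cost of the naive masking used above.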
