Approximation capabilities of measure-preserving neural networks

06/21/2021
by   Aiqing Zhu, et al.

Measure-preserving neural networks are well-developed invertible models; however, their approximation capabilities remain unexplored. This paper rigorously establishes general sufficient conditions for approximating measure-preserving maps using measure-preserving neural networks. It is shown that for compact U ⊂ ℝ^D with D ≥ 2, every measure-preserving map ψ: U → ℝ^D that is injective and bounded can be approximated in the L^p-norm by measure-preserving neural networks. In particular, differentiable maps whose Jacobians have determinant ±1 are measure-preserving, injective, and bounded on U, and thus possess this approximation property.
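The structural requirement behind these results is a unit-modulus Jacobian determinant, |det Dψ(x)| = 1 on U. As a concrete illustration of how such a constraint can be built into a network, the sketch below implements an additive coupling layer in NumPy: a standard volume-preserving, exactly invertible construction, not necessarily the specific architecture analyzed in the paper. The names shift_net, coupling_forward, and coupling_inverse are illustrative assumptions.

import numpy as np

# Minimal sketch (assumed, not the paper's exact model): an additive coupling
# layer. Splitting x = (x1, x2) and mapping (x1, x2) -> (x1, x2 + t(x1)) gives
# a block-triangular Jacobian with identity diagonal blocks, so det = 1 and the
# map is measure- (volume-) preserving and exactly invertible.

def shift_net(x1, W, b):
    """Hypothetical small network t(x1); any smooth map of x1 works here."""
    return np.tanh(x1 @ W + b)

def coupling_forward(x, W, b):
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    return np.concatenate([x1, x2 + shift_net(x1, W, b)], axis=-1)

def coupling_inverse(y, W, b):
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    return np.concatenate([y1, y2 - shift_net(y1, W, b)], axis=-1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = 4
    W = rng.standard_normal((D // 2, D // 2))
    b = rng.standard_normal(D // 2)
    x = rng.standard_normal((3, D))
    y = coupling_forward(x, W, b)
    assert np.allclose(coupling_inverse(y, W, b), x)  # exact invertibility

Stacking such layers while alternating which half of the coordinates is shifted yields an invertible, measure-preserving network of the general kind whose approximation power the paper studies.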


Related research

01/29/2019 · Approximation of functions by neural networks
We study the approximation of measurable functions on the hypercube by f...

04/04/2023 · Measure theoretic results for approximation by neural networks with limited weights
In this paper, we study approximation properties of single hidden layer ...

07/08/2020 · Approximation with Neural Networks in Variable Lebesgue Spaces
This paper concerns the universal approximation property with neural net...

12/29/2019 · On Parity-Preserving Constrained Coding
Necessary and sufficient conditions are presented for the existence of f...

04/29/2022 · VPNets: Volume-preserving neural networks for learning source-free dynamics
We propose volume-preserving networks (VPNets) for learning unknown sour...

08/18/2023 · On the Approximation of Bi-Lipschitz Maps by Invertible Neural Networks
Invertible neural networks (INNs) represent an important class of deep n...

01/26/2021 · Dualizing sup-preserving endomaps of a complete lattice
It is argued in (Eklund et al., 2018) that the quantale [L,L] of sup-pre...
