On approximating ∇ f with neural networks

10/28/2019
by Saeed Saremi, et al.

Consider a feedforward neural network ψ: R^d → R^d such that ψ ≈ ∇f, where f: R^d → R is a smooth function; ψ must therefore satisfy ∂_j ψ_i = ∂_i ψ_j pointwise. We prove a theorem that for any such network ψ, and for any depth L > 2, all the input weights must be parallel to each other. In other words, ψ can represent only one feature in its first hidden layer. The proof of the theorem is straightforward: two backward paths (from i to j and from j to i) and a weight-tying matrix (connecting the last and first hidden layers) play the key roles. We thus make a strong theoretical case for the implicit parametrization, in which the neural network is ϕ: R^d → R and ∇ϕ ≈ ∇f. Throughout, we revisit two recent unnormalized probabilistic models that are formulated as ψ ≈ ∇f, and we conclude by discussing denoising autoencoders.
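The symmetry constraint ∂_j ψ_i = ∂_i ψ_j is easy to see numerically: the Jacobian of a generic vector-valued network is not symmetric, whereas the Jacobian of ∇ϕ for a scalar network ϕ is its Hessian and is therefore symmetric by construction. The sketch below illustrates this with small randomly initialized one-hidden-layer networks and finite-difference Jacobians; the specific architectures and weights are illustrative assumptions, not the networks studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 4, 8
W1 = rng.standard_normal((h, d))  # input weights
W2 = rng.standard_normal((d, h))  # output weights for the direct network
v = rng.standard_normal(h)        # output weights for the scalar network

def psi(x):
    # Direct parametrization psi: R^d -> R^d, intended as psi ~ grad f.
    return W2 @ np.tanh(W1 @ x)

def grad_phi(x):
    # Implicit parametrization: gradient of the scalar network
    # phi(x) = v . tanh(W1 x), computed analytically.
    return W1.T @ (v * (1.0 / np.cosh(W1 @ x)) ** 2)

def jacobian(f, x, eps=1e-6):
    # Central finite-difference Jacobian of f at x.
    J = np.zeros((d, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = eps
        J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

def asymmetry(J):
    return np.max(np.abs(J - J.T))

x = rng.standard_normal(d)
print(asymmetry(jacobian(psi, x)))       # generically far from zero
print(asymmetry(jacobian(grad_phi, x)))  # ~0: the Hessian of phi is symmetric
```

A generic ψ violates the symmetry condition everywhere, while ∇ϕ satisfies it automatically, which is the sense in which the implicit parametrization sidesteps the theorem's restriction on input weights.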


