Deep Function Machines: Generalized Neural Networks for Topological Layer Expression

12/14/2016
by William H. Guss, et al.

In this paper we propose a generalization of deep neural networks called deep function machines (DFMs). DFMs act on vector spaces of arbitrary (possibly infinite) dimension, and we show that a family of DFMs is invariant to the dimension of the input data; that is, the model's parameterization does not directly hinge on the resolution of the input (e.g., high-resolution images). Using this generalization we provide a new theory of universal approximation of bounded nonlinear operators between function spaces. We then suggest that DFMs provide an expressive framework for designing new neural network layer types with topological considerations in mind. Finally, we introduce a novel architecture, RippLeNet, for resolution-invariant computer vision, which empirically achieves state-of-the-art invariance.
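To make the dimension-invariance claim concrete, here is a minimal NumPy sketch of an operator-style layer whose weights parameterize a continuous kernel k(x, y) rather than a finite matrix, so the same parameter vector can act on an input function sampled at any resolution. The names make_kernel and operator_layer, the cosine basis, and the trapezoid-rule discretization are illustrative assumptions, not the paper's actual DFM construction or the RippLeNet architecture.

```python
import numpy as np

def make_kernel(coeffs):
    """Continuous low-rank kernel k(x, y) = sum_i c_i cos(pi*i*x) cos(pi*i*y).

    The coefficient vector `coeffs` is the layer's entire parameterization;
    its size does not depend on how finely the input function is sampled.
    """
    def k(x, y):
        i = np.arange(len(coeffs))
        phi = np.cos(np.pi * np.outer(x, i))   # (|x|, n_basis)
        psi = np.cos(np.pi * np.outer(y, i))   # (|y|, n_basis)
        return (phi * coeffs) @ psi.T          # (|x|, |y|)
    return k

def operator_layer(kernel, x_out, y_in, f_in, sigma=np.tanh):
    """g(x) = sigma( integral_0^1 k(x, y) f(y) dy ), discretized by the trapezoid rule."""
    K = kernel(x_out, y_in)                    # (|x_out|, |y_in|)
    dy = y_in[1] - y_in[0]                     # assumes a uniform grid on [0, 1]
    w = np.full(y_in.shape, dy)
    w[0] = w[-1] = dy / 2                      # trapezoid endpoint weights
    return sigma(K @ (w * f_in))

# The same six parameters process the same function at two resolutions.
rng = np.random.default_rng(0)
k = make_kernel(rng.normal(size=6))
f = lambda y: np.sin(2 * np.pi * y)
x_out = np.linspace(0.0, 1.0, 5)
for n in (50, 500):                            # coarse vs. fine sampling of f
    y = np.linspace(0.0, 1.0, n)
    print(n, operator_layer(k, x_out, y, f(y)))
```

Running the sketch, the outputs for 50 and 500 input samples agree up to quadrature error despite the tenfold resolution change, which is the sense in which the abstract's layer parameterization does not hinge on input resolution.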


