Intelligent Matrix Exponentiation

08/10/2020
by Thomas Fischbacher, et al.

We present a novel machine learning architecture that uses the exponential of a single input-dependent matrix as its only nonlinearity. The mathematical simplicity of this architecture allows a detailed analysis of its behaviour, providing robustness guarantees via Lipschitz bounds. Despite its simplicity, a single matrix exponential layer already provides universal approximation properties and can learn fundamental functions of the input, such as periodic functions or multivariate polynomials. This architecture outperforms other general-purpose architectures on benchmark problems, including CIFAR-10, using substantially fewer parameters.
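The core idea can be sketched in a few lines. The following is an illustrative toy implementation, not the authors' code: the layer name, parameter shapes, and the linear readout are assumptions. The input vector is mapped linearly to a square matrix, the matrix exponential is applied as the sole nonlinearity, and the result is read out linearly.

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

def matrix_exp_layer(x, A, b, w):
    """Single matrix-exponential layer (illustrative sketch).

    x : (d,)       input vector
    A : (n, n, d)  learned linear map from input to an n x n matrix
    b : (n, n)     learned bias matrix
    w : (n*n, k)   learned linear readout

    The matrix exponential of the input-dependent matrix M(x)
    is the only nonlinearity in the layer.
    """
    M = np.tensordot(A, x, axes=([2], [0])) + b  # input-dependent matrix M(x)
    E = expm(M)                                  # the sole nonlinearity
    return E.reshape(-1) @ w                     # linear readout

# Tiny example: d=3 inputs, 4x4 internal matrix, k=2 outputs.
rng = np.random.default_rng(0)
d, n, k = 3, 4, 2
A = rng.normal(scale=0.1, size=(n, n, d))
b = rng.normal(scale=0.1, size=(n, n))
w = rng.normal(scale=0.1, size=(n * n, k))
y = matrix_exp_layer(rng.normal(size=(d,)), A, b, w)
print(y.shape)  # (2,)
```

This also hints at why a single such layer can learn periodic functions: for a skew-symmetric 2x2 matrix, expm([[0, -t], [t, 0]]) is the rotation matrix with entries cos(t) and sin(t), so sinusoids of the input are directly representable.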

Related research

- On Sharpness of Error Bounds for Multivariate Neural Network Approximation (04/05/2020): Sharpness of error bounds for best non-linear multivariate approximation...
- Universality and Limitations of Prompt Tuning (05/30/2023): Despite the demonstrated empirical efficacy of prompt tuning to adapt a...
- Skew Orthogonal Convolutions (05/24/2021): Training convolutional neural networks with a Lipschitz constraint under...
- Almost-Orthogonal Layers for Efficient General-Purpose Lipschitz Networks (08/05/2022): It is a highly desirable property for deep networks to be robust against...
- SURF: A Simple, Universal, Robust, Fast Distribution Learning Algorithm (02/22/2020): Sample- and computationally-efficient distribution estimation is a funda...
- Simplicity: A New Language for Blockchains (11/08/2017): Simplicity is a typed, combinator-based, functional language without loo...
