Transformers are Deep Infinite-Dimensional Non-Mercer Binary Kernel Machines

06/02/2021
by Matthew A. Wright, et al.

Despite their ubiquity in core AI fields like natural language processing, the mechanics of deep attention-based neural networks like the Transformer model are not fully understood. In this article, we present a new perspective on how Transformers work. In particular, we show that the "dot-product attention" at the core of the Transformer's operation can be characterized as a kernel learning method on a pair of Banach spaces, and that the Transformer's kernel has an infinite feature dimension. Along the way, we consider an extension of the standard kernel learning problem to a binary setting, where data come from two input domains and a response is defined for every cross-domain pair. We prove a new representer theorem for these binary kernel machines with non-Mercer (indefinite, asymmetric) kernels, implying that the functions learned are elements of reproducing kernel Banach spaces rather than Hilbert spaces, and we also prove a new universal approximation theorem showing that the Transformer calculation can learn any binary non-Mercer reproducing kernel Banach space pair. We experiment with new kernels in Transformers and obtain results suggesting that the infinite dimensionality of the standard Transformer kernel is partially responsible for its performance. These results provide a new theoretical understanding of an important but poorly understood model in modern machine learning.
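As a concrete illustration of the abstract's central claim, the minimal NumPy sketch below (array names and shapes are our own, single-head attention without masking or scaling tricks) writes dot-product attention as a binary kernel estimator: row-normalizing the exponentiated dot product exp(q·k/√d_k) over keys reproduces softmax(QKᵀ/√d_k)V exactly. The kernel is asymmetric, since queries and keys come from two different input domains, and its exponential form admits an infinite-dimensional feature expansion via the Taylor series of exp.

```python
import numpy as np

def exp_dot_product_kernel(Q, K, d_k):
    """kappa(q, k) = exp(q . k / sqrt(d_k)).

    Queries and keys live in two different input domains, so this "kernel"
    is asymmetric/indefinite (non-Mercer); the exponential gives it an
    infinite-dimensional feature expansion.
    """
    return np.exp(Q @ K.T / np.sqrt(d_k))

def attention_as_kernel_estimator(Q, K, V):
    """Single-head dot-product attention as a kernel-weighted sum:
    output_i = sum_j kappa(q_i, k_j) v_j / sum_j kappa(q_i, k_j),
    which equals softmax(Q K^T / sqrt(d_k)) V.
    """
    d_k = Q.shape[-1]
    kappa = exp_dot_product_kernel(Q, K, d_k)             # (n_queries, n_keys)
    weights = kappa / kappa.sum(axis=-1, keepdims=True)   # row-normalize = softmax
    return weights @ V                                    # (n_queries, d_v)

# Illustrative usage with random data
rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 8))   # 5 queries, dimension 8
K = rng.normal(size=(7, 8))   # 7 keys
V = rng.normal(size=(7, 8))   # 7 values
print(attention_as_kernel_estimator(Q, K, V).shape)  # (5, 8)
```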


Related research

11/09/2022 - Duality for Neural Networks through Reproducing Kernel Banach Spaces
Reproducing Kernel Hilbert spaces (RKHS) have been a very successful too...

05/30/2023 - Approximation and Estimation Ability of Transformers for Sequence-to-Sequence Functions with Infinite Dimensional Input
Despite the great success of Transformer networks in various application...

06/05/2023 - Global universal approximation of functional input maps on weighted spaces
We introduce so-called functional input neural networks defined on a pos...

10/15/2021 - On Learning the Transformer Kernel
In this work we introduce KERNELIZED TRANSFORMER, a generic, scalable, d...

01/15/2010 - Kernel machines with two layers and multiple kernel learning
In this paper, the framework of kernel machines with two layers is intro...

08/30/2019 - Transformer Dissection: An Unified Understanding for Transformer's Attention via the Lens of Kernel
Transformer is a powerful architecture that achieves superior performanc...

06/22/2020 - Understanding Recurrent Neural Networks Using Nonequilibrium Response Theory
Recurrent neural networks (RNNs) are brain-inspired models widely used i...
