Hybrid Random Features

10/08/2021
by Krzysztof Choromanski, et al.

We propose a new class of random feature methods for linearizing softmax and Gaussian kernels, called hybrid random features (HRFs), that automatically adapt the quality of kernel estimation to provide the most accurate approximation in the defined regions of interest. Special instantiations of HRFs lead to well-known methods such as trigonometric random features (Rahimi and Recht, 2007) or positive random features (recently introduced in the context of linear-attention Transformers by Choromanski et al., 2021). By generalizing Bochner's Theorem for softmax/Gaussian kernels and leveraging random features for compositional kernels, the HRF mechanism provides strong theoretical guarantees: unbiased approximation and strictly smaller worst-case relative errors than its counterparts. We conduct an exhaustive empirical evaluation of HRFs, ranging from pointwise kernel estimation experiments, through tests on data admitting clustering structure, to benchmarking implicit-attention Transformers (also for downstream Robotics applications), demonstrating their quality across a wide spectrum of machine learning problems.
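To make the two special cases mentioned in the abstract concrete, the sketch below estimates a Gaussian kernel value with both trigonometric random features (Rahimi and Recht, 2007) and positive random features (Choromanski et al., 2021). This is an illustrative sketch of the two baseline mechanisms that HRFs generalize, not the HRF method itself; the dimensions, scaling, and variable names are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 4, 10000  # input dimension, number of random features

# Two small-norm inputs; the exact Gaussian kernel K(x, y) = exp(-||x - y||^2 / 2)
x = 0.3 * rng.normal(size=d)
y = 0.3 * rng.normal(size=d)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2)

# Shared Gaussian projections: omega_i ~ N(0, I_d)
W = rng.normal(size=(m, d))

# Trigonometric random features (Rahimi & Recht):
# phi(z) = sqrt(2/m) * cos(W z + b), with b ~ Uniform[0, 2*pi)
b = rng.uniform(0.0, 2.0 * np.pi, size=m)
def phi_trig(z):
    return np.sqrt(2.0 / m) * np.cos(W @ z + b)
trig_est = phi_trig(x) @ phi_trig(y)

# Positive random features (Choromanski et al.), adapted to the Gaussian kernel:
# psi(z) = exp(-||z||^2) / sqrt(m) * exp(W z), so that
# E[psi(x) . psi(y)] = exp(-||x||^2 - ||y||^2) * exp(||x + y||^2 / 2) = K(x, y)
def psi_pos(z):
    return np.exp(-np.dot(z, z)) / np.sqrt(m) * np.exp(W @ z)
pos_est = psi_pos(x) @ psi_pos(y)

print(exact, trig_est, pos_est)  # all three should be close for m this large
```

Both estimators are unbiased; HRFs interpolate between such mechanisms so that the estimator's variance is small in the region of interest (e.g. near-zero kernel values for positive features in attention).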


Related research

FAVOR#: Sharp Attention Kernel Approximations via New Classes of Positive Random Features (02/01/2023)
The problem of efficient approximation of a linear operator induced by t...

Chefs' Random Tables: Non-Trigonometric Random Features (05/30/2022)
We introduce chefs' random tables (CRTs), a new class of non-trigonometr...

Simplex Random Features (01/31/2023)
We present Simplex Random Features (SimRFs), a new random feature (RF) m...

Taming graph kernels with random features (04/29/2023)
We introduce in this paper the mechanism of graph random features (GRFs)...

Rethinking Attention with Performers (09/30/2020)
We introduce Performers, Transformer architectures which can estimate re...

Graph Kernel Attention Transformers (07/16/2021)
We introduce a new class of graph neural networks (GNNs), by combining s...

Unlocking Pixels for Reinforcement Learning via Implicit Attention (02/08/2021)
There has recently been significant interest in training reinforcement l...
