Not-So-Random Features

10/27/2017
by Brian Bullins, et al.

We propose a principled method for kernel learning, which relies on a Fourier-analytic characterization of translation-invariant or rotation-invariant kernels. Our method produces a sequence of feature maps, iteratively refining the SVM margin. We provide rigorous guarantees for optimality and generalization, interpreting our algorithm as online equilibrium-finding dynamics in a certain two-player min-max game. Evaluations on synthetic and real-world datasets demonstrate scalability and consistent improvements over related random-features-based methods.
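
To make the setup concrete, here is a minimal Python sketch of the random Fourier features baseline the abstract positions itself against, followed by a crude margin-guided refinement pass. This is an illustration under our own assumptions, not the paper's algorithm: the toy dataset, the function names, and the greedy frequency-pruning heuristic are all ours, standing in for the paper's online equilibrium-finding dynamics over the kernel's spectral distribution. It assumes NumPy and scikit-learn.

# Illustrative sketch (not the paper's exact algorithm): random Fourier
# features for a translation-invariant kernel via Bochner's theorem, then
# a crude refinement step that keeps the frequencies the trained SVM
# weights most heavily. Assumes NumPy and scikit-learn are available.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def random_fourier_features(X, n_features=500, bandwidth=1.0):
    """Map X to cos/sin features whose inner products approximate the
    Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    d = X.shape[1]
    # Bochner's theorem: the Gaussian kernel's spectral density is Gaussian,
    # so we sample frequencies W from N(0, I / bandwidth^2).
    W = rng.normal(scale=1.0 / bandwidth, size=(d, n_features))
    Z = X @ W
    return np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(n_features), W

# Toy data: two noisy concentric rings (not linearly separable in R^2).
n = 400
theta = rng.uniform(0, 2 * np.pi, n)
r = np.where(rng.random(n) < 0.5, 1.0, 2.0)
X = np.c_[r * np.cos(theta), r * np.sin(theta)] + 0.1 * rng.normal(size=(n, 2))
y = (r > 1.5).astype(int)

Phi, W = random_fourier_features(X, n_features=500)
clf = LinearSVC(C=1.0, max_iter=5000).fit(Phi, y)
print("plain RFF accuracy:", clf.score(Phi, y))

# Crude "refinement": keep the half of the frequencies whose cos/sin pair
# carries the largest SVM weight, then retrain on that reduced map. The
# paper instead *optimizes* the spectral distribution against the margin.
w = clf.coef_.ravel()
scores = np.abs(w[:500]) + np.abs(w[500:])  # one score per frequency
keep = np.argsort(scores)[-250:]
Phi2 = np.hstack([np.cos(X @ W[:, keep]), np.sin(X @ W[:, keep])]) / np.sqrt(250)
clf2 = LinearSVC(C=1.0, max_iter=5000).fit(Phi2, y)
print("refined-feature accuracy:", clf2.score(Phi2, y))

In the paper's min-max view, one player fits a maximum-margin classifier for the current feature map while the other player adapts the spectral distribution of the features adversarially; the pruning step above is only the simplest caricature of that second player.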

