Quantized Fourier and Polynomial Features for more Expressive Tensor Network Models

09/11/2023
by Frederiek Wesel, et al.

In the context of kernel machines, polynomial and Fourier features are commonly used to provide a nonlinear extension to linear models by mapping the data to a higher-dimensional space. Unless one considers the dual formulation of the learning problem, which renders exact large-scale learning infeasible, the tensor-product structure of these features causes the number of model parameters to grow exponentially with the dimensionality of the data, prohibiting the treatment of high-dimensional problems. One possible approach to circumvent this exponential scaling is to exploit the tensor structure present in the features by constraining the model weights to be an underparametrized tensor network. In this paper we quantize, i.e. further tensorize, polynomial and Fourier features. Based on this feature quantization we propose to quantize the associated model weights, yielding quantized models. We show that, for the same number of model parameters, the resulting quantized models have a higher upper bound on the VC-dimension than their non-quantized counterparts, at no additional computational cost and while learning from identical features. We verify experimentally how this additional tensorization regularizes the learning problem by prioritizing the most salient features in the data, and how it provides models with increased generalization capabilities. We finally benchmark our approach on large-scale regression tasks, achieving state-of-the-art results on a laptop computer.
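To illustrate what quantizing the features means, here is a minimal NumPy sketch (not the authors' code; the function names, the rank r and the sizes D and q are illustrative assumptions). It relies on the fact that a Fourier feature exp(2*pi*i*k*x) factors over the binary digits of k, so the length-2^q feature vector per input dimension splits into q Kronecker factors of length 2, and the weight tensor can then be stored as a tensor train over the resulting D*q binary modes:

```python
import numpy as np
from functools import reduce

def quantized_fourier_factors(x_d, q):
    """Length-2 factors whose Kronecker product is the length-2**q Fourier
    feature map phi(x_d)_k = exp(2j*pi*k*x_d), k = 0, ..., 2**q - 1.
    Works because exp(2j*pi*k*x) = prod_j exp(2j*pi*b_j*2**j*x) where
    k = sum_j b_j*2**j (most significant bit first below)."""
    return [np.array([1.0, np.exp(2j * np.pi * (2 ** j) * x_d)])
            for j in reversed(range(q))]

def full_feature_map(x, q):
    """Explicit tensor-product features: (2**q)**D entries, exponential in D."""
    factors = [f for x_d in x for f in quantized_fourier_factors(x_d, q)]
    return reduce(np.kron, factors)

def tt_model_output(cores, x, q):
    """f(x) = <W, Phi(x)> with W a tensor train of D*q cores of mode size 2;
    evaluation never materializes the exponentially large Phi(x)."""
    factors = [f for x_d in x for f in quantized_fourier_factors(x_d, q)]
    env = np.ones((1, 1), dtype=complex)
    for core, v in zip(cores, factors):
        env = env @ np.einsum('rms,m->rs', core, v)  # contract mode with factor
    return env[0, 0]

def tt_to_full(cores):
    """Contract a tensor train back into a dense weight vector (check only)."""
    W = cores[0]
    for core in cores[1:]:
        W = np.einsum('lmr,rns->lmns', W, core).reshape(1, -1, core.shape[-1])
    return W.reshape(-1)

# Consistency check: the tensor network model agrees with the dense model.
rng = np.random.default_rng(0)
D, q, r = 3, 2, 4  # 3 inputs, 2**q = 4 Fourier features each, TT-rank 4
shapes = [(1 if i == 0 else r, 2, 1 if i == D * q - 1 else r)
          for i in range(D * q)]
cores = [rng.standard_normal(s) + 0j for s in shapes]
x = rng.random(D)
assert np.allclose(tt_model_output(cores, x, q),
                   full_feature_map(x, q) @ tt_to_full(cores))
```

Without quantization the same model would use D tensor-train cores of mode size 2^q; quantization replaces them with D*q cores of mode size 2, so at a comparable parameter budget the network has more cores and, per the paper's analysis, a higher upper bound on its VC-dimension.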


Related research

Large-Scale Learning with Fourier Features and Tensor Decompositions (09/03/2021)
Random Fourier features provide a way to tackle large-scale machine lear...

Lower and Upper Bounds on the VC-Dimension of Tensor Network Models (06/22/2021)
Tensor network methods have been a key ingredient of advances in condens...

QFF: Quantized Fourier Features for Neural Field Representations (12/02/2022)
Multilayer perceptrons (MLPs) learn high frequencies slowly. Recent appr...

Sigma-Delta and Distributed Noise-Shaping Quantization Methods for Random Fourier Features (06/04/2021)
We propose the use of low bit-depth Sigma-Delta and distributed noise-sh...

Breaking the waves: asymmetric random periodic features for low-bitrate kernel machines (04/14/2020)
Many signal processing and machine learning applications are built from ...

Quantized tensor FEM for multiscale problems: diffusion problems in two and three dimensions (06/02/2020)
Homogenization in terms of multiscale limits transforms a multiscale pro...

Global Second-order Pooling Neural Networks (11/29/2018)
Deep Convolutional Networks (ConvNets) are fundamental to, besides large...
