Scalable Nonlinear Learning with Adaptive Polynomial Expansions

10/02/2014
by Alekh Agarwal, et al.

Can we effectively learn a nonlinear representation in time comparable to linear learning? We describe a new algorithm that explicitly and adaptively expands higher-order interaction features over base linear representations. The algorithm is designed for extreme computational efficiency, and an extensive experimental study shows that its computation/prediction tradeoff compares very favorably against strong baselines.
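The abstract's core idea, adaptively adding higher-order interaction features on top of a base linear representation, can be illustrated with a minimal sketch. This is not the paper's algorithm; the functions `train_linear` and `adaptive_expansion`, the choice of weight magnitude as the importance signal, and all parameters are illustrative assumptions.

```python
import numpy as np

def train_linear(X, y, lr=0.1, epochs=500):
    """Least-squares linear fit by plain gradient descent (illustrative)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def adaptive_expansion(X, y, rounds=1, top_k=2):
    """Sketch of adaptive interaction expansion: each round, fit a linear
    model, select the top_k features by |weight| (an assumed importance
    heuristic), and append their pairwise products as new features."""
    for _ in range(rounds):
        w = train_linear(X, y)
        idx = np.argsort(-np.abs(w))[:top_k]
        # pairwise products (including squares) of the selected features
        prods = [X[:, i] * X[:, j] for a, i in enumerate(idx) for j in idx[a:]]
        X = np.hstack([X] + [p[:, None] for p in prods])
    return X, train_linear(X, y)
```

On a target with a true interaction term, e.g. `y = x0 + x1 + x0*x1`, the base linear fit leaves the interaction unexplained, while the expanded representation can capture it; only products of the currently most useful features are added, keeping the feature set far smaller than a full polynomial expansion.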

