Hyper-sparse optimal aggregation

12/08/2009
by Stéphane Gaïffas, et al.

In this paper, we consider the problem of "hyper-sparse aggregation". Namely, given a dictionary F = {f_1, ..., f_M} of functions, we look for an optimal aggregation procedure that writes f̃ = ∑_{j=1}^M θ_j f_j with as many zero coefficients θ_j as possible. This problem is of particular interest when F contains many irrelevant functions that should not appear in f̃. We provide an exact oracle inequality for an aggregate f̃ with only two non-zero coefficients, which shows that f̃ is an optimal aggregation procedure. Since selectors (aggregates with a single non-zero coefficient) are suboptimal aggregation procedures, this proves that 2 is the minimal number of elements of F required to construct an optimal aggregation procedure in every situation. We illustrate the algorithm on a simulated example, with a dictionary obtained using LARS, for the problem of selecting the regularization parameter of the LASSO. We also give an example of the use of aggregation to achieve minimax adaptation over anisotropic Besov spaces, a result previously unknown in minimax theory (for regression with random design).
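To make the LARS/LASSO example concrete, here is a minimal sketch in Python with scikit-learn. The dictionary F is taken to be the LASSO estimators at the knots of the LARS path fitted on one half of the sample, and a hyper-sparse aggregate is formed on the other half as a convex combination of the two best knots. The split-and-grid scheme below is an assumption for illustration only, not the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)

# Toy regression data: only 3 of 20 features are relevant.
n, d = 200, 20
X = rng.standard_normal((n, d))
beta = np.zeros(d)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + 0.5 * rng.standard_normal(n)

# Split: first half builds the dictionary, second half aggregates.
X1, y1 = X[:n // 2], y[:n // 2]
X2, y2 = X[n // 2:], y[n // 2:]

# Dictionary F = {f_1, ..., f_M}: the LASSO estimators at each knot
# of the LARS path fitted on the first subsample.
_, _, coefs = lars_path(X1, y1, method="lasso")
preds = X2 @ coefs  # predictions of every f_j on the second subsample

# Empirical risk of each dictionary element on the hold-out subsample.
risks = np.mean((preds - y2[:, None]) ** 2, axis=0)

# Keep the empirical risk minimizer and its runner-up, then aggregate
# them with a convex weight chosen on a grid: the resulting aggregate
# has at most two non-zero coefficients theta_j ("hyper-sparse").
j1, j2 = np.argsort(risks)[:2]
thetas = np.linspace(0.0, 1.0, 101)
agg_risks = [
    np.mean((t * preds[:, j1] + (1 - t) * preds[:, j2] - y2) ** 2)
    for t in thetas
]
t_star = thetas[int(np.argmin(agg_risks))]

coef_agg = t_star * coefs[:, j1] + (1 - t_star) * coefs[:, j2]
print(f"selected knots: {j1}, {j2}, weight on best: {t_star:.2f}")
print(f"hold-out risk of aggregate: {min(agg_risks):.4f} "
      f"vs best selector: {risks[j1]:.4f}")
```

The point of the sketch is the contrast the abstract draws: a pure selector keeps a single element of F (one non-zero θ_j) and is suboptimal, while mixing just two elements already suffices for optimal aggregation.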
