Robust Kernel-based Distribution Regression

04/21/2021
by Zhan Yu, et al.

Regularization schemes for regression have been widely studied in learning theory and inverse problems. In this paper, we study distribution regression (DR), which involves two stages of sampling and aims at regressing from probability measures to real-valued responses over a reproducing kernel Hilbert space (RKHS). Recent theoretical analyses of DR have been carried out via kernel ridge regression, and several learning behaviors have been observed. However, the topic has not been explored or understood beyond least-squares-based DR. By introducing a robust loss function l_σ for two-stage sampling problems, we present a novel robust distribution regression (RDR) scheme. With an appropriately chosen windowing function V and scaling parameter σ, l_σ covers a wide range of commonly used loss functions, enriching the theme of DR. Moreover, l_σ is not necessarily convex, substantially extending the class of losses (beyond least squares) treated in the DR literature. Learning rates under different regularity ranges of the regression function f_ρ are comprehensively derived via integral operator techniques. The scaling parameter σ is shown to be crucial for providing both robustness and satisfactory learning rates in RDR.
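To make the windowing construction concrete, here is a minimal sketch assuming the standard robust-regression form l_σ(t) = σ² V(t²/σ²) for a residual t, which is the usual way such scaled losses are built in the robust learning literature; the specific choices of V and σ below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def robust_loss(t, sigma, V):
    # l_sigma(t) = sigma^2 * V(t^2 / sigma^2).
    # As sigma grows, this approaches the least-squares loss t^2
    # for any windowing function with V(0) = 0 and V'(0) = 1.
    return sigma**2 * V(t**2 / sigma**2)

# Two classic non-convex windowing functions satisfying V(0)=0, V'(0)=1:
def V_welsch(u):
    return 1.0 - np.exp(-u)   # Welsch: loss is bounded above by sigma^2

def V_cauchy(u):
    return np.log1p(u)        # Cauchy: loss grows only logarithmically

# Small residual, large sigma: behaves like least squares (close to t**2).
print(robust_loss(0.1, sigma=100.0, V=V_welsch))

# Huge outlier residual, small sigma: its influence stays bounded by sigma**2.
print(robust_loss(1e6, sigma=1.0, V=V_welsch))
```

This illustrates why σ trades robustness against approximation of the least-squares loss: a large σ recovers the classical regression behavior on typical residuals, while a small σ caps (Welsch) or dampens (Cauchy) the contribution of outliers.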


Related research:

- Coefficient-based Regularized Distribution Regression (08/26/2022): In this paper, we consider the coefficient-based regularized distributio...
- Stochastic Gradient Descent Meets Distribution Regression (10/24/2020): Stochastic gradient descent (SGD) provides a simple and efficient way to...
- Estimates on Learning Rates for Multi-Penalty Distribution Regression (06/16/2020): This paper is concerned with functional learning by utilizing two-stage ...
- Kernel Truncated Randomized Ridge Regression: Optimal Rates and Low Noise Acceleration (05/25/2019): In this paper, we consider the nonparametric least square regression in ...
- Theoretical Analysis of Divide-and-Conquer ERM: Beyond Square Loss and RKHS (03/09/2020): Theoretical analysis of the divide-and-conquer based distributed learnin...
- A Framework of Learning Through Empirical Gain Maximization (09/29/2020): We develop in this paper a framework of empirical gain maximization (EGM...
- Learning Theory for Distribution Regression (11/08/2014): We focus on the distribution regression problem: regressing to vector-va...
