A Simple Algorithm for Semi-supervised Learning with Improved Generalization Error Bound

06/27/2012
by Ming Ji, et al.

In this work, we develop a simple algorithm for semi-supervised regression. The key idea is to use the top eigenfunctions of the integral operator derived from both labeled and unlabeled examples as the basis functions, and to learn the prediction function by a simple linear regression. We show that under appropriate assumptions about the integral operator, this approach achieves a regression error bound better than existing bounds for supervised learning. We also verify the effectiveness of the proposed algorithm through an empirical study.
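To make the idea concrete, here is a minimal sketch of this style of eigenfunction-basis regression: the empirical eigenfunctions of a kernel integral operator are estimated from the pooled labeled and unlabeled examples, and an ordinary least-squares fit is performed in that basis on the labeled examples only. The RBF kernel, the function name, and the parameters gamma and n_basis are illustrative assumptions, not necessarily the paper's exact construction.

```python
import numpy as np

def semi_supervised_eigenbasis_regression(X_l, y_l, X_u, X_test,
                                          n_basis=10, gamma=1.0):
    # Pool labeled and unlabeled inputs; both contribute to the
    # empirical estimate of the integral operator.
    X_all = np.vstack([X_l, X_u])
    n = X_all.shape[0]

    def rbf(A, B):
        # Illustrative kernel choice (Gaussian / RBF).
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)

    # Empirical integral operator: the kernel matrix over all examples,
    # scaled by 1/n.
    K = rbf(X_all, X_all) / n

    # Its top eigenvectors are the empirical eigenfunctions evaluated
    # at the pooled sample points.
    eigvals, eigvecs = np.linalg.eigh(K)
    top = np.argsort(eigvals)[::-1][:n_basis]
    lam, U = eigvals[top], eigvecs[:, top]

    # Basis features for the labeled examples (the first rows of X_all).
    Phi_l = U[: len(X_l)]

    # Simple linear regression in the eigenfunction basis, fit on the
    # labeled examples only.
    w, *_ = np.linalg.lstsq(Phi_l, y_l, rcond=None)

    # Nystrom-style extension of the eigenfunctions to new points.
    Phi_test = (rbf(X_test, X_all) / n) @ U / lam
    return Phi_test @ w
```

Under this reading, the unlabeled examples enter only through the kernel matrix used to estimate the eigenfunctions, while the regression weights are fit from the labeled examples alone; keeping n_basis small relative to the number of labeled points keeps the least-squares problem well posed even when labels are scarce.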


Related research

05/28/2016  Muffled Semi-Supervised Learning
We explore a novel approach to semi-supervised learning. This approach i...

02/04/2019  Generalization Bounds For Unsupervised and Semi-Supervised Learning With Autoencoders
Autoencoders are widely used for unsupervised learning and as a regulari...

02/14/2012  Active Semi-Supervised Learning using Submodular Functions
We consider active, semi-supervised learning in an offline transductive ...

07/02/2013  Semi-supervised Ranking Pursuit
We propose a novel sparse preference learning/ranking algorithm. Our alg...

03/07/2023  Manually Selecting The Data Function for Supervised Learning of small datasets
Supervised learning problems may become ill-posed when there is a lack o...

09/01/2020  Semi-Supervised Empirical Risk Minimization: When can unlabeled data improve prediction
We present a general methodology for using unlabeled data to design semi...

02/01/2016  Semi-supervised K-means++
Traditionally, practitioners initialize the k-means algorithm with cent...
