Explicit Learning Curves for Transduction and Application to Clustering and Compression Algorithms

06/30/2011
by P. Derbeko, et al.

Inductive learning is based on inferring a general rule from a finite data set and using it to label new data. In transduction, one instead uses a labeled training set to label a specific set of unlabeled points that is given to the learner before learning begins. Although transduction seems at the outset to be an easier task than induction, there have not been many provably useful transduction algorithms, and the precise relation between induction and transduction has not yet been determined. The main theoretical developments related to transduction were presented by Vapnik more than twenty years ago. One of Vapnik's basic results is a rather tight error bound for transductive classification based on an exact computation of the hypergeometric tail. While tight, this bound is given only implicitly, via a computational routine. Our first contribution is a somewhat looser but explicit characterization of a slightly extended PAC-Bayesian version of Vapnik's transductive bound. This characterization is obtained using concentration inequalities for the tails of sums of random variables sampled without replacement. We then derive error bounds for compression schemes, such as (transductive) support vector machines, and for transduction algorithms based on clustering. The key observation behind these new error bounds and algorithms is that the unlabeled test points, which in the transductive setting are known in advance, can be used to construct useful data-dependent prior distributions over the hypothesis space.
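The implicit character of Vapnik's bound is easy to make concrete. The sketch below is our illustration, not the paper's routine (the function name and example numbers are hypothetical): it inverts the hypergeometric tail for one fixed hypothesis. When the labeled set is a uniformly random size-m subset of the m + u given points, the number of the hypothesis's full-sample errors that fall into training is hypergeometric, and the largest full-sample error count not rejected at confidence 1 - delta yields a test-error bound.

```python
# Sketch of the implicit hypergeometric-tail bound, for one fixed hypothesis.
# Assumed setting (not verbatim from the paper): m labeled + u unlabeled points,
# with the labeled subset drawn uniformly at random from the full sample.
from scipy.stats import hypergeom

def implicit_transductive_bound(m, u, t, delta=0.05):
    """Bound the test error rate of a hypothesis with t training errors.

    If the hypothesis makes k errors on the full sample of N = m + u points,
    the number of those errors landing in the training set is hypergeometric.
    We return the test-error rate implied by the largest k whose lower tail
    at t is still >= delta, i.e. the worst count not ruled out at level delta.
    """
    N = m + u
    worst_k = t
    for k in range(t, N + 1):
        # P(at most t of the k full-sample errors fall in the training set)
        if hypergeom(M=N, n=k, N=m).cdf(t) >= delta:
            worst_k = k
        else:
            break  # this tail probability is non-increasing in k
    return (worst_k - t) / u

# Example: 5 errors on 100 labeled points, 400 unlabeled test points.
print(implicit_transductive_bound(m=100, u=400, t=5))
```

Making such a bound hold uniformly over a hypothesis class, which the paper does through PAC-Bayesian priors, is exactly what makes an explicit closed-form characterization valuable: the routine above must be re-run numerically for every parameter setting.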
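The closing observation, that the unlabeled points can seed a prior before any labels are consulted, can be sketched in the same spirit. The construction below is a hypothetical illustration of the idea, not the paper's algorithm: cluster all m + u points, restrict hypotheses to one binary label per cluster, and place a uniform prior over the 2^c resulting labelings. Since clustering never touches labels, the prior is fixed before the train/test split is revealed and may therefore appear in a PAC-Bayesian transductive bound.

```python
# Hypothetical data-dependent prior built by clustering; names are ours.
import numpy as np
from sklearn.cluster import KMeans

def cluster_prior_hypotheses(X_all, n_clusters=4, seed=0):
    """Cluster the full (labeled + unlabeled) sample, ignoring all labels.

    Returns the cluster assignment and log|H| for the induced hypothesis
    class H of all 2^c assignments of one binary label per cluster; a
    uniform prior over H contributes log|H| = c * log 2 to the bound.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X_all)
    return km.labels_, n_clusters * np.log(2.0)

# Example: the prior is built from all 500 points before any label is seen.
X_all = np.random.default_rng(0).normal(size=(500, 2))
labels, log_H = cluster_prior_hypotheses(X_all)
print(labels[:10], round(log_H, 3))
```

The design point is that the complexity term now scales with the number of clusters rather than with the raw hypothesis space, which is what makes the unlabeled points genuinely useful.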
