Learning Mixtures of Linear Classifiers

11/11/2013
by Yuekai Sun et al.

We consider a discriminative learning (regression) problem in which the regression function is a convex combination of k linear classifiers. Existing approaches are based on the EM algorithm or similar techniques, without provable guarantees. We develop a simple method based on spectral techniques and a 'mirroring' trick that discovers the subspace spanned by the classifiers' parameter vectors. Under a probabilistic assumption on the feature vector distribution, we prove that this approach has nearly optimal statistical efficiency.
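
For intuition, the sketch below illustrates the general spectral/method-of-moments idea behind such subspace recovery under Gaussian features. It is not the paper's algorithm: the 'mirroring' construction is not reproduced, the synthetic setup (uniform mixture of logistic classifiers with nonzero intercepts) and all names are assumptions for illustration, and the nonzero intercepts are used precisely because the naive second-moment signal cancels for a symmetric link with zero intercepts, which is one reason a trick like mirroring is needed in general.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration): k logistic classifiers in d dimensions,
# features x ~ N(0, I), and P(y = 1 | x) equal to the uniform convex combination
# of the k classifiers' probabilities.
d, k, n = 15, 3, 500_000
W = rng.normal(size=(k, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)       # true unit parameter vectors
b = np.ones(k)                                       # nonzero intercepts (see lead-in)

X = rng.normal(size=(n, d))
probs = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))         # each classifier's P(y=1|x)
y = (rng.random(n) < probs.mean(axis=1)).astype(float)

# Spectral step: for standard Gaussian features, E[(y - E[y]) x x^T] lies in the
# span of {w_i w_i^T} (a Stein's-lemma-style identity), so the k eigenvectors of
# its empirical version with largest |eigenvalue| estimate the subspace spanned
# by the parameter vectors.
M = (X * (y - y.mean())[:, None]).T @ X / n
eigvals, eigvecs = np.linalg.eigh(M)
U_hat = eigvecs[:, np.argsort(np.abs(eigvals))[-k:]]  # estimated subspace basis

# Sanity check: how much of each true w_i is missed by the estimated subspace?
residual = W - (W @ U_hat) @ U_hat.T
print("relative residual per classifier:",
      np.linalg.norm(residual, axis=1) / np.linalg.norm(W, axis=1))
```

The printed residuals should be well below what a random k-dimensional subspace would give, showing that the estimated subspace captures most of each parameter vector; the sample size is deliberately large because the signal eigenvalues in this toy moment are small.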


Related research

Optimal Linear Combination of Classifiers (03/01/2021)
The question of whether to use one classifier or a combination of classi...

Spectral Experts for Estimating Mixtures of Linear Regressions (06/17/2013)
Discriminative latent-variable models are typically learned using EM or ...

Disturbance Grassmann Kernels for Subspace-Based Learning (02/10/2018)
In this paper, we focus on subspace-based learning problems, where data ...

A Nearly-Optimal Bound for Fast Regression with ℓ_∞ Guarantee (02/01/2023)
Given a matrix A ∈ ℝ^{n×d} and a vector b ∈ ℝ^n, we consider the regression p...

Statistical Guarantees for Fairness Aware Plug-In Algorithms (07/27/2021)
A plug-in algorithm to estimate Bayes Optimal Classifiers for fairness-a...

On the Statistical Efficiency of Optimal Kernel Sum Classifiers (01/25/2019)
We propose a novel combination of optimization tools with learning theor...

Learning Combinations of Sigmoids Through Gradient Estimation (08/22/2017)
We develop a new approach to learn the parameters of regression models w...
