Classification Under Misspecification: Halfspaces, Generalized Linear Models, and Connections to Evolvability

06/08/2020
by   Sitan Chen, et al.

In this paper we revisit some classic problems in classification under misspecification. In particular, we study the problem of learning halfspaces under Massart noise with rate η. In recent work, Diakonikolas, Gouleakis, and Tzamos resolved a long-standing open problem by giving the first efficient algorithm for learning to accuracy η + ϵ for any ϵ > 0. However, their algorithm outputs a complicated hypothesis that partitions space into poly(d, 1/ϵ) regions. Here we give a much simpler algorithm, and in the process resolve a number of outstanding open questions:

(1) We give the first proper learner for Massart halfspaces that achieves accuracy η + ϵ. We also give improved bounds on the sample complexity achievable by polynomial-time algorithms.

(2) Based on (1), we develop a black-box knowledge distillation procedure that converts an arbitrarily complex classifier into an equally good proper classifier.

(3) By leveraging a simple but overlooked connection to evolvability, we show that any statistical query (SQ) algorithm requires super-polynomially many queries to achieve accuracy OPT + ϵ.

Moreover, we study generalized linear models, where E[Y|X] = σ(⟨w*, X⟩) for any odd, monotone, and Lipschitz function σ. This family includes the halfspace models above as a special case, but is much richer and also contains other fundamental models such as logistic regression. We introduce a challenging new corruption model that generalizes Massart noise, and give a general algorithm for learning in this setting. Our algorithms are based on a small set of core recipes for learning to classify in the presence of misspecification. Finally, we study our algorithm for learning halfspaces under Massart noise empirically and find that it exhibits some appealing fairness properties.
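To make the setting concrete, here is a minimal runnable sketch in Python of the Massart noise model together with a simple surrogate-loss baseline. Everything below is an illustrative assumption of this sketch: the boundary-concentrated flipping adversary is just one valid Massart adversary, and the LeakyReLU surrogate learner is in the spirit of recent Massart-noise algorithms, not the paper's actual method.

```python
import numpy as np

def sample_massart_halfspace(n, d, w_star, eta, rng):
    # Labels are sign(<w*, x>), each flipped with a point-dependent
    # probability eta(x) <= eta (the Massart condition). Flipping only
    # near the boundary is one illustrative adversary, an assumption
    # of this sketch.
    X = rng.standard_normal((n, d))
    margin = X @ w_star
    y = np.sign(margin)
    near = np.abs(margin) < np.median(np.abs(margin))
    flip = near & (rng.random(n) < eta)
    y[flip] *= -1
    return X, y

def learn_halfspace_leaky_relu(X, y, lam, steps=2000, lr=0.1):
    # Projected gradient descent on the surrogate loss
    #   L(w) = mean_i LeakyReLU_lam(-y_i <w, x_i>),
    # where LeakyReLU_lam(z) = (1 - lam) * z for z >= 0 and lam * z
    # otherwise, keeping w on the unit sphere. A toy baseline, NOT the
    # paper's proper learner.
    n, d = X.shape
    w = np.ones(d) / np.sqrt(d)
    for _ in range(steps):
        z = -y * (X @ w)                          # surrogate argument per point
        slope = np.where(z >= 0, 1.0 - lam, lam)  # LeakyReLU derivative
        grad = ((slope * -y) @ X) / n             # gradient of the average loss
        w -= lr * grad
        w /= np.linalg.norm(w)                    # project back to the sphere
    return w

rng = np.random.default_rng(0)
d, eta = 20, 0.2
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)
X, y = sample_massart_halfspace(20000, d, w_star, eta, rng)
w_hat = learn_halfspace_leaky_relu(X, y, lam=eta)
print("disagreement with target:", np.mean(np.sign(X @ w_hat) != np.sign(X @ w_star)))
```

On Gaussian data like this, the surrogate's minimizer tends to align well with w*; the paper's contributions (proper learning at accuracy η + ϵ, black-box distillation, and the SQ lower bound) go well beyond this toy baseline.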
