Related research:
- The Power of Localization for Efficiently Learning Linear Separators with Noise
- Revisiting Perceptron: Efficient and Label-Optimal Learning of Halfspaces
- Online Active Learning: Label Complexity vs. Classification Errors
- Active Learning under Label Shift
- Target-Independent Active Learning via Distribution-Splitting
- Active and passive learning of linear separators under log-concave distributions
- Attribute-Efficient Learning of Halfspaces with Malicious Noise: Near-Optimal Label Complexity and Noise Tolerance
On the Power of Localized Perceptron for Label-Optimal Learning of Halfspaces with Adversarial Noise
We study online active learning of homogeneous halfspaces in ℝ^d with adversarial noise, where the overall probability of a noisy label is constrained to be at most ν. Our main contribution is a Perceptron-like online active learning algorithm that runs in polynomial time, and under the conditions that the marginal distribution is isotropic log-concave and ν = Ω(ϵ), where ϵ ∈ (0, 1) is the target error rate, our algorithm PAC learns the underlying halfspace with near-optimal label complexity of Õ(d · polylog(1/ϵ)) and sample complexity of Õ(d/ϵ). Prior to this work, existing online algorithms designed to tolerate adversarial noise were subject to label complexity polynomial in 1/ϵ, suboptimal noise tolerance, or restrictive marginal distributions. With the additional prior knowledge that the underlying halfspace is s-sparse, we obtain an attribute-efficient label complexity of Õ(s · polylog(d, 1/ϵ)) and sample complexity of Õ(s/ϵ · polylog(d)). As an immediate corollary, we show that under the agnostic model, where no assumption is made on the noise rate ν, our active learner achieves an error rate of O(OPT) + ϵ with the same running time and label and sample complexity, where OPT is the best error rate achievable by any homogeneous halfspace.
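To make the localization idea concrete, the sketch below shows a Perceptron-style active learner that queries labels only for streaming points whose margin to the current halfspace falls inside a narrow band, and performs mistake-driven updates on the queried points. This is a minimal illustration of the localization principle, not the paper's algorithm: the names (localized_perceptron, band_width) are hypothetical, and the fixed band width is a simplification of the shrinking active region used in the actual analysis.

import numpy as np

rng = np.random.default_rng(0)

def localized_perceptron(X, label_oracle, band_width, w0=None):
    """Margin-localized Perceptron sketch (hypothetical helper).

    Queries a label only when the instance falls inside a band of
    width `band_width` around the current decision boundary, then
    performs a normalized, mistake-driven Perceptron update.
    """
    d = X.shape[1]
    w = w0 if w0 is not None else rng.standard_normal(d)
    w = w / np.linalg.norm(w)
    n_queries = 0
    for x in X:
        # Localization: points far from the boundary are already
        # classified correctly with high probability, so skip them.
        if abs(w @ x) > band_width:
            continue
        y = label_oracle(x)            # the costly label query
        n_queries += 1
        if y * (w @ x) <= 0:           # update only on mistakes
            w = w + y * x
            w = w / np.linalg.norm(w)  # keep the halfspace homogeneous
    return w, n_queries

# Toy run: isotropic Gaussian marginal, a hidden halfspace u, and a small
# rate of flipped labels standing in for adversarial noise.
d, n = 20, 5000
u = rng.standard_normal(d)
u = u / np.linalg.norm(u)
X = rng.standard_normal((n, d))

def oracle(x, noise_rate=0.02):
    y = 1.0 if u @ x >= 0 else -1.0
    return -y if rng.random() < noise_rate else y

w, n_queries = localized_perceptron(X, oracle, band_width=0.3)
print(f"queried {n_queries} of {n} labels; angle to target: "
      f"{np.degrees(np.arccos(np.clip(w @ u, -1.0, 1.0))):.1f} degrees")

Restricting queries to the band is what separates label complexity from sample complexity here: the learner still consumes the full unlabeled stream, but only the informative instances near the boundary cost a label, which is how a label bound of Õ(d · polylog(1/ϵ)) can coexist with a sample bound of Õ(d/ϵ).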