Noise in Classification

10/10/2020
by Maria-Florina Balcan et al.

This chapter considers the computational and statistical aspects of learning linear thresholds in the presence of noise. When there is no noise, several algorithms exist that efficiently learn near-optimal linear thresholds from a small amount of data. However, even a small amount of adversarial noise makes this problem notoriously hard in the worst case. We discuss approaches for dealing with these negative results by exploiting natural assumptions on the data-generating process.
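The contrast the abstract draws can be seen in a small experiment (a sketch, not the chapter's algorithms): the classic perceptron efficiently learns a linear threshold when the data are separable, but flipping even a handful of labels near the decision boundary, as an adversary might, breaks separability and the guarantee with it. The target vector `w_star`, the margin cutoff, and the number of flipped labels below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable data with a margin: label = sign(w_star . x),
# discarding points too close to the true boundary.
w_star = np.array([1.0, -2.0])
X = rng.normal(size=(500, 2))
X = X[np.abs(X @ w_star) > 0.5]          # enforce a margin of 0.5
y = np.sign(X @ w_star)

def perceptron(X, y, epochs=50):
    """Classic perceptron: provably converges on separable data."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:       # mistake: update on this point
                w += yi * xi
    return w

# Noise-free case: the learned threshold fits the data perfectly.
w = perceptron(X, y)
acc_clean = np.mean(np.sign(X @ w) == y)

# Adversarial label noise: flip the labels of the points closest to
# the true boundary, destroying linear separability.
flip = np.argsort(np.abs(X @ w_star))[:25]
y_noisy = y.copy()
y_noisy[flip] = -y_noisy[flip]

w_noisy = perceptron(X, y_noisy)
acc_noisy = np.mean(np.sign(X @ w_noisy) == y_noisy)
print(acc_clean, acc_noisy)
```

On the clean data the perceptron's mistake bound guarantees perfect training accuracy; on the noisy copy no linear threshold fits all labels, so the same efficient algorithm can no longer succeed, which is the worst-case hardness phenomenon the chapter addresses.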


Related research

- 01/18/2023, "Near-Optimal Estimation of Linear Functionals with Log-Concave Observation Errors": This note addresses the question of optimally estimating a linear functi...
- 03/28/2023, "Worst case tractability of linear problems in the presence of noise: linear information": We study the worst case tractability of multivariate linear problems def...
- 07/30/2020, "Efficient Tensor Decomposition": This chapter studies the problem of decomposing a tensor into a sum of c...
- 07/19/2019, "Direct information transfer rate optimisation for SSVEP-based BCI": In this work, a classification method for SSVEP-based BCI is proposed. T...
- 09/21/2020, "Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing": Quantum machine learning models have the potential to offer speedups and...
- 07/01/2021, "Prediction of tone detection thresholds in interaurally delayed noise based on interaural phase difference fluctuations": Differences between the interaural phase of a noise and a target tone im...
- 03/30/2022, "Optimal Learning": This paper studies the problem of learning an unknown function f from gi...
