Noise in Classification

10/10/2020
by Maria-Florina Balcan, et al.

This chapter considers the computational and statistical aspects of learning linear thresholds in the presence of noise. When there is no noise, several algorithms exist that efficiently learn near-optimal linear thresholds using a small amount of data. However, even a small amount of adversarial noise makes this problem notoriously hard in the worst case. We discuss approaches for dealing with these negative results by exploiting natural assumptions on the data-generating process.
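To illustrate the noiseless setting the abstract refers to, the sketch below runs the classic Perceptron algorithm, one canonical efficient learner of linear thresholds when the data are linearly separable. This is an illustrative example, not a specific algorithm from the chapter; the data and helper names are made up for the demo.

```python
def dot(w, x):
    """Inner product of two equal-length vectors."""
    return sum(wi * xi for wi, xi in zip(w, x))

def perceptron(X, y, max_epochs=100):
    """Learn a linear threshold sign(w . x) by the Perceptron rule.

    Assumes labels y[i] in {-1, +1} and linearly separable data
    (the noiseless setting); with adversarial label noise this
    simple update rule no longer converges in general.
    """
    w = [0.0] * len(X[0])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * dot(w, xi) <= 0:          # misclassified (or on the boundary)
                w = [wj + yi * xj for wj, xj in zip(w, xi)]  # push w toward xi's side
                mistakes += 1
        if mistakes == 0:                     # all points classified correctly
            return w
    return w

# Toy separable data: label is the sign of x1 - x2.
X = [[2.0, 1.0], [1.0, 3.0], [3.0, 0.5], [0.5, 2.0]]
y = [1, -1, 1, -1]
w = perceptron(X, y)
print(all(yi * dot(w, xi) > 0 for xi, yi in zip(X, y)))  # True: w separates the data
```

Under separability with margin, the Perceptron convergence theorem bounds the number of updates, which is why the noiseless problem is easy; the hardness discussed in the chapter arises precisely when this separability assumption is corrupted by noise.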

