Near-Optimal Target Learning With Stochastic Binary Signals

02/14/2012
by Mithun Chakraborty, et al.

We study learning in a noisy bisection model: specifically, Bayesian algorithms to learn a target value V given access only to noisy realizations of whether V is less than or greater than a threshold theta. At step t = 0, 1, 2, ..., the learner sets a threshold theta_t and observes a noisy realization of sign(V - theta_t). After T steps, the goal is to output an estimate V-hat that is within an eta-tolerance of V. This problem has been studied predominantly in environments with a fixed error probability q < 1/2 for the noisy realization of sign(V - theta_t). In practice, however, q can approach 1/2, especially as theta_t -> V, and little is known about this regime. We give a pseudo-Bayesian algorithm that provably converges to V. When the true prior matches our algorithm's Gaussian prior, we show near-optimal expected performance. Our methods extend to the general multiple-threshold setting, where the observation noisily indicates which of k >= 2 regions V belongs to.
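To make the setting concrete, the sketch below simulates the noisy-sign oracle and runs a Gaussian-approximation bisection loop: the learner queries the current posterior mean and moment-matches the tilted posterior back to a Gaussian after each observation (an ADF-style update). This is an illustrative sketch, not the paper's pseudo-Bayesian algorithm; the noise model in noisy_sign (error probability rising to 1/2 as theta_t approaches V), the probit noise scale nu, and all function names are assumptions made for the example.

```python
import numpy as np
from scipy.stats import norm

def noisy_sign(V, theta, c=1.0, rng=None):
    """Noisy realization of sign(V - theta).

    Hypothetical noise model: error probability q = 1/2 - min(1/2, c*|V - theta|),
    so q -> 1/2 as theta -> V (the hard regime discussed in the abstract).
    """
    if rng is None:
        rng = np.random.default_rng()
    q = 0.5 - min(0.5, c * abs(V - theta))
    s = 1.0 if V >= theta else -1.0
    return -s if rng.random() < q else s

def gaussian_bisection(observe, T, mu0=0.0, sigma0=1.0, nu=0.5):
    """Maintain a Gaussian approximation N(mu, sigma^2) over V.

    Each step: set the threshold to the posterior mean, observe a noisy
    sign s, then moment-match the probit-tilted posterior (noise scale nu)
    back to a Gaussian -- an ADF-style sketch, not the paper's exact update.
    """
    mu, sigma = mu0, sigma0
    for _ in range(T):
        theta = mu                               # threshold = posterior mean
        s = observe(theta)                       # noisy +/-1 observation
        denom = np.sqrt(sigma**2 + nu**2)
        z = s * (mu - theta) / denom             # standardized margin
        lam = norm.pdf(z) / norm.cdf(z)          # inverse Mills ratio
        mu = mu + s * (sigma**2 / denom) * lam   # posterior-mean update
        var = sigma**2 * (1.0 - (sigma**2 / denom**2) * lam * (lam + z))
        sigma = np.sqrt(max(var, 1e-12))         # guard against collapse
    return mu, sigma                             # V-hat = mu after T steps

# Example: estimate V = 0.3 from T = 200 noisy threshold queries.
rng = np.random.default_rng(0)
V_hat, _ = gaussian_bisection(lambda th: noisy_sign(0.3, th, rng=rng), T=200)
print(V_hat)
```

Because each query is placed at the current posterior mean, the step size shrinks with the posterior variance, which mimics the behavior of a Bayesian bisection even though the moment-matched Gaussian is only an approximation to the true posterior.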
