Online Learning Using Only Peer Assessment

10/10/2019 ∙ by Yang Liu, et al.

This paper considers a variant of the classical online learning problem with expert predictions. What distinguishes our model, and makes it challenging, is that the learner receives no direct feedback on the loss each expert incurs at each time step t. We propose an approach that relies instead on peer assessment and identify conditions under which it succeeds. Our techniques revolve around a carefully designed peer score function s() that scores each expert's prediction against the peer consensus. We establish a sufficient condition, which we call peer calibration, under which standard online learning algorithms fed loss feedback computed by the carefully crafted s() have bounded regret with respect to the unrevealed ground-truth values. We then demonstrate how suitable s() functions can be derived for different assumptions and models.
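To make the setup concrete, here is a minimal sketch of the idea: a standard multiplicative-weights (Hedge) learner whose per-expert losses come not from the ground truth but from a peer score. The particular `peer_score` below (squared distance to the mean of the other experts' predictions) is a hypothetical stand-in for the paper's carefully designed s(), and the function and parameter names are our own, not the authors'.

```python
import numpy as np

def peer_score(predictions, i):
    """Hypothetical peer score: squared distance between expert i's
    prediction and the mean prediction of the other experts
    (a stand-in for the paper's peer consensus score s())."""
    others = np.delete(predictions, i)
    return (predictions[i] - others.mean()) ** 2

def hedge_with_peer_feedback(prediction_stream, eta=0.5):
    """Multiplicative-weights (Hedge) update driven by peer-assessed
    losses instead of ground-truth losses.  `prediction_stream` is an
    iterable of length-K prediction vectors, one per time step t."""
    weights = None
    for predictions in prediction_stream:
        predictions = np.asarray(predictions, dtype=float)
        if weights is None:
            weights = np.ones(len(predictions))
        # Losses are computed purely from peer assessment; no ground
        # truth is ever revealed to the learner.
        losses = np.array([peer_score(predictions, i)
                           for i in range(len(predictions))])
        weights *= np.exp(-eta * losses)
        weights /= weights.sum()  # normalize for numerical stability
    return weights

# Toy usage: expert 2 consistently disagrees with the peer consensus,
# so its weight shrinks relative to the others over the rounds.
stream = [[0.1, 0.2, 0.9], [0.0, 0.1, 1.0], [0.2, 0.15, 0.95]]
w = hedge_with_peer_feedback(stream)
```

The abstract's peer-calibration condition is, informally, what licenses this substitution: when s() is peer calibrated, running a standard no-regret algorithm on these surrogate losses still yields bounded regret against the hidden ground truth.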





