Non-monotone risk functions for learning

12/04/2020 ∙ by Matthew J. Holland, et al.

In this paper we consider generalized classes of potentially non-monotone risk functions for use as evaluation metrics in learning tasks. The resulting risks are in general non-convex and non-smooth, which makes both the computational and inferential sides of the learning problem difficult. For random losses belonging to any Banach space, we obtain sufficient conditions for the risk functions to be weakly convex, and to admit unbiased stochastic directional derivatives. We then use recent work on stochastic optimization of weakly convex functionals to obtain non-asymptotic guarantees of near-stationarity for Hilbert hypothesis classes, under assumptions that are weak enough to capture a wide variety of feedback distributions, including potentially heavy-tailed losses and gradients.
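As a rough illustration of the regularity notion invoked above (a sketch using the standard definitions from the weakly convex optimization literature; the paper's exact conditions and constants are not reproduced here): a function $f$ on a Hilbert space is $\rho$-weakly convex when $u \mapsto f(u) + \frac{\rho}{2}\|u\|^{2}$ is convex, and near-stationarity of a point $h$ is typically measured through the gradient of the Moreau envelope,
\[
  f_{\lambda}(h) = \min_{u}\left\{ f(u) + \frac{1}{2\lambda}\|u - h\|^{2} \right\},
  \qquad 0 < \lambda < \rho^{-1},
\]
with $h$ called $\varepsilon$-near-stationary when $\|\nabla f_{\lambda}(h)\| \le \varepsilon$. Non-asymptotic guarantees of the kind described in the abstract are usually stated as bounds on this quantity after a given number of stochastic updates.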
