Online Learning in Dynamically Changing Environments

01/31/2023
by Changlong Wu, et al.

We study the problem of online learning and online regret minimization when samples are drawn from a general unknown non-stationary process. We introduce the concept of a dynamically changing process with cost K, in which the conditional marginals of the process can vary arbitrarily, but the number of distinct conditional marginals is bounded by K over T rounds. For such processes we prove a tight (up to a √(log T) factor) bound O(√(KT·𝖵𝖢(ℋ)log T)) on the expected worst-case regret of any class ℋ of finite VC dimension under absolute loss (i.e., the expected misclassification loss). We then improve this bound for general mixable losses by establishing a tight (up to a log^3 T factor) regret bound O(K·𝖵𝖢(ℋ)log^3 T). We extend these results to general smooth adversary processes with unknown reference measure by showing a sublinear regret bound for one-dimensional threshold functions under a general bounded convex loss. Our results can be viewed as a first step towards regret analysis with non-stationary samples in the distribution-blind (universal) regime. They also bring a new viewpoint that shifts the study of the complexity of hypothesis classes to the study of the complexity of the processes generating the data.
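To make the setting concrete, the following sketch simulates online prediction with a finite expert class (one-dimensional thresholds, as in the paper's convex-loss extension) against a process whose marginal switches K times over T rounds. The learner shown is the standard exponential-weights (Hedge) algorithm, used here purely as an illustration of regret measurement; it is not the algorithm analyzed in the paper, and the grid of thresholds, switching schedule, and learning rate are illustrative assumptions.

```python
import math
import random

def hedge_regret(T=2000, K=4, n_experts=50, seed=0):
    """Illustrative simulation: exponential-weights (Hedge) over a grid of
    threshold experts on [0,1], facing a process whose conditional marginal
    is piecewise constant with K distinct segments over T rounds.
    Returns the learner's regret against the best fixed threshold in hindsight.
    This is NOT the paper's algorithm; it only illustrates the regret notion."""
    rng = random.Random(seed)
    thresholds = [i / (n_experts - 1) for i in range(n_experts)]
    eta = math.sqrt(8 * math.log(n_experts) / T)  # standard Hedge learning rate
    weights = [1.0] * n_experts
    learner_loss = 0.0
    expert_loss = [0.0] * n_experts
    segment = T // K
    for t in range(T):
        # Piecewise-constant marginal: the "true" threshold jumps each segment,
        # giving K distinct conditional marginals over the T rounds.
        true_thr = ((t // segment) % K + 0.5) / K
        x = rng.random()
        y = 1 if x >= true_thr else 0
        total = sum(weights)
        # Learner predicts the weighted average of expert predictions.
        pred = sum(w * (1 if x >= thr else 0)
                   for w, thr in zip(weights, thresholds)) / total
        learner_loss += abs(pred - y)  # absolute loss, convex in pred
        for i, thr in enumerate(thresholds):
            loss_i = abs((1 if x >= thr else 0) - y)
            expert_loss[i] += loss_i
            weights[i] *= math.exp(-eta * loss_i)
    return learner_loss - min(expert_loss)
```

With losses in [0,1], Hedge guarantees regret at most √(T ln N / 2) against the best fixed expert for any loss sequence, so the measured regret stays well below T even though the data-generating marginal changes K times.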

