Optimal Algorithms for Private Online Learning in a Stochastic Environment

02/16/2021
by Bingshan Hu, et al.

We consider two variants of private stochastic online learning. The first variant is differentially private stochastic bandits. Previously, Sajed and Sheffet (2019) devised the DP Successive Elimination (DP-SE) algorithm, which achieves the optimal problem-dependent regret bound O(∑_{1 ≤ j ≤ K: Δ_j > 0} (log T)/Δ_j + (K log T)/ϵ), where K is the number of arms, Δ_j is the mean reward gap of arm j, T is the time horizon, and ϵ is the required privacy parameter. However, like other elimination-style algorithms, DP-SE is not an anytime algorithm, and until now it was not known whether UCB-based algorithms could achieve this optimal regret bound. We present an anytime, UCB-based algorithm that attains the same optimal bound; our experiments show that it is competitive with DP-SE. The second variant is the full-information version of private stochastic online learning. Specifically, for decision-theoretic online learning with stochastic rewards, we present the first algorithm that achieves an O((log K)/Δ_min + (log K)/ϵ) regret bound, where Δ_min is the minimum mean reward gap. The key idea behind the strong theoretical guarantees in both settings is forgetfulness: decisions are based on a limited amount of recently obtained observations rather than on all observations collected from the very beginning.
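The abstract names two ingredients that the sketch below illustrates: privatizing empirical means with Laplace noise and making decisions from a bounded window of recent observations ("forgetfulness"). This is a minimal illustrative sketch in Python, not the paper's algorithm: the class name ForgetfulPrivateUCB, the fixed window size, the UCB bonus constant, and the per-release privacy accounting are all assumptions made for readability.

```python
import math
import random
from collections import deque


class ForgetfulPrivateUCB:
    """UCB-style index policy with Laplace-noised means over a recent window (illustrative only)."""

    def __init__(self, n_arms, epsilon, window=256):
        self.n_arms = n_arms
        self.epsilon = epsilon                                       # privacy parameter (assumed per-release)
        self.recent = [deque(maxlen=window) for _ in range(n_arms)]  # bounded memory = "forgetfulness"
        self.t = 0                                                   # global round counter

    def _laplace(self, scale):
        # Difference of two independent exponentials is Laplace(0, scale).
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def select_arm(self):
        self.t += 1
        # Pull each arm once before relying on confidence indices.
        for j in range(self.n_arms):
            if not self.recent[j]:
                return j
        best_arm, best_index = 0, -math.inf
        for j in range(self.n_arms):
            n_j = len(self.recent[j])
            mean = sum(self.recent[j]) / n_j
            # The mean of n_j rewards in [0, 1] has sensitivity 1/n_j, so Laplace
            # noise with scale 1/(n_j * epsilon) privatizes this single release.
            # (Accounting for repeated releases across rounds is omitted here.)
            noisy_mean = mean + self._laplace(1.0 / (n_j * self.epsilon))
            index = noisy_mean + math.sqrt(2.0 * math.log(self.t) / n_j)  # UCB exploration bonus
            if index > best_index:
                best_arm, best_index = j, index
        return best_arm

    def update(self, arm, reward):
        # Appending to a bounded deque drops the oldest observation,
        # so decisions depend only on recently obtained rewards.
        self.recent[arm].append(reward)


if __name__ == "__main__":
    # Toy run on two Bernoulli arms with means 0.9 and 0.5.
    bandit = ForgetfulPrivateUCB(n_arms=2, epsilon=1.0)
    true_means = [0.9, 0.5]
    for _ in range(5000):
        a = bandit.select_arm()
        bandit.update(a, 1.0 if random.random() < true_means[a] else 0.0)
    print([round(sum(d) / len(d), 3) for d in bandit.recent])
```

In this sketch the privacy noise decays like 1/(n_j ϵ), which gives an informal sense of why the privacy cost can enter the regret bound additively; the paper's actual mechanism and analysis differ from this simplified version.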


Related research:
- Differentially Private Stochastic Linear Bandits: (Almost) for Free (07/07/2022)
- Stochastic One-Sided Full-Information Bandit (06/20/2019)
- An Optimal Private Stochastic-MAB Algorithm Based on an Optimal Private Stopping Rule (05/22/2019)
- Fast Thompson Sampling Algorithm with Cumulative Oversampling: Application to Budgeted Influence Maximization (04/24/2020)
- Optimal Rates of (Locally) Differentially Private Heavy-tailed Multi-Armed Bandits (06/04/2021)
- Scale-invariant unconstrained online learning (08/23/2017)
- Differentially Private Online Submodular Maximization (10/24/2020)
