On Private Online Convex Optimization: Optimal Algorithms in ℓ_p-Geometry and High Dimensional Contextual Bandits

06/16/2022
by   Yuxuan Han, et al.

Differentially private (DP) stochastic convex optimization (SCO) is ubiquitous in trustworthy machine learning algorithm design. This paper studies the DP-SCO problem with streaming data that are sampled from a distribution and arrive sequentially. We also consider the continual release model, in which parameters related to private information are updated and released upon the arrival of each new data point; such methods are often known as online algorithms. Although numerous algorithms have been developed to achieve the optimal excess risks in different ℓ_p norm geometries, none of the existing ones can be adapted to the streaming and continual release setting. To address this challenge of online convex optimization with privacy protection, we propose a private variant of the online Frank-Wolfe algorithm that uses recursive gradients for variance reduction to update and reveal the parameters upon each datum. Combined with an adaptive differential privacy analysis, our online algorithm runs in linear time and achieves the optimal excess risk when 1<p≤2, as well as the state-of-the-art excess risk, matching the non-private lower bounds, when 2<p≤∞. Our algorithm can also be extended to the case p=1 to achieve nearly dimension-independent excess risk. While previous variance-reduction results for recursive gradients have theoretical guarantees only in the independent and identically distributed (i.i.d.) sample setting, we establish such a guarantee in a non-stationary setting. To demonstrate the virtues of our method, we design the first DP algorithm for high-dimensional generalized linear bandits with logarithmic regret. Comparative experiments with a variety of DP-SCO and DP-bandit algorithms exhibit the efficacy and utility of the proposed algorithms.
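The abstract describes the algorithmic template at a high level: a Frank-Wolfe (linear-minimization-oracle) update over an ℓ_p ball, driven by a recursively updated, variance-reduced gradient estimate that is privatized with noise, with one parameter release per incoming datum. The Python sketch below illustrates that template under assumed choices; the momentum weight, step size, and noise scale are placeholders for illustration only, not the paper's calibrated values or its adaptive privacy analysis.

```python
import numpy as np

def lp_lmo(g, p, radius=1.0):
    """Linear minimization oracle over an l_p ball:
    argmin_{||s||_p <= radius} <g, s> (closed form via the dual exponent q)."""
    if p == 1:
        # l_1 ball: pick the signed vertex opposite the largest-magnitude coordinate.
        s = np.zeros_like(g)
        i = np.argmax(np.abs(g))
        s[i] = -radius * np.sign(g[i])
        return s
    q = p / (p - 1.0)  # dual exponent, 1 < p < inf
    a = np.abs(g) ** (q - 1) * np.sign(g)
    return -radius * a / (np.linalg.norm(a, p) + 1e-12)

def private_online_frank_wolfe(data_stream, grad_fn, dim, p=2.0, radius=1.0,
                               sigma=1.0, eta=lambda t: 2.0 / (t + 2)):
    """Illustrative sketch of a private online Frank-Wolfe loop with a
    recursive (STORM/SPIDER-style) gradient estimator. Noise calibration,
    momentum schedule, and step sizes are hypothetical placeholders."""
    x = np.zeros(dim)       # iterate, kept inside the l_p ball
    d = np.zeros(dim)       # recursive variance-reduced gradient estimate
    x_prev = x.copy()
    for t, z in enumerate(data_stream, start=1):
        g_new = grad_fn(x, z)       # gradient on the new datum at current iterate
        g_old = grad_fn(x_prev, z)  # gradient on the same datum at previous iterate
        a_t = 1.0 / t               # hypothetical momentum weight
        # Recursive update: reuse d, correct it with the gradient difference.
        d = g_new + (1.0 - a_t) * (d - g_old)
        d_priv = d + sigma * np.random.randn(dim)  # Gaussian noise for privacy (sketch)
        s = lp_lmo(d_priv, p, radius)              # Frank-Wolfe direction
        x_prev = x.copy()
        x = (1.0 - eta(t)) * x + eta(t) * s        # convex-combination update
        yield x                                    # parameter released at this round
```

As a usage example, grad_fn(x, z) could return the per-example gradient of a generalized linear model loss; each yielded iterate is then the parameter released upon that datum, matching the continual release setting described above.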


