Shuffle Private Linear Contextual Bandits

02/11/2022
by   Sayak Ray Chowdhury, et al.

Differential privacy (DP) has recently been introduced to linear contextual bandits to formally address the privacy concerns of participating users in the associated personalized services (e.g., recommendations). Prior work largely focuses on two trust models of DP: the central model, where a central server is responsible for protecting users' sensitive data, and the (stronger) local model, where information needs to be protected directly on the user's side. However, there remains a fundamental gap in the utility achieved by learning algorithms under these two privacy models, e.g., Õ(√T) regret in the central model as compared to Õ(T^3/4) regret in the local model, if all users are unique within a learning horizon T. In this work, we aim to achieve a stronger model of trust than the central model, while suffering a smaller regret than the local model, by considering the recently popular shuffle model of privacy. We propose a general algorithmic framework for linear contextual bandits under the shuffle trust model, in which a trusted shuffler sits between the users and the central server and randomly permutes a batch of users' data before sending it to the server. We then instantiate this framework with two specific shuffle protocols: one relying on privacy amplification of local mechanisms, and another incorporating a protocol for summing vectors and matrices of bounded norms. We prove that both instantiations lead to regret guarantees that significantly improve on those of the local model, and can potentially be of the order Õ(T^3/5) if all users are unique. We also verify this regret behavior with simulations on synthetic data. Finally, under the practical scenario of non-unique users, we show that the regret of our shuffle private algorithm scales as Õ(T^2/3), which matches what the central model could achieve in this case.
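To make the shuffle trust model concrete, the following is a minimal Python sketch, not the paper's actual protocol: each user locally randomizes their sufficient statistics, a trusted shuffler uniformly permutes the batch of messages to hide their origins, and the server aggregates the anonymized batch into ridge-regression statistics for a LinUCB-style estimate. The Gaussian local noise, the noise scale, and all function names here are illustrative assumptions.

```python
# Hypothetical sketch of the shuffle model for linear contextual bandits
# (illustrative only; not the authors' exact mechanism or noise calibration).
import numpy as np

rng = np.random.default_rng(0)
d, batch_size, sigma = 5, 100, 0.5   # dimension, batch size, local noise scale (assumed)

def local_randomizer(x, r):
    """User side: privatize the sufficient statistics (x x^T, x r) with local noise."""
    noisy_cov = np.outer(x, x) + rng.normal(0, sigma, (d, d))
    noisy_xr = x * r + rng.normal(0, sigma, d)
    return noisy_cov, noisy_xr

def shuffler(messages):
    """Trusted shuffler: uniformly permute the batch, detaching messages from users."""
    return [messages[i] for i in rng.permutation(len(messages))]

def server_aggregate(shuffled, lam=1.0):
    """Server: sum the anonymized statistics and form a ridge-regression estimate."""
    V = lam * np.eye(d)          # regularized design matrix
    u = np.zeros(d)              # accumulated (noisy) x * r terms
    for cov, xr in shuffled:
        V += cov
        u += xr
    return np.linalg.solve(V, u) # theta_hat = V^{-1} u

# Simulate one batch of users with a fixed unknown parameter theta*.
theta_star = rng.normal(size=d)
batch = []
for _ in range(batch_size):
    x = rng.normal(size=d)                        # user's context/feature vector
    r = x @ theta_star + rng.normal(0, 0.1)       # observed reward
    batch.append(local_randomizer(x, r))

theta_hat = server_aggregate(shuffler(batch))
print("estimation error:", np.linalg.norm(theta_hat - theta_star))
```

Because the server only sees a permuted batch of locally randomized messages, each user's contribution is harder to single out than in the central model, while the aggregated statistics remain accurate enough to drive a bandit update; this is the intuition behind the privacy amplification instantiation described above.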


