Mitigating Bias in Adaptive Data Gathering via Differential Privacy

06/06/2018
by Seth Neel, et al.

Data that is gathered adaptively, for example via bandit algorithms, exhibits bias. This is true both when gathering simple numeric-valued data (the empirical means tracked by stochastic bandit algorithms are biased downwards) and when gathering more complicated data (running hypothesis tests on complex data gathered via contextual bandit algorithms leads to false discovery). In this paper, we show that this problem is mitigated if the data collection procedure is differentially private. This lets us both bound the bias of simple numeric-valued quantities (such as the empirical means of stochastic bandit algorithms) and correct the p-values of hypothesis tests run on the adaptively gathered data. Moreover, there exist differentially private bandit algorithms with near-optimal regret bounds: we apply existing theorems in the simple stochastic case and give a new analysis for linear contextual bandits. We complement our theoretical results with experiments validating our theory.
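
To make the bias phenomenon concrete, below is a minimal Python simulation; it is not the paper's algorithm or its experiments, and all constants are illustrative. Two arms share the same true mean, a UCB-style rule adaptively picks which arm to pull, and the recorded empirical means come out biased downwards. Adding Laplace noise to the index before each selection, a crude stand-in for a differentially private selection rule (a real private bandit would use a tree-based mechanism with proper privacy composition), shrinks the bias.

    # Minimal sketch: bias of empirical means under adaptive (UCB-style) sampling,
    # with and without noise injected into the selection rule.
    import numpy as np

    rng = np.random.default_rng(0)
    TRUE_MEAN, ROUNDS, TRIALS = 0.5, 200, 2000   # illustrative constants


    def mean_bias(noise_scale):
        """Average (empirical mean - true mean), over both arms and all trials."""
        biases = []
        for _ in range(TRIALS):
            counts = np.ones(2)                        # one forced pull per arm
            sums = rng.normal(TRUE_MEAN, 1.0, size=2)  # rewards from those pulls
            for t in range(2, ROUNDS):
                noise = rng.laplace(0.0, noise_scale, size=2) if noise_scale > 0 else 0.0
                ucb = sums / counts + noise + np.sqrt(2.0 * np.log(t) / counts)
                arm = int(np.argmax(ucb))              # adaptive, data-dependent choice
                sums[arm] += rng.normal(TRUE_MEAN, 1.0)
                counts[arm] += 1
            biases.append(float((sums / counts).mean() - TRUE_MEAN))
        return float(np.mean(biases))


    print("bias with non-private selection:", mean_bias(0.0))  # noticeably negative
    print("bias with noisy selection:      ", mean_bias(1.0))  # much closer to zero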

Related research

09/28/2018  Differentially Private Contextual Linear Bandits
We study the contextual linear bandit problem, a version of the standard...

07/07/2022  Differentially Private Stochastic Linear Bandits: (Almost) for Free
In this paper, we propose differentially private algorithms for the prob...

10/22/2020  Differentially-Private Federated Linear Bandits
The rapid proliferation of decentralized learning systems mandates the n...

11/27/2015  Algorithms for Differentially Private Multi-Armed Bandits
We present differentially private algorithms for the stochastic Multi-Ar...

04/23/2023  Robust and differentially private stochastic linear bandits
In this paper, we study the stochastic linear bandit problem under the a...

08/30/2022  Dynamic Global Sensitivity for Differentially Private Contextual Bandits
Bandit algorithms have become a reference solution for interactive recom...

10/25/2020  Tractable contextual bandits beyond realizability
Tractable contextual bandit algorithms often rely on the realizability a...
