DeepAI
Majority Vote for Distributed Differentially Private Sign Selection

09/08/2022
by Weidong Liu, et al.

Privacy-preserving data analysis has become prevalent in recent years. In this paper, we propose a distributed, group differentially private majority vote mechanism for the sign selection problem in a distributed setup. To achieve this, we apply iterative peeling to the stability function and use the exponential mechanism to recover the signs. As applications, we study private sign selection for mean estimation and linear regression problems in distributed systems. Our method recovers the support and signs at the optimal signal-to-noise ratio of the non-private scenario, improving on contemporary work on private variable selection. Moreover, sign selection consistency is established with theoretical guarantees. Simulation studies demonstrate the effectiveness of the proposed method.
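To illustrate the core idea, here is a minimal sketch of a differentially private majority vote for one coordinate's sign, using the exponential mechanism as the abstract describes. All names and parameters are illustrative assumptions, not the paper's actual implementation: the utility of a candidate sign is taken to be the number of machines voting for it (sensitivity 1, since one machine changing its vote shifts a count by at most 1), and the mechanism samples a sign with probability proportional to exp(epsilon * utility / 2).

```python
import numpy as np

def private_sign_vote(local_signs, epsilon, rng=None):
    """Privately select a sign for one coordinate from distributed votes.

    Hypothetical sketch: `local_signs` holds each machine's local vote in
    {-1, 0, +1}. The utility of candidate sign s is the vote count for s;
    the exponential mechanism samples s with probability proportional to
    exp(epsilon * count(s) / 2), which is epsilon-DP because a single
    machine changing its vote moves each count by at most 1.
    """
    rng = rng or np.random.default_rng()
    candidates = np.array([-1, 0, 1])
    counts = np.array([(local_signs == s).sum() for s in candidates])
    # Exponential-mechanism weights; subtract the max count for
    # numerical stability before exponentiating.
    weights = np.exp(epsilon * (counts - counts.max()) / 2.0)
    probs = weights / weights.sum()
    return int(rng.choice(candidates, p=probs))
```

With a clear majority across machines and a moderate privacy budget, the sampled sign matches the majority vote with high probability, which is the intuition behind the sign selection consistency claimed above.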

Related research:

Permute-and-Flip: A new mechanism for differentially private selection (10/23/2020)

Exponential Randomized Response: Boosting Utility in Differentially Private Selection (01/11/2022)

Private Selection from Private Candidates (11/19/2018)

Private Quantiles Estimation in the Presence of Atoms (02/15/2022)

Differentially private partition selection (06/05/2020)

Differentially Private Multi-Party Data Release for Linear Regression (06/16/2022)

Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees (02/25/2020)