Related papers:

- Distributed Differential Privacy via Mixnets. We consider the problem of designing scalable, robust protocols for comp...
- FLAME: Differentially Private Federated Learning in the Shuffle Model. Differentially private federated learning has been intensively studied. ...
- Local Differential Privacy for Deep Learning. Deep learning (DL) is a promising area of machine learning which is beco...
- How to Democratise and Protect AI: Fair and Differentially Private Decentralised Deep Learning. This paper firstly considers the research problem of fairness in collabo...
- On the Round Complexity of the Shuffle Model. The shuffle model of differential privacy was proposed as a viable model...
- Private Selection from Private Candidates. Differentially Private algorithms often need to select the best amongst ...
- Benchmarking Differentially Private Residual Networks for Medical Imagery. Hospitals and other medical institutions often have vast amounts of medi...
Towards Differentially Private Text Representations
Most deep learning frameworks require users to pool their local data or model updates on a trusted server in order to train or maintain a global model. The assumption of a trusted server that has access to user information is ill-suited to many applications. To tackle this problem, we develop a new deep learning framework for the untrusted-server setting, which comprises three modules: (1) an embedding module, (2) a randomization module, and (3) a classifier module. For the randomization module, we propose a novel local differentially private (LDP) protocol that reduces the impact of the privacy parameter ϵ on accuracy and offers greater flexibility in choosing the randomization probabilities for LDP. Analysis and experiments show that our framework delivers performance comparable to, or even better than, the non-private framework and existing LDP protocols, demonstrating the advantages of our LDP protocol.
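The abstract does not spell out the randomization mechanism itself, but a standard building block for randomization modules of this kind is randomized response applied to a binarized representation. The following is a minimal sketch of that generic idea, not the paper's actual protocol; the function name randomize_embedding, the sign-based binarization, and the probability choice p = e^ϵ / (1 + e^ϵ) are all illustrative assumptions.

# Sketch of an LDP randomization step for text embeddings.
# NOT the paper's protocol (the abstract does not specify it):
# a generic randomized-response mechanism over binarized coordinates.
import numpy as np

def randomize_embedding(embedding: np.ndarray, epsilon: float,
                        rng: np.random.Generator) -> np.ndarray:
    """Perturb a binarized embedding with randomized response.

    Each coordinate is binarized by sign, then kept with probability
    p = e^eps / (1 + e^eps) and flipped otherwise, which satisfies
    eps-LDP per coordinate. Releasing all d coordinates costs d * eps
    under basic composition.
    """
    bits = (embedding >= 0).astype(np.int8)          # binarize by sign
    p = np.exp(epsilon) / (1.0 + np.exp(epsilon))    # keep probability
    keep = rng.random(bits.shape) < p                # one coin per coordinate
    return np.where(keep, bits, 1 - bits)            # flip the unlucky bits

# Example: randomize a 300-dimensional embedding with budget eps = 2
# per coordinate.
rng = np.random.default_rng(0)
emb = rng.standard_normal(300)
noisy_bits = randomize_embedding(emb, epsilon=2.0, rng=rng)

Note that this standard construction ties both randomization probabilities to a single ϵ; the flexibility the abstract claims presumably comes from decoupling the keep and flip probabilities, which this sketch does not attempt.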