
No free lunch theorem for security and utility in federated learning

by Xiaojin Zhang, et al.

In a federated learning scenario where multiple parties jointly learn a model from their respective data, there are two conflicting goals in the choice of algorithms. On one hand, private and sensitive training data must be kept as secure as possible in the presence of semi-honest partners; on the other hand, a certain amount of information has to be exchanged among the parties for the sake of learning utility. This challenge calls for a privacy-preserving federated learning solution that maximizes the utility of the learned model while maintaining a provable privacy guarantee on the participating parties' private data. This article presents a general framework that (a) formulates the trade-off between privacy loss and utility loss from a unified information-theoretic point of view, and (b) delineates quantitative bounds on the privacy-utility trade-off when different protection mechanisms, including Randomization, Sparsity, and Homomorphic Encryption, are used. It is shown that, in general, there is no free lunch in the privacy-utility trade-off: preserving privacy must be traded for a certain degree of degraded utility. The quantitative analysis presented in this article may serve as guidance for the design of practical federated learning algorithms.
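As a minimal illustration of the trade-off described above, the sketch below applies a randomization mechanism (Gaussian noise added to a party's model update before it is shared) and measures the resulting distortion as a simple proxy for utility loss: more noise makes the original update harder to recover but also moves the shared update further from the true one. The function names, the noise model, and the Euclidean-distance proxy are illustrative assumptions for this sketch, not the paper's actual definitions or bounds.

```python
import math
import random


def randomize(update, sigma):
    """Protect a model update by adding i.i.d. Gaussian noise of scale sigma
    (a simple randomization mechanism)."""
    return [w + random.gauss(0.0, sigma) for w in update]


def utility_loss(true_update, protected_update):
    """Euclidean distance between the true and the protected update,
    used here as a crude proxy for the utility degradation caused by protection."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(true_update, protected_update)))


if __name__ == "__main__":
    random.seed(0)
    update = [0.5, -1.2, 0.3]
    # Larger sigma gives stronger protection but (on average) larger distortion.
    for sigma in (0.01, 0.1, 1.0):
        protected = randomize(update, sigma)
        print(f"sigma={sigma}: utility loss ≈ {utility_loss(update, protected):.4f}")
```

Under this toy model, privacy and utility pull in opposite directions through the single knob `sigma`, which mirrors, in miniature, the no-free-lunch behavior the paper quantifies.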



