Towards Practical Watermark for Deep Neural Networks in Federated Learning

05/07/2021
by   Fang-Qi Li, et al.

With the wide application of deep neural networks, it is important to verify a host's possession of a deep neural network model and to protect that model. Various mechanisms have been designed to meet this goal. By embedding extra information into a network and revealing it afterward, watermarking has become a competitive candidate for proving the integrity of deep learning systems. However, existing watermarking schemes can hardly be adopted in emerging distributed learning paradigms, which impose extra requirements during ownership verification. A leading distributed learning paradigm is federated learning (FL), in which many parties participate in training a single model. Each party participating in FL should be able to verify its ownership independently. Moreover, this scenario introduces other potential threats and corresponding security requirements. To meet these requirements, this paper demonstrates a watermarking protocol for protecting deep neural networks in the FL setting. By combining a state-of-the-art watermarking scheme with a cryptographic primitive designed for distributed storage, the protocol meets the need for ownership verification in the FL scenario without violating the privacy of any participant. This work paves the way for generalizing watermarking into a practical security mechanism for protecting deep learning models on distributed learning platforms.
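The abstract does not spell out the construction, but one plausible reading of the "distributed storage" ingredient is a threshold secret-sharing scheme: the key that seeds the watermark trigger set is split among the FL participants so that any qualified subset can reconstruct it for ownership verification, while no single party holds it alone. The sketch below is purely illustrative and assumes Shamir secret sharing over a prime field; the field size, the share counts, and the idea of a key-seeded trigger set are assumptions, not the paper's actual protocol.

```python
# Minimal sketch (not the paper's construction): each FL participant holds a
# share of a secret watermark key; any t-of-n subset can reconstruct the key
# and run ownership verification. Shamir secret sharing over a prime field is
# assumed as the "distributed storage" primitive.

import random

PRIME = 2**127 - 1  # Mersenne prime, large enough to hide a ~128-bit key


def split_secret(secret: int, n_shares: int, threshold: int):
    """Split `secret` into n shares; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, n_shares + 1):
        # Evaluate the random polynomial at x (Horner's rule, mod PRIME).
        y = 0
        for c in reversed(coeffs):
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares


def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


if __name__ == "__main__":
    watermark_key = random.randrange(PRIME)  # key that would seed the trigger set
    shares = split_secret(watermark_key, n_shares=5, threshold=3)
    # Any 3 of the 5 participants can jointly recover the key.
    assert reconstruct(shares[:3]) == watermark_key
    assert reconstruct([shares[0], shares[2], shares[4]]) == watermark_key
    print("watermark key recovered by a qualified subset of participants")
```

In a complete protocol, reconstructing the key would let a verifier regenerate the watermark trigger inputs and check the model's responses against the expected labels; how the watermark itself is embedded during federated training is left to the full paper.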
