Regulating Ownership Verification for Deep Neural Networks: Scenarios, Protocols, and Prospects

With the broad application of deep neural networks, the necessity of protecting them as intellectual properties has become evident. Numerous watermarking schemes have been proposed to identify the owner of a deep neural network and verify the ownership. However, most of them focused on the watermark embedding rather than the protocol for provable verification. To bridge the gap between those proposals and real-world demands, we study the deep learning model intellectual property protection in three scenarios: the ownership proof, the federated learning, and the intellectual property transfer. We present three protocols respectively. These protocols raise several new requirements for the bottom-level watermarking schemes.




1 Introduction

The development of deep learning boosted the application of deep neural networks (DNNs). Given abundant data and computing resources, DNNs outperform traditional models in many disciplines such as image processing, natural language processing [guo2020gluoncv], the internet of things [lv2020deep], etc. The expense behind a DNN is high: much data has to be collected, processed, and labeled, and designing the DNN architecture and tuning its parameters involves tremendous effort. Therefore, DNNs are the intellectual property (IP) of the legitimate owner.

Ownership verification (OV) is necessary for the Intellectual Property Protection (IPR) of DNNs. To achieve OV, various DNN watermarking schemes have been proposed. A watermarking scheme embeds an owner-dependent watermark into the DNN, whose later retrieval proves the owner’s identity. Based on the access level at which the suspicious DNN can be interacted with, watermarking schemes can be classified into white-box ones and black-box ones.

In the white-box setting, the owner has full access to the pirated DNN. The watermark can be encoded into the model’s parameters [uchida2017embedding] or intermediate outputs [darvish2019deepsigns]. The owner can also insert extra modules into the DNN’s intermediate layers for OV [fan2021deepip]. As for the black-box setting, the owner can only interact with the pirated model through an API. Watermarking schemes for this case usually resort to backdoors [zhang2018protecting, zhu2020secure].

In contrast to the variety of proposed watermarking schemes, discussions on the protocol under which the OV is conducted remain scanty. The OV protocol is indispensable for commercializing deep learning models, but established works on OV protocols mainly focused on the secure transmission of watermarks and are highly inflexible [adi2018turning]. So the IPR of DNNs as a service remains a challenge.

Emerging real-world scenarios pose diversified challenges for the verification protocols. For example, in a model competition, there exists a trusted sponsor with white-box access to all DNNs. In commercial services where models are deployed as APIs, however, such a trusted party is unavailable. Distributed learning paradigms such as Federated Learning (FL) introduce extra security requirements, which go beyond the scope of established watermarking schemes and protocols. Moreover, it is of broad interest to enlarge the coverage of IPR for DNN models to disciplines other than piracy identification, e.g., secure intellectual property transfer.

To apply established DNN watermarking schemes to real-world scenarios for IPR of DNN models, we formulate these practical demands and propose candidate protocols meeting respective settings. The contributions of this paper are:

  • We analyze three real-world scenarios involving IPR of DNNs and formulate respective protocols.

  • We show that some security properties of the proposed protocols can be built upon the security of underlying watermarking schemes by reduction.

  • We explore several additional requirements for the watermarking schemes introduced by the protocols, which are prospective directions for further research.

2 Properties of Watermarking Schemes

In general, a watermarking scheme WM is composed of two modules. One generates the watermark:

key ← KeyGen(1^N),

and one embeds the watermark into the model to be protected:

(M_WM, verify) ← Embed(M_clean, key).

The watermark key is an identifier representing the owner’s identity and N is the security parameter. The embedding module takes a clean DNN M_clean as its input. The module verify returned from Embed is part of the evidence for reconstructing the owner’s identity from M_WM to achieve OV. As has been outlined in [ours], a watermarking scheme has to satisfy the following security requirements.
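To fix ideas, the two modules can be sketched as a toy, Uchida-style scheme in Python. The sign-based encoding, the 90% survival threshold, and all names below are our own illustration under stated assumptions, not the construction of any cited work.

```python
import random

def keygen(security_param, seed=0):
    """Sample an owner-dependent key: one target sign bit per watermarked weight."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(security_param)]

def embed(clean_params, key):
    """Force the signs of the first len(key) parameters to encode the key,
    returning the marked parameters together with a verify module."""
    marked = list(clean_params)
    for i, bit in enumerate(key):
        marked[i] = abs(marked[i]) if bit else -abs(marked[i])

    def verify(params, k):
        # Ownership holds if at least 90% of the embedded sign bits survive.
        hits = sum((params[i] >= 0) == bool(b) for i, b in enumerate(k))
        return hits >= 0.9 * len(k)

    return marked, verify
```

Here keygen plays the role of KeyGen and embed that of Embed; a real scheme would perturb trained weights far more subtly.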

2.1 Correctness

The module verify can identify the owner’s identity from the watermarked model:

Pr[verify(M_WM, key) = 1] ≥ 1 − ε(N),

where ε(N) is a function negligible in N. Meanwhile, an adversary’s identity cannot pass the verifier:

Pr[verify(M_WM, key′) = 1] ≤ ε(N),

where key′ is the adversary’s evidence, randomly sampled from the key space.

2.2 Robustness

The adversary can tune the pirated model using fine-tuning, neuron-pruning, fine-pruning [liu2018fine], or even distillation [zhang2021deep]:

M′ ← Tune(M_WM).

Under a robust watermarking scheme, such tuning should not affect the accuracy of OV:

Pr[verify(M′, key) = 1] ≥ 1 − ε(N).

2.3 Covertness

An adversary A should not be able to distinguish a watermarked model from a clean one. Otherwise, the adversary might manage to escape the IP regulation. Formally, we design the following Algo. 1.

Input: M_clean, N, WM, A
Output: Whether A wins or not

1:  Randomly select b ∈ {0, 1}.
2:  Generate key from 1^N.
3:  A is given N and WM.
4:  if b = 1 then
5:      A is given M_WM.
6:  else
7:      A is given M_clean.
8:  end if
9:  A outputs b̂.
10:  A wins the experiment if b̂ = b.
Algorithm 1 The covertness experiment.

The watermarking scheme is covert if no efficient probabilistic machine A can win the experiment in Algo. 1 with a probability significantly higher than 1/2.
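The experiment can be simulated to estimate a distinguisher's advantage. The sketch below is entirely our own toy instantiation: the "watermark" zeroes one parameter, so a distinguisher that looks for an exact zero wins almost every round, i.e., this naive scheme is not covert.

```python
import random

def covertness_experiment(embed, distinguish, make_clean, n_trials=2000, seed=0):
    """Monte-Carlo estimate of a distinguisher's win rate in the Algo. 1 game."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        clean = make_clean(rng)
        b = rng.randint(0, 1)                      # the hidden coin
        model = embed(clean, rng) if b == 1 else clean
        wins += int(distinguish(model) == b)
    return wins / n_trials

# Toy instantiation (our own illustration): the "watermark" zeroes one weight.
def make_clean(rng):
    return [rng.gauss(0.0, 1.0) for _ in range(10)]

def embed(params, rng):
    out = list(params)
    out[rng.randrange(len(out))] = 0.0             # the telltale left behind
    return out

def distinguish(model):
    # Look for an exact zero; this breaks the covertness of the naive scheme.
    return int(any(p == 0.0 for p in model))

rate = covertness_experiment(embed, distinguish, make_clean)
```

A covert scheme would drive such a win rate back toward 1/2 for every efficient distinguisher.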


2.4 Privacy-preserving

The privacy-preserving property suggests that no adversary can identify the model’s ownership given only partial information of the owner. One type of privacy-preserving is defined through Algo. 2.

Input: M_clean, N, WM, A, verify
Output: Whether A wins or not

1:  Randomly select b ∈ {0, 1}.
2:  Generate key from 1^N.
3:  A is given N, WM, verify, and M_WM if b = 1 or M_clean otherwise.
4:  A outputs b̂.
5:  A wins the experiment if b̂ = b.
Algorithm 2 The key-privacy experiment.

If no efficient A can win the experiment in Algo. 2 with a probability significantly higher than 1/2, then WM is key-privacy-preserving [ours]. Analogously, we can define verifier-privacy-preserving.

The key-privacy-preserving property suggests that the verify module of the watermarking scheme should depend on key; otherwise the privacy is easily breached.

Example 1.

The watermarking scheme of Uchida’s [uchida2017embedding] replaces parameters within the clean DNN by special digits. Its key space can be defined either as the space of the embedded digits alone or as the product of the embedding positions and the digits, where the positions range over all parameters in the DNN model. Formulated in the first manner, Uchida’s is a key-privacy-preserving scheme. In the second formulation, a legal key includes both the places where the digits are embedded and the digits themselves; the corresponding verify is only a parameter-free comparison operator, so the scheme is not key-privacy-preserving.

2.5 Overwriting issues

Having learned the watermarking scheme, the adversary can embed its own identity into the model:

(M′, verify_adv) ← Embed(M_WM, key_adv),

so the ownership becomes ambiguous. To cope with this threat, the owner’s watermark must not be invalidated, i.e.:

Pr[verify(M′, key) = 1] ≥ 1 − ε(N).

In cases where the adversary embeds its watermark into the DNN and redeclares the ownership, extra mechanisms, e.g., an authorized time-stamp, are necessary to break the tie.

3 Scenarios and Watermarking Protocols

IPR involves proving the ownership to a third party, which we denote as the notary. Embedding and recovering watermarks without clarifying the role of the notary is insufficient for IPR. Top-level protocols, following which all parties involved in IPR (the owner, the adversary, and the notary) operate, are indispensable. The configuration of the three parties’ functionalities varies across scenarios, so it is necessary to design a specialized protocol for each case. We present practical protocols for three important real-world scenarios:

  • An owner proves its ownership over a DNN to a notary.

  • Collaborating owners in FL prove their ownership over a DNN to a notary, during which they can recover each other’s identity proof and trace potential traitors.

  • An owner transfers the IP of its DNN to a third party.

For watermarking schemes, these protocols introduce extra security requirements besides those listed in Section 2.

3.1 Protocols for ownership proof

3.1.1 The centralized OV protocol

The simplest OV protocol is centralized: the notary is a verification center responsible for publishing legitimate ownership proofs. This is the setting that most established watermarking schemes have assumed. It involves two steps:

  1. The owner submits key, verify, and access to the suspicious model M to the notary.

  2. The notary computes verify(M, key) and publishes the output.

To use a white/black-box watermarking scheme, the owner has to provide the notary with white/black-box access to the suspicious model. To preserve privacy, the channel between the owner and the notary has to be encrypted. As for a curious notary, a Secure Multi-Party Computation (SMPC) [bogetoft2009secure] protocol can be adopted to protect the owner’s data. Under this protocol, the redeclaration problem can be solved: instead of generating key on its own, the owner queries the notary for time authorization, and the notary returns a key containing the time-stamp to the owner. Overwriting and redeclaring cannot falsify the time-stamp, so the ownership is secured.
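The time-authorization step can be sketched as follows. All names here are hypothetical, and an HMAC under a notary-held secret stands in for the notary's digital signature; a production system would use a proper signature scheme so that anyone can verify without the secret.

```python
import hashlib
import hmac
import json
import time

NOTARY_SECRET = b"notary-demo-secret"  # placeholder for the notary's signing key

def authorize_key(owner_id, timestamp=None):
    """The notary issues a time-stamped key record (HMAC stands in for a signature)."""
    ts = time.time() if timestamp is None else timestamp
    payload = json.dumps({"owner": owner_id, "time": ts}, sort_keys=True).encode()
    tag = hmac.new(NOTARY_SECRET, payload, hashlib.sha256).hexdigest()
    return {"owner": owner_id, "time": ts, "tag": tag}

def check_authorization(record):
    """A party trusting the notary checks that the time-stamp was not altered."""
    payload = json.dumps({"owner": record["owner"], "time": record["time"]},
                         sort_keys=True).encode()
    tag = hmac.new(NOTARY_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, record["tag"])
```

Because the time-stamp is bound into the authenticated record, a later overwriting adversary cannot backdate its own key.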

Despite its simplicity, this protocol has many defects:

  • The proof is valid only within the community that recognizes the notary’s credit; it is difficult to extend this protocol to a broader range of entities.

  • If the notary is compromised then all verifications within the community are at risk.

  • Attacks against centralized protocols, such as Denial of Service (DoS), can paralyze the entire protocol.

3.1.2 The decentralized OV protocol

Given the defects of the centralized protocol, we propose a decentralized protocol for OV [ours]. Instead of relying on a verification center, we resort to a community of agents distributed across the network. To prove its ownership over a DNN, the owner broadcasts the necessary evidence to the verification community. Then any agent can volunteer to conduct the verification and broadcast the result. The OV is finalized by voting through the entire community. To solve the redeclaration dilemma, an owner has to broadcast the hash of the DNN architecture and the evidence under a consensus protocol [ongaro2014search]. Then the entire community reaches a consensus on the time-stamp corresponding to the ownership. This protocol is outlined in Algo. 3. Its unforgeability and correctness can be reduced to the security of WM, the security of the digital signature scheme, and the reliability of the consensus protocol.

Participants: The owner, the verification community
Modules: A watermarking scheme WM, a digital signature scheme, a consensus protocol

1:  The owner generates M_clean.
2:  The owner generates key, M_WM, and verify by WM.
3:  The owner signs the following message:
(hash(key), hash(verify), hash(info), time)
using the digital signature scheme, where time is the current time-stamp, hash is a hash function, and info describes the DNN model’s architecture.
4:  The owner broadcasts the signed message to the community using the consensus protocol.
5:  To conduct OV over a DNN M, the owner signs and broadcasts (key, verify, info).
6:  An agent retrieves the time-stamp by hashing the revealed evidence and matching it against the broadcast record, and submits verify(M, key) to the community using the consensus protocol.
Algorithm 3 The decentralized OV protocol.
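Steps 3 and 6 of the protocol, committing to the evidence by hashing and later matching a revealed opening against the old broadcast, can be sketched as follows. The exact message layout is our reading of the protocol; the signature and consensus layers are omitted.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_commitment(key, verify_blob, arch_info, timestamp):
    """Step 3: the hash-only message the owner signs and broadcasts."""
    return {"h_key": digest(key), "h_verify": digest(verify_blob),
            "h_info": digest(arch_info), "time": timestamp}

def check_opening(commitment, key, verify_blob, arch_info):
    """Step 6: an agent matches the revealed evidence against the old record
    to retrieve the committed time-stamp."""
    return (commitment["h_key"] == digest(key)
            and commitment["h_verify"] == digest(verify_blob)
            and commitment["h_info"] == digest(arch_info))
```

Only hashes circulate before verification, so the broadcast itself leaks neither the key nor the verifier.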

As in other distributed service systems [mengelkamp2018blockchain], to motivate the entire community to conduct OV, each correct verification assigns credits to the agents that contribute to the proof, with which they can initiate their own OV requests. This protocol is immune to attacks that compromise only a single agent. However, the communication traffic is increased; in particular, when the owner adopts a white-box watermarking scheme, each agent has to download the entire model. Since the proof is done on many independent agents, using SMPC throughout would be expensive and inefficient. Therefore, an eavesdropping adversary may steal the evidence corresponding to the owner and its model. The adversary can then spoil this specific watermark so that the owner can no longer succeed in OV over the new model; this spoil attack is illustrated in Fig. 1.

(a) The first verification.
(b) The spoil attack.
(c) The second verification.
Figure 1: The spoil attack. The blue node is the owner, green nodes are benign agents, the red node is a malicious agent, and the purple one is the eavesdropping adversary.

3.1.3 Discussion: the spoil attack

The spoil attack, as an additional threat in the decentralized OV protocol, has seldom been considered by the designers of DNN watermarking schemes. Consequently, almost all established watermarking schemes are vulnerable to the spoil attack, a fact that challenges the applicability of the decentralized OV protocol.

Backdoor-based watermarking schemes can be spoiled by fitting the DNN to randomly shuffled labels on the triggers. For white-box watermarking schemes, the adversary can spoil the watermark by tuning the model reversely.

Theoretically, the security against the spoil attack can be defined through Algo. 4.

Input: M_clean, N, WM, A, ε
Output: Whether A wins or not

1:  Generate M_WM, key, and verify from M_clean.
2:  A is given M_WM, key, N, WM, and verify.
3:  A outputs M′.
4:  A wins the experiment if verify(M′, key) = 0 and M′’s performance declines by no more than ε compared with M_WM.
Algorithm 4 The spoil experiment.

The scheme WM is secure against the spoil attack iff no efficient adversary A can win the experiment in Algo. 4 with non-negligible probability for a given ε.

Such a proof is intractable for almost all established watermarking schemes. It remains unknown whether a scheme provably secure against the spoil attack exists.

As a substitute, we can improve traditional watermarking schemes against the spoil attack by simply embedding multiple watermarks into the DNN to be protected. Since each round of OV exposes only one watermark, such a configuration can resist the spoil attack. But inserting multiple watermarks also introduces additional requirements, namely watermarking capacity and independence.

As defined in [li2021practical], the ε-watermarking capacity of a DNN model M, denoted N(M, ε, WM), is the maximum number of watermarks that can be embedded and verified correctly before M’s performance declines by ε. Such an OV service can survive N(M, ε, WM) − 1 rounds of spoil attacks by sacrificing at most ε of the performance.

The other aspect is that spoiling one watermark should not affect the others. Otherwise, spoiling one watermark might invalidate others that have not yet been exposed and decrease the number of correct OV rounds. To evaluate the watermarking independence of WM against the spoil attack w.r.t. a DNN M, we first insert N watermarks into M using WM. Then we spoil a random watermark and count how many of the remaining watermarks can still be correctly verified. The higher this watermarking independence score is, the more robust WM is against the spoil attack.
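Under this reading of the metric, the score can be computed as the fraction of the untargeted watermarks that still verify. The normalization to a ratio is our own choice for comparability across different N; the raw surviving count works equally well.

```python
def independence_score(surviving_verifications):
    """Fraction of the N-1 untargeted watermarks that still verify after one
    embedded watermark has been spoiled.

    `surviving_verifications` holds the boolean outcome of `verify` for each
    watermark that was NOT targeted by the spoil attack.
    """
    if not surviving_verifications:
        return 1.0
    return sum(surviving_verifications) / len(surviving_verifications)
```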

3.2 The OV protocol for federated learning

In the basic OV protocol, each DNN has a unique owner. The development of distributed learning paradigms, especially FL [yang2019federated], has changed this assumption. In FL, many parties, coordinated by an aggregator, cooperate to train one deep learning model without exchanging local data, as illustrated in Fig. 2.

Figure 2: The client-server architecture for FL.

Each participating party should be able to verify its ownership over the model independently. The privacy of each owner must not be breached; concretely, an owner cannot impersonate another owner. Considering the collaboration of all owners, it is desirable that when one owner undergoes severe spoil attacks and its identity information is erased from the model, the other owners can help it recover the ownership proof. Moreover, if one party betrays its co-authors, pirates the intermediate model, and claims it as its own product, the honest parties can correctly identify this traitor.

These four requirements, independent verification, privacy preservation, recovery, and traitor-tracing, characterize OV in FL. To reduce the communication traffic between the owners and the verification community, achieve the recovery property, and enable traitor-tracing, a modified version of the basic decentralized OV protocol, Merkle-Sign, has been proposed [li2021practical]. Its representative features are:

  • As shown in Fig. 3, during training the aggregator embeds its own key and an author-specific surveillance key into the intermediate model distributed to the i-th author to achieve traitor-tracing. When training terminates, the aggregator embeds the identity information of all authors into the model and broadcasts the hashed message as in the decentralized OV protocol.

    Figure 3: The Merkle-Sign watermarking framework for FL.
  • When broadcasting messages to the verification community, the hashed values of the identity proofs of all owners are organized into a Merkle tree [li2013efficient], so owners can build correlations between their pieces of evidence to enable recovery.
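A minimal Merkle-root construction over the owners' hashed identity proofs might look as follows; padding odd levels by duplicating the last node is one common convention, assumed here rather than taken from the cited protocol.

```python
import hashlib

def sha(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(identity_proofs):
    """Root of a Merkle tree over the owners' hashed identity proofs."""
    level = [sha(leaf) for leaf in identity_proofs]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Any single owner can later prove membership of its leaf against the broadcast root with a logarithmic-size path, which is what enables recovery without re-broadcasting everyone's evidence.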

The analysis in [li2021practical] showed that the security of this protocol and the four characteristics can be reduced to the computational hardness of the cryptographic primitives within.

The watermarking scheme for Merkle-Sign also has to have a large watermarking capacity and high independence. In addition, the embedding process is expected to be efficient and to exert only slight modifications to the entire DNN model; otherwise, the model might fail to converge.

3.2.1 Discussion: aggregatable watermarks

In Merkle-Sign, the aggregator is in charge of embedding watermarks into the DNN model. Therefore this protocol needs a trusted aggregator and is not completely decentralized, unlike recent configurations of secure FL [wei2020federated].

To transfer the responsibility of watermark embedding from the aggregator to the owners, the verification process must remain valid under the aggregator’s model aggregation. We denote the aggregator’s combinator as Agg; it can be model averaging, ensembling, etc. For K independent owners, this requirement can be formulated as:

verify_k(Agg(M_1, …, M_K), key_k) = 1,  k = 1, …, K,    (1)

in which M_k is the model distributed to the k-th author from the aggregator in each epoch. If the watermarking scheme WM satisfies (1), then we define it as an aggregatable scheme. An aggregatable watermarking scheme can improve Merkle-Sign into a completely decentralized protocol regarding both model training and OV.
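Condition (1) can be checked on a toy scheme in which each owner writes ±1 digits into a disjoint, otherwise-zero block of reserved parameters and the aggregator averages the models: averaging scales each block by 1/K without flipping signs, so every owner's watermark still verifies. This construction is our own illustration, not the scheme of any cited work.

```python
def embed_block(params, key_bits, start):
    """Owner writes +-1 digits into its reserved, otherwise-zero block."""
    out = list(params)
    for j, bit in enumerate(key_bits):
        out[start + j] = 1.0 if bit else -1.0
    return out

def verify_block(params, key_bits, start):
    """Verification reads only the signs, so scaling by 1/K is harmless."""
    return all((params[start + j] > 0) == bool(bit)
               for j, bit in enumerate(key_bits))

def aggregate(models):
    """Coordinate-wise model average, a typical FL combinator."""
    return [sum(col) / len(models) for col in zip(*models)]

# Three owners, each with a disjoint 4-parameter reserved block.
base = [0.0] * 12
keys = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]]
local_models = [embed_block(base, keys[i], 4 * i) for i in range(3)]
aggregated = aggregate(local_models)
aggregatable = all(verify_block(aggregated, keys[i], 4 * i) for i in range(3))
```

Disjoint blocks sidestep interference between owners; designing aggregatable schemes whose watermarks share parameters is precisely the open problem the section raises.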

3.3 The Protocol for DNN IP transfer

Apart from OV, IPR in DNN commercialization includes many other aspects, an important one being the transfer of a DNN as IP. For the purchaser of a DNN, it is important that the deep learning model he paid for contains only his identity information. Otherwise, the seller could unilaterally cancel the transaction by redeclaring the ownership over this DNN using the seller’s watermarks hidden within. Therefore, it is necessary to convince the purchaser that the sold DNN product is free from any watermark. Concretely, such convincing is possible only if there exists an algorithm A that can win the experiment defined in Algo. 5 with probability one.

Input: M_clean, N, WM, A
Output: Whether A wins or not

1:  Generate key from 1^N.
2:  Randomly select b ∈ {0, 1}.
3:  A is given N and WM.
4:  if b = 1 then
5:      A is given M_WM.
6:  else
7:      A is given M_clean.
8:  end if
9:  A outputs b̂.
10:  A wins the experiment if b̂ = b.
Algorithm 5 The watermark-detection experiment.

Notice that in Algo. 5, A is given neither key nor verify, since the seller might hide them from the purchaser. We assume that the watermarking scheme and the security parameter have been agreed upon within the community where the transaction takes place. To conduct a DNN IP transfer, the purchaser runs A on the model transmitted by the seller. If the output is zero, the purchaser is convinced that the sold model is free from any watermark. The purchaser can then treat this model as its own M_clean, deploy services with it, and protect it as his IP using the protocols in Section 3.1.
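The purchaser's check can be sketched against a hypothetical, non-covert scheme that writes exact ±1 digits into the parameters; flagging such telltale values is precisely the kind of distinguisher that covertness is designed to rule out.

```python
def looks_watermarked(params, threshold=0.99):
    """Flag parameters whose magnitudes match the embedded +-1 digits of a
    naive (non-covert) scheme; a covert scheme defeats this check."""
    return any(abs(p) > threshold for p in params)

clean_model = [0.21, -0.47, 0.05, -0.33]
sold_model = list(clean_model)
sold_model[2] = 1.0  # a hidden digit left by the hypothetical seller
```

The purchaser accepts the model only when the detector reports no watermark.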

It is remarkable that the existence of a distinguisher winning the experiment in Algo. 5 contradicts the fundamental property of covertness defined in Section 2.3. Hence a watermarking scheme designed for one purpose is not necessarily an option in another scenario.

4 Experiments and Discussions

The evaluation of watermarking schemes w.r.t. the basic security requirements has been presented in [chen2018performance]. To examine the adaptivity of current watermarking schemes to real-world settings and the corresponding protocols, we are interested in the additional requirements listed in Table 1.

Table 1: Additional security requirements, (A) performance decline due to the spoil attack, (B) watermarking capacity, (C) watermarking independence, (D) time consumption of watermark embedding, and (E) performance decline in FL due to watermarking, together with their relevance to the decentralized OV protocol and DNN IP transferring. ✓ denotes necessity, – denotes irrelevance, and ✗ denotes negativity.

4.1 Settings

We adopted ResNet-50 [he2016deep] as the backbone DNN architecture. Experiments were conducted on three datasets: MNIST [deng2012mnist], CIFAR10, and CIFAR100 [krizhevsky2009learning]. To evaluate the adaptivity of established DNN watermarking schemes to the presented protocols, we considered five candidates: Uchida’s, random trigger (Rand), Wonder Filter (WF), ATGF, and MTL-Sign (M-S). For Uchida’s, we adopted its original configuration; for random trigger and WF, the configurations in [zhang2018protecting] and [li2019persistent]; for ATGF and MTL-Sign, the initializations in [li2021practical] and [ours]. All experiments were conducted under the PyTorch framework.

4.2 Evaluations of extra security requirements

The metric (A) reflects the damage of the spoil attack to the DNN model: the higher (A) is, the less likely an adversary is to conduct a spoil attack. Metrics (B), (C), (D), and (E) have been introduced in Section 3. We conducted spoil attacks against the five watermarking schemes as described in Section 3.1.3. To compute (B), we set the tolerance ε to the classification error rate of the clean model; the same tolerance was adopted for (C). To compute (E), we included 200 independent authors in FL, and the aggregator used model averaging for DNN model combination. The evaluations of (A) to (E) on all datasets are presented in Tables 2, 3, and 4. The optimal scheme w.r.t. each metric is highlighted.

Scheme (A) (B) (C) (D) (E)
Uchida’s 0.1% 1,000 94.1% 21ms 0.1%
Rand 0.0% 111 30.2% 312ms 0.3%
WF 0.0% 194 41.3% 320ms 0.0%
ATGF 0.0% 117 94.3% 303ms 0.0%
M-S 0.7% 1,000 79.5% 750ms 0.0%
Table 2: Evaluation of extra security requirements w.r.t. MNIST.
Scheme (A) (B) (C) (D) (E)
Uchida’s 0.2% 1,000 95.3% 20ms 0.0%
Rand 0.1% 312 41.0% 321ms 1.1%
WF 0.1% 473 36.1% 336ms 1.3%
ATGF 0.2% 300 90.4% 300ms 1.1%
M-S 4.5% 1,000 78.0% 798ms 0.3%
Table 3: Evaluation of extra security requirements w.r.t. CIFAR10.
Scheme (A) (B) (C) (D) (E)
Uchida’s 0.2% 1,000 98.2% 19ms 0.0%
Rand 0.9% 412 21.2% 458ms 3.4%
WF 0.7% 479 12.9% 433ms 5.6%
ATGF 1.1% 410 90.4% 495ms 4.1%
M-S 8.2% 1,000 77.5% 784ms 0.3%
Table 4: Evaluation of extra security requirements w.r.t. CIFAR100.

4.3 Discussions

We observed that M-S is optimal regarding (A), since spoiling this watermark results in the largest decrease in the DNN’s normal performance. For (B), white-box schemes significantly outperformed backdoor-based ones. As for (C), only one backdoor-based scheme, ATGF, performed comparably to the white-box schemes. The embedding time (D) for M-S is the longest, followed by that of the backdoor-based schemes; Uchida’s, which embeds by directly overwriting parameters, is the cheapest. All schemes had little impact on the convergence of the DNN model in FL (E). Although Uchida’s and M-S meet all the requirements of the decentralized OV protocol and Merkle-Sign, they are white-box schemes, so the corresponding communication traffic is high.

Requirements from different protocols sometimes contradict each other, e.g., regarding the watermarking capacity (B). The first two protocols require the influence of each watermark to be as small as possible so that many watermarks can be embedded into the DNN; in this case a large watermarking capacity is desirable. In DNN IP transferring, by contrast, it is preferable that the watermark exerts a large impact on the model so that a clean model and a watermarked model can be accurately differentiated; hence the watermarking capacity is better kept small.

5 Conclusion

To explore the applicability of IPR for deep learning models by watermarking as a service, this paper studies three scenarios and presents their respective protocols. Our analysis shows that these protocols demand extra properties beyond those discussed in designing ordinary watermarking schemes, some of which even conflict with each other. Moreover, empirical studies show that current watermarking schemes cannot meet all the requirements of the practical protocols. Therefore, it is necessary to formulate protocols for more real-world scenarios and to design watermarking schemes that meet the new security properties.


The work presented in this paper was supported by the National Natural Science Foundation of China (61771310).