OVLA: Neural Network Ownership Verification using Latent Watermarks

06/15/2023
by Feisi Fu et al.

Ownership verification for neural networks is important for protecting these models from illegal copying, free-riding, re-distribution, and other forms of intellectual property misuse. We present a novel methodology for neural network ownership verification based on the notion of latent watermarks. Existing ownership verification methods either modify or constrain the network's parameters, which are accessible to an attacker in a white-box setting and can harm the network's normal operation, or train the network to respond to specific watermarks in the inputs, similar to data poisoning-based backdoor attacks, which makes them susceptible to backdoor removal techniques. In this paper, we address these problems by decoupling the network's normal operation from its responses to watermarked inputs during ownership verification. The key idea is to train the network so that the watermarks remain dormant unless the owner's secret key is applied to activate them. The secret key is realized as a specific perturbation of the network's parameters known only to the owner. We show that our approach offers strong defense against backdoor detection, backdoor removal, and surrogate model attacks. In addition, our method provides protection against ambiguity attacks, where the attacker either tries to guess the secret weight key or fine-tunes a pre-trained network to embed their own watermarks under a different key. Experimental results demonstrate the advantages and effectiveness of the proposed approach.
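To make the verification step concrete, below is a minimal PyTorch sketch of how a latent-watermark check of this kind could proceed, assuming the secret key is stored as a per-parameter perturbation tensor. The function and variable names (verify_ownership, secret_key, watermark_inputs) are illustrative and not taken from the paper.

```python
import torch

def verify_ownership(model, secret_key, watermark_inputs, watermark_labels):
    """Hypothetical verification step: apply the owner's secret weight
    perturbation, then check whether watermarked inputs now trigger the
    pre-agreed responses. Without the key, the model should behave
    normally on these inputs (the watermark stays dormant)."""
    # Apply the secret key: a specific perturbation of the parameters,
    # known only to the owner (stored here as one tensor per parameter).
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in secret_key:
                param.add_(secret_key[name])

    # Query the perturbed model with the watermarked inputs.
    model.eval()
    with torch.no_grad():
        predictions = model(watermark_inputs).argmax(dim=1)

    # Ownership is claimed if the responses match the owner's
    # pre-defined watermark labels with a high enough rate.
    match_rate = (predictions == watermark_labels).float().mean().item()

    # Remove the perturbation so the model returns to normal operation.
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in secret_key:
                param.sub_(secret_key[name])

    return match_rate
```

In this sketch, ownership would be asserted when match_rate exceeds a chosen threshold after the key is applied, while the same inputs elicit ordinary predictions when it is not.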

