DeepiSign: Invisible Fragile Watermark to Protect the Integrity and Authenticity of CNN

01/12/2021
by Alsharif Abuadbba, et al.

Convolutional Neural Networks (CNNs) deployed in real-life applications such as autonomous vehicles have been shown to be vulnerable to manipulation attacks, such as poisoning and fine-tuning. Hence, it is essential to ensure the integrity and authenticity of CNNs, because compromised models can produce incorrect outputs and behave maliciously. In this paper, we propose a self-contained tamper-proofing method, called DeepiSign, to ensure the integrity and authenticity of CNN models against such manipulation attacks. DeepiSign applies the idea of fragile invisible watermarking to securely embed a secret and its hash value into a CNN model. To verify the integrity and authenticity of the model, we retrieve the secret from the model, compute the hash value of the secret, and compare it with the embedded hash value. To minimize the effect of the embedded secret on the CNN model, we use a wavelet-based technique to transform the weights into the frequency domain and embed the secret into the less significant coefficients. Our theoretical analysis shows that DeepiSign can hide up to a 1 KB secret in each layer with minimal loss of the model's accuracy. To evaluate the security and performance of DeepiSign, we performed experiments on four pre-trained models (ResNet18, VGG16, AlexNet, and MobileNet) using three datasets (MNIST, CIFAR-10, and ImageNet) against three types of manipulation attacks (targeted input poisoning, output poisoning, and fine-tuning). The results demonstrate that DeepiSign is verifiable without degrading the classification accuracy and is robust against representative CNN manipulation attacks.
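
The following is a minimal sketch of the embed-then-verify idea described in the abstract, not the authors' implementation: a secret and its SHA-256 hash are hidden in the less significant wavelet coefficients of a layer's flattened weights, and verification recovers the payload and checks the hash. It assumes NumPy, PyWavelets (pywt), and hashlib; the Haar wavelet, the quantization step EPS, and the parity-based bit encoding are illustrative assumptions.

```python
# Sketch of fragile wavelet-domain watermarking for one layer's weights.
# Assumptions (not from the paper): Haar wavelet, parity embedding with
# step EPS, secret length known at verification time, even-length weights.
import hashlib
import numpy as np
import pywt

EPS = 1e-6  # embedding step: small enough to leave accuracy essentially unchanged


def _bits(data: bytes):
    """Expand bytes into a list of bits, least significant bit first."""
    return [(byte >> i) & 1 for byte in data for i in range(8)]


def embed(weights: np.ndarray, secret: bytes) -> np.ndarray:
    """Embed secret || SHA-256(secret) into the detail coefficients of one layer."""
    payload = secret + hashlib.sha256(secret).digest()
    approx, detail = pywt.dwt(weights.ravel(), 'haar')  # frequency-domain split
    bits = _bits(payload)
    if len(bits) > detail.size:
        raise ValueError("payload too large for this layer")
    # Quantize each carrier coefficient so its parity (in units of EPS) encodes one bit.
    for i, b in enumerate(bits):
        q = np.round(detail[i] / EPS)
        if int(q) % 2 != b:
            q += 1
        detail[i] = q * EPS
    marked = pywt.idwt(approx, detail, 'haar')[:weights.size]
    return marked.reshape(weights.shape)


def verify(weights: np.ndarray, secret_len: int) -> bool:
    """Recover the payload and check that the embedded hash matches the secret."""
    _, detail = pywt.dwt(weights.ravel(), 'haar')
    n_bits = (secret_len + 32) * 8  # secret plus 32-byte SHA-256 digest
    bits = [int(np.round(detail[i] / EPS)) % 2 for i in range(n_bits)]
    payload = bytes(
        sum(bit << j for j, bit in enumerate(bits[k:k + 8]))
        for k in range(0, n_bits, 8)
    )
    secret, digest = payload[:secret_len], payload[secret_len:]
    return hashlib.sha256(secret).digest() == digest
```

In this setting, any change to the watermarked coefficients caused by poisoning or fine-tuning flips recovered bits, so the recomputed hash no longer matches the embedded one; that sensitivity is what makes the watermark fragile rather than robust.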

Related research:
- A Protection Method of Trained CNN Model Using Feature Maps Transformed With Secret Key From Unauthorized Access (09/01/2021)
- Memory Integrity of CNNs for Cross-Dataset Facial Expression Recognition (05/28/2019)
- Neural network fragile watermarking with no model performance degradation (08/16/2022)
- Adversarial Embedding: A robust and elusive Steganography and Watermarking technique (11/14/2019)
- OVLA: Neural Network Ownership Verification using Latent Watermarks (06/15/2023)
- Transfer Learning-Based Model Protection With Secret Key (03/05/2021)
- Secure Steganography Technique Based on Bitplane Indexes (04/26/2020)
