Transfer Learning-Based Model Protection With Secret Key

03/05/2021
by   MaungMaung AprilPyone, et al.

We propose a novel method for protecting trained models with a secret key so that unauthorized users without the correct key cannot obtain correct inference results. By taking advantage of transfer learning, the proposed method makes it possible to obtain a large protected model, such as one trained on ImageNet, by fine-tuning with only a small subset of a training dataset. The method applies a learnable encryption step with a secret key to generate transformed images, and models with pre-trained weights are then fine-tuned on these transformed images. In experiments with the ImageNet dataset, the performance of a protected model was shown to be close to that of a non-protected model when the correct key was given, while the accuracy dropped sharply when an incorrect key was used. The protected model was also demonstrated to be robust against key estimation attacks.
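The key-based learnable encryption step can be illustrated with block-wise pixel shuffling, a common form of learnable image transformation in this line of work. This is a minimal sketch, not the paper's exact transform: the function `block_shuffle`, its `block_size` parameter, and the use of an integer `key` to seed the permutation are assumptions for illustration.

```python
import numpy as np

def block_shuffle(image, key, block_size=4):
    # Illustrative sketch of a key-based image transformation
    # (block-wise pixel shuffling); the paper's transform may differ.
    h, w, c = image.shape
    assert h % block_size == 0 and w % block_size == 0
    # The secret key seeds the RNG, so the same key always yields
    # the same permutation of pixels within each block.
    rng = np.random.default_rng(key)
    perm = rng.permutation(block_size * block_size * c)
    out = np.empty_like(image)
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            block = image[i:i + block_size, j:j + block_size].reshape(-1)
            out[i:i + block_size, j:j + block_size] = (
                block[perm].reshape(block_size, block_size, c)
            )
    return out
```

A model fine-tuned on images transformed this way only produces useful predictions when test images are transformed with the same key; an incorrect key produces a different permutation and degrades accuracy.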

Related research

05/31/2021 · A Protection Method of Trained CNN Model with Secret Key from Unauthorized Access
In this paper, we propose a novel method for protecting convolutional ne...

08/06/2020 · Training DNN Model with Secret Key for Model Protection
In this paper, we propose a model protection method by using block-wise ...

09/01/2021 · A Protection Method of Trained CNN Model Using Feature Maps Transformed With Secret Key From Unauthorized Access
In this paper, we propose a model protection method for convolutional ne...

07/20/2021 · Protecting Semantic Segmentation Models by Using Block-wise Image Encryption with Secret Key from Unauthorized Access
Since production-level trained deep neural networks (DNNs) are of a grea...

04/09/2021 · Piracy-Resistant DNN Watermarking by Block-Wise Image Transformation with Secret Key
In this paper, we propose a novel DNN watermarking method that utilizes ...

02/01/2018 · Attacking the Nintendo 3DS Boot ROMs
We demonstrate attacks on the boot ROMs of the Nintendo 3DS in order to ...

01/12/2021 · DeepiSign: Invisible Fragile Watermark to Protect the Integrity and Authenticity of CNN
Convolutional Neural Networks (CNNs) deployed in real-life applications ...
