A Protection against the Extraction of Neural Network Models

05/26/2020
by   Hervé Chabanne, et al.

Given oracle access to a Neural Network (NN), it is possible to extract its underlying model. Here we introduce a protection that adds parasitic layers, which mostly leave the underlying NN unchanged while making reverse-engineering harder. Our countermeasure relies on approximating the identity mapping with a Convolutional NN. We explain why introducing these parasitic layers complicates the attacks, and we report experiments on the performance and accuracy of the protected NN.
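The abstract gives no implementation details, so the following is a minimal sketch of the stated idea under assumed choices: a small convolutional block is trained to approximate the identity mapping on feature maps and is then spliced between existing layers, so the network's outputs are almost unchanged while the architecture an extraction attack would have to recover is different. The PyTorch setup, layer sizes, and training loop below are illustrative assumptions, not the authors' construction.

# Minimal sketch (assumed PyTorch setup, not the paper's code): train a small CNN to
# approximate the identity, then insert it as a "parasitic" block into an existing network.
import torch
import torch.nn as nn

class ParasiticBlock(nn.Module):
    """Small convolutional block intended to behave like the identity on feature maps."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

def train_to_identity(block: nn.Module, channels: int, steps: int = 1000) -> nn.Module:
    """Fit the block to reproduce its input (identity approximation) on synthetic activations."""
    opt = torch.optim.Adam(block.parameters(), lr=1e-3)
    for _ in range(steps):
        x = torch.randn(32, channels, 8, 8)  # random feature maps; real activations could be used instead
        loss = nn.functional.mse_loss(block(x), x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return block

# Toy example: insert the trained parasitic block after the first conv layer of a small network.
channels = 16
parasite = train_to_identity(ParasiticBlock(channels), channels)

original = nn.Sequential(
    nn.Conv2d(3, channels, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(channels, 10, kernel_size=3, padding=1),
)
protected = nn.Sequential(
    original[0], original[1],
    parasite,                      # extra layers whose output is approximately their input
    original[2],
)

x = torch.randn(1, 3, 8, 8)
print((original(x) - protected(x)).abs().max())  # small if the parasite approximates the identity well

Because the inserted block is trained only to minimize the reconstruction error of its input, the protected network's accuracy should stay close to the original's, which is consistent with the abstract's claim that the underlying NN is mostly unchanged while the attacker's task grows with the extra layers.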
