Model Weight Theft With Just Noise Inputs: The Curious Case of the Petulant Attacker

12/19/2019
by Nicholas Roberts, et al.

This paper explores the scenarios under which an attacker can claim that 'Noise and access to the softmax layer of the model is all you need' to steal the weights of a convolutional neural network whose architecture is already known. We were able to achieve 96% and 82% accuracy with models stolen using only i.i.d. Bernoulli noise inputs. We posit that this theft-susceptibility of the weights is indicative of the complexity of the dataset and propose a new metric that captures it. The goal of this dissemination is not just to showcase how far knowing the architecture can take you in terms of model stealing, but also to draw attention to the rather idiosyncratic weight-learnability aspects of CNNs spurred by i.i.d. noise inputs. We also disseminate some initial results obtained using the Ising probability distribution in lieu of the i.i.d. Bernoulli distribution.
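To make the threat model concrete, below is a minimal sketch of what such noise-driven weight theft could look like: the attacker queries the victim with i.i.d. Bernoulli noise images, observes only the softmax outputs, and fits a clone of the same (known) architecture to those outputs. The architecture (SmallCNN), hyperparameters, and query budget here are illustrative assumptions, not the paper's exact configuration, and the Ising-noise variant mentioned above is not shown.

```python
# Hypothetical sketch of noise-driven model stealing, assuming a PyTorch
# victim/clone pair sharing a known architecture. Names and hyperparameters
# are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallCNN(nn.Module):
    """A known CNN architecture shared by the victim and the attacker's clone."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.fc = nn.Linear(64 * 7 * 7, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        return self.fc(x.flatten(1))


def bernoulli_noise(batch_size: int, p: float = 0.5) -> torch.Tensor:
    """i.i.d. Bernoulli(p) noise images: each pixel is 0 or 1 independently."""
    return torch.bernoulli(torch.full((batch_size, 1, 28, 28), p))


def steal_weights(victim: nn.Module, queries: int = 100_000, batch_size: int = 256) -> nn.Module:
    """Fit a clone of the known architecture to the victim's softmax outputs on noise."""
    clone = SmallCNN()
    opt = torch.optim.Adam(clone.parameters(), lr=1e-3)
    victim.eval()
    for _ in range(queries // batch_size):
        x = bernoulli_noise(batch_size)
        with torch.no_grad():
            # The attacker only observes the victim's softmax layer.
            soft_labels = F.softmax(victim(x), dim=1)
        # Match the clone's predictive distribution to the victim's via KL divergence.
        loss = F.kl_div(F.log_softmax(clone(x), dim=1), soft_labels, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return clone
```

The design choice worth noting is that no real data is involved at any point: the clone's weights are learned purely from the victim's responses to random noise, which is what makes the reported accuracies on real test sets notable.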

