Span Recovery for Deep Neural Networks with Applications to Input Obfuscation

02/19/2020
by Rajesh Jayaram, et al.

The tremendous success of deep neural networks has motivated the need to better understand their fundamental properties, but many of the theoretical results proposed so far hold only for shallow networks. In this paper, we study an important primitive for understanding the meaningful input space of a deep network: span recovery. For k < n, let A ∈ R^{k × n} be the innermost weight matrix of an arbitrary feedforward neural network M: R^n → R, so that M(x) can be written as M(x) = σ(Ax) for some network σ: R^k → R. The goal is then to recover the row span of A given only oracle access to the value of M(x). We show that if M is a multi-layered network with ReLU activation functions, then partial recovery is possible: namely, we can provably recover k/2 linearly independent vectors in the row span of A using poly(n) non-adaptive queries to M(x). Furthermore, if M has differentiable activation functions, we demonstrate that full span recovery is possible even when the output is first passed through a sign or 0/1 thresholding function; in this case our algorithm is adaptive. Empirically, we confirm that full span recovery is not always possible, but that it fails only for unrealistically thin layers; for reasonably wide networks, we obtain full span recovery on both random networks and networks trained on MNIST data. Finally, we demonstrate the utility of span recovery as an attack by inducing neural networks to misclassify data obfuscated by controlled random noise as sensical inputs.
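To make the setup concrete, here is a minimal sketch (not the paper's algorithm) of the key observation behind span recovery for differentiable networks: by the chain rule, ∇M(x) = Aᵀ ∇σ(Ax), so every gradient of M lies in the row span of A, and gradients estimated at a few generic points using only oracle evaluations of M already span the full row space. The toy network, the helper grad_from_oracle, and all parameters below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 20, 5                       # input dimension n, inner width k < n
    A = rng.standard_normal((k, n))    # innermost weight matrix, unknown to the attacker
    w = rng.standard_normal(k)         # weights of a toy differentiable outer network sigma

    def M(x):
        # Black-box oracle: M(x) = sigma(Ax), here with sigma(z) = w . tanh(z).
        return w @ np.tanh(A @ x)

    def grad_from_oracle(f, x, eps=1e-5):
        # Central finite differences: estimate grad f(x) from oracle values alone.
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = eps
            g[i] = (f(x + e) - f(x - e)) / (2 * eps)
        return g

    # Since grad M(x) = A^T diag(tanh'(Ax)) w, every gradient lies in rowspan(A),
    # and gradients at k generic points generically span the whole row space.
    G = np.stack([grad_from_oracle(M, rng.standard_normal(n)) for _ in range(k)])
    print("rank of recovered gradients:", np.linalg.matrix_rank(G))   # k, generically

    # Sanity check: projecting G onto rowspan(A) should leave ~zero residual.
    P = A.T @ np.linalg.solve(A @ A.T, A)   # orthogonal projector onto rowspan(A)
    print("residual off the row span:", np.linalg.norm(G - G @ P))

The harder settings studied in the paper need more than this: with ReLU activations or a sign/0-1 thresholded output, gradients are unavailable or only partially informative, which is where the adaptive queries and the k/2 partial-recovery guarantee come in.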
