Representation Learning and Recovery in the ReLU Model

03/12/2018
by Arya Mazumdar, et al.

Rectified linear units, or ReLUs, have become the preferred activation function for artificial neural networks. In this paper we consider two basic learning problems assuming that the underlying data follow a generative model based on a ReLU-network -- a neural network with ReLU activations. As a primarily theoretical study, we limit ourselves to a single-layer network. The first problem we study corresponds to dictionary learning in the presence of nonlinearity (modeled by the ReLU functions). Given a set of observation vectors y^i ∈ R^d, i = 1, 2, ..., n, we aim to recover the d × k matrix A and the latent vectors {c^i} ⊂ R^k under the model y^i = ReLU(Ac^i + b), where b ∈ R^d is a random bias. We show that it is possible to recover the column space of A within an error of O(d) (in Frobenius norm) under certain conditions on the probability distribution of b. The second problem we consider is that of robust recovery of the signal in the presence of outliers, i.e., large but sparse noise. In this setting we are interested in recovering the latent vector c from its noisy nonlinear sketches of the form v = ReLU(Ac) + e + w, where e ∈ R^d denotes the outliers with sparsity s and w ∈ R^d denotes the dense but small noise. This problem has recently been studied (Soltanolkotabi, 2017) in the absence of outliers. For this problem, we show that a generalized LASSO algorithm is able to recover the signal c ∈ R^k within an ℓ_2 error of O(√((k+s)log d/d)) when A is a random Gaussian matrix.
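To make the robust-recovery setting concrete, the sketch below simulates the observation model v = ReLU(Ac) + e + w with a random Gaussian A and runs one plausible form of a generalized LASSO in which the sparse outlier vector e appears as an explicit ℓ_1-penalized variable, solved by alternating minimization. The regularization weight lam, the alternating solver, and the factor-of-2 rescaling of the estimate (based on E[ReLU(g)·g] = 1/2 for standard Gaussian g) are illustrative assumptions of this sketch, not necessarily the paper's exact estimator or tuning.

import numpy as np

def soft_threshold(x, tau):
    # Entrywise soft-thresholding: the proximal operator of tau * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def generalized_lasso(v, A, lam, n_iters=200):
    # Alternating minimization for
    #     min_{c, e}  ||v - A c - e||_2^2 + lam * ||e||_1,
    # a plausible generalized-LASSO formulation with an explicit
    # sparse-outlier variable e (assumption of this sketch).
    d, k = A.shape
    c = np.zeros(k)
    e = np.zeros(d)
    for _ in range(n_iters):
        # c-step: ordinary least squares on the outlier-corrected observations.
        c, *_ = np.linalg.lstsq(A, v - e, rcond=None)
        # e-step: soft-threshold the residual (prox of the l1 penalty on e).
        e = soft_threshold(v - A @ c, lam / 2.0)
    return c, e

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, k, s = 2000, 20, 40                     # ambient dim, latent dim, number of outliers

    A = rng.normal(size=(d, k)) / np.sqrt(d)   # random Gaussian sensing matrix
    c_true = rng.normal(size=k)

    # Nonlinear sketches v = ReLU(A c) + e + w.
    e_true = np.zeros(d)
    outlier_idx = rng.choice(d, size=s, replace=False)
    e_true[outlier_idx] = rng.normal(scale=5.0, size=s)   # large but sparse outliers
    w = rng.normal(scale=0.01, size=d)                    # dense but small noise
    v = np.maximum(A @ c_true, 0.0) + e_true + w

    c_hat, e_hat = generalized_lasso(v, A, lam=1.0)

    # For ReLU measurements with Gaussian A, the least-squares fit targets
    # roughly c/2 (since E[ReLU(g) g] = 1/2 for g ~ N(0,1)); rescaling by 2
    # is a convention of this sketch.
    c_hat *= 2.0
    rel_err = np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true)
    print(f"relative l2 error of recovered c: {rel_err:.3f}")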
