
A Convergence Theory for Deep Learning via Over-Parameterization
Deep neural networks (DNNs) have demonstrated dominating performance in ...

Convex Geometry and Duality of Overparameterized Neural Networks
We develop a convex analytic framework for ReLU neural networks which el...

On the Convergence Rate of Training Recurrent Neural Networks
Despite the huge success of deep learning, our understanding of how the ...

Convex Duality of Deep Neural Networks
We study regularized deep neural networks and introduce an analytic fram...

Inference in Multi-Layer Networks with Matrix-Valued Unknowns
We consider the problem of inferring the input and hidden variables of a...

Convex Relaxations of Convolutional Neural Nets
We propose convex relaxations for convolutional neural nets with one hid...

On the Margin Theory of Feedforward Neural Networks
Past works have shown that, somewhat surprisingly, overparametrization ...
Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-Layer Networks
We develop exact representations of two-layer neural networks with rectified linear units in terms of a single convex program whose number of variables is polynomial in the number of training samples and the number of hidden neurons. Our theory utilizes semi-infinite duality and minimum-norm regularization. Moreover, we show that certain standard multi-layer convolutional neural networks are equivalent to L1-regularized linear models in a polynomial-sized discrete Fourier feature space. We also introduce exact semidefinite programming representations of convolutional and fully connected linear multi-layer networks whose size is polynomial in both the sample size and the dimension.
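One ingredient behind such convex reformulations of two-layer ReLU networks is the finite set of ReLU activation patterns a data matrix can induce: each pattern corresponds to a diagonal 0/1 "arrangement" matrix, and the convex program has one variable block per pattern. The hedged sketch below (not the authors' code; `relu_activation_patterns` is a hypothetical helper) illustrates only this enumeration step, by sampling random directions and recording which distinct patterns occur for a toy data matrix:

```python
import numpy as np

def relu_activation_patterns(X, n_samples=10000, seed=0):
    """Collect the distinct 0/1 activation patterns 1[X u >= 0]
    seen over random directions u (illustrative helper, not from
    the paper; enough samples find all patterns w.h.p.)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    patterns = set()
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        patterns.add(tuple((X @ u >= 0).astype(int)))
    return sorted(patterns)

# Toy data: 3 points in 2-D. The 3 hyperplanes {u : x_i . u = 0}
# cut the plane into 6 regions, so 6 activation patterns appear.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
pats = relu_activation_patterns(X)
print(len(pats), pats)
```

For n points in general position in d dimensions the number of such patterns grows only polynomially (roughly n^d), far below the 2^n conceivable subsets, which is what keeps the resulting convex program polynomial-sized in the sample count.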