Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-Layer Networks

02/24/2020
by Mert Pilanci, et al.

We develop exact representations of two-layer neural networks with rectified linear units (ReLU) as a single convex program whose number of variables is polynomial in the number of training samples and the number of hidden neurons. Our theory uses semi-infinite duality and minimum-norm regularization. Moreover, we show that certain standard multi-layer convolutional neural networks are equivalent to L1-regularized linear models in a polynomial-sized discrete Fourier feature space. We also introduce exact semi-definite programming representations of convolutional and fully connected linear multi-layer networks that are polynomial-size in both the sample size and the dimension.
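The convex reformulation rests on a counting fact: over a fixed dataset, a ReLU neuron's 0/1 activation pattern diag(1[Xu >= 0]) can take only polynomially many values (for fixed data rank), one per region of the hyperplane arrangement induced by the data. The sketch below illustrates only this enumeration step for a toy 1-D dataset with a bias term; the function name and dataset are illustrative, and the resulting convex program itself (a constrained group-lasso over these patterns) is not implemented here.

```python
# Hedged sketch: count the distinct ReLU activation patterns a single
# neuron can produce on a tiny dataset. For 1-D inputs with a bias term,
# every decision boundary is a threshold, so sweeping thresholds at the
# data points (in both orientations) covers all patterns.

def activation_patterns(X):
    """Return the set of distinct 0/1 patterns (1[sign*(x - t) >= 0])_x
    over thresholds t and orientations sign -- the D matrices' diagonals."""
    patterns = set()
    thresholds = sorted(X) + [max(X) + 1.0]  # one extra t beyond the data
    for t in thresholds:
        for sign in (+1, -1):
            pattern = tuple(1 if sign * (x - t) >= 0 else 0 for x in X)
            patterns.add(pattern)
    return patterns

X = [0.0, 1.0, 2.0, 3.0]   # n = 4 training samples in R^1
pats = activation_patterns(X)
print(len(pats))            # prints 8: linear in n here, polynomial in general
```

Each pattern in the set corresponds to one diagonal arrangement matrix in the convex program; the number of such patterns, not the number of neurons, is what makes the reformulation polynomial-size.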



Related research

06/26/2020 · Implicit Convex Regularizers of CNN Architectures: Convex Optimization of Two- and Three-Layer Networks in Polynomial Time
We study training of Convolutional Neural Networks (CNNs) with ReLU acti...

12/31/2018 · Convex Relaxations of Convolutional Neural Nets
We propose convex relaxations for convolutional neural nets with one hid...

11/09/2018 · A Convergence Theory for Deep Learning via Over-Parameterization
Deep neural networks (DNNs) have demonstrated dominating performance in ...

02/25/2020 · Convex Geometry and Duality of Over-parameterized Neural Networks
We develop a convex analytic framework for ReLU neural networks which el...

07/15/2020 · Convexifying Sparse Interpolation with Infinitely Wide Neural Networks: An Atomic Norm Approach
This work examines the problem of exact data interpolation via sparse (n...

10/18/2021 · Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks
Despite several attempts, the fundamental mechanisms behind the success ...