Learning Non-overlapping Convolutional Neural Networks with Multiple Kernels

11/08/2017
by Kai Zhong, et al.

In this paper, we consider parameter recovery for non-overlapping convolutional neural networks (CNNs) with multiple kernels. We show that when the inputs follow a Gaussian distribution and the sample size is sufficiently large, the squared loss of such CNNs is locally strongly convex in a basin of attraction near the global optima for most popular activation functions, such as ReLU, Leaky ReLU, Squared ReLU, Sigmoid and Tanh. The required sample complexity is proportional to the dimension of the input and polynomial in the number of kernels and a condition number of the parameters. We also show that tensor methods are able to initialize the parameters inside the locally strongly convex region. Hence, for most smooth activations, gradient descent following tensor initialization is guaranteed to converge to the global optimum in time that is linear in the input dimension, logarithmic in the precision and polynomial in other factors. To the best of our knowledge, this is the first work that provides recovery guarantees for CNNs with multiple kernels under polynomial sample and computational complexities.
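To make the setting concrete, the sketch below fits a non-overlapping CNN (stride equal to the patch width, multiple kernels, ReLU activation, squared loss) to synthetic Gaussian inputs with plain gradient descent. It is a minimal illustration under assumptions, not the paper's algorithm: the shapes, step size and number of iterations are arbitrary, and the paper's tensor-method initialization is replaced here by starting near the planted kernels so that descent begins inside the locally strongly convex basin.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def cnn_output(X, W):
    """Non-overlapping CNN: X has shape (n, t, k) -- n samples split into
    t non-overlapping patches of width k -- and W has shape (r, k) for r
    kernels. The output sums ReLU(w_j . x_i) over all kernels and patches."""
    return relu(np.einsum('ntk,rk->ntr', X, W)).sum(axis=(1, 2))

def squared_loss(X, y, W):
    return 0.5 * np.mean((cnn_output(X, W) - y) ** 2)

def gradient(X, y, W):
    """Gradient of the squared loss with respect to the kernels (ReLU case)."""
    pre = np.einsum('ntk,rk->ntr', X, W)            # pre-activations (n, t, r)
    resid = relu(pre).sum(axis=(1, 2)) - y          # residuals (n,)
    mask = (pre > 0).astype(float)                  # ReLU derivative
    return np.einsum('n,ntr,ntk->rk', resid, mask, X) / len(y)

rng = np.random.default_rng(0)
n, t, k, r = 5000, 4, 8, 3                          # samples, patches, patch dim, kernels

# Gaussian inputs and a planted (teacher) set of kernels to recover.
X = rng.standard_normal((n, t, k))
W_star = rng.standard_normal((r, k))
y = cnn_output(X, W_star)

# The paper uses a tensor method to initialize inside the locally strongly
# convex basin; as a stand-in, start near the planted kernels.
W = W_star + 0.1 * rng.standard_normal((r, k))

for _ in range(300):
    W -= 0.02 * gradient(X, y, W)                   # plain gradient descent

print(f"loss at planted kernels: {squared_loss(X, y, W_star):.2e}")
print(f"loss after descent:      {squared_loss(X, y, W):.2e}")
```

In this toy setup the planted kernels are recovered up to a permutation of the kernels, which is the sense in which the paper's recovery guarantees should be read.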


Related research

06/10/2017
Recovery Guarantees for One-hidden-layer Neural Networks
In this paper, we consider regression problems with one-hidden-layer neu...

11/12/2019
Tight Sample Complexity of Learning One-hidden-layer Convolutional Neural Networks
We study the sample complexity of learning one-hidden-layer convolutiona...

02/26/2017
Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs
Deep learning models are often successfully trained using gradient desce...

11/15/2021
Neural networks with linear threshold activations: structure and algorithms
In this article we present new results on neural networks with linear th...

09/09/2022
Fast Neural Kernel Embeddings for General Activations
Infinite width limit has shed light on generalization and optimization a...

12/09/2021
A New Measure of Model Redundancy for Compressed Convolutional Neural Networks
While recently many designs have been proposed to improve the model effi...

11/03/2021
A Johnson–Lindenstrauss Framework for Randomly Initialized CNNs
How does the geometric representation of a dataset change after the appl...
