
An Algorithm for Training Polynomial Networks

04/26/2013
by Roi Livni, et al.
Weizmann Institute of Science
Hebrew University of Jerusalem

We consider deep neural networks, in which the output of each node is a quadratic function of its inputs. Similar to other deep architectures, these networks can compactly represent any function on a finite training set. The main goal of this paper is the derivation of an efficient layer-by-layer algorithm for training such networks, which we denote as the Basis Learner. The algorithm is a universal learner in the sense that the training error is guaranteed to decrease at every iteration, and can eventually reach zero under mild conditions. We present practical implementations of this algorithm, as well as preliminary experimental results. We also compare our deep architecture to other shallow architectures for learning polynomials, in particular kernel learning.
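As a rough illustration of the architecture described in the abstract (not the paper's Basis Learner training procedure), each node can output a quadratic function of its inputs, for example the product of two learned linear forms; stacking such layers composes polynomials of rapidly growing degree. A minimal NumPy sketch, with all weight shapes and initializations chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def quadratic_layer(x, W1, W2, b=0.0):
    # Each output coordinate is a quadratic function of the inputs:
    # the elementwise product of two linear maps, plus a bias.
    # (Illustrative node type, not the paper's exact parameterization.)
    return (x @ W1) * (x @ W2) + b

# Toy forward pass through two stacked quadratic layers.
# The composition is a degree-4 polynomial of the input features.
x = rng.standard_normal((5, 3))           # 5 samples, 3 features
W1a, W2a = rng.standard_normal((2, 3, 4)) # first layer: 3 -> 4
W1b, W2b = rng.standard_normal((2, 4, 2)) # second layer: 4 -> 2
h = quadratic_layer(x, W1a, W2a)
y = quadratic_layer(h, W1b, W2b)
print(y.shape)  # (5, 2)
```

Because each layer squares the polynomial degree of its input, even a few such layers can represent high-degree polynomials compactly, which is the expressiveness property the abstract alludes to.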
