
An Algorithm for Training Polynomial Networks

by Roi Livni et al.
Weizmann Institute of Science
Hebrew University of Jerusalem

We consider deep neural networks, in which the output of each node is a quadratic function of its inputs. Similar to other deep architectures, these networks can compactly represent any function on a finite training set. The main goal of this paper is the derivation of an efficient layer-by-layer algorithm for training such networks, which we denote as the Basis Learner. The algorithm is a universal learner in the sense that the training error is guaranteed to decrease at every iteration, and can eventually reach zero under mild conditions. We present practical implementations of this algorithm, as well as preliminary experimental results. We also compare our deep architecture to other shallow architectures for learning polynomials, in particular kernel learning.
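To make the architecture concrete, here is a minimal sketch (not the paper's Basis Learner algorithm, just an illustration of the network class) of a feed-forward network in which each hidden unit outputs a quadratic function of its inputs, realized as the product of two learned linear combinations. The layer widths, depth, and random weights below are arbitrary choices for the example; note that each quadratic layer squares the polynomial degree, so two layers plus a linear readout yield degree-4 polynomials of the input.

```python
import numpy as np

def quadratic_layer(H, W1, W2):
    """Each output unit is (H @ w1) * (H @ w2): a quadratic function of H."""
    return (H @ W1) * (H @ W2)

def forward(X, weights):
    """Apply a stack of quadratic layers, then a linear readout."""
    H = X
    for W1, W2 in weights[:-1]:
        H = quadratic_layer(H, W1, W2)
    return H @ weights[-1]

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))  # 8 samples, 3 input features

# Two quadratic layers followed by a linear output layer.
weights = [
    (rng.normal(size=(3, 5)), rng.normal(size=(3, 5))),  # layer 1: degree 2
    (rng.normal(size=(5, 4)), rng.normal(size=(5, 4))),  # layer 2: degree 4
    rng.normal(size=4),                                   # linear readout
]

y = forward(X, weights)
# With no bias terms the network is homogeneous of degree 4 in its input,
# i.e. forward(c * X) == c**4 * forward(X).
```

A quick sanity check of the degree claim: scaling the input by 2 scales the output by 2**4 = 16, since every hidden unit is a bias-free product of two linear forms.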

