Adaptive Learning with Binary Neurons

An efficient incremental learning algorithm for classification tasks, called NetLines, well adapted to both binary and real-valued input patterns, is presented. It generates small, compact feedforward neural networks with a single hidden layer of binary units and binary output units. A convergence theorem ensures that solutions with a finite number of hidden units exist for both binary and real-valued input patterns. An implementation for problems with more than two classes, valid for any binary classifier, is proposed. The generalization error and the size of the resulting networks are compared with the best published results on well-known classification benchmarks. Early stopping is shown to reduce overfitting without improving generalization performance.
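To make the constructive idea concrete, here is a minimal Python sketch of an incremental learner in the spirit of this abstract: binary (+1/-1) threshold units are added to a single hidden layer one at a time, each trained with the pocket perceptron rule, and a binary output perceptron is retrained on the hidden layer's internal representation after each addition. A simple one-vs-rest wrapper illustrates how any such binary classifier can be lifted to more than two classes. This is a sketch under stated assumptions (pocket training, error-flagging targets for new units, winner-take-all over per-class nets); it is not the paper's exact NetLines procedure, and all names such as pocket_perceptron and ConstructiveBinaryNet are hypothetical.

import numpy as np

def pocket_perceptron(X, y, epochs=200, seed=0):
    # Pocket rule: run ordinary perceptron updates, but keep ("pocket")
    # the weight vector that made the fewest training errors so far.
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])  # absorb the bias term
    w = np.zeros(Xb.shape[1])
    best_w, best_err = w.copy(), len(y) + 1
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            if np.sign(Xb[i] @ w) != y[i]:
                w = w + y[i] * Xb[i]
                err = int(np.sum(np.sign(Xb @ w) != y))
                if err < best_err:
                    best_w, best_err = w.copy(), err
    return best_w

def unit_output(w, X):
    # Binary (+1/-1) response of a single threshold unit.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.where(Xb @ w >= 0.0, 1, -1)

class ConstructiveBinaryNet:
    # Grows binary hidden units one at a time until the training set is
    # learned or a unit budget is exhausted; the output is a perceptron
    # trained on the hidden layer's internal representation.
    def __init__(self, max_hidden=10):
        self.max_hidden = max_hidden
        self.hidden = []   # weight vectors of the hidden units
        self.w_out = None  # output perceptron weights

    def _internal_repr(self, X):
        return np.column_stack([unit_output(w, X) for w in self.hidden])

    def fit(self, X, y):
        target = y.copy()  # the first unit tackles the raw task
        for _ in range(self.max_hidden):
            self.hidden.append(pocket_perceptron(X, target))
            H = self._internal_repr(X)
            self.w_out = pocket_perceptron(H, y)
            pred = unit_output(self.w_out, H)
            if np.array_equal(pred, y):
                break
            # The next unit is trained to flag the patterns the current
            # network still misclassifies.
            target = np.where(pred == y, -1, 1)
        return self

    def decision(self, X):
        Hb = np.hstack([self._internal_repr(X), np.ones((len(X), 1))])
        return Hb @ self.w_out

    def predict(self, X):
        return np.where(self.decision(X) >= 0.0, 1, -1)

def fit_one_vs_rest(X, y, n_classes, max_hidden=10):
    # One binary net per class; any binary classifier could be plugged in
    # here, which is the property the abstract's multi-class scheme uses.
    return [ConstructiveBinaryNet(max_hidden).fit(X, np.where(y == c, 1, -1))
            for c in range(n_classes)]

def predict_one_vs_rest(nets, X):
    # Winner-take-all on the output units' pre-activations breaks ties
    # between the per-class binary decisions.
    scores = np.column_stack([net.decision(X) for net in nets])
    return np.argmax(scores, axis=1)

For instance, on XOR, which no single binary unit can represent, the sketch typically succeeds with two hidden units:

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, 1, 1, -1])
net = ConstructiveBinaryNet(max_hidden=5).fit(X, y)
print(net.predict(X))  # matches y once enough units have been grown

The pocket rule here stands in for the better binary-unit trainers a full implementation would use; it is simply the smallest choice that tolerates non-separable targets, which is what makes the incremental construction terminate with a finite number of units.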


Universal Approximation of Markov Kernels by Shallow Stochastic Feedforward Networks

We establish upper bounds for the minimal number of hidden units for whi...

On the geometry of solutions and on the capacity of multi-layer neural networks with ReLU activations

Rectified Linear Units (ReLU) have become the main model for the neural ...

Bitwise Neural Networks

Based on the assumption that there exists a neural network that efficien...

Common Product Neurons

The present work develops a comparative performance of artificial neuron...

Minimal Perceptrons for Memorizing Complex Patterns

Feedforward neural networks have been investigated to understand learnin...

Techniques for Learning Binary Stochastic Feedforward Neural Networks

Stochastic binary hidden units in a multi-layer perceptron (MLP) network...

Binarization Methods for Motor-Imagery Brain-Computer Interface Classification

Successful motor-imagery brain-computer interface (MI-BCI) algorithms ei...