Internal node bagging: an explicit ensemble learning method in neural network training

05/01/2018
by Shun Yi, et al.

We introduce a novel view of dropout as an implicit ensemble learning method, one that does not specify how many nodes, or which ones, should learn a particular feature. We propose a new training method named internal node bagging: it explicitly forces a group of nodes to learn a certain feature at training time and combines those nodes into a single node at inference time. This means we can use many more parameters to improve the model's fitting ability during training while keeping the model small at inference. We test our method on several benchmark datasets and find it significantly more efficient than dropout on small models.
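The abstract does not spell out how the grouping and merging are implemented. As a rough sketch of the idea only (the class name, the dropout-style keep probability p, the group size k, and the choice of summing pre-activations are assumptions, not the authors' code), the layer below backs each output node with k internal nodes during training and folds each group into a single node for inference:

```python
import torch
import torch.nn as nn


class InternalNodeBaggingLinear(nn.Module):
    """Sketch of an internal-node-bagging style linear layer (assumed details).

    Each of the `out_features` output nodes is backed by `k` internal nodes
    during training; at inference the group is folded into a single node,
    so the deployed layer keeps the original (small) size.
    """

    def __init__(self, in_features, out_features, k=4, p=0.5):
        super().__init__()
        self.out_features, self.k, self.p = out_features, k, p
        # k internal nodes per output feature -> out_features * k weight rows
        self.weight = nn.Parameter(0.01 * torch.randn(out_features * k, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features * k))

    def forward(self, x):
        # pre-activations of all internal nodes: (batch, out_features * k)
        h = x @ self.weight.t() + self.bias
        h = h.view(-1, self.out_features, self.k)
        if self.training:
            # keep each internal node with probability p (dropout-style),
            # so the k nodes of a group are trained to learn the same feature
            mask = (torch.rand_like(h) < self.p).float()
            return (h * mask).sum(dim=2)
        # inference: sum the group's pre-activations, which is equivalent to a
        # single node whose weights are the sum of the group's weights; scale
        # by p so the expected magnitude matches training
        return self.p * h.sum(dim=2)

    def merged_linear(self):
        # export the trained layer as an ordinary small nn.Linear:
        # summing pre-activations == summing the k weight vectors per group
        w = self.p * self.weight.view(self.out_features, self.k, -1).sum(dim=1)
        b = self.p * self.bias.view(self.out_features, self.k).sum(dim=1)
        layer = nn.Linear(w.shape[1], self.out_features)
        with torch.no_grad():
            layer.weight.copy_(w)
            layer.bias.copy_(b)
        return layer
```

Under these assumptions, a trained InternalNodeBaggingLinear could be swapped for `layer.merged_linear()` before deployment, so the extra parameters exist only during training.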


Related research

Failout: Achieving Failure-Resilient Inference in Distributed Neural Networks (02/18/2020)
When a neural network is partitioned and distributed across physical nod...

Dropout with Expectation-linear Regularization (09/26/2016)
Dropout, a simple and effective way to train deep neural networks, has l...

Crowd collectiveness measure via graph-based node clique learning (12/19/2016)
Collectiveness motions of crowd systems have attracted a great deal of a...

To Drop or Not to Drop: Robustness, Consistency and Differential Privacy Properties of Dropout (03/06/2015)
Training deep belief networks (DBNs) requires optimizing a non-convex fu...

RAFEN – Regularized Alignment Framework for Embeddings of Nodes (03/03/2023)
Learning representations of nodes has been a crucial area of the graph m...

Learning with Pseudo-Ensembles (12/16/2014)
We formalize the notion of a pseudo-ensemble, a (possibly infinite) coll...

Improving neural networks by preventing co-adaptation of feature detectors (07/03/2012)
When a large feedforward neural network is trained on a small training s...
