Sparse tree-based initialization for neural networks

09/30/2022
by Patrick Lutz, et al.

Dedicated neural network (NN) architectures have been designed for specific data types (CNNs for images, RNNs for text), placing them among the state-of-the-art methods for these data. No such architecture has yet been found for tabular data, where tree ensemble methods (gradient boosting, random forests) usually deliver the best predictive performance. In this work, we propose a new sparse initialization technique for (potentially deep) multilayer perceptrons (MLPs): we first train a tree-based procedure to detect feature interactions, then use the resulting information to initialize the network, which is subsequently trained with standard stochastic gradient methods. Numerical experiments on several tabular data sets show that this simple, easy-to-use method is a strong competitor, in both generalization capacity and computation time, to default MLP initialization and even to existing, more complex deep learning solutions. Indeed, this informed MLP initialization raises the resulting networks to the level of a valid competitor to gradient boosting on tabular data. Moreover, such initializations preserve, throughout training, the weight sparsity introduced in the first layers of the network. This suggests that the initializer performs an implicit regularization during training and that the first layers act as a sparse feature extractor (much as convolutional layers do in CNNs).
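To make the idea concrete, here is a minimal, hypothetical sketch (not the authors' code) of one way such a tree-based sparse initialization could be wired up: a random forest is fitted with scikit-learn, each fitted tree is mapped to one hidden unit, and that unit is connected only to the features the tree actually splits on. The choice of RandomForestClassifier, the one-unit-per-tree mapping, and all hyperparameters are illustrative assumptions rather than the paper's exact procedure.

```python
# Minimal illustrative sketch (assumptions noted above): fit a tree ensemble,
# read off which features each tree splits on, and use those feature subsets
# as a sparse connectivity pattern for the first layer of an MLP.
import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# 1) Tree-based procedure: each shallow tree selects a small, interacting feature subset.
forest = RandomForestClassifier(n_estimators=32, max_depth=3, random_state=0).fit(X, y)

# 2) One hidden unit per tree, connected only to the features that tree uses.
n_features, n_hidden = X.shape[1], len(forest.estimators_)
mask = torch.zeros(n_hidden, n_features)
for j, tree in enumerate(forest.estimators_):
    used = np.unique(tree.tree_.feature[tree.tree_.feature >= 0])  # negative ids mark leaves
    mask[j, used.tolist()] = 1.0

# 3) Sparse initialization of the first layer; deeper layers keep their default init.
mlp = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU(), nn.Linear(n_hidden, 2))
with torch.no_grad():
    mlp[0].weight *= mask  # zero out connections that no tree suggested

# 4) Standard stochastic gradient training of the whole network.
optimizer = torch.optim.SGD(mlp.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
X_t, y_t = torch.tensor(X, dtype=torch.float32), torch.tensor(y)
for _ in range(100):
    optimizer.zero_grad()
    loss_fn(mlp(X_t), y_t).backward()
    optimizer.step()
```

The abstract reports that the sparsity introduced in the first layers tends to persist through training without explicit constraints; nothing in this toy sketch enforces that, so one could optionally re-apply the mask after each optimizer step if hard sparsity were desired.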

