Regularization Learning Networks

05/16/2018
by Ira Shavitt, et al.

Despite their impressive performance, Deep Neural Networks (DNNs) typically underperform Gradient Boosting Trees (GBTs) on many tabular-dataset learning tasks. We propose that applying a different regularization coefficient to each weight might boost the performance of DNNs by allowing them to make more use of the more relevant inputs. However, this will lead to an intractable number of hyperparameters. Here, we introduce Regularization Learning Networks (RLNs), which overcome this challenge by introducing an efficient hyperparameter tuning scheme that minimizes a new Counterfactual Loss. Our results show that RLNs significantly improve DNNs on tabular datasets, and achieve comparable results to GBTs, with the best performance achieved with an ensemble that combines GBTs and RLNs. RLNs produce extremely sparse networks, eliminating up to 99.8% of the network edges and 82% of the input features, thus providing easily interpretable models and revealing the importance that the network assigns to different inputs. RLNs could efficiently learn a single network in datasets that comprise both tabular and unstructured data, such as in the setting of medical imaging accompanied by electronic health records.
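As a rough illustration of the core idea (a hypothetical sketch, not the authors' implementation): a standard regularizer applies one coefficient to all weights, whereas a per-weight scheme assigns each weight its own coefficient. The helper name `regularized_loss` and the log-space parameterization below are illustrative assumptions.

```python
import numpy as np

def regularized_loss(data_loss, weights, log_lambdas):
    """Total loss = empirical loss + sum_i exp(log_lambda_i) * |w_i|.

    Each weight w_i gets its own regularization coefficient,
    parameterized as exp(log_lambda_i) to keep it positive.
    (Illustrative sketch only; the paper tunes these coefficients
    with a Counterfactual Loss rather than fixing them by hand.)
    """
    penalty = np.sum(np.exp(log_lambdas) * np.abs(weights))
    return data_loss + penalty

weights = np.array([0.5, -1.2, 0.0, 2.0])
# One coefficient per weight: a large coefficient (e.g. log_lambda = 1.0)
# pushes its weight toward zero, producing the sparsity described above.
log_lambdas = np.array([0.0, -2.0, 1.0, -4.0])
total = regularized_loss(1.0, weights, log_lambdas)
print(total)
```

With per-weight coefficients, weights attached to less relevant inputs can be penalized heavily (and driven to zero) while relevant inputs keep small penalties, which is what yields the extreme sparsity reported in the abstract.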
