
TANGOS: Regularizing Tabular Neural Networks through Gradient Orthogonalization and Specialization

by   Alan Jeffares, et al.
University of Cambridge

Despite their success with unstructured data, deep neural networks are not yet a panacea for structured tabular data. In the tabular domain, their efficacy crucially relies on various forms of regularization to prevent overfitting and provide strong generalization performance. Existing regularization techniques include broad modelling decisions such as choice of architecture, loss functions, and optimization methods. In this work, we introduce Tabular Neural Gradient Orthogonalization and Specialization (TANGOS), a novel framework for regularization in the tabular setting built on latent unit attributions. The gradient attribution of an activation with respect to a given input feature suggests how the neuron attends to that feature, and is often employed to interpret the predictions of deep networks. In TANGOS, we take a different approach and incorporate neuron attributions directly into training to encourage orthogonalization and specialization of latent attributions in a fully-connected network. Our regularizer encourages neurons to focus on sparse, non-overlapping input features and results in a set of diverse and specialized latent units. In the tabular domain, we demonstrate that our approach can lead to improved out-of-sample generalization performance, outperforming other popular regularization methods. We provide insight into why our regularizer is effective and demonstrate that TANGOS can be applied jointly with existing methods to achieve even greater generalization performance.
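The two penalty terms described above can be sketched numerically. Given a matrix of latent-unit attributions (one row per neuron, one column per input feature; in practice these would be gradients of each activation with respect to the input), specialization penalizes dense attribution vectors and orthogonalization penalizes overlap between neurons. This is a minimal NumPy illustration; the function name and exact normalization are assumptions for exposition, not the authors' precise formulation.

```python
import numpy as np

def tangos_penalties(attributions):
    """Sketch of TANGOS-style penalty terms.

    attributions: array of shape (n_neurons, n_features), where row j
    holds neuron j's gradient attributions over the input features.
    Returns (specialization, orthogonalization) penalties.
    Illustrative only; normalization choices here are assumptions.
    """
    A = np.asarray(attributions, dtype=float)

    # Specialization: mean absolute attribution (an L1-style penalty),
    # pushing each neuron toward a sparse set of input features.
    specialization = np.mean(np.abs(A))

    # Orthogonalization: mean absolute pairwise cosine similarity
    # between distinct neurons' attribution vectors, pushing neurons
    # to attend to non-overlapping features.
    norms = np.linalg.norm(A, axis=1, keepdims=True) + 1e-12
    U = A / norms                       # unit-normalized rows
    cos = U @ U.T                       # pairwise cosine similarities
    n = cos.shape[0]
    off_diag = cos[~np.eye(n, dtype=bool)]
    orthogonalization = np.mean(np.abs(off_diag))

    return specialization, orthogonalization
```

For two neurons attending to disjoint features, e.g. `[[1, 0], [0, 1]]`, the orthogonalization term is zero; for two neurons attending to the same feature, e.g. `[[1, 0], [1, 0]]`, it is maximal. In training, a weighted sum of these terms would be added to the task loss.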

Related articles:

- Reducing Overfitting in Deep Networks by Decorrelating Representations
- SeReNe: Sensitivity based Regularization of Neurons for Structured Sparsity in Neural Networks
- Diagnostic Visualization for Deep Neural Networks Using Stochastic Gradient Langevin Dynamics
- Adversarially Robust Training through Structured Gradient Regularization
- Regularizing Deep Neural Networks by Noise: Its Interpretation and Optimization
- Effective Neural Network L_0 Regularization With BinMask
- Vector Neurons: A General Framework for SO(3)-Equivariant Networks