Sparse Neural Additive Model: Interpretable Deep Learning with Feature Selection via Group Sparsity

02/25/2022
by Shiyun Xu, et al.

Interpretable machine learning has demonstrated impressive performance while preserving explainability. In particular, neural additive models (NAM) bring interpretability to black-box deep learning and achieve state-of-the-art accuracy among the large family of generalized additive models. To equip NAM with feature selection and improve its generalization, we propose the sparse neural additive model (SNAM), which employs group sparsity regularization (e.g. Group LASSO): each feature is learned by a sub-network whose trainable parameters are clustered as a group. We study the theoretical properties of SNAM with novel techniques that handle the non-parametric truth, extending classical results for sparse linear models such as the LASSO, which apply only to the parametric truth. Specifically, we show that SNAM trained with subgradient or proximal gradient descent provably converges to zero training loss as t→∞, and that the estimation error of SNAM vanishes asymptotically as n→∞. We also prove that SNAM, like the LASSO, achieves exact support recovery, i.e. perfect feature selection, under appropriate regularization. Moreover, we show that SNAM generalizes well and preserves 'identifiability', recovering each feature's effect. We validate our theory via extensive experiments, which further confirm the accuracy and efficiency of SNAM.
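
The construction is easy to sketch: writing theta_j for the trainable parameters of the j-th sub-network f_j, SNAM fits the additive prediction f(x) = beta_0 + f_1(x_1) + ... + f_p(x_p) while minimizing the training loss plus the group-lasso penalty lambda * (||theta_1||_2 + ... + ||theta_p||_2), so an entire feature's sub-network can be shrunk to zero at once. The PyTorch sketch below is illustrative only, not the authors' implementation; the names FeatureNet, SNAM, group_lasso_penalty, and lam, the network widths, and the synthetic data are all assumptions made for the example.

    import torch
    import torch.nn as nn

    class FeatureNet(nn.Module):
        # One sub-network per feature: maps the scalar x_j to a scalar f_j(x_j).
        def __init__(self, hidden=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1)
            )

        def forward(self, xj):
            return self.net(xj)

    class SNAM(nn.Module):
        # Additive model: prediction = bias + sum_j f_j(x_j).
        def __init__(self, n_features, hidden=32):
            super().__init__()
            self.subnets = nn.ModuleList(
                FeatureNet(hidden) for _ in range(n_features)
            )
            self.bias = nn.Parameter(torch.zeros(1))

        def forward(self, x):
            # x has shape (batch, n_features); column j feeds sub-network j.
            contribs = torch.cat(
                [f(x[:, j:j + 1]) for j, f in enumerate(self.subnets)], dim=1
            )
            return self.bias + contribs.sum(dim=1)

    def group_lasso_penalty(model):
        # sum_j ||theta_j||_2, where theta_j stacks ALL parameters of
        # sub-network j, so the penalty can switch a whole feature off.
        # The small epsilon keeps the subgradient finite at exactly zero.
        return sum(
            torch.sqrt(sum(p.pow(2).sum() for p in f.parameters()) + 1e-12)
            for f in model.subnets
        )

    # Subgradient descent on synthetic data (illustrative only).
    torch.manual_seed(0)
    X = torch.randn(256, 10)
    y = torch.sin(X[:, 0]) + X[:, 1] ** 2   # only features 0 and 1 are relevant
    model = SNAM(n_features=10)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    lam = 1e-3                              # regularization strength (assumed value)

    for step in range(2000):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y) + lam * group_lasso_penalty(model)
        loss.backward()
        opt.step()

A proximal gradient variant, the other optimizer covered by the paper's convergence result, would instead take a plain gradient step on the squared loss and then apply the standard group soft-thresholding map theta_j <- max(0, 1 - lr * lam / ||theta_j||_2) * theta_j to each sub-network's parameter group, which produces exact zeros, i.e. explicit feature selection, rather than merely small groups.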


Related research

06/18/2012 - Group Sparse Additive Models
We consider the problem of sparse variable selection in nonparametric ad...

12/18/2020 - Flexible, Non-parametric Modeling Using Regularized Neural Networks
Non-parametric regression, such as generalized additive models (GAMs), i...

10/03/2022 - Sparsity by Redundancy: Solving L_1 with a Simple Reparametrization
We identify and prove a general principle: L_1 sparsity can be achieved ...

02/02/2023 - The Contextual Lasso: Sparse Linear Models via Deep Neural Networks
Sparse linear models are a gold standard tool for interpretable machine ...

10/19/2020 - Factorization Machines with Regularization for Sparse Feature Interactions
Factorization machines (FMs) are machine learning predictive models base...

12/14/2020 - E2E-FS: An End-to-End Feature Selection Method for Neural Networks
Classic embedded feature selection algorithms are often divided in two l...

07/08/2022 - ControlBurn: Nonlinear Feature Selection with Sparse Tree Ensembles
ControlBurn is a Python package to construct feature-sparse tree ensembl...
