Interpretable Set Functions

05/31/2018
by Andrew Cotter, et al.

We propose learning flexible but interpretable functions that aggregate a variable-length set of permutation-invariant feature vectors to predict a label. We use a deep lattice network model so that we can architect the model structure to enhance interpretability, and we add monotonicity constraints between inputs and outputs. We then use the proposed set function to automate the engineering of dense, interpretable features from sparse categorical features, a pipeline we call the semantic feature engine. Experiments on real-world data show that the achieved accuracy is similar to that of deep sets or deep neural networks, while the resulting models are easier to debug and understand.
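
The paper's model is a deep lattice network, but the core idea, a permutation-invariant aggregation over a variable-length set of feature vectors followed by a monotone readout, can be sketched in a few lines. The NumPy example below is only an illustrative stand-in, not the authors' architecture: the names (phi, rho, set_function), the dimensions, and the nonnegative-weight trick used for monotonicity are assumptions made for this sketch.

```python
import numpy as np

def phi(x, W_phi, b_phi):
    """Per-element embedding, applied identically to every feature vector in the set."""
    return np.maximum(0.0, x @ W_phi + b_phi)  # simple ReLU layer

def rho(z, w_rho, b_rho):
    """Readout with nonnegative weights, so the score is non-decreasing in each
    pooled-embedding coordinate (a crude stand-in for lattice monotonicity constraints)."""
    return z @ np.abs(w_rho) + b_rho

def set_function(X, params):
    """Score a variable-length set of feature vectors.

    X has shape (n_items, d); mean pooling over rows makes the score
    invariant to the ordering of the items and insensitive to set size.
    """
    W_phi, b_phi, w_rho, b_rho = params
    pooled = phi(X, W_phi, b_phi).mean(axis=0)
    return rho(pooled, w_rho, b_rho)

# Toy usage with made-up dimensions: 3 items, 4 raw features, 8-dim embedding.
rng = np.random.default_rng(0)
params = (rng.normal(size=(4, 8)), np.zeros(8), rng.normal(size=8), 0.0)
X = rng.normal(size=(3, 4))
print(np.isclose(set_function(X, params), set_function(X[::-1], params)))  # True: order does not matter
```

In the paper itself the aggregation feeds a deep lattice network whose lattice layers enforce the monotonicity constraints directly; the nonnegative-weight readout above is only the simplest analogue of that behavior.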

Related research

03/23/2023 · Take 5: Interpretable Image Classification with a Handful of Features
Deep Neural Networks use thousands of mostly incomprehensible features t...

09/10/2017 · Classifying Unordered Feature Sets with Convolutional Deep Averaging Networks
Unordered feature sets are a nonstandard data structure that traditional...

05/27/2022 · Neural Basis Models for Interpretability
Due to the widespread use of complex machine learning models in real-wor...

01/30/2020 · Learn to Predict Sets Using Feed-Forward Neural Networks
This paper addresses the task of set prediction using deep feed-forward ...

04/03/2019 · Interpretable Deep Learning for Two-Prong Jet Classification with Jet Spectra
Classification of jets with deep learning has gained significant attenti...

10/19/2020 · A Framework to Learn with Interpretation
With increasingly widespread use of deep neural networks in critical dec...

10/19/2021 · AEFE: Automatic Embedded Feature Engineering for Categorical Features
The challenge of solving data mining problems in e-commerce applications...
