Predicting Census Survey Response Rates via Interpretable Nonparametric Additive Models with Structured Interactions

08/24/2021
by Shibal Ibrahim, et al.

Accurate and interpretable prediction of survey response rates is important from an operational standpoint. The US Census Bureau's well-known ROAM application uses principled statistical models trained on US Census Planning Database data to identify hard-to-survey areas. An earlier crowdsourcing competition showed that an ensemble of regression trees gave the best performance in predicting survey response rates; however, the corresponding models could not be adopted for the intended application due to limited interpretability. In this paper, we present new interpretable statistical methods that predict survey response rates with high accuracy. We study sparse nonparametric additive models with pairwise interactions via ℓ_0-regularization, as well as hierarchically structured variants that provide enhanced interpretability. Despite strong methodological underpinnings, such models can be computationally challenging to fit; we present new scalable algorithms for learning them. We also establish novel non-asymptotic error bounds for the proposed estimators. Experiments on the US Census Planning Database demonstrate that our methods yield high-quality predictive models that permit actionable interpretability for different segments of the population. Notably, our methods provide significant gains in interpretability without sacrificing predictive performance relative to state-of-the-art black-box machine learning methods based on gradient boosting and feedforward neural networks. Our Python implementation is available at https://github.com/ShibalIbrahim/Additive-Models-with-Structured-Interactions.
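To make the model class concrete, the following is a schematic formulation of a sparse nonparametric additive model with pairwise interactions under ℓ_0-style cardinality constraints. The notation (components f_j and f_{jk}, budgets s_1 and s_2) is illustrative only; the authors' exact objective, smoothness penalties, and algorithmic details may differ:

\[
\min_{\beta_0,\,\{f_j\},\,\{f_{jk}\}} \;\; \sum_{i=1}^{n} \Big( y_i - \beta_0 - \sum_{j=1}^{p} f_j(x_{ij}) - \sum_{j<k} f_{jk}(x_{ij}, x_{ik}) \Big)^2
\quad \text{s.t.} \quad
\sum_{j=1}^{p} \mathbf{1}\{ f_j \not\equiv 0 \} \le s_1, \qquad
\sum_{j<k} \mathbf{1}\{ f_{jk} \not\equiv 0 \} \le s_2 .
\]

A hierarchically structured (strong-hierarchy) variant additionally allows an interaction component f_{jk} to be nonzero only if both main effects f_j and f_k are nonzero, so that selected interactions remain anchored to selected main effects, which aids interpretability.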
