Interpretable models for extrapolation in scientific machine learning

12/16/2022
by Eric S. Muckley, et al.

Data-driven models are central to scientific discovery. In efforts to achieve state-of-the-art model accuracy, researchers are employing increasingly complex machine learning algorithms that often outperform simple regressions in interpolative settings (e.g., random k-fold cross-validation) but suffer from poor extrapolation performance, portability, and human interpretability, which limits their potential for facilitating novel scientific insight. Here we examine the trade-off between model performance and interpretability across a broad range of science and engineering problems, with an emphasis on materials science datasets. We compare the performance of black box random forest and neural network machine learning algorithms to that of single-feature linear regressions which are fitted using interpretable input features discovered by a simple random search algorithm. For interpolation problems, the average prediction errors of linear regressions were twice as high as those of black box models. Remarkably, when prediction tasks required extrapolation, linear models yielded average error only 5% higher than that of black box models, and outperformed black box models in roughly 40% of the tested prediction tasks, which suggests that they may be desirable over complex algorithms in many extrapolation problems because of their superior interpretability, lower computational overhead, and ease of use. The results challenge the common assumption that extrapolative models for scientific machine learning are constrained by an inherent trade-off between performance and interpretability.
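The comparison described above can be sketched in a few lines of scikit-learn. This is a minimal illustration, not the authors' code: it assumes a synthetic dataset, a small hand-picked pool of candidate feature transforms, and a simple extrapolation split (train on small values of one variable, test on large values). The random search samples (column, transform) pairs and keeps the single feature that gives the best linear fit.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic dataset (illustrative assumption): the target depends
# nonlinearly on one of four raw input columns, plus small noise.
X = rng.uniform(1.0, 10.0, size=(300, 4))
y = 2.0 * np.log(X[:, 0]) + rng.normal(0.0, 0.05, size=300)

# Candidate interpretable transforms for the random feature search.
transforms = [lambda c: c, np.log, np.sqrt, lambda c: 1.0 / c, np.square]

def best_single_feature(X_tr, y_tr, n_iter=200):
    """Randomly sample (column, transform) pairs; keep the best linear fit."""
    best = None
    for _ in range(n_iter):
        col = int(rng.integers(X_tr.shape[1]))
        fn = transforms[int(rng.integers(len(transforms)))]
        feat = fn(X_tr[:, col]).reshape(-1, 1)
        model = LinearRegression().fit(feat, y_tr)
        score = model.score(feat, y_tr)  # R^2 on the training data
        if best is None or score > best[0]:
            best = (score, col, fn, model)
    return best[1], best[2], best[3]

# Extrapolation split: train on the smallest values of column 0,
# test on the largest (unlike random k-fold, which interpolates).
order = np.argsort(X[:, 0])
train, test = order[:200], order[200:]

col, fn, lin = best_single_feature(X[train], y[train])
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[train], y[train])

lin_err = mean_absolute_error(y[test], lin.predict(fn(X[test][:, col]).reshape(-1, 1)))
rf_err = mean_absolute_error(y[test], rf.predict(X[test]))
print(f"extrapolation MAE  linear: {lin_err:.3f}  random forest: {rf_err:.3f}")
```

On this toy problem the discovered feature (here, log of column 0) lets the linear model keep extrapolating the true trend, while the tree ensemble plateaus at the edge of its training range, which mirrors the paper's central observation.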

Related research

- 05/06/2021: Partially Interpretable Estimators (PIE): Black-Box-Refined Interpretable Machine Learning ("We propose Partially Interpretable Estimators (PIE) which attribute a pr...")
- 02/11/2020: Lifting Interpretability-Performance Trade-off via Automated Feature Engineering ("Complex black-box predictive models may have high performance, but lack ...")
- 07/10/2023: Interpreting and generalizing deep learning in physics-based problems with functional linear models ("Although deep learning has achieved remarkable success in various scient...")
- 04/12/2021: An Approach to Symbolic Regression Using Feyn ("In this article we introduce the supervised machine learning tool called...")
- 10/30/2017: Contextual Regression: An Accurate and Conveniently Interpretable Nonlinear Model for Mining Discovery from Scientific Data ("Machine learning algorithms such as linear regression, SVM and neural ne...")
- 09/12/2020: Interpretable Machine Learning Approaches to Prediction of Chronic Homelessness ("We introduce a machine learning approach to predict chronic homelessness...")
- 09/13/2019: A Double Penalty Model for Interpretability ("Modern statistical learning techniques have often emphasized prediction ...")
