LLMs Understand Glass-Box Models, Discover Surprises, and Suggest Repairs

08/02/2023
by Benjamin J. Lengerich, et al.

We show that large language models (LLMs) are remarkably good at working with interpretable models that decompose complex outcomes into univariate, graph-represented components. By adopting a hierarchical approach to reasoning, LLMs can provide comprehensive model-level summaries without ever requiring the entire model to fit in context. This enables LLMs to apply their extensive background knowledge to automate common tasks in data science: detecting anomalies that contradict prior knowledge, describing potential reasons for those anomalies, and suggesting repairs that would remove them. We use multiple examples from healthcare to demonstrate these new capabilities of LLMs, with particular emphasis on Generalized Additive Models (GAMs). Finally, we present an open-source package that serves as an LLM-GAM interface.
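The hierarchical approach the abstract describes can be sketched in plain Python: a GAM's prediction decomposes into a sum of univariate shape functions, so each component can be serialized to text and summarized by an LLM independently, with a final pass combining the per-component summaries. The feature names, bin edges, and contribution values below are invented for illustration and are not the paper's actual API.

```python
# Minimal sketch: a fitted GAM as univariate shape functions,
# risk = sum_i f_i(x_i). Each component is small enough to fit in an
# LLM context window on its own, so the whole model never has to.
# All feature names and numbers here are hypothetical.

shape_functions = {
    # feature -> list of (low_edge, high_edge, contribution_to_log_odds)
    "age": [(0, 18, -0.60), (18, 50, -0.10), (50, 80, 0.35), (80, 120, 0.90)],
    "systolic_bp": [(0, 90, 0.40), (90, 140, 0.00), (140, 250, 0.55)],
}

def component_to_prompt(feature, segments):
    """Render one univariate graph component as compact text for an LLM prompt."""
    lines = [f"Graph of term '{feature}' (x-range: contribution):"]
    for lo, hi, c in segments:
        lines.append(f"  [{lo}, {hi}): {c:+.2f}")
    return "\n".join(lines)

def model_to_prompts(model):
    """Hierarchical step 1: one prompt per component. An LLM would summarize
    each prompt separately; a final prompt would then combine those
    per-component summaries into a model-level description."""
    return {f: component_to_prompt(f, segs) for f, segs in model.items()}

prompts = model_to_prompts(shape_functions)
print(prompts["age"])
```

In this framing, anomaly detection amounts to asking the LLM whether each component's shape contradicts background knowledge (e.g. a risk contribution that drops where prior knowledge says it should rise), and a suggested repair is an edit to that one component's graph.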
