LLMs Understand Glass-Box Models, Discover Surprises, and Suggest Repairs

08/02/2023
by Benjamin J. Lengerich, et al.

We show that large language models (LLMs) are remarkably good at working with interpretable models that decompose complex outcomes into univariate graph-represented components. By adopting a hierarchical approach to reasoning, LLMs can provide comprehensive model-level summaries without ever requiring the entire model to fit in context. This approach enables LLMs to apply their extensive background knowledge to automate common tasks in data science, such as detecting anomalies that contradict prior knowledge, describing potential reasons for the anomalies, and suggesting repairs that would remove them. We use multiple examples from healthcare to demonstrate the utility of these new capabilities of LLMs, with particular emphasis on Generalized Additive Models (GAMs). Finally, we release an open-source package that serves as an LLM-GAM interface.
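The core idea above, each GAM component is a univariate graph (feature value mapped to an additive contribution), so it can be serialized to text and summarized one component at a time, meaning the whole model never has to fit in a single LLM context window, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual API: the data structure, function names, and the stubbed `ask_llm` callable are all assumptions.

```python
# Hypothetical sketch of hierarchical summarization of a glass-box GAM.
# Each component maps feature values to additive contributions; we serialize
# components individually so the full model never enters one LLM context.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class GAMComponent:
    feature: str
    # (feature value, additive contribution to the prediction) pairs
    graph: List[Tuple[float, float]]

def component_to_prompt(c: GAMComponent, max_points: int = 8) -> str:
    """Serialize one univariate component as compact text for an LLM."""
    step = max(1, len(c.graph) // max_points)  # subsample long graphs
    points = ", ".join(f"{x:g} -> {y:+.2f}" for x, y in c.graph[::step])
    return f"Feature '{c.feature}': contribution by value: {points}"

def summarize_model(components: List[GAMComponent],
                    ask_llm: Callable[[str], str]) -> str:
    """Hierarchical summary: query the LLM per component, then combine."""
    per_component = [ask_llm(component_to_prompt(c)) for c in components]
    return ask_llm("Summarize these component descriptions:\n"
                   + "\n".join(per_component))

# Example component; a real one would come from a fitted GAM such as an
# Explainable Boosting Machine.
age = GAMComponent("age", [(20, -0.5), (40, 0.0), (60, 0.4), (80, 0.9)])
print(component_to_prompt(age))
```

In practice `ask_llm` would wrap a chat-model API call, and the per-component prompts are where the LLM can flag graph shapes that contradict prior knowledge (e.g., a risk contribution that drops at extreme feature values).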
