Iterated Decomposition: Improving Science Q&A by Supervising Reasoning Processes

01/04/2023
by Justin Reppert, et al.

Language models (LMs) can perform complex reasoning either end-to-end, with hidden latent state, or compositionally, with transparent intermediate state. Composition offers benefits for interpretability and safety, but may need workflow support and infrastructure to remain competitive. We describe iterated decomposition, a human-in-the-loop workflow for developing and refining compositional LM programs. We improve the performance of compositions by zooming in on failing components and refining them through decomposition, additional context, chain of thought, etc. To support this workflow, we develop ICE, an open-source tool for visualizing the execution traces of LM programs. We apply iterated decomposition to three real-world tasks and improve the accuracy of LM programs over less compositional baselines: describing the placebo used in a randomized controlled trial (from 25%), evaluating participant adherence to a medical intervention (from 53%), and answering NLP questions on the Qasper dataset (from 38%). These case studies support a workflow that, if automated, could keep ML systems interpretable and safe even as they scale to increasingly complex tasks.
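To make the idea concrete, here is a minimal sketch of a compositional LM program that records an inspectable execution trace, in the spirit of the workflow described above. Everything here is an illustrative assumption: `fake_lm` is a deterministic stand-in for a real model call, and none of the function names reflect the actual ICE API.

```python
def fake_lm(prompt: str) -> str:
    """Stand-in for a language model call; returns canned answers."""
    canned = {
        "Does the trial mention a placebo?": "Yes",
        "What was the placebo made of?": "A saline injection",
    }
    return canned.get(prompt, "Unknown")


def decompose(question: str) -> list[str]:
    """Split a hard question into easier subquestions.

    Hand-written here; in practice this step could itself be an LM call,
    and is a natural target for iterated refinement.
    """
    return [
        "Does the trial mention a placebo?",
        "What was the placebo made of?",
    ]


def run_program(question: str) -> tuple[str, list[tuple[str, str]]]:
    """Answer by composition, recording every intermediate step.

    The trace exposes each (subquestion, answer) pair, so a failing
    component can be zoomed in on and refined in isolation.
    """
    trace: list[tuple[str, str]] = []
    sub_answers: list[str] = []
    for sub in decompose(question):
        answer = fake_lm(sub)
        trace.append((sub, answer))
        sub_answers.append(answer)
    # Naive synthesis step; a real program might use another LM call here.
    final = "; ".join(sub_answers)
    trace.append((question, final))
    return final, trace


answer, trace = run_program("Describe the placebo used in the trial.")
```

The point of the trace is that each intermediate state is transparent: if the final answer is wrong, you can see exactly which subquestion produced a bad answer and refine only that component, rather than debugging an opaque end-to-end call.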


research
10/23/2022

Learning to Perform Complex Tasks through Compositional Fine-Tuning of Language Models

How to usefully encode compositional task structure has long been a core...
research
03/16/2023

ART: Automatic multi-step reasoning and tool-use for large language models

Large language models (LLMs) can perform complex reasoning in few- and z...
research
07/23/2018

Explainable Neural Computation via Stack Neural Module Networks

In complex inferential tasks like question answering, machine learning m...
research
10/06/2022

ReAct: Synergizing Reasoning and Acting in Language Models

While large language models (LLMs) have demonstrated impressive capabili...
research
07/07/2023

Exploring and Characterizing Large Language Models For Embedded System Development and Debugging

Large language models (LLMs) have shown remarkable abilities to generate...
research
06/08/2023

Interpretable Medical Diagnostics with Structured Data Extraction by Large Language Models

Tabular data is often hidden in text, particularly in medical diagnostic...
research
05/30/2023

Dissecting Chain-of-Thought: A Study on Compositional In-Context Learning of MLPs

Chain-of-thought (CoT) is a method that enables language models to handl...
