Teaching Algorithmic Reasoning via In-context Learning

11/15/2022
by Hattie Zhou, et al.

Large language models (LLMs) have shown increasing in-context learning capabilities through scaling up model and data size. Despite this progress, LLMs are still unable to solve algorithmic reasoning problems. While providing a rationale with the final answer has led to further improvements in multi-step reasoning problems, Anil et al. 2022 showed that even simple algorithmic reasoning tasks such as parity are far from solved. In this work, we identify and study four key stages for successfully teaching algorithmic reasoning to LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills simultaneously (skill accumulation), (3) teaching how to combine skills (skill composition) and (4) teaching how to use skills as tools. We show that it is possible to teach algorithmic reasoning to LLMs via in-context learning, which we refer to as algorithmic prompting. We evaluate our approach on a variety of arithmetic and quantitative reasoning tasks, and demonstrate significant boosts in performance over existing prompting techniques. In particular, for long parity, addition, multiplication and subtraction, we achieve an error reduction of approximately 10x, 9x, 5x and 2x respectively compared to the best available baselines.
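The core idea of algorithmic prompting is to make every intermediate step of the target algorithm explicit in the in-context demonstrations, rather than showing only input-output pairs or a loosely worded rationale. As a rough illustration (the helper function and prompt wording below are my own and not the paper's exact format), a fully worked parity demonstration could be generated like this in Python:

# Sketch of an "algorithmic prompt" in the spirit of the paper: each
# demonstration walks through every step of the parity algorithm
# (a running XOR over the bits) instead of stating only the final answer.
def parity_demonstration(bits: str) -> str:
    """Build one worked example that spells out the parity algorithm step by step."""
    lines = [f"Q: What is the parity of {bits}?"]
    running = 0
    for i, b in enumerate(bits):
        running ^= int(b)  # fold the next bit into the running parity
        lines.append(f"Step {i + 1}: bit={b}, running parity={running}")
    lines.append(f"A: {running}")
    return "\n".join(lines)

# A few fully worked demonstrations, followed by the query the LLM must complete.
prompt = "\n\n".join(
    [parity_demonstration(b) for b in ["1011", "0100"]]
    + ["Q: What is the parity of 110101?"]
)
print(prompt)

The same pattern carries over to the other tasks studied in the paper, for example spelling out digit-by-digit carries for addition, and demonstrations for several such skills can be placed in one prompt to study skill accumulation and composition.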


Related research

08/01/2023 – Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models
We consider the problem of eliciting compositional generalization capabi...

10/06/2022 – Teaching Neural Module Networks to Do Arithmetic
Answering complex questions that require multi-step multi-type reasoning...

03/26/2021 – SKID RAW: Skill Discovery from Raw Trajectories
Integrating robots in complex everyday environments requires a multitude...

04/09/2020 – Injecting Numerical Reasoning Skills into Language Models
Large pre-trained language models (LMs) are known to encode substantial ...

05/14/2023 – Learning Non-linguistic Skills without Sacrificing Linguistic Proficiency
The field of Math-NLP has witnessed significant growth in recent years, ...

05/23/2023 – When Does Aggregating Multiple Skills with Multi-Task Learning Work? A Case Study in Financial NLP
Multi-task learning (MTL) aims at achieving a better model by leveraging...

07/03/2023 – ChatGPT is not a pocket calculator – Problems of AI-chatbots for teaching Geography
The recent success of large language models and AI chatbots such as Chat...
