Large Language Model Programs

05/09/2023
by Imanol Schlag, et al.

In recent years, large pre-trained language models (LLMs) have demonstrated the ability to follow instructions and to perform novel tasks from a few examples. The possibility of parameterising an LLM through such in-context examples widens its capabilities at a much lower cost than finetuning. We extend this line of reasoning and present a method that further expands the capabilities of an LLM by embedding it within an algorithm or program. To demonstrate the benefits of this approach, we present an illustrative example of evidence-supported question answering. We obtain a 6.4% improvement over the chain-of-thought baseline through a more algorithmic approach, without any finetuning. Furthermore, we highlight recent work from this perspective and discuss its advantages and disadvantages in comparison to the standard approaches.
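
To make the idea concrete, here is a minimal, hedged sketch of what such an "LLM program" could look like for evidence-supported question answering: rather than packing everything into a single chain-of-thought prompt, the language model is called from inside a small algorithm that first filters candidate evidence and only then produces an answer. The `llm` callable, the 0-to-10 scoring prompt, and the `rank_evidence`/`answer_with_evidence` helpers are illustrative assumptions for this sketch, not the authors' actual interface or prompts.

```python
from typing import Callable, List

# Hypothetical LLM interface: any function mapping a prompt string to a
# completion string (e.g., a thin wrapper around an API client). This is
# a placeholder, not a real library call.
LLM = Callable[[str], str]


def rank_evidence(llm: LLM, question: str,
                  paragraphs: List[str], top_k: int = 2) -> List[str]:
    """Ask the LLM to score each candidate paragraph and keep the top_k.

    The 0-to-10 scoring prompt is an illustrative choice, not the
    paper's prompt.
    """
    def score(paragraph: str) -> float:
        reply = llm(
            f"Question: {question}\n"
            f"Paragraph: {paragraph}\n"
            "On a scale of 0 to 10, how useful is this paragraph for "
            "answering the question? Reply with a single number."
        )
        try:
            return float(reply.strip().split()[0])
        except (ValueError, IndexError):
            return 0.0  # unparseable replies rank last
    return sorted(paragraphs, key=score, reverse=True)[:top_k]


def answer_with_evidence(llm: LLM, question: str,
                         paragraphs: List[str]) -> str:
    """The 'program': filter evidence first, then answer, so the
    surrounding code rather than a single prompt carries the control flow."""
    evidence = rank_evidence(llm, question, paragraphs)
    context = "\n".join(f"- {p}" for p in evidence)
    return llm(
        f"Evidence:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer step by step, citing only the evidence above."
    )
```

The point of this decomposition is that each LLM call solves one narrow subproblem; the overall task structure lives in ordinary program logic instead of the model's context window.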

Related research

- Exploring Neural Net Augmentation to BERT for Question Answering on SQUAD 2.0 (08/04/2019): Enhancing machine capabilities to answer questions has been a topic of c...
- When to Fold'em: How to answer Unanswerable questions (05/01/2021): We present 3 different question-answering models trained on the SQuAD2.0...
- Do Language Models Understand Measurements? (10/23/2022): Recent success of pre-trained language models (PLMs) has stimulated inte...
- ThoughtSource: A central hub for large language model reasoning data (01/27/2023): Large language models (LLMs) such as GPT-3 and ChatGPT have recently dem...
- Revisiting Parallel Context Windows: A Frustratingly Simple Alternative and Chain-of-Thought Deterioration (05/24/2023): We identify two crucial limitations in the evaluation of recent parallel...
- LogiCoT: Logical Chain-of-Thought Instruction-Tuning Data Collection with GPT-4 (05/20/2023): Generative Pre-trained Transformer 4 (GPT-4) demonstrates impressive cha...
- Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models (08/20/2023): Current literature, aiming to surpass the "Chain-of-Thought" approach, o...
