Induced Natural Language Rationales and Interleaved Markup Tokens Enable Extrapolation in Large Language Models

08/24/2022
by   Mirelle Bueno, et al.

The ability to extrapolate, i.e., to make predictions on sequences that are longer than those presented as training examples, is a challenging problem for current deep learning models. Recent work shows that this limitation persists in state-of-the-art Transformer-based models. Most solutions to this problem use specific architectures or training methods that do not generalize to other tasks. We demonstrate that large language models can succeed in extrapolation without modifying their architecture or training procedure. Our experimental results show that generating step-by-step rationales and introducing marker tokens are both required for effective extrapolation. First, we induce the model to produce step-by-step rationales before outputting the answer, which effectively communicates the task to the model. However, as sequences become longer, we find that current models struggle to keep track of token positions. To address this issue, we interleave output tokens with markup tokens that act as explicit positional and counting symbols. Our findings show how these two complementary approaches enable remarkable sequence extrapolation and highlight a limitation of current architectures: they do not generalize effectively without explicit surface-form guidance. Code available at https://github.com/MirelleB/induced-rationales-markup-tokens
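The abstract describes two complementary prompting ideas: inducing a step-by-step rationale before the final answer, and interleaving output tokens with markup tokens that make positions and counts explicit. The sketch below shows how such a prompt could be assembled for a toy sequence-reversal task; the marker format ("<1>", "<2>", ...), the task, and the helper names are illustrative assumptions, not the exact format used in the paper's repository.

# Minimal sketch of the two prompting ideas summarized in the abstract.
# Assumptions: the "<i>" positional markers, the reversal task, and the
# few-shot layout are illustrative; the paper's repository may differ.

def with_markup(tokens):
    """Interleave each output token with an explicit positional marker."""
    return " ".join(f"<{i}> {tok}" for i, tok in enumerate(tokens, start=1))

def build_prompt(sequence):
    """Few-shot prompt: rationale first, then the marked-up answer."""
    example_input = ["b", "a", "c"]
    example_output = list(reversed(example_input))
    rationale = (
        "Let's think step by step. "
        f"The input has {len(example_input)} items; "
        "to reverse it, I copy them from last to first."
    )
    demonstration = (
        f"Input: {' '.join(example_input)}\n"
        f"Rationale: {rationale}\n"
        f"Output: {with_markup(example_output)}\n\n"
    )
    # The model is then asked to continue from "Rationale:", so it produces
    # its own step-by-step reasoning before emitting the marked-up output.
    return demonstration + f"Input: {' '.join(sequence)}\nRationale:"

if __name__ == "__main__":
    print(build_prompt(list("extrapolate")))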

