oLMpics – On what Language Model Pre-training Captures

by Alon Talmor, et al.

The recent success of pre-trained language models (LMs) has spurred widespread interest in the language capabilities that they possess. However, efforts to understand whether LM representations are useful for symbolic reasoning tasks have been limited and scattered. In this work, we propose eight reasoning tasks, which conceptually require operations such as comparison, conjunction, and composition. A fundamental challenge is to understand whether the performance of an LM on a task should be attributed to the pre-trained representations or to the process of fine-tuning on the task data. To address this, we propose an evaluation protocol that includes both zero-shot evaluation (no fine-tuning) and a comparison of the learning curve of a fine-tuned LM to the learning curves of multiple controls, which paints a rich picture of the LM's capabilities. Our main findings are that: (a) different LMs exhibit qualitatively different reasoning abilities, e.g., RoBERTa succeeds in reasoning tasks where BERT fails completely; (b) LMs do not reason in an abstract manner and are context-dependent, e.g., while RoBERTa can compare ages, it can do so only when the ages are in the typical range of human ages; and (c) on half of our reasoning tasks, all models fail completely. Our findings and infrastructure can help future work on designing new datasets, models, and objective functions for pre-training.


Spanish Pre-trained BERT Model and Evaluation Data

The Spanish language is one of the top 5 spoken languages in the world. ...

Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation

Few-shot fine-tuning and in-context learning are two alternative strategies...

Multi-party Goal Tracking with LLMs: Comparing Pre-training, Fine-tuning, and Prompt Engineering

This paper evaluates the extent to which current Large Language Models (LLMs)...

A Data Source for Reasoning Embodied Agents

Recent progress in using machine learning models for reasoning tasks has...

TART: A plug-and-play Transformer module for task-agnostic reasoning

Large language models (LLMs) exhibit in-context learning abilities which...

The future is different: Large pre-trained language models fail in prediction tasks

Large pre-trained language models (LPLM) have shown spectacular success...

On the Role of Bidirectionality in Language Model Pre-Training

Prior work on language model pre-training has explored different architectures...

