ReasonBERT: Pre-trained to Reason with Distant Supervision

09/10/2021
by Xiang Deng, et al.

We present ReasonBert, a pre-training method that augments language models with the ability to reason over long-range relations and multiple, possibly hybrid, contexts. Unlike existing pre-training methods that only harvest learning signals from the local contexts of naturally occurring texts, we propose a generalized notion of distant supervision to automatically connect multiple pieces of text and tables, creating pre-training examples that require long-range reasoning. Different types of reasoning are simulated, including intersecting multiple pieces of evidence, bridging from one piece of evidence to another, and detecting unanswerable cases. We conduct a comprehensive evaluation on a variety of extractive question answering datasets, ranging from single-hop to multi-hop and from text-only to table-only to hybrid, that require various reasoning capabilities, and show that ReasonBert achieves remarkable improvement over an array of strong baselines. Few-shot experiments further demonstrate that our pre-training method substantially improves sample efficiency.
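To make the distant-supervision idea above concrete, the sketch below pairs a "query" sentence with evidence sentences that share its entities, then masks the answer entity so that recovering it requires reading the evidence. This is a minimal toy sketch, assuming pre-extracted entity spans; the Sentence and make_pretraining_example names and the [QUESTION] placeholder token are illustrative stand-ins, not the paper's actual pipeline or masking scheme.

```python
# Hypothetical sketch of distant-supervision example construction;
# the paper's real pipeline (over Wikipedia text and tables) may differ.
from dataclasses import dataclass

@dataclass
class Sentence:
    text: str
    entities: dict  # entity name -> (start, end) character span

def make_pretraining_example(query: Sentence, corpus: list):
    """Mask one entity in `query` and collect evidence sentences that
    mention both the masked answer entity and another entity from the
    query, so filling in the mask requires reading the evidence."""
    for answer, (s, e) in query.entities.items():
        anchors = set(query.entities) - {answer}
        evidence = [
            c.text for c in corpus
            if c.text != query.text
            and answer in c.entities              # evidence contains the answer
            and anchors & set(c.entities)         # ...and a linking entity
        ]
        if evidence:
            masked = query.text[:s] + "[QUESTION]" + query.text[e:]
            return {"query": masked, "evidence": evidence, "answer": answer}
    return None  # no distantly supervised pairing found -> skip this sentence

corpus = [
    Sentence("Columbus is the capital of Ohio.",
             {"Columbus": (0, 8), "Ohio": (27, 31)}),
    Sentence("Ohio chose Columbus as its capital in 1816.",
             {"Ohio": (0, 4), "Columbus": (11, 19)}),
]
print(make_pretraining_example(corpus[0], corpus))
# -> query "[QUESTION] is the capital of Ohio." with distantly paired evidence
```

Multi-hop variants of this pairing (e.g., bridging, where the evidence chain passes through an intermediate entity) would extend the same matching step across more than one evidence piece.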


Related research:

- Pre-training Language Models for Comparative Reasoning (05/23/2023)
- TAPEX: Table Pre-training via Learning a Neural SQL Executor (07/16/2021)
- REPT: Bridging Language Models and Machine Reading Comprehension via Retrieval-Based Pre-training (05/10/2021)
- Turning Tables: Generating Examples from Semi-structured Tables for Endowing Language Models with Reasoning Skills (07/15/2021)
- Weakly Supervised Pre-Training for Multi-Hop Retriever (06/18/2021)
- Can NLP Models Correctly Reason Over Contexts that Break the Common Assumptions? (05/20/2023)
- Learning to Decompose: Hypothetical Question Decomposition Based on Comparable Texts (10/30/2022)
