Breakpoint Transformers for Modeling and Tracking Intermediate Beliefs

11/15/2022
by Kyle Richardson, et al.

Can we teach natural language understanding models to track their beliefs through intermediate points in text? We propose a representation learning framework called breakpoint modeling that allows for learning of this type. Given any text encoder and data marked with intermediate states (breakpoints), along with corresponding textual queries viewed as true/false propositions (i.e., the candidate beliefs of a model, consisting of information that changes through time), our approach trains models in an efficient and end-to-end fashion to build intermediate representations that facilitate teaching and direct querying of beliefs at arbitrary points, alongside solving other end tasks. To show the benefit of our approach, we experiment with a diverse set of NLU tasks, including relational reasoning on CLUTRR and narrative understanding on bAbI. Using novel belief prediction tasks for both benchmarks, we show the benefit of our main breakpoint transformer, based on T5, over conventional representation learning approaches in terms of processing efficiency, prediction accuracy, and prediction consistency, all with minimal to no effect on the corresponding QA end tasks. To show the feasibility of incorporating our belief tracker into more complex reasoning pipelines, we also obtain SOTA performance on the three-tiered reasoning challenge of the TRIP benchmark (around 23-32% improvement on Tasks 2-3).
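The core interface the abstract describes - encode a story up to a marked breakpoint, then score candidate propositions as true or false at that point - can be sketched roughly as follows. This is a toy illustration, not the paper's method: the `Breakpoint` class, `encode_prefix`, and `holds` names are assumptions, and the bag-of-words "representation" stands in for the T5-based encoder the paper actually trains.

```python
from dataclasses import dataclass

@dataclass
class Breakpoint:
    position: int   # index of the sentence after which beliefs are queried
    beliefs: dict   # proposition text -> gold True/False label

def encode_prefix(sentences, position):
    """Toy intermediate representation of the story up to a breakpoint:
    the set of words seen so far (stand-in for a learned encoder state)."""
    words = set()
    for s in sentences[: position + 1]:
        words.update(s.lower().rstrip(".").split())
    return words

def holds(state, proposition):
    """Score a candidate belief against a breakpoint representation."""
    return set(proposition.lower().split()) <= state

# bAbI-style mini-story with one breakpoint after the first sentence
sentences = [
    "Mary went to the kitchen",
    "Mary picked up the apple",
]
bp = Breakpoint(position=0, beliefs={"mary kitchen": True, "mary apple": False})
predictions = {
    p: holds(encode_prefix(sentences, bp.position), p) for p in bp.beliefs
}
# → {"mary kitchen": True, "mary apple": False}
```

In the paper's actual setup the prefix encoding and belief scoring are trained jointly and end-to-end with the QA end task, rather than computed by fixed rules as in this sketch.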

