
Event Knowledge Incorporation with Posterior Regularization for Event-Centric Question Answering

by   Junru Lu, et al.
King's College London
University of Warwick

We propose a simple yet effective strategy to incorporate event knowledge extracted from event trigger annotations via posterior regularization to improve the event reasoning capability of mainstream question-answering (QA) models for event-centric QA. In particular, we define event-related knowledge constraints based on the event trigger annotations in the QA datasets, and subsequently use them to regularize the posterior answer output probabilities from the backbone pre-trained language models used in the QA setting. We explore two different posterior regularization strategies for extractive and generative QA separately. For extractive QA, the sentence-level event knowledge constraint is defined by assessing if a sentence contains an answer event or not, which is later used to modify the answer span extraction probability. For generative QA, the token-level event knowledge constraint is defined by comparing the generated token from the backbone language model with the answer event in order to introduce a reward or penalty term, which essentially adjusts the answer generative probability indirectly. We conduct experiments on two event-centric QA datasets, TORQUE and ESTER. The results show that our proposed approach can effectively inject event knowledge into existing pre-trained language models and achieves strong performance compared to existing QA models in answer evaluation. Code and models can be found:
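To make the sentence-level strategy for extractive QA concrete, here is a minimal toy sketch of posterior regularization over answer-start probabilities: positions in sentences that contain no answer event trigger are penalized before renormalization. The specific constraint form, the penalty value, and all function names here are illustrative assumptions, not the paper's actual implementation.

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def regularize_start_probs(start_logits, sentence_of_token, sentence_has_event,
                           penalty=2.0):
    """Down-weight answer-start positions in sentences without an answer event,
    then renormalize. A toy stand-in for sentence-level posterior
    regularization; the constraint and penalty here are assumptions."""
    adjusted = [
        logit if sentence_has_event[sentence_of_token[i]] else logit - penalty
        for i, logit in enumerate(start_logits)
    ]
    return softmax(adjusted)

# Toy example: 6 tokens spanning 2 sentences; only sentence 1 contains
# an annotated event trigger.
start_logits = [1.0, 0.5, 0.2, 1.2, 0.8, 0.3]
sentence_of_token = [0, 0, 0, 1, 1, 1]
sentence_has_event = {0: False, 1: True}

probs = regularize_start_probs(start_logits, sentence_of_token, sentence_has_event)
```

After regularization, probability mass shifts toward tokens in the event-bearing sentence, so spans outside it become less likely to be extracted; the analogous token-level reward/penalty for generative QA would instead adjust the per-step generation logits.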



