Event Knowledge Incorporation with Posterior Regularization for Event-Centric Question Answering

05/08/2023
by   Junru Lu, et al.
We propose a simple yet effective strategy to incorporate event knowledge extracted from event trigger annotations via posterior regularization to improve the event reasoning capability of mainstream question-answering (QA) models for event-centric QA. In particular, we define event-related knowledge constraints based on the event trigger annotations in the QA datasets, and subsequently use them to regularize the posterior answer output probabilities from the backbone pre-trained language models used in the QA setting. We explore two different posterior regularization strategies for extractive and generative QA separately. For extractive QA, the sentence-level event knowledge constraint is defined by assessing whether a sentence contains an answer event, and is then used to modify the answer span extraction probability. For generative QA, the token-level event knowledge constraint is defined by comparing the generated token from the backbone language model with the answer event in order to introduce a reward or penalty term, which indirectly adjusts the answer generation probability. We conduct experiments on two event-centric QA datasets, TORQUE and ESTER. The results show that our proposed approach can effectively inject event knowledge into existing pre-trained language models and achieves strong performance compared to existing QA models in answer evaluation. Code and models can be found at: https://github.com/LuJunru/EventQAviaPR.
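The two regularization strategies described in the abstract can be illustrated with a minimal sketch. This is our own toy reconstruction, not the paper's implementation: the function names, the multiplicative weight `alpha`, and the symmetric `reward` term are assumptions; the paper's actual constraint formulation may differ.

```python
import math


def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]


def regularize_span_probs(start_logits, sentence_ids, event_sentences, alpha=2.0):
    """Extractive QA (sketch): up-weight answer-start probabilities for tokens
    in sentences that contain an annotated answer event, then renormalize.

    start_logits    -- per-token start logits from the backbone QA model
    sentence_ids    -- sentence index for each token
    event_sentences -- set of sentence indices containing an answer event
    alpha           -- constraint strength (hypothetical parameter)
    """
    probs = softmax(start_logits)
    weighted = [p * (alpha if sentence_ids[i] in event_sentences else 1.0)
                for i, p in enumerate(probs)]
    z = sum(weighted)
    return [w / z for w in weighted]


def adjust_token_logprob(logprob, generated_token, answer_event_tokens, reward=0.5):
    """Generative QA (sketch): add a reward to the token's log-probability when
    it matches an answer-event token, and a penalty otherwise, thereby
    adjusting the generation probability indirectly."""
    return logprob + (reward if generated_token in answer_event_tokens else -reward)
```

For example, with uniform start logits over four tokens where the last two belong to an event-bearing sentence, `regularize_span_probs([1.0, 1.0, 1.0, 1.0], [0, 0, 1, 1], {1})` shifts probability mass toward the last two tokens while still summing to one.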

