Interpretable Multi-Step Reasoning with Knowledge Extraction on Complex Healthcare Question Answering

08/06/2020
by Ye Liu, et al.

Healthcare question answering assistance aims to provide customers with healthcare information, and it appears widely on both the Web and the mobile Internet. The questions usually require the assistant to have proficient healthcare background knowledge as well as the ability to reason over that knowledge. Recently, HeadQA, a dataset involving complex healthcare reasoning, has been proposed; it contains multiple-choice questions drawn from public healthcare specialization exams. Unlike most other QA tasks that focus on linguistic understanding, HeadQA requires deeper reasoning that involves not only knowledge extraction but also complex reasoning over healthcare knowledge. These questions are among the most challenging for current QA systems, and the state-of-the-art method performs only slightly better than a random guess. To solve this challenging task, we present MurKe, a Multi-step reasoning with Knowledge extraction framework. The proposed framework first extracts healthcare knowledge as supporting documents from a large corpus. To find the reasoning chain and choose the correct answer, MurKe iterates between selecting supporting documents, reformulating the query representation using those documents, and computing an entailment score for each choice with an entailment model. The reformulation module leverages the selected documents to fill in missing evidence, which keeps the reasoning chain interpretable. Moreover, we strive to make full use of off-the-shelf pre-trained models; with fewer trainable weights, a pre-trained model can easily adapt to healthcare tasks with limited training samples. Experimental results and an ablation study show that our system outperforms several strong baselines on the HeadQA dataset.
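The loop described above (select a supporting document, reformulate the query with it, then score each answer choice with an entailment model) can be sketched as follows. This is only a minimal illustration of the control flow, not the paper's implementation: retrieval, reformulation, and entailment are replaced here with toy lexical-overlap stand-ins rather than pre-trained models, and the function names (select_document, entailment_score, answer) are hypothetical.

import re
from collections import Counter


def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return Counter(re.findall(r"[a-z]+", text.lower()))


def overlap(a, b):
    """Fraction of a's tokens that also appear in b (crude relevance proxy)."""
    ta, tb = tokens(a), tokens(b)
    return sum((ta & tb).values()) / max(1, sum(ta.values()))


def select_document(query, corpus, used):
    """Pick the index of the most query-relevant document not yet used."""
    candidates = [i for i in range(len(corpus)) if i not in used]
    if not candidates:
        return -1
    return max(candidates, key=lambda i: overlap(query, corpus[i]))


def entailment_score(choice, question, evidence):
    """Toy entailment proxy: best single document linking the question to the choice."""
    return max((overlap(question, d) * overlap(choice, d) for d in evidence), default=0.0)


def answer(question, choices, corpus, steps=2):
    """Iterate select -> reformulate, then score every answer choice."""
    query, evidence, used = question, [], set()
    for _ in range(steps):
        idx = select_document(query, corpus, used)
        if idx < 0:
            break
        used.add(idx)
        evidence.append(corpus[idx])
        # Reformulation: fold the retrieved evidence into the query for the next hop.
        query = query + " " + corpus[idx]
    scores = {c: entailment_score(c, question, evidence) for c in choices}
    return max(scores, key=scores.get)


if __name__ == "__main__":
    corpus = [
        "Insulin lowers blood glucose by promoting uptake into cells.",
        "Glucagon raises blood glucose by stimulating glycogen breakdown.",
    ]
    print(answer("Which hormone lowers blood glucose?", ["Insulin", "Glucagon"], corpus))

In the actual MurKe framework, each of these stand-ins would be an off-the-shelf pre-trained model; the sketch only shows how the selection, reformulation, and entailment steps interleave.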

Related research

09/15/2018  Answering Science Exam Questions Using Query Rewriting with Background Knowledge
Open-domain question answering (QA) is an important problem in AI and NL...

10/15/2019  Answering Complex Open-domain Questions Through Iterative Query Generation
It is challenging for current one-step retrieve-and-read question answer...

07/24/2019  Careful Selection of Knowledge to solve Open Book Question Answering
Open book question answering is a type of natural language based QA (NLQ...

06/11/2019  HEAD-QA: A Healthcare Dataset for Complex Reasoning
We present HEAD-QA, a multi-choice question answering testbed to encoura...

04/20/2019  Repurposing Entailment for Multi-Hop Question Answering Tasks
Question Answering (QA) naturally reduces to an entailment problem, name...

02/24/2023  Time-aware Multiway Adaptive Fusion Network for Temporal Knowledge Graph Question Answering
Knowledge graphs (KGs) have received increasing attention due to its wid...

05/25/2022  Reasoning over Logically Interacted Conditions for Question Answering
Some questions have multiple answers that are not equally correct, i.e. ...