Pentagon at MEDIQA 2019: Multi-task Learning for Filtering and Re-ranking Answers using Language Inference and Question Entailment

07/01/2019
by Hemant Pugaliya, et al.

Parallel deep learning architectures like fine-tuned BERT and MT-DNN have quickly become the state of the art, surpassing previous deep and shallow learning methods by a large margin. More recently, models pre-trained on large related datasets have performed well on many downstream tasks after only fine-tuning on domain-specific datasets. However, applying these powerful models to non-trivial tasks, such as ranking and large-document classification, remains a challenge because of the input-size limits of parallel architectures and of extremely small datasets that are insufficient for fine-tuning. In this work, we introduce an end-to-end system, trained in a multi-task setting, to filter and re-rank answers in the medical domain. We use task-specific pre-trained models as deep feature extractors. Our model achieves the highest Spearman's Rho of 0.338 and Mean Reciprocal Rank of 0.9622 on the ACL-BioNLP workshop MEDIQA Question Answering shared task.
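
The abstract gives only a high-level description of the system. As a rough illustration of the idea it describes, here is a minimal sketch in PyTorch with Hugging Face transformers: a pre-trained encoder used as a frozen deep feature extractor whose [CLS] representation feeds two jointly trained heads, one that filters answers and one that scores them for re-ranking. The model checkpoint, class and head names, toy data, and equal loss weighting are illustrative assumptions, not details taken from the paper.

    # Minimal sketch (not the authors' code): a pre-trained BERT used as a frozen
    # feature extractor with two task heads trained jointly, illustrating the
    # "filter and re-rank answers in a multi-task setting" idea from the abstract.
    import torch
    import torch.nn as nn
    from transformers import AutoModel, AutoTokenizer

    class FilterRerankModel(nn.Module):
        def __init__(self, encoder_name: str = "bert-base-uncased"):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(encoder_name)
            for p in self.encoder.parameters():      # use the encoder purely as a
                p.requires_grad = False              # feature extractor (frozen)
            hidden = self.encoder.config.hidden_size
            self.filter_head = nn.Linear(hidden, 2)  # answer relevant / not relevant
            self.rank_head = nn.Linear(hidden, 1)    # scalar score for re-ranking

        def forward(self, input_ids, attention_mask):
            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
            cls = out.last_hidden_state[:, 0]        # [CLS] representation
            return self.filter_head(cls), self.rank_head(cls).squeeze(-1)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = FilterRerankModel()

    # Toy question-answer pairs with a relevance label and a gold rank score.
    pairs = [("What causes anemia?", "Iron deficiency is a common cause.", 1, 4.0),
             ("What causes anemia?", "The weather is sunny today.", 0, 0.0)]
    enc = tokenizer([q for q, a, *_ in pairs], [a for q, a, *_ in pairs],
                    padding=True, truncation=True, return_tensors="pt")
    labels = torch.tensor([l for *_, l, _ in pairs])
    scores = torch.tensor([s for *_, s in pairs])

    filter_logits, rank_scores = model(enc["input_ids"], enc["attention_mask"])
    # Joint multi-task objective: classification loss for filtering plus a
    # regression loss for ranking, combined with an (assumed) equal weighting.
    loss = nn.CrossEntropyLoss()(filter_logits, labels) + \
           nn.MSELoss()(rank_scores, scores)
    loss.backward()

Only the two task heads receive gradients here; in practice the encoder could also be fine-tuned jointly, which is a design choice the sketch leaves open.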
