Learning to Select from Multiple Options

12/01/2022
by Jiangshu Du et al.

Many NLP tasks can be regarded as selection problems over a set of options, such as classification and multi-choice question answering. Textual entailment (TE) has been shown to be a state-of-the-art (SOTA) approach to these selection problems: TE treats the input text as a premise (P) and each option as a hypothesis (H), then handles selection by modeling each (P, H) pair independently. This pairwise modeling has two limitations: first, it is unaware of the other options, which is unintuitive since humans often determine the best option by comparing competing candidates; second, pairwise TE inference is time-consuming, especially when the option space is large. To address these issues, this work first proposes a contextualized TE model (Context-TE) that appends the other k options as context when modeling the current (P, H) pair. Context-TE learns a more reliable decision for H because it takes the competing options into account. Second, we speed up Context-TE with Parallel-TE, which learns the decisions for multiple options simultaneously. Parallel-TE significantly improves inference speed while keeping performance comparable to Context-TE. We evaluate our methods on three tasks (ultra-fine entity typing, intent detection, and multi-choice QA) that are typical selection problems with different option-space sizes. Experiments show that our models set new SOTA performance; in particular, Parallel-TE is k times faster than pairwise TE at inference. Our code is publicly available at https://github.com/jiangshdd/LearningToSelect.
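To make the three formulations concrete, the following is a minimal sketch of how the input sequences might be constructed for pairwise TE, Context-TE, and Parallel-TE. The `[SEP]` separator and the way options are joined are illustrative assumptions, not the authors' exact encoding; see the linked repository for the actual implementation.

```python
# Hypothetical input construction for the three TE formulations described
# in the abstract. Token conventions ([SEP], option joining) are assumptions.

def pairwise_te_inputs(premise, options):
    """Plain pairwise TE: one (P, H) sequence per option -> k forward passes,
    each unaware of the competing options."""
    return [f"{premise} [SEP] {h}" for h in options]

def context_te_inputs(premise, options):
    """Context-TE: each (P, H) pair also sees the other options as context.
    Still k forward passes, but each decision is context-aware."""
    inputs = []
    for i, h in enumerate(options):
        context = " ; ".join(o for j, o in enumerate(options) if j != i)
        inputs.append(f"{premise} [SEP] {h} [SEP] {context}")
    return inputs

def parallel_te_input(premise, options):
    """Parallel-TE: all k options packed into a single sequence so their
    decisions can be scored simultaneously in one forward pass."""
    packed = " [SEP] ".join(options)
    return f"{premise} [SEP] {packed}"

# Example: an entity-typing-style selection problem with k = 3 options.
options = ["musician", "politician", "athlete"]
single_pass = parallel_te_input("Taylor Swift released a new album.", options)
```

This packing is where the claimed k-fold inference speedup comes from: pairwise TE and Context-TE each need one encoder pass per option, while Parallel-TE scores all options in a single pass.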


