Neural Paraphrase Identification of Questions with Noisy Pretraining

04/15/2017
by Gaurav Singh Tomar, et al.

We present a solution to the problem of paraphrase identification of questions. We focus on a recent dataset of question pairs annotated with binary paraphrase labels and show that a variant of the decomposable attention model (Parikh et al., 2016) results in accurate performance on this task, while being far simpler than many competing neural architectures. Furthermore, when the model is pretrained on a noisy dataset of automatically collected question paraphrases, it obtains the best reported performance on the dataset.
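The abstract does not detail the architecture, but the decomposable attention model it builds on (Parikh et al., 2016) follows an attend / compare / aggregate pattern over the two input sentences. Below is a minimal, illustrative PyTorch sketch of that pattern applied to binary paraphrase classification of a question pair; the layer sizes, module names, and pooling choices are assumptions for illustration, not the paper's configuration. In the setup the abstract describes, such a model would first be pretrained on the noisy, automatically collected paraphrase corpus and then fine-tuned on the labeled question pairs.

```python
# Illustrative sketch of a decomposable-attention-style classifier for
# question pairs (attend / compare / aggregate). Hyperparameters and
# names are assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposableAttention(nn.Module):
    def __init__(self, vocab_size, embed_dim=200, hidden_dim=200, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Feed-forward networks for the attend, compare, and aggregate steps.
        self.attend = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.ReLU(),
                                    nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        self.compare = nn.Sequential(nn.Linear(2 * embed_dim, hidden_dim), nn.ReLU(),
                                     nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        self.aggregate = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
                                       nn.Linear(hidden_dim, num_classes))

    def forward(self, q1, q2):
        # q1, q2: LongTensors of token ids, shapes (batch, len_a) and (batch, len_b).
        a, b = self.embed(q1), self.embed(q2)             # (batch, len, embed_dim)
        # Attend: soft-align each token of one question to the other question.
        fa, fb = self.attend(a), self.attend(b)
        scores = torch.bmm(fa, fb.transpose(1, 2))        # (batch, len_a, len_b)
        beta = torch.bmm(F.softmax(scores, dim=2), b)     # aligned phrases for q1
        alpha = torch.bmm(F.softmax(scores, dim=1).transpose(1, 2), a)  # for q2
        # Compare: each token paired with its aligned counterpart, then sum-pool.
        v1 = self.compare(torch.cat([a, beta], dim=2)).sum(dim=1)
        v2 = self.compare(torch.cat([b, alpha], dim=2)).sum(dim=1)
        # Aggregate: combine both pooled vectors and classify paraphrase / not.
        return self.aggregate(torch.cat([v1, v2], dim=1))
```

A typical usage under these assumptions would be to train this network with cross-entropy on the noisy paraphrase corpus first, then continue training (fine-tune) on the binary-labeled question pairs.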

