Applying Deep Learning to Answer Selection: A Study and An Open Task

08/07/2015
by   Minwei Feng, et al.

We apply a general deep learning framework to address the non-factoid question answering task. Our approach does not rely on any linguistic tools and can be applied to different languages or domains. Various architectures are presented and compared. We create and release a QA corpus and set up a new QA task in the insurance domain. Experimental results demonstrate superior performance compared to the baseline methods, and various technologies give further improvements. For this highly challenging task, the top-1 accuracy can reach up to 65.3%, indicating potential for practical use.


