Distributed Deep Learning for Question Answering

11/03/2015
by Minwei Feng, et al.

This paper is an empirical study of distributed deep learning for two question answering subtasks: answer selection and question classification. Comparative studies of the SGD, MSGD, ADADELTA, ADAGRAD, ADAM/ADAMAX, RMSPROP, DOWNPOUR and EASGD/EAMSGD algorithms are presented. Experimental results show that the distributed framework based on the message passing interface (MPI) accelerates convergence at a sublinear scale, demonstrating the importance of distributed training. For example, with 48 workers, a 24x speedup is achievable on the answer selection task, reducing the running time from 138.2 hours to 5.81 hours and significantly increasing productivity.
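Note that the reported numbers are consistent with the sublinear-scaling claim: 138.2 / 5.81 ≈ 23.8, i.e. roughly a 24x speedup from 48 workers. To make the EASGD comparison concrete, below is a minimal, hypothetical sketch of one family of updates named in the abstract: synchronous elastic averaging, where each worker takes local SGD steps and periodically pulls its parameters toward a shared center variable exchanged over MPI (here via mpi4py). The toy quadratic objective, the hyperparameters (lr, alpha, tau, steps), and the script name are placeholder assumptions, not the authors' QA models or implementation.

```python
# Hedged sketch of a synchronous EASGD-style round over MPI (mpi4py).
# Illustration of the elastic-averaging idea only; the objective, data
# sharding, and hyperparameters are assumed for the example.
# Run with e.g.:  mpirun -np 4 python easgd_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

dim = 10
rng = np.random.default_rng(rank)          # pretend each worker sees a different shard
target = rng.normal(size=dim)              # toy per-worker optimum
x = np.zeros(dim)                          # local worker parameters x_i
center = np.zeros(dim)                     # replicated center variable x~
lr, alpha, tau, steps = 0.05, 0.1, 4, 200  # alpha * size should stay well below 1

for t in range(steps):
    grad = x - target                      # gradient of 0.5 * ||x - target||^2
    x -= lr * grad                         # local SGD step
    if (t + 1) % tau == 0:                 # communicate every tau steps
        diff = x - center                  # elastic force toward the center
        total = np.empty_like(diff)
        comm.Allreduce(diff, total, op=MPI.SUM)
        x -= alpha * diff                  # pull the worker toward the center
        center += alpha * total            # move the center toward the workers

if rank == 0:
    print("center after training:", np.round(center, 3))
```

The asynchronous variants studied in the paper (DOWNPOUR, EAMSGD) replace the collective exchange above with parameter-server-style or asynchronous elastic updates, but the communication-versus-local-computation trade-off they tune is the same one the tau parameter controls here.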

research · 08/07/2017
ISS-MULT: Intelligent Sample Selection for Multi-Task Learning in Question Answering
Transferring knowledge from a source domain to another domain is useful,...

research · 01/06/2018
Analysis of Wikipedia-based Corpora for Question Answering
This paper gives comprehensive analyses of corpora based on Wikipedia fo...

research · 06/22/2015
Answer Sequence Learning with Neural Networks for Answer Selection in Community Question Answering
In this paper, the answer selection problem in community question answer...

research · 11/16/2017
An Abstractive approach to Question Answering
Question Answering has come a long way from answer sentence selection, r...

research · 06/10/2020
ClarQ: A large-scale and diverse dataset for Clarification Question Generation
Question answering and conversational systems are often baffled and need...

research · 02/04/2020
Improving Efficiency in Large-Scale Decentralized Distributed Training
Decentralized Parallel SGD (D-PSGD) and its asynchronous variant Asynchr...