Distributed Deep Learning for Question Answering

11/03/2015 ∙ by Minwei Feng, et al. ∙ IBM

This paper is an empirical study of distributed deep learning for two question answering subtasks: answer selection and question classification. Comparison studies of the SGD, MSGD, ADADELTA, ADAGRAD, ADAM/ADAMAX, RMSPROP, DOWNPOUR and EASGD/EAMSGD algorithms are presented. Experimental results show that a distributed framework based on the message passing interface can accelerate the convergence speed at a sublinear scale. This paper demonstrates the importance of distributed training. For example, with 48 workers, a 24x speedup is achievable for the answer selection task: running time decreases from 138.2 hours to 5.81 hours, which significantly increases productivity.







1 Introduction

This paper will appear in the Proceedings of the 25th ACM International Conference on Information and Knowledge Management (CIKM 2016), Indianapolis, USA.

Deep Learning technology [9] has been widely adopted in various AI tasks and has achieved state-of-the-art performance. One practical challenge of Deep Learning is the highly time-consuming training procedure. It is not unusual to see reported training times on the order of days or even weeks in research papers. However, this is rarely acceptable for practical commercial usage (e.g. training as a service on the cloud), where a short turnaround time is expected by customers. Even in a research environment, long computation times can keep scientists from running as many experiments as needed and slow down the R&D cycle. Hence distributed training has become a crucial research direction, alongside the advancement of deep learning itself on the algorithm side.

Various infrastructures and experimental results have been published recently. Most of those results are on computer vision benchmark tasks like CIFAR10 or ImageNet. In this paper, we focus on the question answering (QA) domain. We study two subtasks of QA: answer selection and question classification. It is trivial to observe that the epoch speed (training data processing speed) increases when more computing resources are adopted. However, this does not necessarily guarantee that the convergence speed is also improved. The ultimate goal is convergence speedup, as users expect models of equal accuracy to be trained faster when the cost is increased for more computing resources. Many optimization algorithms are available, but their performance has not been compared under the distributed training mode. The motivation of this paper is to conduct a comparison study of distributed training algorithms and demonstrate the sublinear scalability of distributed training with respect to convergence speed. We have compared the latest technologies, including SGD [1], MSGD [11], RMSPROP [6], ADADELTA [13], ADAGRAD [4], ADAM/ADAMAX [8], DOWNPOUR [3] and EASGD/EAMSGD [14]. To the best of our knowledge, this is the first time that such results for distributed training algorithms have been reported on QA subtasks.

The rest of the paper is organized as follows: section 2 summarizes related work; section 3 describes the answer selection benchmark task; section 4 summarizes the question classification task; we present the MPI-based infrastructure in section 5 and review the distributed training algorithms in section 6. Experimental results are reported in section 7 and finally conclusions are drawn in section 8.







Figure 1: Architecture for answer selection. HL is a hidden layer with activation function. CNN is a convolutional neural network. P stands for max-pooling and R stands for the activation function. QA means the weights of the corresponding layer are shared by Q and A.





Figure 2: Architecture for question classification. HL is a hidden layer with activation function. CNN is a convolutional neural network.

2 Related Work

Various systems have been proposed for distributed deep learning. One of the pioneering works is Google’s DistBelief system [3], in which DOWNPOUR was proposed. The system has multiple parameter servers and clients. Most other work follows the same spirit as DOWNPOUR. The system Adam [2] is a similar framework with many engineering features, such as reduced memory copies and mitigating the impact of slow machines. IBM’s Rudra system [7] is a master-client based distributed framework in which the servers are organized in a tree structure to save communication overhead. A parameter server framework is proposed in [10] that supports flexible consistency models, elastic scalability and continuous fault tolerance. [10] provides APIs so that other frameworks like MXNet (https://github.com/dmlc/mxnet) can utilize it. The platform Petuum [12] supports a synchronization model with bounded staleness. Compared to previous work, the main contribution of this paper is that we study a different task, answer selection, and focus on the comparison of state-of-the-art algorithms.

3 Answer Selection Task

Different from much previous work, we study a QA task: answer selection. The paper [5] created an open task (including a released corpus) which serves as a benchmark for comparison purposes. For a detailed description of the data and task please refer to [5]; a summary is given here to make the paper self-contained. Given a question Q and an answer candidate pool {a_1, a_2, …, a_s} for that question (s is the pool size), the goal is to find the best answer candidate a_k, 1 ≤ k ≤ s. If the selected answer a_k is inside the ground truth set of Q (questions could have multiple correct answers), the question is considered correct. In this paper the best architecture (Figure 1) from [5] has been used. The idea is to learn a vector representation of a given question and its answer candidates and then use a similarity metric to measure the matching degree. The similarity metric is the Geometric mean of Euclidean and Sigmoid Dot product (GESD): sim(x, y) = 1/(1 + ‖x − y‖) · 1/(1 + exp(−γ(xᵀy + c))), where x and y are the vector representations of Q and A. The training is computationally expensive due to the usage of the hinge loss: for each training question Q there is a positive answer A+ (the ground truth). A training instance is then constructed by pairing this A+ with a negative answer A− (a wrong answer) sampled from the whole answer space. The forward pass calculation generates vector representations V_Q, V_{A+} and V_{A−} for the question and the two candidates. The similarities sim(V_Q, V_{A+}) and sim(V_Q, V_{A−}) are calculated and their difference is compared to a margin m: sim(V_Q, V_{A+}) − sim(V_Q, V_{A−}) < m. If this condition is not satisfied, there is no update to the model and a new negative example is sampled until the difference falls below m (this repetitive negative sampling procedure is time-consuming, so to reduce running time we set the maximum number of sampling attempts to 100).
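As a concrete sketch of the GESD similarity and the margin-based negative sampling described above (a minimal NumPy illustration; the values of γ, c, the margin and the toy vectors are our own, not the paper’s settings):

```python
import numpy as np

def gesd(x, y, gamma=1.0, c=1.0):
    """Geometric mean of Euclidean and Sigmoid Dot product (GESD)."""
    euclidean = 1.0 / (1.0 + np.linalg.norm(x - y))
    sigmoid_dot = 1.0 / (1.0 + np.exp(-gamma * (np.dot(x, y) + c)))
    return euclidean * sigmoid_dot

def sample_negative(v_q, v_pos, neg_pool, margin=0.1, max_tries=100, rng=None):
    """Sample negatives until one violates the margin (that pair would
    trigger a model update); give up after max_tries, as in Section 3."""
    rng = rng or np.random.default_rng(0)
    pos_sim = gesd(v_q, v_pos)
    for _ in range(max_tries):
        v_neg = neg_pool[rng.integers(len(neg_pool))]
        if pos_sim - gesd(v_q, v_neg) < margin:
            return v_neg  # margin violated: this training pair yields a gradient
    return None  # margin held for every sample: no update for this question
```

An easy-to-lose-margin pool returns a negative immediately, while a pool of clearly wrong answers exhausts the sampling budget and returns None, which is exactly the expensive case the 100-sample cap bounds.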

4 Question Classification Task

The second QA subtask we study in this paper is question classification. For certain application scenarios (e.g. online customer service), the set of possible answers for all incoming questions is limited and predefined. Hence we can convert QA into a question classification problem, where each question’s label represents a specific answer in the predefined set. Usually there is a noAnswer label in the set for chit-chat questions. The data we used for this task is a customer corpus in the financial domain. There are 78566 questions and the answer set size is 6763 (6763 different labels for the classifier). We further randomly split the data into train/valid/test parts with question counts 74566/2000/2000. We use a convolutional neural network based model (Figure 2) to tackle this task. The last layer is Softmax since this is a classification task. Please note that this general model can be applied to many natural language classification tasks such as relation classification, intent classification and sentiment analysis.
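The forward pass of such a model (embedding lookup, convolution over token windows, max-pooling, activation, Softmax over the 6763 labels) can be sketched in NumPy. This is an illustrative skeleton, not the paper’s implementation; all dimensions except the label count are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper does not state exact dimensions.
VOCAB, EMB, FILTERS, WIDTH, CLASSES = 1000, 50, 64, 3, 6763

W_emb  = rng.normal(scale=0.1, size=(VOCAB, EMB))          # word embeddings
W_conv = rng.normal(scale=0.1, size=(FILTERS, WIDTH * EMB))  # CNN filters
W_out  = rng.normal(scale=0.1, size=(CLASSES, FILTERS))    # Softmax layer

def forward(token_ids):
    """Embedding -> convolution over width-3 windows -> max-pooling
    -> tanh activation -> Softmax over the answer labels."""
    e = W_emb[token_ids]                                   # (seq, EMB)
    windows = np.stack([e[i:i + WIDTH].ravel()
                        for i in range(len(e) - WIDTH + 1)])
    conv = windows @ W_conv.T                              # (seq-WIDTH+1, FILTERS)
    pooled = np.tanh(conv.max(axis=0))                     # max-pool over positions
    logits = W_out @ pooled
    z = np.exp(logits - logits.max())                      # stable Softmax
    return z / z.sum()

probs = forward(np.array([5, 17, 42, 7, 99]))              # distribution over 6763 labels
```

The predicted label is simply the argmax of `probs`; the noAnswer label mentioned above would be one of the 6763 classes.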


Figure 3: MPI framework. Clients 1 through n exchange messages with servers 1 through n.

5 MPI-based Framework

Figure 3 illustrates the Message Passing Interface (MPI) framework. There are three types of process: worker, parameter server and tester. These processes are allocated across high performance computing clusters. The workers conduct the forward pass/backward pass calculations and send update messages to the servers. The servers hold a central model: they receive the messages from workers, update the central model and send the latest model back to the workers. A tester only receives the latest model from the servers and periodically runs testing over the test corpus. For MPI, non-blocking communication (MPI_Isend/MPI_Irecv) is used to increase the overall speed. To reduce the communication overhead, we split the model into partitions and set up multiple servers; each server is responsible for the storage and update of one model partition. The numbers of workers and servers are set to be equal. We use the popular MPI toolkit MPICH.
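The partitioned central model and a worker’s update cycle can be illustrated in plain Python without actual MPI calls (the `ModelServer` class and `worker_step` function are our own illustrative names; the non-blocking MPI_Isend/MPI_Irecv plumbing is elided):

```python
import numpy as np

class ModelServer:
    """Stands in for one parameter-server rank: it stores and updates
    exactly one partition of the central model."""
    def __init__(self, partition, lr=0.1):
        self.partition, self.lr = partition, lr

    def apply_update(self, grad):
        # Message from a worker: apply the gradient to this partition...
        self.partition -= self.lr * grad
        # ...and reply with the refreshed partition.
        return self.partition.copy()

# Split a flat parameter vector across n servers, as described in Section 5.
n_servers = 4
central_model = np.zeros(16)
servers = [ModelServer(p) for p in np.array_split(central_model, n_servers)]

def worker_step(local_grads):
    """A worker sends one gradient message per server partition and
    reassembles the latest central model from the replies."""
    replies = [s.apply_update(g) for s, g in zip(servers, local_grads)]
    return np.concatenate(replies)

new_model = worker_step(np.array_split(np.ones(16), n_servers))
```

With equal worker and server counts, each worker issues one message per partition per step, which is the communication pattern the partitioning is meant to balance.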

6 Distributed Training Algorithms

We have compared state-of-the-art algorithms: stochastic gradient descent (SGD) [1], momentum stochastic gradient descent (MSGD) [11], RMSPROP (implemented as in section 4.2 of [6]), ADADELTA [13], ADAGRAD [4], ADAM/ADAMAX [8], DOWNPOUR [3], and elastic averaging stochastic gradient descent (EASGD) with its momentum variant (EAMSGD) [14].
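To make the elastic-averaging idea concrete, a minimal synchronous EASGD round can be sketched as below (following the update rule of [14]; the quadratic toy objective, step size and elastic coefficient are our own illustrative choices):

```python
import numpy as np

def easgd_round(workers, center, grads, lr=0.05, alpha=0.01):
    """One synchronous EASGD round: each worker takes a gradient step plus
    an elastic pull toward the center variable; the center variable is
    pulled toward the average of the workers."""
    new_workers = [x - lr * g - alpha * (x - center)
                   for x, g in zip(workers, grads)]
    new_center = center + alpha * sum(x - center for x in workers)
    return new_workers, new_center

# Toy objective f(x) = 0.5 * x^2 for every worker, so grad(x) = x.
workers = [np.array([2.0]), np.array([-1.0]), np.array([3.0])]
center = np.array([0.0])
for _ in range(200):
    grads = workers  # gradient of 0.5*x^2 is x itself
    workers, center = easgd_round(workers, center, grads)
# Workers and the center all drift toward the common minimizer at 0.
```

The elastic term lets each worker explore locally while keeping the ensemble loosely tied to a shared center, which is what distinguishes EASGD from plain DOWNPOUR-style averaging.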

Method Peak Accuracy (%) Time (hour) Comment
SGD 64.61 145.98 61.50@50.95
MSGD 65.50 138.20 61.50@54.07
RMSPROP 64.89 99.06 61.50@37.03
ADADELTA 61.50 49.83
ADAGRAD 58.50 126.52
ADAM 54.06 140.00
ADAMAX 52.06 145.55
Table 1: Answer selection task: single worker training results. Accuracy is the peak accuracy of test corpus within 6 days. Time is the wall clock time of the peak accuracy.
Method Peak Accuracy (%) Time (hour) Comment
DOWNPOUR 64.61 5.26
EASGD 66.11 8.57 65.50@8.23
EAMSGD 67.50 11.09 65.50@5.81
RMSPROP 64.44 5.30
ADADELTA 61.39 6.34
ADAGRAD 59.50 8.73 58.50@3.75
ADAM 55.28 9.95 54.28@5.82
Table 2: Answer selection task: distributed training results with 48 workers. Accuracy is the peak accuracy of test corpus within 12 hours. Time is the wall clock time of the peak accuracy.

7 Experimental Results

Table 1 shows the results of conventional optimization algorithms, which use only one worker, for the answer selection task. Table 2 shows the results of distributed optimization algorithms for the answer selection task. Similarly, the results of the question classification task are shown in Table 3 and Table 4. Each method has its own hyperparameters. We have conducted extensive tuning experiments and only the best results of each method are presented in the tables. The hyperparameter tuning strategy is a two-step grid search: in the first step, a coarse-grained grid search locates the rough range of the best hyperparameters; in the second step, a fine-grained grid search is conducted within the range discovered in the first step. For the answer selection task, Peak Accuracy is the top accuracy score on the test1 corpus of the released data from [5] within the whole running period. For the question classification task, Peak Accuracy is the top accuracy score on the test corpus within the whole running period. Time is the wall clock time (in hours) at which the accuracy reaches that peak value. In Comment, 65.50@8.23 means the accuracy climbs to 65.50% at wall clock time 8.23 hours. For the answer selection task, we let the single worker training methods keep running for 150 hours (approximately 6 days); for the question classification task, the single worker training methods run for 3 days. For the distributed methods the running time limit is set to 12 hours for both tasks. This saves computing resources so that more experiments can be scheduled; moreover, in practice it is much less meaningful if the running time is still prohibitive when a large amount of computing resources is used.
Finally, from previous studies we note that for the answer selection task the highest accuracy scores on the test1 corpus are around 65%, and for the question classification task the model accuracy on the test corpus should be around 98.5%.
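The two-step grid search described above can be sketched as follows (a toy illustration: `train_eval` stands in for a full training run, and the hyperparameter names, grids and refinement factors are our own, not the paper’s actual search space):

```python
import itertools

def grid_search(train_eval, grid):
    """Exhaustively evaluate every combination in the grid and return
    the best-scoring hyperparameter dictionary."""
    best = max(itertools.product(*grid.values()),
               key=lambda combo: train_eval(dict(zip(grid, combo))))
    return dict(zip(grid, best))

def two_step_search(train_eval):
    # Step 1: coarse-grained grid to locate the promising region.
    coarse = grid_search(train_eval, {"lr": [1e-4, 1e-3, 1e-2, 1e-1],
                                      "momentum": [0.0, 0.5, 0.9]})
    # Step 2: fine-grained grid around the coarse winner.
    fine = {"lr": [coarse["lr"] * f for f in (0.5, 1.0, 2.0)],
            "momentum": [max(0.0, coarse["momentum"] + d)
                         for d in (-0.05, 0.0, 0.05)]}
    return grid_search(train_eval, fine)

# Toy stand-in for "train a model and report peak accuracy": the score
# peaks at lr = 1e-2, momentum = 0.9.
score = lambda hp: -abs(hp["lr"] - 1e-2) - abs(hp["momentum"] - 0.9)
best = two_step_search(score)
```

In the real experiments each `train_eval` call is a full (and expensive) training run, which is why the coarse step is needed to keep the number of fine-grained combinations small.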

Method Peak Accuracy (%) Time (hour) Comment
SGD 98.60 39.02 98.50@33.30
MSGD 98.50 38.30
RMSPROP 98.70 40.97 98.50@29.15
ADADELTA 93.10 70.84
ADAGRAD 98.50 37.57
ADAM 91.90 57.06
ADAMAX 82.70 60.53
Table 3: Question classification task: single worker training results. Accuracy is the peak accuracy of test corpus within 3 days. Time is the wall clock time of the peak accuracy.
Method Peak Accuracy (%) Time (hour) Comment
DOWNPOUR 98.55 5.20
EASGD 97.65 10.06
EAMSGD 98.35 4.80
RMSPROP 98.30 4.87
ADADELTA 97.40 5.42
ADAGRAD 82.65 11.61
ADAM 88.45 2.82
Table 4: Question classification task: distributed training results with 48 workers. Accuracy is the peak accuracy of test corpus within 12 hours. Time is the wall clock time of the peak accuracy.

7.1 Results of Answer Selection Task

In Table 1, we observe the following facts from the single worker experiments: (1) in terms of peak accuracy, SGD, MSGD and RMSPROP have scores around 65%, which is the same as the highest number reported in [5]; (2) ADADELTA and ADAGRAD lose several points of accuracy; (3) ADAM and ADAMAX perform significantly worse than the other methods; (4) if top accuracy is the goal, the best method is MSGD; (5) for practical applications where a slight accuracy loss is acceptable (e.g. 61.50% is fine), RMSPROP is preferable as it converges faster.

Since ADAMAX does not work well and is similar to ADAM, we did not conduct experiments with a distributed version of the ADAMAX algorithm. Also note that the algorithms EASGD/EAMSGD are designed only for distributed training. In Table 2, we observe the following facts from the 48-worker experiments: (1) overall, distributed training does not incur accuracy loss; (2) EASGD/EAMSGD achieve higher peak accuracy than the best score of 65.50% from the single worker results; (3) overall, distributed training speeds up the training; (4) EAMSGD takes 5.81 hours to reach 65.50%, compared to the 138.20 hours single worker MSGD needs to climb to 65.50%, so a 24x speedup (138.20 / 5.81 ≈ 23.8) is achievable by using distributed training.

7.2 Results of Question Classification Task

In Table 3, we observe the following facts from the single worker experiments: (1) in terms of peak accuracy, SGD, MSGD, RMSPROP and ADAGRAD have scores around 98.50%, which is the expected accuracy score from the previous study; (2) ADADELTA and ADAM lose several points of accuracy; (3) ADAMAX performs significantly worse than the other methods.

In Table 4, we observe the following facts from the 48-worker experiments: (1) in terms of accuracy, DOWNPOUR, EASGD, ADADELTA, EAMSGD and RMSPROP perform well; (2) ADAGRAD performs poorly under the distributed scenario, incurring a large accuracy loss; (3) considering both accuracy and convergence speed, DOWNPOUR, EAMSGD and RMSPROP stand out. Overall, the training time decreases from 29.15 hours to 5.2 hours. The improvement is smaller than for the answer selection task, but it is still a significant productivity boost in practice.

8 Conclusions

We have conducted an empirical study of distributed training for the answer selection task and the question classification task, which are crucial components of QA. We built the framework with MPI. The state-of-the-art algorithms have been compared, including SGD, MSGD, RMSPROP, ADADELTA, ADAGRAD, ADAM, ADAMAX, DOWNPOUR and EASGD/EAMSGD. To the best of our knowledge, this is the first time that experimental results for distributed training have been reported on QA subtasks. This work demonstrates the significance of distributed training and shows that proper algorithm selection is crucial. For example, for the answer selection task, a 24x speedup is achievable with the deployment of 48 workers and running time decreases from 138.2 hours to 5.81 hours, which is a huge gain for practical productivity. We realize that, due to the lack of a solid mathematical foundation, distributed training is still a trial-and-error procedure. Our experience shows that hyperparameter tuning (especially of the learning rate) can play a crucial role in performance. On the other hand, the task itself can change the performance picture: for example, in [8] ADAM demonstrates superior performance for image classification tasks, while in our study the performance of ADAM/ADAMAX is relatively weak. From the four tables we conclude that DOWNPOUR, EAMSGD and RMSPROP are the most attractive distributed training methods, as they significantly increase the convergence speed while maintaining accuracy. The code for this paper has been written on top of the Torch7 framework and our source code will be released. For future work we plan to study an algorithm combination strategy so that different distributed training methods could benefit from each other and further improvement could be achieved.


  • [1] L. Bottou. Online learning in neural networks. chapter Online Learning and Stochastic Approximations. Cambridge University Press, New York, NY, USA, 1998.
  • [2] T. Chilimbi, Y. Suzue, J. Apacible, and K. Kalyanaraman. Project adam: Building an efficient and scalable deep learning training system. In Proceedings of the 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), pages 571–582, Broomfield, CO, 2014. USENIX Association.
  • [3] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, Q. V. Le, M. Z. Mao, M. A. Ranzato, A. W. Senior, P. A. Tucker, K. Yang, and A. Y. Ng. Large scale distributed deep networks. In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012., pages 1232–1240, 2012.
  • [4] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121–2159, July 2011.
  • [5] M. Feng, B. Xiang, M. R. Glass, L. Wang, and B. Zhou. Applying deep learning to answer selection: A study and an open task. In Proceedings of the 2015 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2015), Scottsdale, Arizona, 2015.
  • [6] A. Graves. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013.
  • [7] S. Gupta, W. Zhang, and J. Milthorpe. Model Accuracy and Runtime Tradeoff in Distributed Deep Learning. ArXiv e-prints, September 2015.
  • [8] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
  • [9] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521:436–444, May 2015.
  • [10] M. Li, D. G. Andersen, J. W. Park, A. J. Smola, A. Ahmed, V. Josifovski, J. Long, E. J. Shekita, and B. Su. Scaling distributed machine learning with the parameter server. In 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), pages 583–598, Broomfield, CO, Oct. 2014. USENIX Association.
  • [11] I. Sutskever, J. Martens, G. E. Dahl, and G. E. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 1139–1147. JMLR Workshop and Conference Proceedings, May 2013.
  • [12] E. P. Xing, Q. Ho, W. Dai, J. Kim, J. Wei, S. Lee, X. Zheng, P. Xie, A. Kumar, and Y. Yu. Petuum: A new platform for distributed machine learning on big data. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’15, pages 1335–1344, 2015.
  • [13] M. D. Zeiler. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701, 2012.
  • [14] S. Zhang, A. Choromanska, and Y. LeCun. Deep learning with elastic averaging SGD. In Proceedings of the 2015 Conference on Neural Information Processing Systems. (NIPS 2015), 2015.