ElimiNet: A Model for Eliminating Options for Reading Comprehension with Multiple Choice Questions

04/04/2019 · by Soham Parikh, et al. · Indian Institute of Technology Madras

The task of Reading Comprehension with Multiple Choice Questions requires a human (or machine) to read a given {passage, question} pair and select one of the n given options. The current state of the art model for this task first computes a question-aware representation for the passage and then selects the option which has the maximum similarity with this representation. However, when humans perform this task they do not just focus on option selection but use a combination of elimination and selection. Specifically, a human would first try to eliminate the most irrelevant option and then read the passage again in the light of this new information (and perhaps ignore portions corresponding to the eliminated option). This process could be repeated multiple times until the reader is finally ready to select the correct option. We propose ElimiNet, a neural network-based model which tries to mimic this process. Specifically, it has gates which decide whether an option can be eliminated given the {passage, question} pair and, if so, it tries to make the passage representation orthogonal to this eliminated option (akin to ignoring portions of the passage corresponding to the eliminated option). The model makes multiple rounds of partial elimination to refine the passage representation and finally uses a selection module to pick the best option. We evaluate our model on the recently released large-scale RACE dataset and show that it outperforms the current state of the art model on 7 out of the 13 question types in this dataset. Further, we show that taking an ensemble of our elimination-selection based method with a selection based method gives us an improvement of 3.1% over the best-reported performance on this dataset.


1 Introduction

Reading comprehension is the task of answering questions pertaining to a given passage. An AI agent which can display such capabilities would be useful in a wide variety of commercial applications such as answering questions from financial reports of a company, troubleshooting using product manuals, answering general knowledge questions from Wikipedia documents, etc. Given its widespread applicability, several variants of this task have been studied in the literature. For example, given a passage and a question, the answer could either (i) match some span in the passage or (ii) be generated from the passage or (iii) be one of the given candidate answers. The last variant is typically used in various high school, middle school, and competitive examinations. We refer to this as Reading Comprehension with Multiple Choice Questions (RC-MCQ). There is an increasing interest in building AI agents with deep language understanding capabilities which can perform at par with humans on such competitive tests. For example, recently [Lai et al.2017] released a large-scale dataset for RC-MCQ collected from high school and middle school English examinations in China, comprising approximately 28,000 passages and 100,000 questions. The large size of this dataset makes it possible to train and evaluate complex neural network based models and measure the scientific progress on RC-MCQ.

Passage: One day, I was studying at home. Suddenly, there was a loud noise…A building in my neighborhood was on fire…A few people jumped out of the window… Those who were still on the second floor were just crying for help…Firefighters arrived at last. They fought the fire bravely. Water pipes were used and a ladder was put near the second-floor window. Then the people inside were taken out by the firefighters. Thanks to the firefighters, the people inside were saved and the fire was put out in the end, but many things, such as desk, pictures and clothes, were damaged.
Question: How did the people who didn’t jump out of the window get out of the building?
Option A: They were taken out by the firefighters.
Option B: They climbed down a ladder by themselves.
Option C: They walked out after the fire was put out.
Option D: They were taken out by doctors.
Correct Option: A

Figure 1: Example of RC-MCQ from RACE dataset

While answering such Multiple Choice Questions (MCQs) (e.g., Figure 1), humans typically use a combination of option elimination and option selection. More specifically, it makes sense to first try to eliminate options which are completely irrelevant to the given question. While doing so, we may also be able to discard certain portions of the passage which are not relevant to the question (because they revolve around the option which has been eliminated, e.g., portions marked in blue and orange, corresponding to Option B and Option C respectively in Figure 1). This process can then be repeated multiple times, each time eliminating an option and refining the passage (by discarding irrelevant portions). Finally, when it is no longer possible to eliminate any option, we can pick the best option from the remaining options. In contrast, the current state of the art models for RC-MCQ focus explicitly on option selection. Specifically, given a question and a passage, they first compute a question-aware representation of the passage. They then compute a representation for each of the options and select the option whose representation is closest to the question-aware passage representation. There is no iterative process where options get eliminated and the representation of the passage gets refined in the light of this elimination.

We propose a model which tries to mimic the human process of answering MCQs. Similar to the existing state of the art method [Dhingra et al.2017], we first compute a question-aware representation of the passage (which essentially tries to retain only the portions of the passage which are relevant to the question). We then use an elimination gate (depending on the passage, question and option) which takes a soft decision as to whether an option needs to be eliminated or not. Next, akin to the human process described above, we would like to discard portions of the passage representation which are aligned with this eliminated option. We do this by subtracting the component of the passage representation along the option representation (similar to Gram-Schmidt orthogonalization). The amount of orthogonalization depends on the soft decision given by the elimination gate. We repeat this process multiple times, during each pass doing a soft elimination of the options and refining the passage representation. At the end of a few passes, we expect the passage representation to be orthogonal (hence dissimilar) to the irrelevant options. Finally, we use a selection module to select the option which is most similar to the refined passage representation. We refer to this model as ElimiNet. Note that such a model will not make sense in cases where the options are highly related. For example, if the question is about the life stages of a butterfly and the options are four different orderings of the words butterfly, egg, pupa, caterpillar, then it does not make sense to orthogonalize the passage representation to the incorrect option representations. However, the dataset that we focus on in this work does not contain questions which have such permuted options.
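To make the orthogonalization idea concrete, the following is a minimal sketch (in PyTorch; the tensor names such as `passage` and `option` are ours for illustration, not from the paper) of removing the component of a passage vector that lies along an eliminated option's representation:

```python
import torch

def project_out(passage: torch.Tensor, option: torch.Tensor) -> torch.Tensor:
    """Return the component of `passage` orthogonal to `option`
    (one step of Gram-Schmidt orthogonalization)."""
    # Projection of the passage vector onto the option direction.
    along = (passage @ option) / (option @ option) * option
    # Subtracting it leaves the part of the passage "not about" this option.
    return passage - along

passage = torch.randn(256)  # refined passage representation (hypothetical size)
option = torch.randn(256)   # representation of an eliminated option
refined = project_out(passage, option)
print(torch.dot(refined, option))  # ~0: passage no longer aligned with option
```

In the full model this subtraction is only partial, controlled by the gates described in Section 3.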

We evaluate ElimiNet on the RACE dataset and compare it with the Gated Attention Reader (GAR) [Dhingra et al.2017], the current state of the art model on this dataset. We show that our model outperforms GAR on 7 out of the 13 question types in this dataset. We also visualize the soft elimination probabilities learnt by ElimiNet and observe that it indeed learns to iteratively refine the passage representation and push the probability mass towards the correct option. Finally, we show that an ensemble model combining ElimiNet with GAR gives an accuracy of 47.2%, which is 7% (relative) better than the best-reported performance on this dataset. The code for our model is publicly available at https://github.com/sohamparikh94/ElimiNet.

2 Related Work

Over the last few years, the availability of large-scale datasets has led to an increasing interest in the task of Reading Comprehension. These datasets cover different variations of the Reading Comprehension task. For example, SQuAD [Rajpurkar et al.2016], TriviaQA [Joshi et al.2017], NewsQA [Trischler et al.2016], MS MARCO [Nguyen et al.2016], NarrativeQA [Kociský et al.2017], etc. contain {passage, question, answer} triples where the answer either matches a span of the passage or has to be generated. On the other hand, CNN/Daily Mail [Hermann et al.2015], Children’s Book Test (CBT) [Hill et al.2015] and the Who Did What (WDW) dataset [Onishi et al.2016] offer cloze-style RC where the task is to predict a missing word/entity (from the passage) in the question. Some other datasets such as MCTest [Richardson et al.2013], AI2 [Khashabi et al.2016] and RACE contain RC with multiple choice questions (RC-MCQ) where the task is to select the right answer.

The advent of these datasets and the general success of deep learning for various NLP tasks has led to a proliferation of neural network based models for RC. For example, the models proposed in [Xiong et al.2016, Seo et al.2016, Wang et al.2017, Hu et al.2017] address the first variant of RC requiring span prediction, as in the SQuAD dataset. Similarly, the models proposed in [Chen et al.2016, Kadlec et al.2016, Cui et al.2017, Dhingra et al.2017] address the second variant of RC requiring cloze-style QA. Finally, [Lai et al.2017] adapt the models proposed in [Chen et al.2016, Dhingra et al.2017] for cloze-style RC and use them to address the problem of RC-MCQ. Irrespective of which of the three variants of RC they address, these models use a very similar framework. Specifically, these models contain components for (i) encoding the passage, (ii) encoding the question, (iii) capturing interactions between the question and the passage, (iv) capturing interactions between the question and the options (for MCQ), (v) making multiple passes over the passage and (vi) a decoder to predict/generate/select an answer. The differences between the models arise from the specific choice of the encoder, decoder, interaction functions and iteration mechanism. Most of the current state of the art models can be seen as special instantiations of the above framework.

The key difference between our model and existing models for RC-MCQ is that we introduce components for (soft-)eliminating irrelevant options and refining the passage representation in the light of this elimination. The passage representation thus refined over multiple (soft-)elimination rounds is then used for selecting the most relevant option. To the best of our knowledge, this is the first model which introduces the idea of option elimination for RC-MCQ.

Figure 2: A simplistic diagram of the proposed model

3 Proposed Model

Given a passage $D$ of word-length $N$, a question $Q$ of word-length $M$, and $n$ options $z_1, \ldots, z_n$, where each option $z_i$ is of word-length $J_i$, the task is to predict a conditional probability distribution over the options (i.e., to predict $P(z_i \mid D, Q)$). We model this distribution using a neural network which contains modules for encoding the passage/question/options, capturing the interactions between them, eliminating options and finally selecting the correct option. We refer to these as the encoder, interaction, elimination and selection modules, as shown in Figure 2. Among these, the main contribution of our work is the introduction of a module for elimination. Specifically, we introduce a module to (i) decide whether an option can be eliminated, (ii) refine the passage representation to account for eliminated/un-eliminated options and (iii) repeat this process multiple times. In the remainder of this section, we describe the various components of our model.

Encoder Module:

We first compute vectorial representations of the question and options. We do so by using a bidirectional recurrent neural network which contains two Gated Recurrent Units (GRU) [Chung et al.2014], one which reads the given string (question or option) from left to right and the other which reads the string from right to left. For example, given the question $Q = (q_1, q_2, \ldots, q_M)$, each GRU computes a hidden representation for each time-step (word) as:

$$\overrightarrow{h_t} = \overrightarrow{\mathrm{GRU}}(\overrightarrow{h}_{t-1}, e(q_t)), \qquad \overleftarrow{h_t} = \overleftarrow{\mathrm{GRU}}(\overleftarrow{h}_{t+1}, e(q_t))$$

where $e(q_t)$ is the $d$-dimensional embedding of the question word $q_t$. The final representation of each question word is a concatenation of the forward and backward representations (i.e., $h_t^q = [\overrightarrow{h_t}; \overleftarrow{h_t}]$). Similarly, we compute the bi-directional representations for each word in each of the options as $h_{ij}^z$. Just to be clear, $h_{ij}^z$ is the representation of the $j$-th word in the $i$-th option ($z_i$). We use separate GRU cells for the question and options, with the same GRU cell being used for all the options. Note that the encoder also computes a representation of each passage word, which is simply the word embedding of that passage word (i.e., $p_i^0 = e(w_i^p)$ for the $i$-th passage word $w_i^p$). Later on, in the interaction module, we use a GRU cell to compute the interactions between the passage words.
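As a rough sketch of this encoder (PyTorch; the dimensions and class structure are our assumptions, not the authors' released code), a single bidirectional GRU already yields the concatenated forward/backward word representations:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """BiGRU encoder for the question (the same design, with a separate
    GRU cell, would be reused for the options)."""
    def __init__(self, vocab_size: int, emb_dim: int = 100, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bigru = nn.GRU(emb_dim, hidden, batch_first=True,
                            bidirectional=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> outputs: (batch, seq_len, 2 * hidden)
        # Each output position is the concatenation [forward; backward].
        outputs, _ = self.bigru(self.embed(token_ids))
        return outputs

enc = Encoder(vocab_size=50_000)
q = torch.randint(0, 50_000, (2, 12))  # a batch of 2 questions, 12 tokens each
print(enc(q).shape)                    # torch.Size([2, 12, 256])
```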

Interaction Module:

Once the basic question and passage word representations have been computed, the idea is to allow them to interact so that the passage words’ representations can be refined in the light of the question words’ representations. This is similar to how humans first independently read the passage and the question and then read the passage multiple times, trying to focus on the portions which are relevant and ignoring portions that are irrelevant (e.g., the portion marked in red in Figure 1) to the question. To achieve this, we use the same multi-hop architecture for iteratively refining passage representations as proposed in the Gated Attention Reader [Dhingra et al.2017]. At each hop $t$, we use the following set of equations to compute this refinement:

$$\alpha_i^t = \mathrm{softmax}(Q^\top p_i^{t-1})$$

where $Q$ is a matrix whose columns are the question word representations $h_j^q$ as computed by the encoder, and $\alpha_i^t \in \mathbb{R}^M$ is such that each element of $\alpha_i^t$ essentially computes the importance of the $j$-th question word for the $i$-th passage word during hop $t$. At the 0-th hop, $p_i^0$ is simply the embedding of the $i$-th passage word. The goal is to refine this embedding over each hop based on interactions with the question. Next, we compute,

$$\tilde{q}_i^t = Q\alpha_i^t, \qquad x_i^t = p_i^{t-1} \odot \tilde{q}_i^t$$

where $\tilde{q}_i^t$ computes the importance of each dimension of the current passage word representation and is then used as a gate to scale up or scale down different dimensions of the passage word representation ($\odot$ denotes element-wise multiplication).

We now allow these refined passage word representations to interact with each other using a bi-directional recurrent neural network to compute the representations for the next hop:

$$p_i^t = \mathrm{BiGRU}(x_1^t, \ldots, x_N^t)_i$$

The above process is repeated for $T$ hops, wherein each hop takes $p_i^{t-1}$ as input and computes a refined representation $p_i^t$. After $T$ hops, we obtain a fixed-length vector representation of the passage by combining the passage word representations using a weighted sum:

$$\beta_i = \mathrm{softmax}(q^\top p_i^T), \qquad x = \sum_{i=1}^{N} \beta_i \, p_i^T \tag{1}$$

where $q$ is the final state of the question BiGRU, $\beta_i$ computes the importance of each passage word, and $x$ is a weighted sum of the passage word representations.
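A minimal sketch of one gated-attention hop and the weighted sum of Equation (1), following the Gated Attention Reader design named above (the unbatched shapes and variable names are our assumptions):

```python
import torch

def gated_attention_hop(P: torch.Tensor, Q: torch.Tensor) -> torch.Tensor:
    """One hop of gated attention.
    P: (N, d) passage word representations p_i^{t-1}
    Q: (M, d) question word representations (the columns of Q in the text)
    Returns x_i^t = p_i^{t-1} * q~_i for every passage word i."""
    # alpha[i, j] = importance of question word j for passage word i.
    alpha = torch.softmax(P @ Q.T, dim=1)   # (N, M)
    q_tilde = alpha @ Q                     # (N, d) per-word question summary
    return P * q_tilde                      # element-wise gating

def passage_summary(P: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Equation (1): attention-weighted sum of the final passage
    representations, with the final question state q as the query."""
    beta = torch.softmax(P @ q, dim=0)      # (N,) word importances
    return beta @ P                         # (d,) fixed-length passage vector

P, Q, q = torch.randn(30, 256), torch.randn(12, 256), torch.randn(256)
# (A BiGRU over the gated outputs runs between hops; omitted here.)
x = passage_summary(gated_attention_hop(P, Q), q)
print(x.shape)                              # torch.Size([256])
```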

Elimination Module:

The aim of the elimination module is to refine the passage representation so that it does not focus on portions which correspond to irrelevant options. To do so we first need to decide whether an option can be eliminated or not, and then ensure that the passage representation gets modified accordingly. For the first part, we introduce an elimination gate to enable a soft-elimination:

$$e_i = \sigma(W_e x + V_e q + U_e z_i + b_e)$$

Note that this gate is computed separately for each option $i$. In particular, it depends on the final state $z_i$ of the bidirectional option GRU. It also depends on the final state $q$ of the bidirectional question GRU and the refined passage representation $x$ computed by the interaction module. $W_e$, $V_e$, $U_e$ and $b_e$ are parameters which will be learned.

Based on the above soft-elimination, we want to now refine the passage representation. For this, we compute $x_i^\perp$, the component of the passage representation ($x$) orthogonal to the option representation ($z_i$), and $x_i^\parallel$, the component of the passage representation along the option representation:

$$x_i^\perp = x - \frac{x^\top z_i}{z_i^\top z_i} z_i \tag{2}$$

$$x_i^\parallel = \frac{x^\top z_i}{z_i^\top z_i} z_i \tag{3}$$

The elimination gate then decides how much of $x_i^\perp$ and $x_i^\parallel$ need to be retained:

$$\tilde{x}_i = e_i \odot x_i^\perp + (1 - e_i) \odot x_i^\parallel$$

If $e_i = 1$ (eliminate, e.g., portions corresponding to Option D in Figure 1) then the passage representation will be made orthogonal to the option representation (akin to ignoring portions of the passage relevant to the option), and if $e_i = 0$ (don’t eliminate, e.g., portions marked in green, corresponding to Option A in Figure 1) then the passage representation will be aligned with the option representation (akin to focusing on portions of the passage relevant to the option).

Note that in equations (2) and (3) we completely subtract the component along or orthogonal to the option representation. We wanted to give the model some flexibility to decide how much of this component to subtract. To do this we introduce another gate, called the subtract gate:

$$s_i = \sigma(W_s x + V_s q + U_s z_i + b_s)$$

where $W_s$, $V_s$, $U_s$ and $b_s$ are parameters that need to be learned. We then replace the RHS of Equations (2) and (3) by $x - s_i \odot \frac{x^\top z_i}{z_i^\top z_i} z_i$ and $s_i \odot \frac{x^\top z_i}{z_i^\top z_i} z_i$ respectively; thus the components used in Equations (2) and (3) are gated using $s_i$. One could argue that $e_i$ itself could encode this information, but empirically we found that separating these two functionalities (elimination and subtraction) works better.

For each of the $n$ options, we independently compute representations $\tilde{x}_1, \ldots, \tilde{x}_n$. These are combined to obtain a single refined representation for the passage:

$$\gamma_i = \mathrm{softmax}_i(w_\gamma^\top \tilde{x}_i), \qquad \tilde{x} = \sum_{i=1}^{n} \gamma_i \, \tilde{x}_i \tag{4}$$

where $w_\gamma$ is a learned vector. Note that the $\tilde{x}_i$'s represent the option-specific passage representations and the $\gamma_i$'s give us a way of combining these option-specific representations into a single passage representation. We repeat the above process for $L$ hops, wherein the $l$-th hop takes $\tilde{x}^{l-1}$, $q$ and $z_i$ as input and returns a refined $\tilde{x}^l$ computed using the above set of equations.
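Putting the elimination module together, here is a compact sketch of a single hop for one example (the concatenation-based gate inputs, parameter shapes and names such as `elim` are our guesses at one consistent instantiation, not the authors' released code):

```python
import torch
import torch.nn as nn

class EliminationHop(nn.Module):
    """One soft-elimination pass over all n options (single example)."""
    def __init__(self, d: int):
        super().__init__()
        self.elim = nn.Linear(3 * d, d)      # elimination gate e_i
        self.subtract = nn.Linear(3 * d, d)  # subtract gate s_i
        self.combine = nn.Linear(d, 1)       # scores for the gamma weights

    def forward(self, x: torch.Tensor, q: torch.Tensor,
                Z: torch.Tensor) -> torch.Tensor:
        # x: (d,) passage repr., q: (d,) question repr., Z: (n, d) options.
        n = Z.size(0)
        inp = torch.cat([x.expand(n, -1), q.expand(n, -1), Z], dim=1)
        e = torch.sigmoid(self.elim(inp))        # (n, d) elimination gates
        s = torch.sigmoid(self.subtract(inp))    # (n, d) subtract gates
        # Projection of x along each option (the Gram-Schmidt component).
        coeff = (Z @ x) / (Z * Z).sum(dim=1)     # (n,)
        proj = coeff.unsqueeze(1) * Z            # (n, d)
        x_perp = x - s * proj                    # gated Eq. (2)
        x_par = s * proj                         # gated Eq. (3)
        x_i = e * x_perp + (1 - e) * x_par       # per-option refinement
        gamma = torch.softmax(self.combine(x_i), dim=0)  # (n, 1)
        return (gamma * x_i).sum(dim=0)          # Eq. (4): refined passage

hop = EliminationHop(d=256)
x_refined = hop(torch.randn(256), torch.randn(256), torch.randn(4, 256))
```

Running `hop` repeatedly on its own output corresponds to the $L$ elimination hops described above.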

Selection Module:

Finally, the selection module takes the refined passage representation $\tilde{x}$ obtained after $L$ elimination hops and computes its bilinear similarity with each option representation:

$$\mathrm{score}(z_i) = \tilde{x}^\top W_{att} z_i$$

where $\tilde{x}$ and $z_i$ are vectors and $W_{att}$ is a matrix which needs to be learned. We select the option which gives the highest score as computed above. We train the model using the cross-entropy loss by first normalizing the above scores (using softmax) to obtain a probability distribution.
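A sketch of the selection step and the training loss (the name `W_att` for the bilinear matrix is ours):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d, n = 256, 4
W_att = nn.Parameter(torch.randn(d, d) * 0.01)  # bilinear similarity matrix

def option_scores(x_refined: torch.Tensor, Z: torch.Tensor) -> torch.Tensor:
    # score(z_i) = x^T W_att z_i for each option i; Z is (n, d).
    return Z @ (W_att @ x_refined)               # (n,)

x_refined, Z = torch.randn(d), torch.randn(n, d)
scores = option_scores(x_refined, Z)
pred = scores.argmax().item()                    # option picked at test time
# Training: cross entropy over the softmax-normalized scores
# (here assuming the correct option has index 0).
loss = F.cross_entropy(scores.unsqueeze(0), torch.tensor([0]))
```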

4 Experimental Setup

In this section, we describe the dataset used for evaluation, the hyperparameters of our model, the training procedure, and the state of the art models used for comparison.



Dataset: We evaluate our model on the RACE dataset which contains multiple choice questions collected from high school and middle school English examinations in China. The high school portion of the dataset (RACE-H) contains 62,445, 3,451 and 3,498 questions in the train, validation, and test sets respectively. The middle school portion of the dataset (RACE-M) contains 25,421, 1,436 and 1,436 questions in the train, validation, and test sets respectively.
This dataset contains a wide variety of questions of varying degrees of complexity. For example, some questions ask for the most appropriate title for the passage, which requires deep language understanding capabilities to comprehend the entire passage. Some questions ask for the meaning of a specific term or phrase in the context of the passage. Similarly, some questions ask for the key idea of the passage. Finally, there are some standard Wh-type questions. Given this wide variety of questions, we wanted to see if there are specific types of questions for which an elimination module makes more sense. To do so, with the help of in-house annotators, we categorized the questions in the test set into the following categories using scripts with manually defined rules: (i) Wh-question types, (ii) questions asking for the title/meaning/key idea of the passage, (iii) questions asking whether a given statement is True/False, (iv) questions asking for a quantity (e.g., how much, how many) and (v) fill-in-the-blank questions. We were able to classify most of the questions in the test set into these categories; the remaining questions were labeled as miscellaneous. The distribution of questions belonging to each of these categories in RACE-H and RACE-M is shown in Figure 3.

Figure 3: Distribution of different question types in the RACE-Mid (top) and RACE-High (bottom) portions of the dataset

Training Procedures:

We try two different ways of training the model. In the first case, we train the parameters of all the modules (encoder, interaction, elimination, and selection) together. In the second case, we first remove the elimination module and train the parameters of the remaining modules. We then fix the parameters of the encoder and interaction modules and train only the elimination and selection modules. The idea was to first help the model understand the document better and only later focus on the elimination of options (in other words, ensure that the later stage of learning is focused entirely on the elimination module). Of course, we also had to learn the parameters of the selection module from scratch because it now needs to work with the refined passage representations. Empirically, we find that this pre-training step does not improve over the performance obtained by end-to-end training. Hence, we report results only for the first case (i.e., end-to-end training).

Hyperparameters: We restrict our vocabulary to the top 50K words appearing in the passage, question, and options in the dataset. We use the same vocabulary for the passage, question, and options. We use the same train, validation, and test splits as provided by the authors, and we tune all our models based on the accuracy achieved on the validation set. We initialize the word embeddings with pre-trained GloVe embeddings [Pennington et al.2014] and experiment with both fine-tuning and not fine-tuning these embeddings. The hidden size for the BiGRU is the same across the passage, question, and option, and is tuned on the validation set, as are the number of hops in the interaction module and the number of passes in the elimination module. We add dropout at the input layer to the BiGRUs and tune the dropout rate. We try both Adam and SGD as the optimizer with a range of learning rates; in general, we find that Adam converges much faster. We train all our models for up to 50 epochs as we do not see any benefit of training beyond 50 epochs.



Models Compared: We compare our results with the current state of the art model on the RACE dataset, namely, the Gated Attention Reader [Dhingra et al.2017]. This model was initially proposed for cloze-style RC and is, in fact, the current state of the art model for cloze-style RC. The authors of the RACE dataset adapt this model for RC-MCQ by replacing the output layer with a layer which computes the bilinear similarity between the option and passage representations.

Figure 4: Performance of ElimiNet and Gated Attention Reader (GAR) on different question categories in RACE-Full (top), RACE-Mid (mid) and RACE-High (bottom). The categories in which our model outperforms GAR are marked with *.

5 Results and Discussions

In this section, we discuss the results of our experiments.

5.1 Performance of Individual Models

We compare the accuracy of different models on RACE-Mid (middle school), RACE-High (high school) and the full RACE test set comprising both RACE-Mid and RACE-High. For each dataset, we compare the accuracy for each question type. These results are summarized in Figure 4. We observe that ElimiNet performs better than the Gated Attention Reader (GAR) on several of the question categories in both RACE-Mid and RACE-High (marked with * in Figure 4). On RACE-Full, ElimiNet performs better than GAR on 7 out of the 13 categories. Note that, overall on the entire test set (combining all question types), our model gives only a slight improvement over GAR. The main reason for this is that the dataset is dominated by fill-in-the-blank style questions, on which our model performs slightly worse. Since a large fraction of the questions in the dataset are fill-in-the-blank style questions, even a small drop in the performance on these questions offsets the gains that we get on the other question types.

5.2 Ensemble of Different Models

Since ElimiNet and GAR perform well on different question types, we believe that taking an ensemble of these models should lead to an improvement in the overall performance. For a fair comparison, we also report the performance when we independently take an ensemble of multiple GAR models and multiple ElimiNet models, each trained with a different hyperparameter setting. We refer to these as the GAR-ensemble and ElimiNet-ensemble models; adding further models to an ensemble did not yield any benefit. The results of these experiments are summarized in Table 1. ElimiNet-ensemble performs better than GAR-ensemble, and the final ensemble of the two gives the best results. We observe that ElimiNet-ensemble performs significantly better than GAR-ensemble on the RACE-Mid dataset and gives almost the same performance on the RACE-High dataset. Overall, by taking an ensemble of the two models we get an accuracy of 47.2%, which is 7% (relative) better than GAR and 2.8% (relative) better than GAR-ensemble.
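The paper does not spell out the ensembling rule; a common choice, shown here purely as an assumption, is to average the softmax distributions of the member models and pick the argmax:

```python
import torch

def ensemble_predict(score_list: list[torch.Tensor]) -> int:
    """Average the per-model option distributions (our assumed rule)
    and return the index of the best option."""
    probs = torch.stack([torch.softmax(s, dim=0) for s in score_list])
    return probs.mean(dim=0).argmax().item()

# e.g. scores over 4 options from a GAR model and an ElimiNet model
print(ensemble_predict([torch.randn(4), torch.randn(4)]))
```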

5.3 Effect of Subtract Gate

We wanted to see if the subtract gate enables the model to learn better (by performing partial orthogonalization/alignment). For this, we compared the accuracy with and without the subtract gate (to remove it, we set the subtract gate to a vector of 1s, which recovers the complete subtraction of Equations (2) and (3)). We observed that without the subtract gate the accuracy of our model drops, and it outperforms the GAR model on fewer question categories. This indicates that the flexibility offered by the subtract gate does help the model.

Model RACE-Mid RACE-High RACE-Full
SA Reader 44.2 43.0 43.3
GA Reader (GAR) 43.7 44.2 44.1
ElimiNet 44.4 44.5 44.5
GAR Ensemble 45.7 46.2 45.9
ElimiNet Ensemble 47.7 46.1 46.5
GAR + ElimiNet (ensemble of above 2 ensembles) 47.4 47.4 47.2
Table 1: Accuracy (%) of individual and ensemble models
Figure 5: Change in the probability of the correct option and of the incorrect option (initially predicted with the highest score) over multiple passes of the elimination module. The two figures correspond to two different examples from the test set.

5.4 Visualizing Shift in Probability Scores

If the elimination module is indeed learning to eliminate options and align/orthogonalize the passage representation w.r.t. the uneliminated/eliminated options, then we should see a shift in the probability scores as we do multiple passes of elimination. To visualize this, in Figure 5 we plot the probabilities of the correct option and of the incorrect option which had the highest probability before the first pass through the elimination module, for two different test instances. We observe that as we do multiple passes of elimination, the probability mass shifts from the incorrect option (blue curve) to the correct option (green curve). This indicates that the elimination module learns to align the passage representation with the correct option (hence increasing its similarity) and move it away from the incorrect option (hence decreasing its similarity).

6 Conclusion

We focus on the task of Reading Comprehension with Multiple Choice Questions and propose a model which mimics how humans approach this task. Specifically, the model uses a combination of elimination and selection to arrive at the correct option. This is achieved by introducing an elimination module which takes a soft decision as to whether an option should be eliminated or not. It then modifies the passage representation to either align it with uneliminated options or orthogonalize it to eliminated options. The amount of orthogonalization or alignment is determined by two gating functions. This process is repeated multiple times to iteratively refine the passage representation. We evaluate our model on the recently released RACE dataset and show that it outperforms the current state of the art model on 7 out of the 13 question types. Finally, using an ensemble of our elimination-selection approach with a state of the art selection approach, we get an improvement of 3.1% over the best-reported performance on the RACE dataset. As future work, instead of soft elimination we would like to use reinforcement learning techniques to learn a policy for hard elimination.

Acknowledgments

We thank Google for supporting Preksha Nema through their Google India Ph.D. Fellowship program. We thank our anonymous reviewer for suggesting the butterfly example which is mentioned in the introduction.

References