Knowledge Query Network: How Knowledge Interacts with Skills

08/03/2019 ∙ by Jinseok Lee, et al. ∙ The Hong Kong University of Science and Technology

Knowledge Tracing (KT) is the task of tracing the knowledge of students as they solve a sequence of problems represented by their related skills. This involves abstract concepts of students' states of knowledge and the interactions between those states and skills. Therefore, a KT model is designed to predict whether students will give correct answers and to describe such abstract concepts. However, existing methods either give relatively low prediction accuracy or fail to explain those concepts intuitively. In this paper, we propose a new model called Knowledge Query Network (KQN) to solve these problems. KQN uses neural networks to encode student learning activities into knowledge state and skill vectors, and models the interactions between the two types of vectors with the dot product. Through this, we introduce a novel concept called probabilistic skill similarity that relates the pairwise cosine and Euclidean distances between skill vectors to the odds ratios of the corresponding skills, which makes KQN interpretable and intuitive. On four public datasets, we have carried out experiments to show the following: 1. KQN outperforms all the existing KT models based on prediction accuracy. 2. The interaction between the knowledge state and skills can be visualized for interpretation. 3. Based on probabilistic skill similarity, a skill domain can be analyzed with clustering using the distances between the skill vectors of KQN. 4. For different values of the vector space dimensionality, KQN consistently exhibits high prediction accuracy and a strong positive correlation between the distance matrices of the skill vectors.


1. Introduction

One of the advantages of Intelligent Tutoring Systems (Nwana, 1990) and massive open online courses (Kaplan and Haenlein, 2016) is that they can potentially benefit from monitoring and tracking student activities in an adaptive learning environment, where learner modeling comes into play. A learner model provides estimates of a student's state and comprises two inter-connected aspects: domain modeling and knowledge modeling (Pelánek, 2017).

A domain model (Pelánek, 2017) studies the structure within a domain of problems (for example, it finds out which skill a problem is related to: "1+2=?" maps to "addition of integers" and "1.3+2.5=?" to "addition of decimals"). Another task of a domain model is to discover the structure of a skill domain, which can be performed either manually or automatically (Pelánek, 2017). On the other hand, a knowledge model (Anderson et al., 1990), in an abstract sense, traces students' knowledge while they are solving problems. Knowledge has been described in various forms under the name of knowledge state, for which there is no universal definition yet. In this paper, the term knowledge state refers to a state that describes a student's general level of attainment of skills. Often, domain modeling and knowledge modeling are viewed as separate; here, we provide approaches for both, with students' problem-solving records serving as important input features for finding the latent structure of a skill domain.

KT is a research area that analyzes student activities and studies knowledge acquisition, where the main task is to describe a student's knowledge. To elaborate, consider a student who solved a sequence of problems. The student's data is given by a temporal sequence of tuples, each of which consists of the skill that the problem at each time step is related to and the binary correctness that indicates whether or not the student gave a right answer. Calling such a tuple a student response, the KT problem is formulated as follows: 1. given the student responses up to time step $t$, describe the student's knowledge state at the current time step $t$, and 2. given the skill at the next time step $t+1$, predict the correctness by modeling the interaction between the student's knowledge state at time $t$ and the skill at time $t+1$, which we will call knowledge interaction. Note that the knowledge state refers to the dynamic state of a student accumulated from the student responses, while a skill indicates a particular ability that needs to be learned by a student to solve a problem.

Therefore, the quality of a KT model is measured by its ability to describe the knowledge state of a student and its accuracy in predicting correctness. Additionally, since modeling the knowledge interaction is to describe how a student's knowledge state responds to different skills, it is desirable if a KT model can explain the relationship between skills that can be inferred from the knowledge interaction. For example, we can say that "addition of integers" is independent of "subtraction of integers" if a model observes that a student does not learn the latter while learning the former. Similarly, they are dependent if the change in a student's knowledge state of one skill affects the knowledge state of the other. We believe that modeling such skill relationships can lead to further exploration of the latent structure of the skill domain, which is the subject of domain modeling.

However, existing KT models provide limited definitions of either the knowledge state, the knowledge interaction, or both. For example, Bayesian Knowledge Tracing (BKT) (Anderson et al., 1990) imposes a binary assumption on the knowledge state, which is too restrictive to be intuitive, and Deep Knowledge Tracing (DKT) (Piech et al., 2015) does not give an explanation of the knowledge interaction. In this paper, we propose a new neural network KT model called Knowledge Query Network (KQN) to generalize the knowledge state and explain the knowledge interaction more descriptively. The central idea is to use the dot product between a knowledge state vector and a skill vector to define the knowledge interaction while leveraging neural networks to encode student responses and skills into vectors of the same dimensionality. Additionally, we introduce a novel concept called probabilistic skill similarity, which relates the cosine and Euclidean distances between the skill vectors to the odds ratios for the corresponding skills. Based on those distances, we explore the latent structure of a skill domain with cluster analysis. Lastly, we show that KQN is stable in predicting correctness and learning skill vectors by comparing prediction accuracy and the distance matrices of the skill vectors when the vector space dimensionality is varied.

2. Related Work

Item Response Theory (IRT) is a framework for modeling the relationship between problems and correctness (Hulin et al., 1983). In its simplest form, it uses a logistic regression model by estimating student proficiency and skill difficulty. However, it assumes the proficiency to be constant and does not explain any structure over problems. To overcome those limitations, Bayesian extensions of IRT have been proposed that add a hierarchical structure over items (HIRT) and temporal changes in a student's knowledge state (TIRT) (Wilson et al., 2016). Still, HIRT assumes constant student proficiency while TIRT lacks the ability of domain analysis.

(a) Original
T   SID   C
1   1     0
2   2     0
3   1     1

(b) BKT (one dataset per skill)
Skill 1:
T   SID   C
1   1     0
2   1     1
Skill 2:
T   SID   C
1   2     0

(c) LFA
T   SID   NO (skill 1)   NO (skill 2)
1   1     1               0
2   2     1               1
3   1     2               1

(d) PFA
T   SID   S (skill 1)   F (skill 1)   S (skill 2)   F (skill 2)
1   1     0             1             0             0
2   2     0             1             0             1
3   1     1             1             0             1

Table 1. The example student Ben's input data: the original data and the data preprocessed for different KT models. Feature names are abbreviated as follows: Time (T), Skill ID (SID), Correctness (C), Number of Opportunities (NO), Number of Successes (S), and Number of Failures (F). Note that the original data is used for the neural network models.

In BKT, a student's knowledge state is viewed as a set of binary latent variables, one for each skill, with two possible states, known and unknown (Anderson et al., 1990). A set of observable variables, each corresponding to the correctness for one skill, is conditioned on the set of binary variables. For example, consider the running example of student Ben used throughout this paper, whose records are shown in Table 1. Accordingly, there will be two knowledge state variables and two correctness variables for skills 1 and 2, a total of four variables with two independent BKT models whose input data are shown in Table 1(b). The knowledge acquisition in BKT is modeled with a Hidden Markov Model (HMM), where the knowledge interaction is controlled by a set of interpretable equations. Since BKT lacks the ability to forget and to individualize, a number of extensions have been proposed (Yudelson et al., 2013; Khajah et al., 2016). Most importantly, however, BKT's independence assumption on different skills is considered highly constrained and ineffective because the model cannot leverage the whole data. For example, Ben's history of responses on skill 1 cannot tell us anything about his responses on skill 2, since there are two separate models, one for each skill.

Learning Factors Analysis (LFA) (Cen et al., 2006) models a student's knowledge state as a set of binary variables, one for each skill. Correctness at each time step is predicted with a logistic regression model which has covariates related to students, skills, and a summary statistic of student responses, i.e., the number of past opportunities for each skill. Performance Factors Analysis (PFA) (Pavlik et al., 2009) extends LFA by separating 'the number of opportunities per skill' into 'the number of correct answers per skill' and 'the number of incorrect answers per skill' with the other covariates kept the same, e.g., in Ben's case, the input data are preprocessed as shown in Table 1(c) and Table 1(d).

Since an estimate for correctness in LFA and PFA is explained by student covariates and skill-specific covariates without variable interactions, the two models do not describe how a student's knowledge state with respect to one skill is affected by that with respect to another skill; instead, a student parameter, also called student proficiency, is the only factor that relates the knowledge state across different skills. Moreover, a skill is explained by the regression coefficients for the skill-specific covariates, from which we cannot tell the structure of a skill domain directly.

As the first neural network KT model, DKT (Piech et al., 2015), given the student's responses as input, encodes a student's knowledge state as a summary vector computed by a recurrent neural network (RNN), a well-known neural network technique for modeling temporal data. However, DKT does not define the knowledge interaction directly. In detail, a student response at each time step is formed as a tuple $(q_t, c_t)$, where $q_t$ and $c_t$ refer to the problem ID and the correctness, respectively. For example, Ben's original data in Table 1(a) is expressed as $(1, 0)$, $(2, 0)$, and $(1, 1)$ at $t = 1, 2, 3$. The input is then passed to an RNN layer, where Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) was used in the original paper (Piech et al., 2015), and its output hidden state is passed to a logistic function after an affine transformation, i.e., $\mathbf{y}_t = \sigma(\mathbf{W}\mathbf{h}_t + \mathbf{b})$, where $\mathbf{h}_t$ is the output hidden state of the LSTM layer and $\sigma$ is an element-wise logistic function. Finally, the $q_{t+1}$-th element of $\mathbf{y}_t$ is used to predict correctness at the next time step given that the next problem ID is $q_{t+1}$. Despite its superior prediction performance over the existing classical methods, DKT has been criticized by other papers (Wilson et al., 2016; Khajah et al., 2016) for its lack of practicality in educational applications. This is because the output hidden state is inherently hard to interpret as the knowledge state, and the model does not give insights into the knowledge interaction.

To make a neural network KT model more interpretable, Dynamic Key-Value Memory Networks (DKVMN) (Zhang et al., 2017) have been introduced by extending memory-augmented neural networks (MANNs) (Graves et al., 2014; Weston et al., 2014). Like DKT, DKVMN uses the original data as input. DKVMN accumulates temporal information from student responses into a dynamic matrix, or the value memory, while embedding skills with a static matrix, or the key memory. DKVMN defines the interaction between value vectors and key vectors using attention weights calculated with cosine similarity, and predicts correctness by passing the concatenation of a weighted sum of value vectors and an embedded skill vector to a multilayer perceptron (MLP). Even though the prediction accuracy of DKVMN has proven to be higher than that of DKT, the use of an MLP for the output of the model still makes it hard to explain the knowledge interaction.

3. Our Proposed Model

3.1. Motivation

To generalize the knowledge state while describing the knowledge interaction intuitively, we propose a model that projects a student's knowledge state and skills into the same vector space of embedding dimensionality $d$. An important constraint is to keep the skill vectors on the $d$-dimensional positive orthant unit sphere, i.e., they have unit length and positive coordinates. The logit of a probability estimate for correctness is given by the dot product between the current knowledge state vector and the skill vector of the next problem. This is only possible because the knowledge state and skill vectors lie in the same vector space.

Now, we illustrate why skill vectors are set to unit length and constrained to the positive orthant: the former makes the logit depend only on the direction of the related skill vector, while the latter ensures that learning on one skill does not decrease learning on another. For example, in a 2-D vector space, suppose that Ben has knowledge state vector $\mathbf{v}_2$ at $t = 2$ after two responses, while there are three skill vectors: $\mathbf{s}_1$ and $\mathbf{s}_2$ for skills 1 and 2, and $\mathbf{s}_3$ for a third "imaginary" skill lying outside the positive orthant, as shown in Figure 1. At $t = 3$, his knowledge state may change to $\mathbf{v}_3$ as he answers correctly for skill 1. The logit with respect to $\mathbf{s}_1$ then increases, while that with respect to skill 2 remains the same. However, the logit with respect to skill 3 would decrease, which would in turn decrease the probability estimate for the correctness of skill 3. This is counter-intuitive since the datasets we are dealing with contain sets of skills within the same area, e.g., mathematics.

Figure 1. Illustration of the skill vectors and Ben's knowledge state vectors at $t = 2$ and $t = 3$.
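The geometric argument above can be checked numerically. The following sketch is a toy illustration only; the specific vectors and the additive update rule modeling a learning gain are assumptions made for demonstration, not part of KQN. It shows that moving the knowledge state along a positive-orthant unit skill vector never decreases the logit of another positive-orthant skill, but can decrease the logit of a vector with a negative coordinate.

import numpy as np

# Unit-length skill vectors; s1 and s2 lie in the positive orthant,
# s3 is an "imaginary" skill with a negative coordinate.
s1 = np.array([1.0, 0.0])
s2 = np.array([0.0, 1.0])
s3 = np.array([-1.0, 1.0]) / np.sqrt(2.0)

v_before = np.array([0.5, 0.8])      # knowledge state before the response
v_after = v_before + 0.7 * s1        # hypothetical learning gain on skill 1

for name, s in [("skill 1", s1), ("skill 2", s2), ("skill 3", s3)]:
    print(name, "logit before:", v_before @ s, "after:", v_after @ s)
# Skill 1's logit rises, skill 2's stays the same, and skill 3's drops,
# which is why KQN keeps skill vectors in the positive orthant.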

3.2. Objective

Let the skill of the problem and the correctness at time $t$ be $q_t$ and $c_t$, respectively. The correctness at each time step is viewed as a Bernoulli variable given the history of responses and the skill at time step $t$:

$c_t \mid \mathbf{x}_1, \dots, \mathbf{x}_{t-1}, q_t \sim \mathrm{Bernoulli}(p_t),$

where $\mathbf{x}_\tau$ denotes the student response at time step $\tau$. Then the objective of a KT model is to find the parameter of the Bernoulli distribution at each time step:

$p_t = P(c_t = 1 \mid \mathbf{x}_1, \dots, \mathbf{x}_{t-1}, q_t).$

3.3. Architecture Overview

KQN consists of three components: the knowledge state encoder, the skill encoder, and the knowledge state query. The knowledge state encoder converts the temporal information from student responses into a knowledge state vector, while the skill encoder embeds a skill into a skill vector. The two vectors are then passed to the knowledge state query to produce the prediction for correctness given the current knowledge state and the queried skill. The network architecture of KQN is shown in Figure 2.

Figure 2. KQN architecture drawn at time step $t$.

3.4. Inputs

The model takes two inputs: a student response at the current time step and a skill at the next time step. Each student response is one-hot encoded and given as input to an RNN layer: the response at time step $t$ is encoded into a vector $\mathbf{x}_t \in \{0, 1\}^{2N}$ with exactly one non-zero element determined jointly by the skill $q_t$ and the correctness $c_t$, where $N$ is the number of skills and $q_t$ is the skill at time step $t$. Similarly, the skill at time $t + 1$ is one-hot encoded to $\mathbf{q}_{t+1} \in \{0, 1\}^N$, where the $q_{t+1}$-th element is 1 and the other elements are 0's. In Ben's case, his response at $t = 1$ (skill 1, incorrect) and the skill for the problem at $t = 2$ (skill 2) are encoded to a $2N$-dimensional and an $N$-dimensional one-hot vector, respectively.
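As a concrete sketch of this encoding (the ordering of the incorrect/correct blocks inside the $2N$-dimensional vector is an assumption here, not taken from the paper):

import numpy as np

def encode_response(skill_id, correct, num_skills):
    # One-hot response vector of length 2N; the first N slots are assumed
    # to encode incorrect answers and the last N slots correct answers.
    x = np.zeros(2 * num_skills)
    x[(skill_id - 1) + correct * num_skills] = 1.0
    return x

def encode_skill(skill_id, num_skills):
    # One-hot skill vector of length N.
    q = np.zeros(num_skills)
    q[skill_id - 1] = 1.0
    return q

# Ben's response at t=1 (skill 1, incorrect) and the queried skill at t=2.
print(encode_response(1, 0, num_skills=2))   # [1. 0. 0. 0.]
print(encode_skill(2, num_skills=2))         # [0. 1.]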

3.5. Knowledge State Query

Let the knowledge state vector $\mathbf{v}_t$ and the embedded skill vector $\mathbf{s}_{t+1}$, both $d$-dimensional, be the two vectors produced by the knowledge state encoder and the skill encoder, respectively. Then the knowledge interaction is defined by the inner product of the two vectors. The logit and the corresponding prediction probability are calculated as

$z_{t+1} = \langle \mathbf{v}_t, \mathbf{s}_{t+1} \rangle, \qquad \hat{p}_{t+1} = \phi(z_{t+1}),$

where $\phi$ is a logistic function and $\langle \cdot, \cdot \rangle$ refers to the inner product. In this way, the knowledge interaction is well-defined for the following reasons:

  • If two skills are independent, their corresponding vectors are orthogonal to each other. Accordingly, an increase or a decrease in the logit with respect to one vector does not affect the logit with respect to the other vector.

  • If two skills are similar from the probabilistic perspective, then an increase in a logit with respect to one vector would lead to an increase in the logit with respect to the other vector, and vice versa.

Note that from the definition of the knowledge interaction above, it is implied that there can be at most $d$ mutually independent skills. Whether or not KQN learns the pairwise relationships between skills, represented by pairwise distances, for different values of $d$ was tested in the experiments and is shown in later sections.

As a result, KQN approximates the parameter of the Bernoulli distribution at each time step as

$p_{t+1} \approx \phi(\langle \mathbf{v}_t, \mathbf{s}_{t+1} \rangle).$
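A minimal sketch of the query step, assuming a knowledge state vector and a unit-length skill vector are already available:

import numpy as np

def knowledge_query(knowledge_state, skill_vector):
    # The logit is the inner product of the two d-dimensional vectors;
    # the probability estimate is its logistic transform.
    logit = float(np.dot(knowledge_state, skill_vector))
    prob = 1.0 / (1.0 + np.exp(-logit))
    return logit, prob

v = np.array([0.4, 1.1, -0.2])     # knowledge state vector
s = np.array([0.6, 0.8, 0.0])      # unit length, positive orthant
print(knowledge_query(v, s))       # logit = 1.12, prob ≈ 0.75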

3.6. Knowledge State Encoder

Given input $\mathbf{x}_t$, the knowledge state encoder produces a knowledge state vector with the following equations:

$\mathbf{h}_t = \mathrm{RNN}(\mathbf{x}_t, \mathbf{h}_{t-1}), \qquad \mathbf{v}_t = \mathbf{W}_v \mathbf{h}_t + \mathbf{b}_v,$

where $\mathbf{W}_v \in \mathbb{R}^{d \times h}$, $\mathbf{b}_v \in \mathbb{R}^{d}$, $\mathrm{RNN}$ is an RNN layer, and $h$ is the state size of the RNN. In KQN, LSTM (Hochreiter and Schmidhuber, 1997) and Gated Recurrent Units (GRU) (Cho et al., 2014) have been tested as RNN variants. Also, to avoid overfitting, dropout regularization (Srivastava et al., 2014; Zaremba et al., 2014) has been applied to the RNN output layer, as was done for DKT in a previous work (Xiong et al., 2016).

3.7. Skill Encoder

The skill encoder embeds input $\mathbf{q}_{t+1}$ to $\mathbf{s}_{t+1}$ with an MLP as follows:

$\tilde{\mathbf{s}}_{t+1} = \mathrm{ReLU}\!\left(\mathbf{W}_2\, \mathrm{ReLU}(\mathbf{W}_1 \mathbf{q}_{t+1} + \mathbf{b}_1) + \mathbf{b}_2\right), \qquad \mathbf{s}_{t+1} = \frac{\tilde{\mathbf{s}}_{t+1}}{\lVert \tilde{\mathbf{s}}_{t+1} \rVert_2},$

where $\mathbf{W}_1 \in \mathbb{R}^{m \times N}$, $\mathbf{b}_1 \in \mathbb{R}^{m}$, $\mathbf{W}_2 \in \mathbb{R}^{d \times m}$, $\mathbf{b}_2 \in \mathbb{R}^{d}$, $m$ is the hidden state size of the MLP, and $\mathrm{ReLU}$ is an element-wise ReLU activation, $\mathrm{ReLU}(x) = \max(0, x)$ (Nair and Hinton, 2010). Note that $\mathbf{s}_{t+1}$ is now constrained to the $d$-dimensional positive orthant unit sphere, which we will denote by $\mathcal{S}$ for the rest of this paper for notational convenience.
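A numpy sketch of the skill encoder as reconstructed above; the two-layer structure, the hand-picked weights, and the explicit normalization step are illustrative assumptions rather than the authors' exact TensorFlow implementation.

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def skill_encoder(q_onehot, W1, b1, W2, b2):
    # MLP embedding followed by ReLU (non-negativity) and L2 normalization
    # (unit length), so the output lies on the positive orthant unit sphere.
    hidden = relu(W1 @ q_onehot + b1)
    raw = relu(W2 @ hidden + b2)
    return raw / (np.linalg.norm(raw) + 1e-12)

# Toy weights for N = 3 skills, hidden size m = 2, embedding dimension d = 3.
W1 = np.array([[1.0, -0.5, 0.2],
               [-0.3, 0.8, 0.4]])
b1 = np.zeros(2)
W2 = np.array([[0.5, -1.0],
               [0.2, 0.7],
               [-0.4, 0.3]])
b2 = np.zeros(3)

s = skill_encoder(np.array([0.0, 1.0, 0.0]), W1, b1, W2, b2)
print(s, np.linalg.norm(s))   # non-negative entries, unit norm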

3.8. Optimization

At each time step, the cross-entropy error between the probability estimate $\hat{p}_t$ and the target correctness $c_t$ is calculated, and the error terms over the time steps are summed to give the total error

$\mathcal{L} = -\sum_{t} \left[ c_t \log \hat{p}_t + (1 - c_t) \log (1 - \hat{p}_t) \right].$

The gradients of the total error with respect to the model parameters are computed with back-propagation and used by an optimization method.
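A sketch of the total error for one student, assuming padded sequences and a binary mask; the masking detail is an implementation assumption, not stated in the paper.

import numpy as np

def total_cross_entropy(p, c, mask):
    # p: predicted probabilities, c: target correctness (0/1),
    # mask: 1 for real time steps, 0 for padding.
    eps = 1e-7
    p = np.clip(p, eps, 1.0 - eps)
    ce = -(c * np.log(p) + (1.0 - c) * np.log(1.0 - p))
    return float(np.sum(ce * mask))

p = np.array([0.9, 0.4, 0.7, 0.5])
c = np.array([1.0, 0.0, 1.0, 0.0])
mask = np.array([1.0, 1.0, 1.0, 0.0])   # last step is padding
print(total_cross_entropy(p, c, mask))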

4. Probabilistic Skill Similarity

Based on the architecture of KQN, we hereby introduce a novel concept called probabilistic skill similarity to measure the distance between skills from the probabilistic perspective.

4.1. Distance Measures for Skill Vectors

For any two skill vectors $\mathbf{s}_i, \mathbf{s}_j \in \mathcal{S}$ learned by KQN, the cosine distance differs from the squared Euclidean distance by only a factor of 2, since the vectors are constrained to $\mathcal{S}$:

$d_{\mathrm{euc}}^2(\mathbf{s}_i, \mathbf{s}_j) = \lVert \mathbf{s}_i - \mathbf{s}_j \rVert_2^2 = 2\left(1 - \langle \mathbf{s}_i, \mathbf{s}_j \rangle\right) = 2\, d_{\cos}(\mathbf{s}_i, \mathbf{s}_j).$
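This identity, squared Euclidean distance equals twice the cosine distance for unit vectors, can be verified numerically:

import numpy as np

rng = np.random.default_rng(1)
# Two random vectors projected onto the positive orthant unit sphere.
a = np.abs(rng.normal(size=4)); a /= np.linalg.norm(a)
b = np.abs(rng.normal(size=4)); b /= np.linalg.norm(b)

cos_dist = 1.0 - a @ b               # cosine distance
sq_euc = np.sum((a - b) ** 2)        # squared Euclidean distance
print(sq_euc, 2.0 * cos_dist)        # the two values coincide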

4.2. Distances and Odds Ratios

Next, we show how the pairwise distance between two skill vectors is related to the logarithm of their odds ratio. Given a knowledge state vector $\mathbf{v}$ and a skill vector $\mathbf{s}$, the probability estimate for correctness and the corresponding odds are calculated as

$p = \phi(\langle \mathbf{v}, \mathbf{s} \rangle), \qquad \mathrm{odds}(\mathbf{v}, \mathbf{s}) = \frac{p}{1 - p} = \exp\!\left(\langle \mathbf{v}, \mathbf{s} \rangle\right).$

Then for any two skill vectors $\mathbf{s}_i, \mathbf{s}_j \in \mathcal{S}$, the logarithm of the odds ratio is characterized by their distance as follows:

$\left| \log \frac{\mathrm{odds}(\mathbf{v}, \mathbf{s}_i)}{\mathrm{odds}(\mathbf{v}, \mathbf{s}_j)} \right| = \left| \langle \mathbf{v}, \mathbf{s}_i - \mathbf{s}_j \rangle \right| \le \lVert \mathbf{v} \rVert_2 \, d_{\mathrm{euc}}(\mathbf{s}_i, \mathbf{s}_j),$

where $d_{\mathrm{euc}}(\mathbf{s}_i, \mathbf{s}_j) = \lVert \mathbf{s}_i - \mathbf{s}_j \rVert_2$. Therefore, we say that two skills are probabilistically similar if they are 'close' enough based on the distance between their corresponding vectors.
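The relation can also be checked directly; in this small sketch the knowledge state vector is arbitrary and the skill vectors are unit-length in the positive orthant.

import numpy as np

def odds(v, s):
    p = 1.0 / (1.0 + np.exp(-v @ s))
    return p / (1.0 - p)

rng = np.random.default_rng(2)
v = rng.normal(size=4)                                   # knowledge state
s_i = np.abs(rng.normal(size=4)); s_i /= np.linalg.norm(s_i)
s_j = np.abs(rng.normal(size=4)); s_j /= np.linalg.norm(s_j)

log_or = np.log(odds(v, s_i) / odds(v, s_j))             # equals <v, s_i - s_j>
bound = np.linalg.norm(v) * np.linalg.norm(s_i - s_j)    # Cauchy-Schwarz bound
print(log_or, v @ (s_i - s_j), abs(log_or) <= bound + 1e-12)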

5. Experiments

KQN has been tested for four tasks: correctness prediction, knowledge interaction visualization, skill domain analysis, and the sensitivity analysis of the dimensionality of the vector space. For correctness prediction, the performance of KQN was compared to that of other models on four public datasets: one synthetic and three real-world ones which are available online. Then for a sample student, knowledge interaction was visualized with a heat map to demonstrate the knowledge state query with respect to different skills. Next, the skill domain was explored with clustering based on skill distances. Finally, pairwise distances of the skill vectors in one dimensionality were compared to those in other dimensionalities to conduct the sensitivity analysis of the vector embedding dimensionality.

5.1. Datasets

The following four datasets have been used to evaluate the models: ASSISTments 2009-2010, ASSISTments 2015, OLI Engineering Statics 2011, and Synthetic-5. To make a fair comparison on the correctness prediction task, we used the versions provided with the DKVMN source code available online (https://github.com/jennyzhang0215/DKVMN). The statistics of the datasets are shown in Table 2.

5.1.1. ASSISTments 2009-2010

This dataset (https://sites.google.com/site/assistmentsdata/home/assistment-2009-2010-data/skill-builder-data-2009-2010) was collected by the ASSISTments online tutoring system (Feng et al., 2009). It was gathered from skill builder problem sets, where students work on similar questions until they achieve mastery, i.e., a certain level of performance. During preprocessing, records without skill names were discarded. After a problem with duplicate records was reported (Xiong et al., 2016), the dataset was corrected by the ASSISTments system; therefore, results reported by a number of earlier papers are not compared in this paper.

5.1.2. ASSISTments 2015

Compared to ASSISTments 2009-2010, which has 110 distinct skill tags, this dataset (https://sites.google.com/site/assistmentsdata/home/2015-assistments-skill-builder-data) contains 100 distinct ones with more than twice the number of student responses. Data records with invalid correctness values, i.e., values not in {0, 1}, have been removed.

5.1.3. OLI Engineering Statics 2011

This dataset (https://pslcdatashop.web.cmu.edu/DatasetInfo?datasetId=507) was gathered from a college-level statics course in Fall 2011 (Koedinger et al., 2010). The concatenation of a problem name and a step name was labeled as a skill tag. Note that the number of skills is much larger than those of the other datasets.

5.1.4. Synthetic-5

This dataset (https://github.com/chrispiech/DeepKnowledgeTracing) was originally generated by the authors of the DKT paper (Piech et al., 2015). Each student response was generated based on IRT (Hulin et al., 1983) using skill difficulty, student proficiency, and a probability of a random guess set to a constant. The dataset consists of a number of sub-datasets; those with five concepts, from version 0 to version 19, have been used, i.e., a total of 20 sub-datasets from the original dataset.

Dataset Students Skills Size Max Steps
ASSIST2009 4,151 110 325,637 1,261
ASSIST2015 19,840 100 683,801 618
Statics2011 333 1,223 189,297 1,181
Synthetic-5 4,000 50 200,000 50
Table 2. Statistics for all the datasets. Names of the datasets have been abbreviated. ‘Size’ and ‘Max Steps’ refer to the total number of student responses and the maximum number of time steps, respectively.
Dataset       IRT+    BKT+   DKVMN        DKT          DKT+KQN       KQN
ASSIST2009    77.40   -      81.57±0.1    80.53±0.2    82.05±0.04    82.32±0.05
ASSIST2015    -       -      72.68±0.1    72.52±0.1    73.41±0.02    73.40±0.02
Statics2011   -       75     82.84±0.1    80.20±0.2    80.27±0.22    83.20±0.05
Synthetic-5   -       80     82.73±0.1    80.34±0.1    82.58±0.01    82.81±0.01
Table 3. Accuracy of different KT models compared based on test AUCs (%) for the correctness prediction task. IRT, BKT, and their variants were used as representatives of non-neural-network KT models. DKVMN and DKT were compared to KQN as neural network baselines, DKVMN being the previous state-of-the-art neural network KT model. Note that DKT+KQN refers to DKT with the embedded skill vectors learned from KQN.

5.2. Setup and Implementation

All the program code for the implemented KQN and DKT was written in TensorFlow 1.5 (https://www.tensorflow.org/versions/r1.5/). For the data splits of each dataset, we used the same splits provided with the DKVMN source code for a fair comparison of prediction accuracy.

5.2.1. Correctness Prediction

All the sequences of student responses were preserved in their original length without truncation. Each dataset except Synthetic-5 was split into training, validation, and test sets with ratios of 8:2 for training to validation and 7:3 for (training+validation) to test. For Synthetic-5, the corresponding ratios were 8:2 and 5:5. Hyperparameters were grid-searched with holdout validation and early stopping; note that no early stopping was used in the testing phase. The number of epochs was set to 50 and 200 during the validation and testing phases, respectively. During the testing phase, KQN was run five times, and the mean and standard deviation of the performance metric are reported.

The hyperparameters of KQN and their candidate values have been set as follows:

  • Type of the RNN layer in the knowledge state encoder: LSTM, GRU.

  • Hidden state size of the chosen RNN layer: 32, 64, 128.

  • Hidden state size of the MLP layer: 32, 64, 128.

  • Dimensionality of the vector space in which the knowledge state and skills are embedded: 32, 64, 128.

The retention rate of 0.6 for the RNN dropout and the batch size of 128 were used as defaults. The Adam optimization method (Kingma and Ba, 2014) was used to minimize the total error $\mathcal{L}$.

Additionally, DKT was run with the skill vectors learned by KQN to evaluate their quality for the correctness prediction task. Specifically, at each time step, the input given to DKT was set to the concatenation of two vectors: the one-hot encoded correctness $c_t$ and the learned skill vector corresponding to the original skill $q_t$. We denote DKT with such a setup as DKT+KQN. The hyperparameters of DKT+KQN were searched in the same way as for KQN, with the same RNN dropout rate and batch size. LSTM was used for the RNN layer, following past works (Piech et al., 2015; Xiong et al., 2016).

5.2.2. Knowledge Interaction Visualization

Throughout a student's responses, prediction estimates for correctness with respect to the skills the student attempted were calculated with the knowledge state query followed by the logistic function. A sample from the test set of ASSISTments 2009-2010 was used for this task. The estimates were then visualized with a heat map to examine how they change as the student solves problems.
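A sketch of how such a heat map can be produced with matplotlib, assuming a matrix of prediction estimates (skills on the rows, time steps on the columns) has already been computed by querying the knowledge state; the random matrix and the skill IDs below are stand-ins for the real query outputs.

import numpy as np
import matplotlib.pyplot as plt

# probs[i, t]: prediction estimate for skill i after the response at time t.
rng = np.random.default_rng(3)
probs = rng.uniform(0.2, 0.9, size=(4, 10))
skill_ids = [13, 24, 52, 92]          # hypothetical skills attempted by the student

fig, ax = plt.subplots()
im = ax.imshow(100.0 * probs, cmap="RdYlGn", vmin=0, vmax=100)
ax.set_yticks(range(len(skill_ids)))
ax.set_yticklabels([str(s) for s in skill_ids])
ax.set_xlabel("time step")
for i in range(probs.shape[0]):
    for t in range(probs.shape[1]):
        ax.text(t, i, f"{100 * probs[i, t]:.0f}", ha="center", va="center", fontsize=7)
fig.colorbar(im, ax=ax, label="P(correct) (%)")
plt.show()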

5.2.3. Skill Domain Analysis

Skill distances have been used for clustering on the four datasets. To decide which linkage method and distance measure to use, we first performed flat clustering on Synthetic-5 with the number of clusters fixed to 5, the ground-truth number of clusters. The quality of the resulting partitioning with respect to the original cluster labels was measured with the Adjusted Rand Index (ARI) (Hubert and Arabie, 1985), which has a maximum value of 1 when the clusters match the original partitioning perfectly and a value around 0 when they are randomly partitioned. Since there are 20 sub-datasets for Synthetic-5, the 20 ARI scores were averaged. The cluster linkage and the type of distance measure were treated as hyperparameters as follows:

  • Cluster linkage: average, centroid, complete, median, single, ward, weighted

  • Type of distance measure: cosine, Euclidean

After deciding which linkage and distance measure to use, the number of clusters $k$ was explored. First, for different values of $k$, the skills of ASSISTments 2009-2010 were clustered based on the distances computed from the skill vectors learned by KQN. Then DKT was used to quantify the quality of those clusters as follows: all the original skill IDs were substituted with the assigned cluster labels, with the data splits kept the same as those for the correctness prediction task, and DKT was run five times. Finally, the average and the standard deviation of the test AUCs of DKT are reported.
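An illustrative sketch of this clustering pipeline with SciPy and scikit-learn; whether the authors used these particular libraries is not stated, and the toy vectors and labels below stand in for the learned skill vectors and the Synthetic-5 concept labels.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist
from sklearn.metrics import adjusted_rand_score

def cluster_skills(skill_vectors, n_clusters, method="average", metric="euclidean"):
    # Hierarchical clustering of the learned skill vectors,
    # cut into n_clusters flat clusters.
    dists = pdist(skill_vectors, metric=metric)
    Z = linkage(dists, method=method)
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Toy check against known concept labels.
rng = np.random.default_rng(4)
true_labels = np.repeat(np.arange(5), 10)
vectors = rng.normal(size=(50, 16)) + 3.0 * np.eye(5)[true_labels] @ rng.normal(size=(5, 16))
labels = cluster_skills(vectors, n_clusters=5)
print("ARI:", adjusted_rand_score(true_labels, labels))   # close to 1 for well-separated clusters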

5.2.4. Sensitivity Analysis of the Vector Space Dimensionality

Let $d$ be the dimensionality of the vector space in KQN and $d^*$ be the optimal value of $d$ obtained in the correctness prediction task. KQN was trained on the four datasets by varying $d$ to two other values besides $d^*$, with the data splits kept the same and the other hyperparameters set to their optimal values. To analyze the effect of $d$ on the correctness prediction task and on the learning of skill vectors, prediction accuracy was reported, and the three distance matrices of the skill vectors, one for each value of $d$, were compared.

6. Results and Analysis

6.1. Correctness Prediction

2 Area Irregular Figure | 41 Finding Percents | 74 Multiplication and Division Positive Decimals | 55 Divisibility Rules
46 Algebraic Solving | 42 Pattern Finding | 107 Parts of a Polynomial, Terms, Coefficient, Monomial, Exponent, Variable | 57 Perimeter of a Polygon
56 Reading a Ruler or Scale | 43 Write Linear Equation from Situation | 4 Table | 71 Angles on Parallel Lines Cut by a Transversal
63 Scale Factor | 44 Square Root | 12 Circle Graph
72 Write Linear Equation from Ordered Pairs
67 Percents | 47 Percent Discount | 32 Box and Whisker | 80 Unit Conversion Within a System
78 Rate | 54 Interior Angles Triangle | 49 Complementary and Supplementary Angles | 83 Area Parallelogram
84 Effect of Changing Dimensions of a Shape Proportionally | 62 Ordering Real Numbers | 53 Interior Angles Figures with More than 3 Sides | 91 Polynomial Factors
85 Surface Area Cylinder | 65 Scientific Notation | 58 Solving for a variable | 97 Choose an Equation from Given Information
86 Volume Cylinder | 76 Computation with Real Numbers | 59 Exponents | 101 Angles - Obtuse, Acute, and Right
88 Solving Systems of Linear Equations | 79 Solving Inequalities | 68 Area Circle | 104 Simplifying Expressions positive exponents
92 Rotations | 81 Area Rectangle | 70 Equation Solving More Than Two Steps | 20 Addition and Subtraction Integers
93 Reflection | 82 Area Triangle | 75 Volume Sphere | 31 Circumference
96 Interpreting Coordinate Graphs | 87 Greatest Common Factor | 102 Distributive Property | 34 Conversion of Fraction Decimals Percents
14 Proportion | 89 Solving Systems of Linear Equations by Graphing | 1 Area Trapezoid | 60 Division Fractions
66 Write Linear Equation from Graph | 90 Multiplication Whole Numbers | 6 Stem and Leaf Plot | 77 Number Line
69 Least Common Multiple | 98 Intercept | 10 Venn Diagram | 19 Multiplication Fractions
3 Probability of Two Distinct Events | 99 Linear Equations | 11 Histogram as Table or Graph | 25 Subtraction Whole Numbers
5 Median | 100 Slope | 21 Multiplication and Division Integers | 61 Estimation
7 Mode | 105 Finding Slope from Ordered Pairs | 22 Addition Whole Numbers | 109 Finding Slope From Equation
8 Mean | 106 Finding Slope From Situation | 26 Equation Solving Two or Fewer Steps | 24 Addition and Subtraction Fractions
9 Range | 108 Recognize Quadratic Pattern | 33 Ordering Integers | 50 Pythagorean Theorem
13 Equivalent Fractions | 110 Quadratic Formula to Solve Quadratic Equation | 37 Ordering Positive Decimals | 103 Recognize Linear Pattern
15 Fraction Of | 16 Probability of a Single Event | 38 Rounding | 29 Counting Methods
18 Addition and Subtraction Positive Decimals | 45 Algebraic Simplification | 39 Volume Rectangular Prism | 95 Midpoint
23 Absolute Value | 73 Prime Number | 40 Order of Operations All | 64 Surface Area Rectangular Prism
28 Calculations with Similar Figures | 94 Translations | 48 Nets of 3D Figures | 36 Unit Rate
30 Ordering Fractions | 17 Scatter Plot | 51 D.4.8-understanding-concept-of-probabilities
35 Percent Of | 27 Order of Operations +,-,/,* () positive reals | 52 Congruence
Table 4. The 14 flat clusters of ASSISTments 2009-2010 skills based on the average linkage method and the Euclidean distance. Each entry is a skill ID followed by its skill name; in each cluster, skills are sorted in ascending order of skill ID.

Prediction accuracy was measured with the Area Under the ROC Curve (AUC) on the test sets. Note that the AUC of a model that guesses 0 or 1 randomly should be 50%. As representatives of non-neural-network models, BKT, IRT, and their variants were compared with KQN, while DKT and DKVMN were compared as the state-of-the-art neural network baselines. The AUC results for those models are cited from other papers as follows: IRT and its extensions from (Wilson et al., 2016), BKT and its variants from (Khajah et al., 2016; Xiong et al., 2016), and DKT and DKVMN from (Zhang et al., 2017).

Test AUCs for all the datasets are shown in Table 3. Overall, KQN performed better than all the previously available KT models and showed a more stable performance with the lowest standard deviation values.

For ASSISTments 2009-2010, the test AUC of KQN was 82.32%, beating the previous highest value by 0.75%. DKT+KQN achieved an AUC of 82.05%, higher not only than the original DKT but also than all the other previous models. Interestingly, for ASSISTments 2015, DKT+KQN achieved the highest test AUC of 73.41%, even slightly higher than KQN. Both KQN and DKT+KQN performed better than all the previous results, which is promising in that KQN appears to learn useful skill vectors that are transferable to other models and applications. For OLI Engineering Statics 2011, KQN achieved the highest value of 83.20%, higher than the previous best by 0.36%. DKT+KQN showed a performance comparable to the vanilla DKT, with a slightly higher test AUC of 80.27% and a standard deviation of 0.22%. Lastly, for Synthetic-5 as well, KQN performed best with the highest average of 82.81% and the lowest standard deviation of 0.01%. Notably, the standard deviation of DKT+KQN was much lower than that of the original DKT, suggesting that the learned skill vectors contribute to the stability of the model's predictions.

In summary, KQN showed the best performance on the correctness prediction task compared to all the previous models; the only dataset on which it placed second among all models was ASSISTments 2015, where DKT+KQN had the highest score. In addition to the best mean test AUC scores, our model had much lower standard deviation values than the other models. DKT+KQN also had low standard deviation values for all the datasets except OLI Engineering Statics 2011. Therefore, we speculate that KQN is able to produce stable prediction estimates due to its ability to learn a meaningful latent structure of the skill vectors.

6.2. Knowledge Interaction Visualization

Figure 3. Visualization of knowledge interaction by querying the knowledge state with respect to particular skills. On the x-axis, student responses are labeled, while on the y-axis, all the skills contained in the responses are marked. Each column corresponds to one time step $t$, which increases along the x-axis. Prediction estimates for correctness in percentage (%) are annotated in the grid. It is better viewed in color.

For a sample student from ASSISTments 2009-2010, prediction estimates for correctness in percentage are visualized in Figure 3 through the knowledge state query with respect to particular skills. On the x-axis, student responses with skill IDs and correctness values as tuples are marked while on the y-axis, all the skills that the student solved are sorted in ascending order from the top. The corresponding skill names can be found in Table 4.

Changes in the probability estimates are mostly intuitive. For example, after the student solved a problem with skill 52 correctly, the probability estimate for skill 52 increased from 72% to 82%. However, some changes are counter-intuitive: at one time step, as the student solved a problem with skill 92 incorrectly, the corresponding estimate increased from 23% to 24%, even though the change was only 1%. This problem has also been reported for DKT in a previous work and remains an open problem (Yeung and Yeung, 2018).

Linkage    Distance    ARI
average    cosine      0.3180
average    Euclidean   0.3266
centroid   cosine      0.0373
centroid   Euclidean   0.0143
complete   cosine      0.2898
complete   Euclidean   0.2898
median     cosine      0.0368
median     Euclidean   0.0071
single     cosine      0.0703
single     Euclidean   0.0703
ward       cosine      0.3201
ward       Euclidean   0.3234
weighted   cosine      0.2996
weighted   Euclidean   0.3020
Table 5. Average ARI scores for different linkage methods and distance measures on Synthetic-5.
Number of Clusters   Test AUC (%)
5                    79.77±0.03
6                    79.97±0.03
7                    80.10±0.03
8                    80.04±0.02
9                    80.14±0.03
10                   80.10±0.02
11                   80.23±0.05
12                   80.20±0.05
13                   80.55±0.03
14                   80.64±0.03
Table 6. Test AUCs (%) of DKT on ASSISTments 2009-2010 after replacing skill IDs with the cluster labels assigned by flat clustering. The average linkage and the Euclidean distance were used.

6.3. Skill Domain Analysis

Distance    Dataset       Avg. differences (per dimensionality pair)   Avg. pairwise distances (per dimensionality)
cosine      ASSIST2009    0.11   0.09   0.10                           0.75   0.76   0.76
            ASSIST2015    0.11   0.09   0.10                           0.70   0.69   0.69
            Statics2011   0.15   0.09   0.20                           0.44   0.55   0.37
            Synthetic-5   0.12   0.10   0.11                           0.78   0.78   0.79
Euclidean   ASSIST2009    0.09   0.07   0.09                           1.22   1.23   1.23
            ASSIST2015    0.09   0.07   0.09                           1.17   1.17   1.17
            Statics2011   0.16   0.11   0.21                           0.92   1.04   0.84
            Synthetic-5   0.10   0.08   0.09                           1.25   1.24   1.25
Table 7. Average differences between the pairwise distances of the skill vectors for each pair of vector space dimensionalities (left three numeric columns) and average pairwise distances for each dimensionality (right three numeric columns).

Average ARI scores for clustering with different linkage methods and distance measures are reported in Table 5. The ARI was highest when the linkage was set to average and the distance measure to Euclidean. Not surprisingly, the average ARI scores did not differ much between the cosine and Euclidean distance measures.

After clustering the skills of ASSISTments 2009-2010 with the average linkage and the Euclidean distance, and substituting the original skill IDs with the cluster labels, DKT was run five times. The test AUCs of DKT are reported in Table 6. They increased gradually as the number of clusters changed from 5 to 14. The lowest test AUC was 79.77% with 5 clusters, not differing much from the highest test AUC of 80.64%, which means that the skills were clustered in a way that preserves useful information.

On ASSISTments 2009-2010, skill IDs and skill names were grouped into 14 clusters as shown in Table 4, where in each cluster skills are sorted in ascending order of skill ID. Since the skills were grouped based on probabilistic skill similarity, a number of intuitively similar skills were clustered together. For example, '30 Ordering Fractions' and '62 Ordering Real Numbers' were assigned to the fourth cluster, while '33 Ordering Integers' and '37 Ordering Positive Decimals' were assigned to the eighth cluster.

6.4. Sensitivity Analysis of the Vector Space Dimensionality

For the four datasets, the test AUCs of KQN with different values of the embedding dimensionality $d$ are shown in Table 8, where $d^*$ refers to the optimal value chosen by the holdout validation for the correctness prediction task. Only small differences in prediction accuracy could be observed as the value of $d$ was varied.

For each pair of dimensionalities $(d_1, d_2)$, the average difference between the pairwise distances of the skill vectors learned with the two dimensionalities was calculated as

$\bar{\Delta}(d_1, d_2) = \frac{2}{N(N-1)} \sum_{i < j} \left| D^{(d_1)}_{ij} - D^{(d_2)}_{ij} \right|,$

where $D^{(d)}_{ij}$ refers to the pairwise distance between the skill vectors $\mathbf{s}_i$ and $\mathbf{s}_j$ learned with dimensionality $d$, and $N$ is the number of skills. $\bar{\Delta}$ is then compared to the average pairwise distance

$\bar{D}(d) = \frac{2}{N(N-1)} \sum_{i < j} D^{(d)}_{ij}.$

In Table 7, the lowest value of $\bar{\Delta}$ for each dataset is always obtained for the same pair of dimensionalities. From this, it can be inferred that KQN learned the skill relationships more consistently when $d$ was set to a sufficiently large number, since $d$ controls the maximum number of mutually independent skill vectors. Also, the values of $\bar{\Delta}$ were relatively low compared to the corresponding values of $\bar{D}$. For example, the smallest $\bar{\Delta}$ was only 0.07 when the Euclidean distance was used for ASSISTments 2009-2010, while the corresponding average distances $\bar{D}$ were 1.22 and 1.23.
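A sketch of the two statistics as reconstructed above, assuming the full pairwise distance matrices for two dimensionalities have already been computed from the learned skill vectors; the toy symmetric matrices stand in for real distance matrices.

import numpy as np

def avg_pairwise_distance(D):
    # Mean of the upper-triangular entries of an N x N distance matrix.
    iu = np.triu_indices_from(D, k=1)
    return float(np.mean(D[iu]))

def avg_distance_difference(D1, D2):
    # Mean absolute difference between corresponding pairwise distances
    # obtained with two different embedding dimensionalities.
    iu = np.triu_indices_from(D1, k=1)
    return float(np.mean(np.abs(D1[iu] - D2[iu])))

rng = np.random.default_rng(5)
A = rng.random((6, 6)); D1 = (A + A.T) / 2; np.fill_diagonal(D1, 0.0)
B = rng.random((6, 6)); D2 = (B + B.T) / 2; np.fill_diagonal(D2, 0.0)
print(avg_pairwise_distance(D1), avg_distance_difference(D1, D2))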

To further evaluate the distance matrices, we performed Mantel tests (Legendre and Legendre, 1998), which measure the similarity between two distance matrices with a correlation coefficient $r$ and a p-value. $r$ has the same range as correlation coefficients in statistics, while the p-value indicates statistical significance. The Pearson correlation and 999 permutations were used for the Mantel tests.
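A minimal permutation-based Mantel test can be sketched as follows; this standalone implementation is an assumption for illustration, since the paper does not state which implementation was used.

import numpy as np

def mantel_test(D1, D2, n_perm=999, seed=0):
    # Pearson correlation between the upper triangles of two distance
    # matrices, with a permutation p-value obtained by jointly shuffling
    # the rows and columns of the second matrix.
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(D1, k=1)
    r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
    count = 0
    n = D1.shape[0]
    for _ in range(n_perm):
        perm = rng.permutation(n)
        D2p = D2[np.ix_(perm, perm)]
        if abs(np.corrcoef(D1[iu], D2p[iu])[0, 1]) >= abs(r_obs):
            count += 1
    p_value = (count + 1) / (n_perm + 1)
    return r_obs, p_value

# Toy usage: distance matrices from a point set and a slightly perturbed copy.
rng = np.random.default_rng(6)
pts = rng.normal(size=(8, 3))
pts2 = pts + 0.1 * rng.normal(size=(8, 3))
D1 = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
D2 = np.linalg.norm(pts2[:, None, :] - pts2[None, :, :], axis=-1)
print(mantel_test(D1, D2))   # high r, small p-value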

The results of the Mantel tests are reported in Table 9, where the p-values are omitted since they were 0.001 in all cases, indicating that the values of $r$ are statistically significant. The values of $r$ were always highest for the same pair of dimensionalities, confirming that there was the strongest positive correlation between the corresponding distance matrices. Specifically, $r$ was over 0.8 for that pair on OLI Engineering Statics 2011, while its minimum value over all the other datasets was 0.495, indicating strong positive correlations as well. Therefore, from Table 7, Table 8, and Table 9, KQN was shown to be stable in predicting correctness and in learning the relationships between the skill vectors as the value of the vector space dimensionality was varied.

Dataset       Test AUC (%) for three values of d (first column: d*)
ASSIST2009    82.32   82.35   82.32
ASSIST2015    73.40   73.38   73.40
Statics2011   83.20   83.17   83.16
Synthetic-5   82.81   82.79   82.82
Table 8. Test AUCs of KQN obtained by varying the embedding dimensionality $d$. $d^*$ refers to the optimal value found in the correctness prediction task. Note that prediction accuracy may not be the highest when $d$ was set to $d^*$.

7. Conclusions and Future Work

Distance    Dataset       r (one column per pair of dimensionalities)
cosine      ASSIST2009    0.521   0.653   0.570
            ASSIST2015    0.582   0.661   0.609
            Statics2011   0.616   0.825   0.609
            Synthetic-5   0.495   0.526   0.511
Euclidean   ASSIST2009    0.531   0.660   0.583
            ASSIST2015    0.601   0.682   0.629
            Statics2011   0.620   0.816   0.607
            Synthetic-5   0.508   0.536   0.523
Table 9. Mantel tests on the distance matrices of the skill vectors. p-values are not shown since they were 0.001 in all cases.

From the experiment results for the four tasks, we list the contributions of this paper as follows:

  1. KQN performs better than all the previous models on the four datasets for the correctness prediction task.

  2. KQN enables the knowledge state of a student to be queried with respect to different skills, which is helpful for interpreting the knowledge interaction through visualization.

  3. KQN’s architecture leads to the concept of probabilistic skill similarity to relate the cosine and Euclidean distances between two skill vectors to the odds ratio for the corresponding skills as introduced previously in the paper. This makes the skill vectors and their pairwise distances useful for domain modeling, e.g., with cluster analysis.

  4. KQN is robust to changes in the dimensionality of the vector space for the knowledge state and skill vectors: its prediction accuracy is not degraded, and it learns strongly positively correlated sets of pairwise distances between the skill vectors as the dimensionality is varied; equivalently, KQN learns the latent relationships between skills stably.

Compared to other neural network models, KQN has more parameters to learn. For example, since it includes an MLP in the skill encoder in addition to an RNN in the knowledge state encoder, KQN is computationally heavier than DKT, which has only an RNN for encoding student responses. In practice, more GPU memory was required for training KQN than for DKT+KQN. Still, we believe that the advantages of KQN mentioned above are meaningful enough to compensate for the increase in space complexity.

KQN offers an alternative approach to the KT problem by defining the knowledge state and skill vectors in the same vector space. It represents the knowledge state and skills in a general form as vectors while defining the knowledge interaction clearly as the dot product between the two types of vectors. Since the pairwise distances between skill vectors are related to the logarithms of the corresponding odds ratios from the probabilistic perspective, those distances can become useful features for domain modeling to explore the latent structure of the skill domain, which can be a future direction of KT research.

8. Acknowledgements

This research has been supported by the project ITS/227/17FP from the Innovation and Technology Fund of Hong Kong.

References

  • J. R. Anderson, C. F. Boyle, A. T. Corbett, and M. W. Lewis (1990) Cognitive modeling and intelligent tutoring. Artificial Intelligence 42 (1), pp. 7–49. Cited by: §1, §2.
  • H. Cen, K. Koedinger, and B. Junker (2006) Learning factors analysis–a general method for cognitive model evaluation and improvement. In International Conference on Intelligent Tutoring Systems, pp. 164–175. Cited by: §2.
  • K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio (2014) Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724–1734. Cited by: §3.6.
  • M. Feng, N. Heffernan, and K. Koedinger (2009) Addressing the assessment challenge with an online system that tutors as it assesses. User Modeling and User-Adapted Interaction 19 (3), pp. 243–266. Cited by: §5.1.1.
  • A. Graves, G. Wayne, and I. Danihelka (2014) Neural turing machines. arXiv preprint arXiv:1410.5401. Cited by: §2.
  • S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural Computation 9 (8), pp. 1735–1780. Cited by: §2, §3.6.
  • L. Hubert and P. Arabie (1985) Comparing partitions. Journal of Classification 2 (1), pp. 193–218. Cited by: §5.2.3.
  • C. L. Hulin, F. Drasgow, and C. K. Parsons (1983) Item response theory: application to psychological measurement. Dorsey Press. Cited by: §2, §5.1.4.
  • A. M. Kaplan and M. Haenlein (2016) Higher education and the digital revolution: about moocs, spocs, social media, and the cookie monster. Business Horizons 59 (4), pp. 441–450. Cited by: §1.
  • M. Khajah, R. V. Lindsey, and M. C. Mozer (2016) How deep is knowledge tracing?. In Proceedings of the 9th International Conference on Educational Data Mining, pp. 94–101. Cited by: §2, §2, §6.1.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §5.2.1.
  • K. R. Koedinger, R. S. Baker, K. Cunningham, A. Skogsholm, B. Leber, and J. Stamper (2010) A data repository for the edm community: the pslc datashop. Handbook of educational data mining 43, pp. 43–56. Cited by: §5.1.3.
  • P. Legendre and L. Legendre (1998) Numerical ecology, volume 24, (developments in environmental modelling). Elsevier Science Amsterdam, The Netherlands. Cited by: §6.4.
  • V. Nair and G. E. Hinton (2010) Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814. Cited by: §3.7.
  • H. S. Nwana (1990) Intelligent tutoring systems: an overview. Artificial Intelligence Review 4 (4), pp. 251–277. Cited by: §1.
  • P. I. Pavlik, H. Cen, and K. R. Koedinger (2009) Performance factors analysis–a new alternative to knowledge tracing. In 14th International Conference on Artificial Intelligence in Education, Cited by: §2.
  • R. Pelánek (2017) Bayesian knowledge tracing, logistic models, and beyond: an overview of learner modeling techniques. User Modeling and User-Adapted Interaction 27 (3-5), pp. 313–350. Cited by: §1, §1.
  • C. Piech, J. Bassen, J. Huang, S. Ganguli, M. Sahami, L. J. Guibas, and J. Sohl-Dickstein (2015) Deep knowledge tracing. In Advances in Neural Information Processing Systems, pp. 505–513. Cited by: §1, §2, §5.1.4, §5.2.1.
  • N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15 (1), pp. 1929–1958. Cited by: §3.6.
  • J. Weston, S. Chopra, and A. Bordes (2014) Memory networks. arXiv preprint arXiv:1410.3916. Cited by: §2.
  • K. H. Wilson, Y. Karklin, B. Han, and C. Ekanadham (2016) Back to the basics: bayesian extensions of irt outperform neural networks for proficiency estimation. arXiv preprint arXiv:1604.02336. Cited by: §2, §2, §6.1.
  • X. Xiong, S. Zhao, E. Van Inwegen, and J. Beck (2016) Going deeper with deep knowledge tracing.. In Proceedings of the 9th International Conference on Educational Data Mining, pp. 545–550. Cited by: §3.6, §5.1.1, §5.2.1, §6.1.
  • C. Yeung and D. Yeung (2018) Addressing two problems in deep knowledge tracing via prediction-consistent regularization. In Proceedings of the Fifth Annual ACM Conference on Learning at Scale, pp. 5. Cited by: §6.2.
  • M. V. Yudelson, K. R. Koedinger, and G. J. Gordon (2013) Individualized bayesian knowledge tracing models. In International Conference on Artificial Intelligence in Education, pp. 171–180. Cited by: §2.
  • W. Zaremba, I. Sutskever, and O. Vinyals (2014) Recurrent neural network regularization. arXiv preprint arXiv:1409.2329. Cited by: §3.6.
  • J. Zhang, X. Shi, I. King, and D. Yeung (2017) Dynamic key-value memory networks for knowledge tracing. In Proceedings of the 26th International Conference on World Wide Web, pp. 765–774. Cited by: §2, §6.1.