With the increasing amount of information available online, there is a rising need to structure how one processes that information and learns knowledge efficiently and in a reasonable order. As a result, recent work has tried to learn prerequisite relations among concepts, i.e., which concept is needed in order to learn another concept, within a concept graph [22, 12, 2]. Figure 1 shows an illustration of prerequisite chains as a directed graph. In such a graph, each node is a concept, and the direction of each edge indicates the prerequisite relation. Given two concepts $A$ and $B$, we define $(A, B)$ as $A$ is a prerequisite concept of $B$. For example, the concept Variational Autoencoders is a prerequisite concept of the concept Variational Graph Autoencoders. If someone wants to learn about the concept Variational Graph Autoencoders, the prerequisite concept Variational Autoencoders should appear in the prerequisite concept graph in order to create a proper study plan.
Recent work has attempted to extract such prerequisite relations from various types of materials, including Wikipedia articles, university course dependencies, and MOOCs (Massive Open Online Courses) [25, 12, 22]. However, these materials either require additional pre-processing and cleaning steps or contain too much noisy free text, making prerequisite relation learning and extraction more challenging. Recently, Li et al. (2019) presented a collection of university lecture slide files, mainly from NLP lectures, with related prerequisite concept annotations. We expand this dataset, as we believe these lecture slides offer a concise yet comprehensive description of advanced topics.
Deep models such as word embeddings and, more recently, contextualized word embeddings have achieved great success in NLP tasks, as they demonstrate a stronger ability to represent word semantics than traditional models. However, recent prerequisite learning approaches fail to make use of distributional semantics and advances in deep learning representations [18, 25]. In this paper, we investigate deep node embeddings within a graph structure to better capture the semantics of concepts and resources, in order to learn the prerequisite relations accurately.
In addition to learning node representations, there has been growing research in geometric deep learning and graph neural networks, which apply the representational power of neural networks to graph-structured data. Notably, Kipf and Welling (2017) proposed Graph Convolutional Networks (GCNs) to perform deep learning on graphs, yielding competitive results in semi-supervised learning settings. TextGCN models a corpus as a heterogeneous graph in order to jointly learn word and document embeddings for text classification. We build upon these ideas in constructing a resource-concept graph.¹ Additionally, most of the methods mentioned require a subset of labels for training, a setting which is often infeasible in the real world. Limited research has investigated learning prerequisite relations without using human-annotated relations during training. In practice, it is very challenging to obtain annotated concept-concept relations, as the annotation complexity is $O(n^2)$ for $n$ given concepts. To tackle this issue, we propose a method to learn prerequisite chains without any annotated concept-concept relations, which is more applicable in the real world.

¹We use the term resource instead of document for generalization.
Our contributions are two-fold: 1) we expand upon previous annotations to increase coverage for prerequisite chain learning in five categories: AI (artificial intelligence), ML (machine learning), NLP, DL (deep learning) and IR (information retrieval). We also expand a previous corpus of lecture files with an additional 5,000 lecture slides, totaling 1,717 files. More importantly, we add additional concepts, totaling 322 concepts, as well as the corresponding annotations for each concept pair, totaling 103,362 relations. 2) We present a novel graph neural model for learning prerequisite relations in an unsupervised way using deep representations as input. We model all concepts and resources in the corpus as nodes in a single heterogeneous graph and define a propagation rule that considers multiple edge types while eliminating concept-concept relations during training, making unsupervised learning possible. Our model leads to improved performance over a number of baseline models. Notably, it is the first graph-based model that attempts to make use of deep learning representations for the task of unsupervised prerequisite learning. Resources, annotations and code are publicly available online.²

²https://github.com/Yale-LILY/LectureBank/tree/master/LectureBank2
2 Related Work
2.1 Deep Models for Graph-structured Data
There has been much research focused on graph-structured data such as social networks and citation networks [28, 1, 8], and many deep models have achieved satisfying results. DeepWalk was a breakthrough model which learns node representations using random walks. Node2vec was an improved, scalable framework, achieving promising results on multi-label classification and link prediction. In addition, work on graph convolutional networks (GCNs) has focused on deep propagation rules within graphs. A recent work applied GCNs to text classification by constructing a single text graph for a corpus based on word co-occurrence and document-word relations. The experimental results showed that the proposed model outperformed state-of-the-art methods on many benchmark datasets. We are inspired by this work in that we also construct a single graph for a corpus; however, we have different types of nodes and edges.
2.2 Prerequisite Chain Learning
Learning prerequisite relations between concepts has attracted much recent work in the machine learning and NLP fields. Existing research focuses on machine learning methods (i.e., classifiers) to measure the prerequisite relations among concepts [21, 23, 22]. Some research integrates feature engineering to represent a concept, inputting these features to a classic classifier to predict the relationship of a given concept pair [22, 21]. The resources used to learn those concept features include university course descriptions and materials as well as online educational data [23, 22]. Recently, Li et al. (2019) introduced a dataset containing 1,352 English lecture files collected from university-level lectures as well as 208 manually-labeled prerequisite relation topics, initially introduced in TutorialBank. To avoid feature engineering, they applied graph-based methods including GAE and VGAE, which treat each concept as a node, thus building a concept graph. They pretrained a Doc2vec model to infer each concept as a dense vector, and then trained the concept graph in a semi-supervised way. Finally, the model was able to recover unseen edges of the concept graph. Different from their work, we perform prerequisite chain learning in an unsupervised manner: during training, no concept relations are provided to the model.
We manually collected English lecture slides, mainly from NLP-related courses taught in recent years at well-known universities. Each is an individual slide file in PDF or PowerPoint format. Our new collection has 529 additional files from 17 courses, which we combined with the data provided by Li et al. (2019). We ended up with a total of 77 courses with 1,717 English lecture slide files, covering five domains. We show the final statistics in Table 1. For our experiments, we converted those files into TXT format, which allowed us to load the free text directly.
We manually expanded the concept list proposed by Li et al. (2019) from 208 to 322 concepts. We included concepts which were not found in their version, such as restricted boltzmann machine and neural parsing. We also revisited their topic list and corrected a small number of topics. For example, we combined certain topics (e.g., BLEU and ROUGE) into a single topic (machine translation evaluation). We asked two NLP PhD students to re-evaluate existing annotations from the old corpus and to provide labels for each added concept pair in the new corpus. A Cohen's kappa score of 0.6283 was achieved between our annotators, which can be considered substantial agreement. We then took the union of the annotations: if at least one judge stated that a given concept pair $(A, B)$ had $A$ as a prerequisite of $B$, then we define it as a positive relation. We believe that the union of annotations makes more sense for our downstream application, where we want users to be able to mark which concepts they already know, so displaying all potential concepts is essential. We have 1,551 positive relations on the 322 concept nodes.
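The annotation-merging scheme above can be sketched in a few lines: a pair is positive if at least one judge marked it positive, and agreement is measured with Cohen's kappa. This is an illustrative sketch, not the authors' code; the toy label lists are made up.

```python
# Sketch: union of two annotators' binary labels and Cohen's kappa agreement.

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two equal-length binary annotation lists."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's marginal positive rate.
    p_a1 = sum(labels_a) / n
    p_b1 = sum(labels_b) / n
    p_expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (p_observed - p_expected) / (1 - p_expected)

def union_positive(labels_a, labels_b):
    """A concept pair is positive if at least one judge marked it positive."""
    return [int(a or b) for a, b in zip(labels_a, labels_b)]

ann1 = [1, 0, 1, 1, 0, 0]  # illustrative labels from annotator 1
ann2 = [1, 0, 0, 1, 0, 1]  # illustrative labels from annotator 2
merged = union_positive(ann1, ann2)
kappa = cohen_kappa(ann1, ann2)
```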
4.1 Problem Definition
In our corpus, every concept is a single word or a phrase; every resource is free text extracted from the lecture files. We then wish to determine, for a given concept pair $(c_i, c_j)$, whether $c_i$ is a prerequisite concept of $c_j$. We define the concept-resource graph as $G = (X, A)$, where $X$ denotes the node features or representations and $A$ denotes the adjacency matrix. In our case, the adjacency matrix encodes the set of relations between each node pair, i.e., the edges between the nodes. In Figure 2, we build a single, large graph consisting of concepts (oval nodes) and resources (rectangular nodes) as nodes, and the corresponding relations as edges. So there are three types of edges in $G$: edges between two resource nodes ($E_{rr}$, blue lines), edges between a concept node and a resource node ($E_{cr}$, black solid lines), and edges between two concept nodes ($E_{cc}$, black dashed lines). Our goal is to learn the relations between concepts only ($E_{cc}$), so prerequisite chain learning can be formulated as a link prediction problem. In our unsupervised setting, we exclude all direct concept relations ($E_{cc}$) during training, and we wish to predict these edges indirectly through message passing via the resource nodes.
Graph Convolutional Networks (GCN) is a semi-supervised learning approach for node classification on graphs. It aims to learn the node representations in the hidden layers, given the initial node representation $X$ and the adjacency matrix $A$. The model incorporates local graph neighborhoods to represent a current node. In a simple GCN model, a layer-wise propagation rule can be defined as follows:

$$H^{(l+1)} = \sigma\left(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right) \quad (1)$$

where $l$ is the current layer number, $\sigma(\cdot)$ is a non-linear activation function, and $W^{(l)}$ is a parameter matrix learned during training; $\tilde{A} = A + I$ is the adjacency matrix with added self-loops and $\tilde{D}$ is its degree matrix. We eliminate the activation $\sigma(\cdot)$ for the last layer output. For the task of node classification, the loss function is the cross-entropy loss. Typically, a two-layer GCN (obtained by plugging in Equation 1) is defined as:

$$Z = \hat{A}\,\mathrm{ReLU}\left(\hat{A} X W^{(0)}\right) W^{(1)} \quad (2)$$

where $\hat{A} = \tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}$ is the normalized adjacency matrix, reused at the second graph layer.
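The two-layer GCN of Equation 2 can be sketched with plain numpy; this is a minimal illustration of the propagation rule (symmetric normalization with self-loops, then two linear layers with a ReLU in between), with random toy shapes rather than real corpus data.

```python
import numpy as np

# Minimal sketch of GCN propagation: H' = sigma(D^-1/2 (A+I) D^-1/2 H W).

def normalize_adjacency(A):
    """Symmetrically normalize A after adding self-loops."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A_hat, H, W, activation=True):
    """One propagation step; the activation is dropped at the last layer."""
    Z = A_hat @ H @ W
    return np.maximum(Z, 0.0) if activation else Z  # ReLU

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy graph
X = rng.normal(size=(3, 4))                                   # toy node features
W0, W1 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))

A_hat = normalize_adjacency(A)
Z = gcn_layer(A_hat, gcn_layer(A_hat, X, W0), W1, activation=False)
```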
Relational Graph Convolutional Networks (R-GCNs) extend the types of graph nodes and edges in the GCN model, allowing operations on large-scale relational data. In this model, an edge between a node pair $v_i$ and $v_j$ is denoted as $(v_i, r, v_j)$, where $r \in \mathcal{R}$ is a relation type, while in GCN there is only one type. Similarly, to obtain the hidden representation of node $v_i$, we consider its local neighbors and the node itself; when multiple types of edges exist, different sets of weights are considered. So the layer-wise propagation rule is defined as:

$$h_i^{(l+1)} = \sigma\left(\sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{N}_i^r} \frac{1}{c_{i,r}}\, W_r^{(l)} h_j^{(l)} + W_0^{(l)} h_i^{(l)}\right) \quad (3)$$

where $\mathcal{R}$ is the set of relations or edge types in the graph, $\mathcal{N}_i^r$ denotes the neighbors of node $i$ with relation $r$, $W_r^{(l)}$ is the weight matrix at layer $l$ for nodes in $\mathcal{N}_i^r$, $W_0^{(l)}$ is the shared self-loop weight matrix at layer $l$, $c_{i,r}$ is a normalization constant (e.g., $|\mathcal{N}_i^r|$), and $|\mathcal{R}| + 1$ is the number of weight matrices in each layer.
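Equation 3 can be sketched with one adjacency matrix per relation type, a per-relation weight matrix, and a shared self-loop weight. The relation names and sizes below are illustrative, not the paper's actual graph.

```python
import numpy as np

# Sketch of one R-GCN layer:
# h_i' = ReLU( sum_r sum_{j in N_i^r} (1/c_{i,r}) W_r h_j + W_0 h_i ).

def rgcn_layer(adjs, H, W_rel, W_self):
    """adjs: relation -> adjacency matrix; W_rel: relation -> weight matrix."""
    out = H @ W_self  # self-loop term with the shared weight matrix W_0
    for r, A_r in adjs.items():
        deg = A_r.sum(axis=1, keepdims=True)
        deg[deg == 0] = 1.0                 # guard against isolated nodes
        out += (A_r / deg) @ H @ W_rel[r]   # mean over neighbors under relation r
    return np.maximum(out, 0.0)             # ReLU

rng = np.random.default_rng(1)
n, d_in, d_out = 4, 5, 3
adjs = {  # toy relation-indexed adjacency matrices
    "concept-resource": rng.integers(0, 2, size=(n, n)).astype(float),
    "resource-resource": rng.integers(0, 2, size=(n, n)).astype(float),
}
H = rng.normal(size=(n, d_in))
W_rel = {r: rng.normal(size=(d_in, d_out)) for r in adjs}
W_self = rng.normal(size=(d_in, d_out))
H_next = rgcn_layer(adjs, H, W_rel, W_self)
```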
Variational Graph Auto-Encoders (V-GAE) is a framework for unsupervised learning on graph-structured data based on variational auto-encoders. It takes the adjacency matrix $A$ and node features $X$ as input and tries to recover the graph adjacency matrix through the hidden layer embeddings $Z$. Specifically, the non-probabilistic graph auto-encoder (GAE) model calculates embeddings via a two-layer GCN encoder: $Z = \mathrm{GCN}(X, A)$, which is given by Equation 2.

Then, in the variational graph auto-encoder, the goal is to sample the latent parameters $z_i$ from a normal distribution:

$$q(Z \mid X, A) = \prod_{i} q(z_i \mid X, A), \qquad q(z_i \mid X, A) = \mathcal{N}\left(z_i \mid \mu_i, \mathrm{diag}(\sigma_i^2)\right) \quad (4)$$

where $\mu = \mathrm{GCN}_\mu(X, A)$ is the matrix of mean vectors $\mu_i$, and $\log\sigma = \mathrm{GCN}_\sigma(X, A)$. The training loss is then given as the KL-divergence between the distribution of the sampled parameters $Z$ and the standard normal prior $p(Z)$:

$$\mathcal{L}_{KL} = \mathrm{KL}\left[q(Z \mid X, A)\,\|\,p(Z)\right] \quad (5)$$

In the inference stage, the reconstructed adjacency matrix $\hat{A}$ is the inner product of the latent parameters $Z$: $\hat{A} = \sigma(Z Z^{\top})$.
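The V-GAE pieces above reduce to three small operations: reparameterized sampling $Z = \mu + \sigma \odot \epsilon$, the KL term against a standard normal prior, and the inner-product decoder $\hat{A} = \sigma(ZZ^\top)$. The sketch below uses random stand-ins for the encoder outputs rather than a trained GCN.

```python
import numpy as np

# Sketch of V-GAE sampling, KL loss, and inner-product decoding.

def sample_latent(mu, log_sigma, rng):
    """Reparameterization trick: Z = mu + sigma * eps."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(log_sigma) * eps

def kl_to_standard_normal(mu, log_sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over all nodes."""
    return 0.5 * np.sum(np.exp(2 * log_sigma) + mu**2 - 1 - 2 * log_sigma)

def decode(Z):
    """Inner-product decoder: sigmoid(Z Z^T) reconstructs the adjacency."""
    logits = Z @ Z.T
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(2)
mu = rng.normal(size=(5, 3))                 # stand-in for GCN_mu(X, A)
log_sigma = rng.normal(scale=0.1, size=(5, 3))  # stand-in for GCN_sigma(X, A)
Z = sample_latent(mu, log_sigma, rng)
A_rec = decode(Z)
kl = kl_to_standard_normal(mu, log_sigma)
```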
4.3 Proposed Model
To take multiple relations into consideration and make unsupervised learning of concept relations possible, we propose our R-VGAE model. Our model builds upon R-GCN and VGAE by combining the advantages of both: R-GCN is a supervised model that deals with multiple relations; VGAE is an unsupervised graph neural network. This makes it possible to train directly on a heterogeneous graph in an unsupervised way for link prediction, in order to learn the prerequisite relations between concept pairs.
Our model first applies the R-GCN in Equation 3 as the encoder to obtain the latent parameters $Z$, given the initial node features $X$ and adjacency matrix $A$: $Z = \mathrm{RGCN}(X, A)$. In the variational version, as opposed to standard VGAEs, we parameterize the distribution by the R-GCN model: $\mu = \mathrm{RGCN}_\mu(X, A)$, and $\log\sigma = \mathrm{RGCN}_\sigma(X, A)$.
To predict the link between a concept pair $(c_i, c_j)$, we follow the DistMult method: we take the last-layer output node features $z_i$ and $z_j$, and define the following score function to recover the adjacency matrix, learning a trainable diagonal weight matrix $R$:

$$s(c_i, c_j) = \sigma\left(z_i^{\top} R\, z_j\right) \quad (6)$$
The loss consists of the cross-entropy reconstruction loss of the adjacency matrix ($\mathcal{L}_{re}$) and the loss from the latent parameters defined in Equation 5:

$$\mathcal{L} = \mathcal{L}_{re} + \mathcal{L}_{KL} \quad (7)$$
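The DistMult-style scorer and the combined loss can be sketched as follows. This is an illustrative sketch with random features; the KL term is a placeholder scalar standing in for the encoder's KL loss.

```python
import numpy as np

# Sketch: DistMult edge scoring sigmoid(z_i^T diag(R) z_j) for all pairs,
# plus total loss = binary cross-entropy reconstruction + KL term.

def distmult_score(Z, R_diag):
    """Score every node pair at once: sigmoid(Z diag(R) Z^T)."""
    logits = (Z * R_diag) @ Z.T
    return 1.0 / (1.0 + np.exp(-logits))

def bce(pred, target, eps=1e-9):
    """Mean binary cross-entropy between predicted and true adjacency."""
    return -np.mean(target * np.log(pred + eps)
                    + (1 - target) * np.log(1 - pred + eps))

rng = np.random.default_rng(3)
Z = rng.normal(size=(6, 4))        # toy last-layer node features
R_diag = rng.normal(size=4)        # trainable diagonal of R
scores = distmult_score(Z, R_diag)
A_true = rng.integers(0, 2, size=(6, 6)).astype(float)  # toy gold adjacency
kl_term = 0.42                     # placeholder for the encoder's KL loss
total_loss = bce(scores, A_true) + kl_term
```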
We compare two variations of our R-VGAE model. Unsupervised: only the concept-resource edges ($E_{cr}$) and resource-resource edges ($E_{rr}$) are provided during training. This is an unsupervised model because no concept-concept edges are used. Semi-supervised: the model has access to the concept-resource edges ($E_{cr}$) and resource-resource edges ($E_{rr}$), as well as a percentage of the available concept-concept edges ($E_{cc}$), described later.
4.4 Node Features
Sparse Embeddings We used TFIDF (term frequency-inverse document frequency) to obtain sparse embeddings for all nodes. We restricted the global vocabulary to the 322 concept terms only, which means the dimension of the node features is 322, as we aim to model keywords.
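A restricted-vocabulary TF-IDF can be sketched in a few lines: each node vector has one dimension per concept term (322 in the paper; three in this toy example). The documents and concept terms below are made up for illustration.

```python
import math

# Sketch: TF-IDF features restricted to a fixed concept vocabulary.

def tfidf_features(docs, vocab):
    """docs: list of token lists; vocab: list of concept terms."""
    n = len(docs)
    # Document frequency and idf for each concept term.
    df = {t: sum(t in doc for doc in docs) for t in vocab}
    idf = {t: math.log(n / df[t]) if df[t] else 0.0 for t in vocab}
    # One row per document, one column per concept term.
    return [[doc.count(t) * idf[t] for t in vocab] for doc in docs]

vocab = ["parsing", "grammar", "embedding"]  # toy concept vocabulary
docs = [
    ["parsing", "grammar", "parsing"],
    ["embedding", "embedding", "grammar"],
    ["parsing"],
]
X = tfidf_features(docs, vocab)
```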
Dense Embeddings We first use Phrase2Vec (P2V), which learns n-gram embeddings during training; here we aim to infer the embeddings of the concepts in our corpus. We trained the P2V model using only our corpus, treating each slide file as a short document, i.e., a sequence of tokens. For each resource node, we take an element-wise average of the P2V embeddings of each individual token and phrase covered by that resource. Similarly, for each concept node, we take an element-wise average of the embeddings of each individual token and the concept phrase. In addition, we utilize the BERT model as another type of dense embedding, fine-tuning BERT's masked language modeling objective on our corpus.
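The element-wise averaging described above can be sketched as follows. The toy embedding table is random; in the paper the vectors come from the trained P2V (or fine-tuned BERT) model.

```python
import numpy as np

# Sketch: build a node feature by element-wise averaging of pretrained
# token/phrase embeddings (toy random embedding table).

rng = np.random.default_rng(4)
emb = {w: rng.normal(size=8) for w in ["neural", "parsing", "neural parsing"]}

def node_feature(tokens, emb, dim=8):
    """Average the embeddings of all tokens/phrases found in the table."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# A concept node averages its individual tokens plus the full phrase.
concept_vec = node_feature(["neural", "parsing", "neural parsing"], emb)
```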
4.5 Adjacency Matrix
To construct the adjacency matrix $A$, for each node pair $(v_i, v_j)$ we applied cosine similarity based on enriched TFIDF features³ as the value $A_{ij}$. Previous work has applied cosine similarity in vector space models [11, 31, 4], so we believe it is a suitable method in our case. In this way we generate the concept-resource edge values ($E_{cr}$) and the resource-resource edge values ($E_{rr}$). Note that for the concept-concept edge values ($E_{cc}$): $A_{ij}$ is 1 if $c_i$ is a prerequisite of $c_j$, and 0 otherwise. These values are not computed in the unsupervised setting.

³This means that the TFIDF features are calculated on an extended vocabulary that includes all possible tokens appearing in the corpus.
We compare our proposed model with two groups of baseline models. We report accuracy, F1 score, macro-averaged Mean Average Precision (MAP) and Area Under the ROC Curve (AUC) in Table 2, as done by previous research [6, 25, 20]. We split the positive relations 9:1 (train/test) and randomly select negative relations as negative training samples; we then run over five random seeds and report the average scores, following the same setting as Kipf and Welling (2016) and Li et al. (2019).
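The evaluation protocol can be sketched as a 9:1 split of the positive pairs plus sampling of an equal number of unconnected pairs as negatives. The concept names and positive pairs below are illustrative.

```python
import random

# Sketch: 9:1 train/test split of positive pairs with negative sampling.

def split_and_sample(positives, concepts, seed=0, test_ratio=0.1):
    rng = random.Random(seed)
    pos = positives[:]
    rng.shuffle(pos)
    cut = int(len(pos) * (1 - test_ratio))
    train, test = pos[:cut], pos[cut:]
    pos_set = set(positives)
    negatives = []
    # Sample unconnected ordered pairs until we match the train size.
    while len(negatives) < len(train):
        a, b = rng.sample(concepts, 2)
        if (a, b) not in pos_set and (a, b) not in negatives:
            negatives.append((a, b))
    return train, test, negatives

concepts = [f"c{i}" for i in range(20)]
positives = [(f"c{i}", f"c{i+1}") for i in range(10)]  # toy prerequisite pairs
train, test, neg = split_and_sample(positives, concepts)
```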
Concept embedding + classifier
The first group consists of concept embeddings with traditional classifiers, including Support Vector Machines, Logistic Regression, Naïve Bayes and Random Forest. For a given concept pair, we concatenate the dense embeddings of both concepts as input to train the classifiers, and report the best result. We compare Phrase2Vec (P2V) and BERT embeddings. We have two corpora: one is the old version (lb1) provided by Li et al. (2019); the other is our version (lb2). For the BERT model, we applied both the original version from Google (original)⁴ and versions with the language model fine-tuned on our corpora (lb1, lb2) using bert-as-service (Xiao, 2018), performing inference on the concepts. The P2V embeddings have 150 dimensions, and the BERT embeddings have 768 dimensions. The underlined values show improvements on the BERT and P2V baselines obtained by using our additional data. This indicates that concept relations can be predicted more accurately when the training corpus for the embeddings is enriched. In the following experiments, unless otherwise specified, we use lb2 as the training corpus.

⁴https://github.com/google-research/bert
Graph-based methods We apply the classic graph embedding methods DeepWalk and Node2vec, considering the concept nodes only. The positive concept relations in the training set serve as the known sequences, allowing us to train both models to infer node features. Similarly, in the testing phase, we concatenate the node embeddings of a given concept pair, use the classifiers mentioned above to predict the relation, and report the performance of the best one. We then include the VGAE and GAE methods for prerequisite chain learning following Li et al. (2019). Both methods construct the concept graph in a semi-supervised way. We apply P2V embeddings to replicate their methods; while it is possible to try additional embeddings, this is not our main focus. Finally, we compare with the original R-GCN model for link prediction proposed by Schlichtkrull et al. (2018), applying the same embeddings as for the VGAE and GAE methods. Other semi-supervised graph methods such as GCNs require node labels and thus are not applicable to our setting. We can see that the GAE method achieves the best results among the baselines. Compared with the first group, BERT (original) still performs better due to its ability to represent phrases.
R-VGAE Our model can be trained in both an unsupervised (US+*) and a semi-supervised (SS+*) way. We also utilize various types of embeddings, including P2V, TFIDF, BERT (fine-tuned) and BERT (original). The best-performing model in the unsupervised setting uses P2V embeddings, marked with asterisks, and it surpasses all the supervised baseline methods by a large margin. In addition, our semi-supervised models boost the overall performance. The SS+P2V model performs the best among all the mentioned methods, with a significant improvement of 9.77% in accuracy and 10.47% in F1 score compared with the best baseline model, BERT (original). This indicates that the R-VGAE model does better on link prediction by bringing extra resource nodes into the graph, so that the concept relations can be improved and enhanced indirectly via the connected resource nodes. We also observe that with BERT embeddings, the performance lags behind the other embedding methods for our approach. One reason might be that the dimensionality of the BERT embeddings is relatively large compared to P2V and may cause overfitting, especially when the edges are sparse; BERT may also be unsuitable for representing resources, which are lists of keywords, when fine-tuning the language model. The P2V embeddings outperform TFIDF for both the unsupervised and semi-supervised models. This shows that, compared with sparse embeddings, dense embeddings better preserve semantic features when integrated within the R-VGAE model, thus boosting performance. Moreover, as a variation of R-GCN and GAE, our model surpasses both by combining their advantages, as shown by comparison with the R-GCN and GAE results reported in the second group.
| Concept | Gold Prerequisite Concepts | Model Output Concepts |
| --- | --- | --- |
| dependency parsing | classic parsing methods | classic parsing methods |
| | linguistics basics | linguistics basics |
| | nlp introduction | nlp introduction |
| | chomsky hierarchy | chomsky hierarchy |
| | linear algebra | linear algebra |
| tree adjoining grammar | classic parsing methods | classic parsing methods |
| | linguistics basics | linguistics basics |
| | nlp introduction | nlp introduction |
| | context free grammar | context free grammar |
| | probabilistic context free grammars | probabilistic context free grammars |
| | chomsky hierarchy | chomsky hierarchy |
| | context sensitive grammar | context sensitive grammar |
We then take the recovered concept relations from our best-performing model, R-VGAE (SS+P2V) in Table 2, and compare them with the gold annotated relations. Note that here we only look at concept nodes. The average degree of the concept nodes in the gold graph is 9.79, while in our recovered graph it is 6.10, which means our model predicts fewer edges. We also examine the most popular concepts, i.e., those with the highest degrees. We select dependency parsing and tree adjoining grammar as examples. In Table 3, we show a comparison of the prerequisites from the annotations and from our model's output. The upper group illustrates results for dependency parsing, where one can notice that the predicted concepts all appear in the gold results, missing only a single concept. This shows that even though our model predicts fewer relations, it still predicts correct ones. The lower group shows the comparison for the concept tree adjoining grammar, where our model gives precise prerequisite concepts among all eight concepts in the gold set. When a concept has a certain number of prerequisite concepts, our model is able to provide a comprehensive concept set of good quality. In the real world, especially in a learner's scenario, one wants to learn a new concept with sufficient prerequisite knowledge, which our model tends to provide.
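The degree comparison above reduces to a simple computation: the average degree of the concept nodes under the gold and predicted edge sets. The toy graphs below are illustrative, not the paper's actual data.

```python
# Sketch: average degree of concept nodes in an undirected view of the
# prerequisite graph, for comparing gold vs. predicted edge sets.

def average_degree(num_nodes, edges):
    """Mean degree, counting each edge toward both endpoints."""
    degree = [0] * num_nodes
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return sum(degree) / num_nodes

gold_edges = [(0, 1), (0, 2), (1, 2), (2, 3)]  # toy gold graph
pred_edges = [(0, 1), (2, 3)]                  # toy predicted graph
gold_avg = average_degree(4, gold_edges)
pred_avg = average_degree(4, pred_edges)
```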
7 Conclusion and Future Work
In this paper we introduced an expanded dataset for prerequisite chain learning, with an additional 5,000 lecture slides, totaling 1,717 files. We also provided prerequisite relation annotations for each concept pair among 322 concepts. Additionally, we proposed an unsupervised learning method which makes use of advances in graph-based deep learning algorithms. Our method avoids any feature engineering to learn concept representations. Experimental results demonstrate that our model performs well in an unsupervised setting and is able to benefit further when labeled data is available. In future work, we would like to perform a more comprehensive model comparison and evaluation by bringing in other possible variations of graph-based models for learning a concept graph. Another interesting direction is to apply multi-task learning to the proposed model by adding a node classification task when node labels are available. Part of the future work would also include developing educational applications for learners to find their study path for certain concepts.
-  (2015) Graph based anomaly detection and description: a survey. Data Mining and Knowledge Discovery 29 (3), pp. 626–688. Cited by: §2.1.
-  (2018) Mining MOOC Lecture Transcripts to Construct Concept Dependency Graphs.. International Educational Data Mining Society. Cited by: §1, §1.
-  (2018) Unsupervised Statistical Machine Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pp. 3632–3642. Cited by: §4.4.
-  (2016) Automatic labelling of topics with neural embeddings. arXiv preprint arXiv:1612.05340. Cited by: §4.5.
-  (2017) Geometric Deep Learning: Going Beyond Euclidean Data. IEEE Signal Process. Mag. 34 (4), pp. 18–42. Cited by: §1.
-  (2016) Data-driven Automated Induction of Prerequisite Structure Graphs.. International Educational Data Mining Society. Cited by: §5.
-  (1960) A Coefficient of Agreement for Nominal Scales. Educational and psychological measurement 20 (1), pp. 37–46. Cited by: §3.2.
-  (2016) Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, pp. 3844–3852. Cited by: §2.1.
-  (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §1, §4.4.
-  (2018) TutorialBank: A Manually-Collected Corpus for Prerequisite Chains, Survey Extraction and Resource Recommendation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 611–620. Cited by: §2.2.
-  (2018) W2VLDA: almost unsupervised system for aspect based sentiment analysis. Expert Systems with Applications 91, pp. 127–137. Cited by: §4.5.
-  (2016) Modeling Concept Dependencies in a Scientific Corpus. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vol. 1, pp. 866–875. Cited by: §1, §1.
-  (2005) A new model for learning in graph domains. In Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005., Vol. 2, pp. 729–734. Cited by: §1.
-  (2016) Node2vec: scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Cited by: §2.1, Table 2, §5.
-  (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §4.2.
-  (2016) Variational Graph Auto-Encoders. Bayesian Deep Learning Workshop (NIPS 2016). Cited by: §4.2.
-  (2017) Semi-Supervised Classification with Graph Convolutional Networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, Cited by: §2.2, §4.2.
-  (2017) Semi-Supervised Techniques for Mining Learning Outcomes and Prerequisites. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 907–915. Cited by: §1.
-  (2014) Distributed representations of sentences and documents. In International conference on machine learning, pp. 1188–1196. Cited by: §2.2.
-  (2019) What Should I Learn First: Introducing Lecturebank for NLP Education and Prerequisite Chain Learning. In 33rd AAAI Conference on Artificial Intelligence (AAAI-19). Cited by: §3.1, §3.2, Table 2, §5, §5.
-  (2018) Investigating active learning for concept prerequisite learning. In Thirty-Second AAAI Conference on Artificial Intelligence. Cited by: §2.2.
-  (2017) Recovering Concept Prerequisite Relations from University Course Dependencies. In Thirty-First AAAI Conference on Artificial Intelligence, Cited by: §1, §1, §2.2.
-  (2016) Learning concept graphs from online educational data. Journal of Artificial Intelligence Research 55, pp. 1059–1090. Cited by: §2.2.
-  (2013) Distributed Representations of Words and Phrases and Their Compositionality. In Advances in neural information processing systems, pp. 3111–3119. Cited by: §1, §4.4.
-  (2017) Prerequisite Relation Learning for Concepts in MOOCs. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1447–1456. Cited by: §1, §1, §5.
-  (2014) DeepWalk: online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, New York, NY, USA, pp. 701–710. Cited by: §2.1, Table 2, §5.
-  (2018) Modeling Relational Data with Graph Convolutional Networks. In European Semantic Web Conference, pp. 593–607. Cited by: §4.2, Table 2.
-  (2008) Collective classification in network data. AI Magazine 29 (3), pp. 93–106. Cited by: §2.1.
-  (2014) Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575. Cited by: §4.3.
-  (2019) Graph Convolutional Networks for Text Classification. In 33rd AAAI Conference on Artificial Intelligence (AAAI-19). Cited by: §1, §2.1.
-  (2018) Learning transferable features for open-domain question answering. In 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. Cited by: §4.5.