With advances in Artificial Intelligence in Education (AIEd) and the ever-growing scale of Interactive Educational Systems (IESs), data-driven approaches have become a common recipe for various tasks such as knowledge tracing and learning path recommendation. Unfortunately, collecting real students' interaction data is often challenging, which has resulted in the lack of a public large-scale benchmark dataset reflecting the wide variety of student behaviors in modern IESs. Although several datasets, such as ASSISTments, Junyi Academy, Synthetic and STATICS, are publicly available and widely used, they are not large enough to leverage the full potential of state-of-the-art data-driven models and limit the recorded behaviors to question-solving activities. To this end, we introduce EdNet, a large-scale hierarchical dataset of diverse student activities collected by Santa, a multi-platform self-study solution equipped with an artificial intelligence tutoring system. EdNet contains 131,441,538 interactions from 784,309 students collected over more than 2 years, which is the largest among the ITS datasets released to the public so far. Unlike existing datasets, EdNet provides a wide variety of student actions ranging from question-solving to lecture consumption and item purchasing. Also, EdNet has a hierarchical structure in which the student actions are divided into 4 different levels of abstraction. The features of EdNet are domain-agnostic, allowing EdNet to be extended to different domains easily. The dataset is publicly released under the Creative Commons Attribution-NonCommercial 4.0 International license for research purposes. We plan to host challenges in multiple AIEd tasks with EdNet to provide a common ground for the fair comparison of different state-of-the-art models and encourage the development of practical and effective methods.
Knowledge tracing, the task of modelling a student’s knowledge state through his learning activities over time, is a long-standing challenge of Artificial Intelligence in Education (AIEd). Since understanding a student’s knowledge state is a primary step for many research areas, such as learning path recommendation, score prediction and dropout prediction, knowledge tracing has been considered one of the most fundamental problems in AIEd. With advances in data science and the increasing availability of Interactive Educational Systems (IESs), data-driven models that learn the complex nature of student behaviors from interaction data have become a common recipe for knowledge tracing (Piech et al., 2015; Zhang et al., 2017; Huang et al., 2019b; Pandey and Karypis, 2019; Lee et al., 2019). However, the AIEd research community currently lacks a large-scale benchmark dataset which reflects the wide variety of student behaviors available in modern IESs. Although several datasets, such as ASSISTments (Feng et al., 2009; Pardos et al., 2014), Junyi Academy (Chang et al., 2015), Synthetic (Piech et al., 2015) and STATICS, are available to the public and widely used by AIEd researchers, they are not large enough to leverage the full potential of data-driven models and limit the records to question-solving activities.
In this paper, we introduce EdNet, a large-scale hierarchical dataset consisting of student interaction logs collected over more than 2 years from Santa, a multi-platform self-study solution equipped with an artificial intelligence tutoring system that helps students prepare for the TOEIC® (Test of English for International Communication®) test. To the best of our knowledge, EdNet is the largest dataset open to the public, containing 131,441,538 interactions from 784,309 students. Aside from question-solving logs, EdNet also contains diverse student behaviors, including self-studying activities, choice elimination, course payment and many more. EdNet has a hierarchical structure in which the possible student actions in Santa are divided into 4 different levels of abstraction, so that a user can select the level suited for AIEd tasks such as knowledge tracing and learning path recommendation. We release EdNet to the public under the Creative Commons Attribution-NonCommercial 4.0 International license for research purposes (https://github.com/riiid/ednet).
This paper is organized as follows. We first introduce the properties of EdNet: large scale, diverse in action types, hierarchical, and served on multiple platforms. In particular, we describe the four-level hierarchical structure of EdNet. In Section 3, we give an overview of existing datasets in education that are widely used in various AIEd works. Finally, we propose two possible applications of EdNet: knowledge tracing and learning path recommendation via reinforcement learning.
EdNet is the dataset of all student-system interactions collected over 2 years by Santa, a multi-platform AI tutoring service with about 780K students in Korea, available through Android, iOS and the Web. Santa aims to prepare students for the TOEIC (Test of English for International Communication®) Listening and Reading Test. The test consists of seven parts, named Part 1 to Part 7. Parts 1 to 4 form the listening section, and Parts 5 to 7 form the reading section. Each section is timed and consists of 100 questions. The final score is relative, ranging from 0 to 990 in increments of 5. Each student communicates his needs and actions through Santa, to which the system responds by providing video lectures, assessing his responses or giving experts’ commentary. Santa’s UI and data-gathering process are described in Figure 2. Accordingly, the EdNet dataset contains various features of student actions, such as the learning material a student has consumed or the time he spent solving a given question.
Large Scale EdNet is composed of a total of 131,441,538 interactions collected from 784,309 students of Santa since 2017. Each student has generated an average of 441.12 interactions while using Santa. Based on those interactions, EdNet gives researchers access to large-scale real-world ITS data. Moreover, Santa provides a total of 13,169 problems and 1,021 lectures tagged with 293 types of skills, which have been consumed 95,294,926 times and 601,805 times, respectively. To the best of our knowledge, this is the largest dataset in education available to the public in terms of the total number of students, interactions, and interaction types.
Diversity The number of interaction types in Table 7 shows that EdNet offers the most diverse set of interactions among all existing ITS datasets. The set of behaviors directly related to learning is also richer than in other datasets, as EdNet includes learning activities, such as reading explanations and watching lectures, not provided by others. Such diversity enables researchers to analyze students from various perspectives. For example, purchasing logs may help analyze a student’s engagement in learning. Also, students’ target scores, real scores, and contents information are provided separately (see Tables 6 and 1).
Hierarchy EdNet has a hierarchical structure of different data points, as shown in Figure 1. To provide various kinds of actions in a consistent and organized manner, EdNet offers the datasets in four different levels, named KT1, KT2, KT3 and KT4. As the level of the dataset increases, the number of actions and the types of actions involved also increase, as shown in Table 7. The details of each dataset are described in Section 2.2.
Multi-platform In an age where students have access to various devices, spanning from personal computers to smartphones and AI speakers, it is inevitable for ITSs to offer access from multiple platforms. Accordingly, Santa is a multi-platform system available on iOS, Android and the Web, and EdNet contains data points gathered from both mobile and desktop. This allows the study of AIEd models suited for future multi-platform ITSs, utilizing the data collected from different platforms in a consistent manner.
The raw records obtained by Santa accurately and thoroughly represent each student’s learning process. However, the unprocessed details of the raw records are difficult to utilize directly for a particular AIEd task. In order to aid the process of extracting meaningful information, we pre-process the collected records into four datasets of different levels of abstraction named KT1, KT2, KT3 and KT4. The resolution of each dataset increases in the given order, starting from the question-response interaction sequences used by most deep knowledge tracing models and ending with the complete list of user actions gathered by Santa. The datasets were designed with particular tasks in mind so that one can use them readily for AIEd applications such as knowledge tracing, score prediction and dropout prediction. We describe each dataset of EdNet as follows.
EdNet-KT1 In the simplest form, the learning session of a student can be described as a sequence of question-response pairs

((q_1, r_1), (q_2, r_2), …, (q_n, r_n)),

where q_i is the i-th question suggested by the ITS and r_i is the student’s response to q_i. This is the format used by various deep-learning knowledge tracing models such as Deep Knowledge Tracing (Piech et al., 2015) and Self-Attentive Knowledge Tracing (Pandey and Karypis, 2019). EdNet-KT1 is the record of Santa collected since Apr. 18, 2017 following this question-response sequence format.
A major property of EdNet is that questions come in bundles: collections of questions sharing a common passage, picture or listening material. For example, the questions with IDs q2319, q2320 and q2321 may share the same reading passage. In this case, the questions are said to form a bundle and will be given to the student with the corresponding shared material. When a bundle is given, a student has access to all of its questions and has to respond to all of them in order to complete the bundle. Considering this, KT1 consists of a table of actions for each student with the following columns (see Table 2).
timestamp: The moment the question was given, represented as a Unix timestamp in milliseconds.
solving_id: Each bundle in the learning session of a student is indexed incrementally in the order of appearance, starting from 1. For example, the third bundle solved by a student during his learning session has a solving_id of 3.
question_id: The unique ID of the question given to the student.
user_answer: The student’s answer to the given question, recorded as a letter between ‘a’ and ‘d’, inclusive.
elapsed_time: The time the student took to submit his answer in milliseconds.
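As a sketch of how a KT1 record can be consumed, the snippet below parses a toy KT1 fragment with Python's standard csv module and computes a student's answer accuracy. The row values and the correct-answer lookup are hypothetical; in practice, correct answers come from the separate question table.

```python
import csv
import io

# A toy KT1 fragment for one student (columns as described above; values hypothetical).
KT1_ROWS = """timestamp,solving_id,question_id,user_answer,elapsed_time
1565096190868,1,q4862,d,45000
1565096221062,2,q6747,a,24000
1565096284966,3,q326,c,42000
"""

# Hypothetical correct-answer lookup, normally taken from the question table.
CORRECT = {"q4862": "d", "q6747": "b", "q326": "c"}

def kt1_accuracy(csv_text, correct):
    """Parse KT1 rows and return the fraction of questions answered correctly."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    n_correct = sum(1 for r in rows if correct[r["question_id"]] == r["user_answer"])
    return n_correct / len(rows)

print(kt1_accuracy(KT1_ROWS, CORRECT))  # 2 of 3 correct
```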
To provide educational context of each question, such as the correct answer or the set of knowledge concepts involved, a table of question data used by Santa is given separately. The table contains the following columns (see upper left of Table 1).
question_id: A unique ID representing each question. All question IDs start with the letter ‘q’ followed by a positive number.
bundle_id: The ID of the bundle that contains the question. All bundle IDs start with the letter ‘b’ followed by a positive number.
explanation_id: Whenever a student finishes solving a bundle, a corresponding explanation of the bundle written by domain experts is given. A unique ID starting with the letter ‘e’ followed by a positive number is assigned to each explanation.
correct_answer: The correct answer, which is one of the letters ‘a’, ‘b’, ‘c’ or ‘d’, is recorded.
part: The part of the TOEIC exam the question belongs to is recorded as a number.
tags: The set of knowledge concepts required for solving the problem is given as a semicolon-separated list. Each knowledge concept is represented as a unique positive integer.
deployed_at: The time the question was added to Santa database and deployed to students, recorded as Unix timestamp in milliseconds.
This table can be utilized in many different ways. For example, one may consider a bundle as a single unit of interaction and run a knowledge tracing model on the sequence of bundles instead of the sequence of questions. Also, models that compute a student’s level of understanding of each explicit knowledge concept, such as Bayesian Knowledge Tracing (Corbett and Anderson, 1994; Yudelson et al., 2013), may refer to the tags column to find the explicit list of knowledge concepts involved in each activity.
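Both uses can be sketched minimally: collapsing a question sequence into bundle-level units via the bundle_id column, and extracting explicit concept lists from the tags column. The table entries below are made-up stand-ins for real EdNet rows.

```python
# Hypothetical slice of the question table; bundle_id and tags follow the
# formats described above (IDs prefixed with 'q'/'b', semicolon-separated tags).
QUESTION_TABLE = {
    "q2319": {"bundle_id": "b1707", "tags": "15;2;182"},
    "q2320": {"bundle_id": "b1707", "tags": "15;2"},
    "q2321": {"bundle_id": "b1707", "tags": "182"},
    "q100":  {"bundle_id": "b55",   "tags": "7"},
}

def to_bundle_sequence(question_ids):
    """Collapse consecutive questions belonging to the same bundle into one unit."""
    bundles = []
    for qid in question_ids:
        bid = QUESTION_TABLE[qid]["bundle_id"]
        if not bundles or bundles[-1] != bid:
            bundles.append(bid)
    return bundles

def concepts(qid):
    """Explicit knowledge-concept list of a question, as integers."""
    return [int(t) for t in QUESTION_TABLE[qid]["tags"].split(";")]

print(to_bundle_sequence(["q2319", "q2320", "q2321", "q100"]))  # ['b1707', 'b55']
print(concepts("q2319"))  # [15, 2, 182]
```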
A major drawback of the question-response sequence format is that it cannot account for the inherent heterogeneity of students’ actions. For example, a student may alternately select one of two answer choices before submitting his final answer, which possibly signals that he is unsure of either of the options. Due to the restriction of the question-response format, a dataset following it, like EdNet-KT1, cannot effectively represent such situations. To overcome this limitation, Santa has collected the full behavior of students since Aug. 27, 2018. As a result, the datasets EdNet-KT2, EdNet-KT3 and EdNet-KT4, consisting of action sequences of each user, were compiled. Each action represents a single unit of behavior made by a student in the Santa UI, such as watching a video lecture, choosing a response option, or reading a passage. By recording a student’s behavior as-is, the datasets represent each student’s behavior more accurately and allow AIEd models to incorporate finer details of learning history. EdNet-KT2, the simplest action-based dataset of EdNet, consists of the actions related to question-solving activities. Specifically, it consists of the following columns.
timestamp: The moment the action was made is recorded as Unix timestamp in milliseconds.
action_type: Represents the type of the action.
enter is recorded when a student first receives and views a question bundle through the UI.
respond is recorded when the student selects an answer choice to one of the questions in the bundle. A student can respond to the same question multiple times. In this case, only the last response before submitting his final answer is considered as his response.
submit is recorded when the student submits his final answers to the given bundle.
item_id: The ID of the item involved in the action. For EdNet-KT2, only the IDs of questions and bundles are recorded. A bundle is assigned for actions of type enter and submit. A question is assigned for actions of type respond.
source: A student may participate in question-solving activities from different modes of studying offered by Santa. The mode from which the question was offered is denoted as the source. The possible sources are described in the GitHub repository.
user_answer: For actions of type respond, the student’s submitted answer is recorded as a letter between ‘a’ and ‘d’, inclusive.
platform: The platform used by the student, which is either mobile or web, is recorded.
Note that the features of KT1 can be fully recovered from the columns of KT2, and KT2 contains further information such as the student’s study mode and the intermediate responses he provided.
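The recovery of KT1-style records from a KT2 action log can be sketched as follows. The action tuples are hypothetical, and, per the rule above, only the last respond per question before a submit is kept as the final answer.

```python
# Sketch: recover KT1-style (question, final answer) pairs from a KT2 action log.
# Tuples are (timestamp, action_type, item_id, user_answer); values hypothetical.
ACTIONS = [
    (1000, "enter",   "b123", None),
    (1010, "respond", "q456", "a"),
    (1020, "respond", "q456", "c"),   # student changed his mind; 'c' is final
    (1030, "submit",  "b123", None),
]

def recover_kt1(actions):
    """Keep only the last response per question within each bundle attempt."""
    final, pending = [], {}
    for ts, a_type, item, answer in actions:
        if a_type == "respond":
            pending[item] = (ts, answer)      # overwrite earlier responses
        elif a_type == "submit":
            for qid, (_, ans) in sorted(pending.items(), key=lambda kv: kv[1][0]):
                final.append((qid, ans))
            pending = {}
    return final

print(recover_kt1(ACTIONS))  # [('q456', 'c')]
```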
EdNet-KT3 In Santa, a student may participate in various learning activities aside from solving questions, such as reading through experts’ commentary on a question or watching lectures provided by the system. EdNet-KT3 incorporates such learning activities by adding the following actions to the EdNet-KT2 dataset.
Reading explanations: Whenever a student enters or exits the explanation view in the Santa UI, a corresponding action of type enter or quit with the explanation ID as item_id is recorded.
Watching lectures: Whenever a student plays a lecture video or stops watching it, a corresponding action of type enter or quit with the lecture ID as item_id is recorded.
Such actions can be utilized to infer the impact of learning activities on each student’s knowledge state. For example, one may compute the time each student has spent studying a given material by subtracting the timestamp of the enter action from that of the quit action, and use this to study the effect of students’ different learning behaviors. To provide educational context of each lecture item, a table representing the following features of each lecture item is given separately (see upper right of Table 1).
lecture_id: The unique ID of each lecture, starting with letter ‘l’ followed by a unique positive number.
part: The part of the TOEIC exam that corresponds to the lecture. For lectures not targeted to a particular part, a number of 0 is assigned.
tags: The set of knowledge concepts taught in the lecture, given as a semicolon-separated list. Each knowledge concept, represented as a unique positive integer, coincides with the tags used in the question information table.
video_length: Running time of the lecture video in milliseconds.
deployed_at: The time the lecture was first introduced in Santa, recorded as UNIX timestamp in milliseconds.
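The enter/quit timestamp subtraction mentioned above for study-time analysis can be sketched as follows, with hypothetical action tuples (timestamps in milliseconds):

```python
# Sketch: total time spent on each learning material, from paired enter/quit
# actions. Tuples are (timestamp_ms, action_type, item_id); values hypothetical.
ACTIONS = [
    (1000,  "enter", "e304"),   # explanation e304
    (9000,  "quit",  "e304"),
    (20000, "enter", "l540"),   # lecture l540
    (50000, "quit",  "l540"),
    (60000, "enter", "e304"),   # revisits the same explanation
    (61000, "quit",  "e304"),
]

def study_time_ms(actions):
    """Sum quit - enter intervals per item; unmatched quits are ignored."""
    totals, open_at = {}, {}
    for ts, a_type, item in actions:
        if a_type == "enter":
            open_at[item] = ts
        elif a_type == "quit" and item in open_at:
            totals[item] = totals.get(item, 0) + ts - open_at.pop(item)
    return totals

print(study_time_ms(ACTIONS))  # {'e304': 9000, 'l540': 30000}
```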
EdNet-KT4 In EdNet-KT4, a complete list of actions collected by Santa is provided. In particular, the following types of actions are added to EdNet-KT3 (see Table 5).
erase_choice/undo_erase_choice: For the user’s convenience, a student can hide an answer choice by erasing it. He can also undo this action to consider the choice again. The acts of erasing a choice and undoing it are given as actions of type erase_choice and undo_erase_choice respectively. The answer choice erased/un-erased is supplied in the user_answer column.
play_audio/pause_audio/play_video/pause_video: A student can play or pause a given multimedia asset. For videos, he can also navigate to different moments of the video by moving his cursor to different places. Such actions are denoted as one of the action types play_audio, pause_audio, play_video or pause_video. A column cursor_time is added to EdNet-KT4, to represent the moment where he has played or paused the media.
pay/refund: By default, a free user is offered 10 questions of Parts 2 and 5 each daily. By purchasing a payment item, the student has full access to questions of all parts. A table of payment items is provided separately (see lower left of Table 1). Items of type pass allow solving all questions for the duration given in milliseconds. Items of type paygo allow the student to solve the specific number of bundles denoted by the column number_of_bundles.
enroll_coupon: A student may enter his promotion coupon code to receive corresponding benefits, which allow him to use all the features of Santa for a fixed amount of time, like a pass item. The ID of his coupon and the time he entered the coupon are recorded as an action of type enroll_coupon. A table of coupons is provided separately (see lower right of Table 1).
The purpose of EdNet-KT4 is to provide the very fine details of Santa, allowing access to features and tasks specific to a particular ITS design. For example, one may analyze the impact of purchasing a paid course on studying behavior.
In this section, we describe the algorithm used by Santa for recommending educational contents to students. For recommending questions, the process is divided into a diagnosis phase and a learning phase. When a student uses Santa for the first time, the system first recommends several questions to assess his knowledge state. Before August 2017, a fixed set of 214 questions developed by TOEIC experts was used for the diagnosis phase. After August 2017, the number of questions used for diagnosis was reduced to 30. From September 2018, the recommendation strategy for diagnosis was changed to an algorithm based on Item Response Theory (IRT). First, a question is randomly selected among the questions in Parts 2 and 5. Then, the question with the highest estimated entropy is proposed at each step until the end of the diagnosis phase. Here the entropy of a question q is defined as
H(q) = -∑_{i=1}^{n_q} p_i log p_i,

where n_q is the total number of choices of q and p_i is the predicted probability that the student chooses the i-th choice, for each 1 ≤ i ≤ n_q. We estimate p_i via the Collaborative Filtering (CF) model introduced in (Lee et al., 2016). Note that this is a widely used methodology for diagnosis in education (Lord, 2012; Baker, 2001). A minimum of 7 and a maximum of 11 questions are given for diagnosis, which ends when the average of the entropy values of all questions is sufficiently small. Recommendation in the second phase (learning phase) is also done with the help of our CF model. We divide the whole set of questions into three parts based on the predicted correctness probability of each question. Once the student selects a level of difficulty among ‘easy’, ‘normal’, and ‘hard’, the algorithm suggests a question chosen randomly from the corresponding set.
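The entropy-based diagnosis step can be sketched as below. The per-choice probabilities are hypothetical stand-ins for the CF model's output, as are the question IDs.

```python
import math

# Sketch of the diagnosis step: among candidate questions, pick the one whose
# predicted choice distribution has the highest entropy (the model is least
# sure what the student will answer).
PREDICTED = {  # hypothetical CF-model outputs p_1..p_4 per question
    "q1": [0.97, 0.01, 0.01, 0.01],   # model is confident -> low entropy
    "q2": [0.25, 0.25, 0.25, 0.25],   # model is unsure -> maximal entropy
    "q3": [0.60, 0.20, 0.10, 0.10],
}

def entropy(probs):
    """H = -sum p_i log p_i, skipping zero-probability choices."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def next_diagnostic_question(predicted):
    return max(predicted, key=lambda q: entropy(predicted[q]))

print(next_diagnostic_question(PREDICTED))  # 'q2'
```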
Lecture recommendation is based on the tags of questions and lectures, which can be found in the contents information data (Table 1), and the correctness probabilities of questions predicted by the CF model. Once the number of incorrect answers to questions with particular tags exceeds a certain threshold, Santa offers lectures with the corresponding tags. It also provides lectures if the average correctness rate of questions with particular tags decreases by more than a certain threshold. For example, assume that a student answers incorrectly to 3 questions with the ‘gerund’ and ‘present participle’ tags. Then Santa suggests a video lecture that explains the differences between ‘gerund’ and ‘present participle’. Note that along with lectures, Santa also provides several related questions as exercises.
Synthetic is a public dataset (https://github.com/chrispiech/DeepKnowledgeTracing/tree/master/data/synthetic) made by the authors of Deep Knowledge Tracing (Piech et al., 2015). Based on Item Response Theory (IRT), a total of 4K virtual students answering a fixed set of 50 questions drawn from k underlying concepts are generated. More precisely, each student has a ”skill” for each concept represented by a single number α, and each exercise has a difficulty represented by a number β. The probability that the student answers a question correctly is modelled by the 3PL model of IRT, given as p = c + (1 - c) / (1 + exp(β - α)), where c is the probability of the student guessing randomly (which is 0.25, as we assume that there are 4 choices for each question). Two datasets, with k = 2 and k = 5 concepts, are available online.
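The generation process above can be sketched with the 3PL response model and guessing floor c = 0.25; the normally distributed difficulties are our assumption for illustration, not a detail stated here.

```python
import math
import random

# Sketch of the 3PL response model used for the Synthetic dataset: skill alpha,
# difficulty beta, and a guessing floor c = 0.25 (four choices per question).
def p_correct(alpha, beta, c=0.25):
    return c + (1.0 - c) / (1.0 + math.exp(beta - alpha))

def simulate_student(alpha, difficulties, rng):
    """Generate one virtual student's binary response sequence."""
    return [int(rng.random() < p_correct(alpha, b)) for b in difficulties]

rng = random.Random(0)
betas = [rng.gauss(0, 1) for _ in range(50)]   # 50 questions, assumed N(0, 1)
responses = simulate_student(alpha=1.0, difficulties=betas, rng=rng)
print(sum(responses), "of", len(responses), "answered correctly")
```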
ASSISTments datasets (Feng et al., 2009; Pardos et al., 2014) are collected from the ASSISTment system, an online tutoring system which provides instructional assistance while assessing students at the same time. Each ASSISTment consists of an original question from the Massachusetts Comprehensive Assessment System (MCAS) 8th-grade math test items and a list of scaffolding questions created by domain experts. For each ASSISTment, a student first attempts the original question. If the student fails to answer it, scaffolding questions unfold to guide the student through the sub-steps required to solve the original question correctly. While solving the scaffolding questions, the student can give answers to the questions or engage in help-seeking behavior. Depending on the student’s actions, the system gives suitable feedback such as hints, buggy messages or answers to the scaffolding questions.
Statistics of each dataset are shown in Table 7. We provide description for each dataset below:
ASSISTments-2009 This is the ASSISTments dataset collected from 2009 to 2010 (Feng et al., 2009). The dataset contains question information, such as the skills associated with each question, and logs of student interactions with the questions, such as the student answer, response correctness and the number of hint attempts, in chronological order. There are two versions of the dataset: skill builder and non-skill builder. However, since many works (Piech et al., 2015; Zhang et al., 2017; Pandey and Karypis, 2019) only consider the skill builder dataset in experimental studies, we only report statistics of the skill builder dataset. There are 3 types of interactions - attempt, hint, and scaffolding - which are represented as 0, 1, and 2 in the first_action column respectively.
ASSISTments-2012 This ASSISTments dataset was gathered from 2012 to 2013. The dataset was constructed to investigate the correlation between student affective states and performance (Pardos et al., 2014). The authors of the paper estimated student affective states by applying automated affect and behavior detectors to student logs from the ASSISTment system. As a result, the dataset contains not only student attempt logs for questions but also student affective states, such as frustration, confusion, concentration and boredom. The types of interactions are exactly the same as in ASSISTments-2009.
ASSISTments-2015 This ASSISTments dataset was gathered in 2015. The dataset only contains student attempt logs on the 100 problem sets with the highest number of student attempts. Unlike the previous ASSISTments datasets, whose rows carry abundant features from student attempt logs, each row in this dataset only contains a user ID, log ID, problem set ID and the corresponding answer correctness.
STATICS2011 This dataset contains 335 engineering students’ question-solving logs from a one-semester statics course delivered via an online educational system developed by Carnegie Mellon University. The dataset can be found in the PSLC datashop (Koedinger et al., 2010), which is not public but can be obtained by request (https://pslcdatashop.web.cmu.edu/DatasetInfo?datasetId=507). The raw data has 361,092 interactions, including question-solving and UI-related logs (such as pressing buttons). There are a total of 5 types of interactions: ATTEMPT, HINT_REQUEST, SAVE_ATTEMPT, SUBMIT_ATTEMPT, and VIEW_PAGE. There are a total of 27 kinds of tags, each a concatenation of Unit, Module, and Section. The concatenation of Problem Name and Step Name can be regarded as a question ID; there are 1,244 different questions. The dataset is used by several works on KT, such as DKVMN (Zhang et al., 2017), SAKT (Pandey and Karypis, 2019), and SKVMN (Abdelrahman and Wang, 2019). Note that the pre-processed data (https://github.com/jennyzhang0215/DKVMN/tree/master/data/STATICS) used in the above works is slightly smaller than the raw data.
Junyi Academy (http://www.junyiacademy.org/) is an e-learning platform in Taiwan, similar to Khan Academy, that provides about 700 mathematics questions. The Junyi Academy dataset was first introduced in (Chang et al., 2015) and is also available online in the PSLC datashop (https://pslcdatashop.web.cmu.edu/DatasetInfo?datasetId=1198). The raw dataset contains 25,925,922 question-solving interactions of 247,606 users. There are a total of 835 questions, 722 of which were served to students at least once. For each question, there are two kinds of information that can be considered as tags - topic and area. There are 40 different kinds of topics (excluding nan, denoting the absence of a topic) assigned to questions, such as absolute-value, circle-properties, and fractions. area can be considered a more general concept than topic, where each area contains several topics. There are a total of 7 areas (excluding nan) including arithmetic, logics, and algebra. The dataset also provides question information, such as the knowledge map, prerequisites, expert-annotated time limits and similarities between exercises, separately. Such data is widely used in various educational tasks including KT (Abdelrahman and Wang, 2019) and question recommendation (Liu et al., 2019).
PSLC (Pittsburgh Science of Learning Center) datashop (https://pslcdatashop.web.cmu.edu/) is an open data repository maintained by Carnegie Mellon University (Koedinger et al., 2010). Various learning interaction datasets, including the STATICS2011 and Junyi Academy datasets as well as Elementary Chinese Course and Intelligent Writing Tutor, are available.
In this section, we discuss the suitability of EdNet for developing, testing and benchmarking knowledge tracing models. Also, the possibility of developing student simulators with EdNet for training reinforcement learning based tutoring strategy is addressed.
Knowledge tracing is the act of modeling a student’s knowledge state through his actions in learning activities. As it has many applications, from student performance prediction to learning path suggestion (Corbett and Anderson, 1994; Yudelson et al., 2013; Piech et al., 2015), the task became one of the most fundamental AIEd tasks after its introduction in the classical paper of Corbett and Anderson (Corbett and Anderson, 1994). In (Corbett and Anderson, 1994), the probability of a student mastering a particular concept is estimated by a Hidden Markov Model based on his activities involving the concept. With this model, they estimated the performance of students using a programming ITS and provided optimal learning paths based on each individual’s level of understanding of each concept. This method, later named Bayesian Knowledge Tracing (BKT), and its variants are still extensively studied (Yudelson et al., 2013; van De Sande, 2013; Sao Pedro et al., 2013) and remain a prominent method for knowledge tracing.
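A minimal sketch of the BKT posterior update follows, using the standard prior/learn/slip/guess parameterization; the parameter values and the response sequence are hypothetical.

```python
# Minimal Bayesian Knowledge Tracing update for one skill. Parameters:
# p_known (current mastery estimate), T (learn rate), S (slip), G (guess).
# Values below are hypothetical, chosen only for illustration.
def bkt_update(p_known, correct, T=0.1, S=0.1, G=0.2):
    """Posterior P(known) after one observation, then apply the learning step."""
    if correct:
        cond = p_known * (1 - S) / (p_known * (1 - S) + (1 - p_known) * G)
    else:
        cond = p_known * S / (p_known * S + (1 - p_known) * (1 - G))
    return cond + (1 - cond) * T   # chance of learning between opportunities

p = 0.3                        # prior mastery L0
for obs in [1, 1, 0, 1]:       # a toy response sequence on one skill
    p = bkt_update(p, obs)
print(round(p, 3))
```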
Meanwhile, methods based on other models such as deep learning (Piech et al., 2015; Zhang et al., 2017; Huang et al., 2019b; Lee et al., 2019; Pandey and Karypis, 2019), item response theory (Khajah et al., 2014) and collaborative filtering (Lee et al., 2016; Thai-Nghe et al., 2010) have also emerged. We only review the deep learning models here, but note that EdNet contains all the features required for a general knowledge tracing model. Deep Knowledge Tracing (DKT) (Piech et al., 2015), the first model to use a deep neural network for knowledge tracing, is based on a recurrent neural network. Subsequently, other models using various neural network mechanisms, such as memory-augmented neural networks (DKVMN (Zhang et al., 2017)), bidirectional long short-term memory (EKT (Huang et al., 2019b), NPA (Lee et al., 2019)) and the Transformer (SAKT (Pandey and Karypis, 2019)), were developed. Unlike BKT models, which require the explicit list of skills involved in each activity, deep learning models are trained over each student’s sequence of activities directly.
We argue that EdNet is suited for benchmarking existing models and developing new models by unlocking their full potential for large-scale ITSs. First, the dataset provides a common environment for models with different paradigms to learn from the same context. The simplest dataset of the EdNet hierarchy, KT1, already contains all features required by existing deep learning models like DKT and SAKT. Also, a Bayesian Knowledge Tracing model can be trained by using the provided list of required skills for each question. This allows a fair comparison of the models on the same educational context.
Second, the dataset provides a wide variety of features of students’ learning behavior. Since KT2 and KT3 contain finer details than a typical knowledge tracing database similar to KT1, one can address additional features, such as the consumption of self-learning material or the time spent on each activity, for advanced knowledge tracing. For example, the record of self-learning materials tagged with the same set of skill tags used by questions allows BKT models to incorporate self-learning behaviors alongside question-solving activities. Also, deep learning based models can utilize these additional behaviors as new input features.
Finally, the dataset allows models to learn from student interactions in an actual large-scale ITS environment. With advances in technology and the increasing need for artificial intelligence in education, it is inevitable for the data produced by ITSs to scale over time. EdNet contains a far greater number and variety of interactions than the common datasets used by the aforementioned models, including BKT, DKT, DKVMN, EKT, SAKT and many more. As a larger amount of training data enables higher accuracy for general models and deeper architectures for neural network models, we expect the dataset to provide the opportunity for testing new knowledge tracing models scalable to a large number of data points.
Even if a student’s knowledge state is fully understood, an ITS is still required to provide an individualized, optimal learning path to achieve the final goal of effective education. In this regard, educational content recommendation has been studied extensively (Xu et al., 2016; Pardos and Jiang, 2019), with Reinforcement Learning (RL) emerging as a prominent method (Huang et al., 2019a; Zhou et al., 2019; Reddy et al., 2017; Mandel, 2017). In the context of RL, a policy (e.g., a tutoring strategy) is trained to maximize a reward function that evaluates the overall educational effect of the agent (tutor) over time. Most RL algorithms require repeated evaluations of the agent for training. As evaluating a tutoring strategy with real students is extremely costly, most methods use simulated students to train the policy. In fact, other areas successfully incorporating RL, like motion planning, train agents on both carefully designed simulators and high-cost real data to gain the benefits of both (Abbeel et al., 2007; Cutler et al., 2014). Accordingly, both the cost-efficiency and the fidelity of the simulator used in RL are crucial for successfully training a tutoring strategy.
EdNet offers multiple levels of features from which simulators of varying fidelity can be developed (see Figure 3). For instance, one may build a simulator that generates a virtual student’s responses to newly suggested questions by training a model that predicts the response from his question-solving history with KT1. With KT2 and KT3, more detailed actions, such as watching self-learning lectures, making a purchase or eliminating a choice, can be simulated in the same way. Note that for the aforementioned examples, the sequence of generated interactions is considered as the state of the environment. Each option trades off simplicity against fidelity. For example, a KT1-based simulator does not simulate all actions made by a real student, but can be more accurate on the actions it does predict and more efficient to infer. On the other hand, a KT3-based simulator achieves high fidelity by simulating a wider range of student behaviors, but could be less accurate or less time-efficient to infer. By allowing all options for this trade-off, EdNet gives the opportunity to find the most appropriate simulator that suits the particular goal of the agent being trained.
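A toy KT1-style simulator loop might look like the following; the response predictor is a hypothetical stand-in for a trained knowledge tracing model, and the reward scheme is our assumption for illustration.

```python
import random

# Toy KT1-based student simulator for RL experiments: the "student" answers
# correctly with a probability predicted from its interaction history.
class SimulatedStudent:
    def __init__(self, seed=0):
        self.history = []              # state: sequence of (question, correct)
        self.rng = random.Random(seed)

    def predict_p_correct(self, question_id):
        # Hypothetical stand-in for a KT1-trained response model: past exposure
        # to the same question slightly raises the predicted success chance.
        seen = sum(1 for q, _ in self.history if q == question_id)
        return min(0.9, 0.5 + 0.1 * seen)

    def step(self, question_id):
        """The tutor suggests a question; sample a response and a reward."""
        correct = self.rng.random() < self.predict_p_correct(question_id)
        self.history.append((question_id, correct))
        return correct, (1.0 if correct else 0.0)

env = SimulatedStudent()
for qid in ["q1", "q1", "q2"]:   # a trivial fixed "policy"
    env.step(qid)
print(len(env.history))  # 3
```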
In this paper, we introduced EdNet, a large-scale dataset in education gathered by the multi-platform service Santa. EdNet includes diverse information about each user, and it is much larger than any other existing dataset in education. Also, the hierarchical structure of EdNet allows researchers to approach diverse AIEd tasks, such as knowledge tracing and reinforcement learning, from various perspectives. The dataset will be continuously updated, and we believe that EdNet can provide fertile soil for further developments in AIEd.
The authors would like to thank all the members of Riiid! for leading the Santa service successfully. EdNet could not be collected without their efforts.