SAINT+: Integrating Temporal Features for EdNet Correctness Prediction

10/19/2020
by Dongmin Shin, et al.

We propose SAINT+, a successor of SAINT, a Transformer-based knowledge tracing model that separately processes exercise information and student response information. Following the architecture of SAINT, SAINT+ has an encoder-decoder structure: the encoder applies self-attention layers to a stream of exercise embeddings, and the decoder alternately applies self-attention and encoder-decoder attention layers to streams of response embeddings and the encoder output. Moreover, SAINT+ incorporates two temporal feature embeddings into the response embeddings: elapsed time, the time a student takes to answer, and lag time, the time interval between adjacent learning activities. We empirically evaluate the effectiveness of SAINT+ on EdNet, the largest publicly available benchmark dataset in the education domain. Experimental results show that SAINT+ achieves state-of-the-art performance in knowledge tracing, improving the area under the receiver operating characteristic curve by 1.25% over SAINT, the previous state-of-the-art model on the EdNet dataset.
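To make the input construction concrete, here is a minimal NumPy sketch of how the two streams described above could be formed: the encoder consumes exercise embeddings alone, while the decoder input sums a response embedding with elapsed-time and lag-time embeddings. All names, dimensions, and the time-bucketing scheme are illustrative assumptions, not the paper's actual hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8          # embedding width (illustrative)
n_exercises = 100    # exercise vocabulary size (illustrative)
n_responses = 2      # incorrect / correct
n_time_bins = 300    # discretized time buckets (illustrative)

# Embedding tables, randomly initialized for the sketch
exercise_emb = rng.normal(size=(n_exercises, d_model))
response_emb = rng.normal(size=(n_responses, d_model))
elapsed_emb = rng.normal(size=(n_time_bins, d_model))
lag_emb = rng.normal(size=(n_time_bins, d_model))

def bucketize_seconds(t, max_bin=n_time_bins - 1):
    """Clip a time value in seconds to an integer bucket index."""
    return min(int(t), max_bin)

# One student interaction: exercise id, correctness, elapsed time, lag time
exercise_id, correct, elapsed_s, lag_s = 42, 1, 17.3, 120.0

# Encoder input: the exercise embedding alone
enc_input = exercise_emb[exercise_id]

# Decoder input: response embedding plus the two temporal embeddings
dec_input = (response_emb[correct]
             + elapsed_emb[bucketize_seconds(elapsed_s)]
             + lag_emb[bucketize_seconds(lag_s)])

print(enc_input.shape, dec_input.shape)  # both (d_model,)
```

In the full model these per-interaction vectors would be stacked into sequences and fed through the encoder's self-attention layers and the decoder's alternating self-attention and encoder-decoder attention layers.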
