
MUSE: Multi-Scale Temporal Features Evolution for Knowledge Tracing

01/30/2021
by   Chengwei Zhang, et al.

Transformer-based knowledge tracing is an extensively studied problem in the field of computer-aided education. By integrating temporal features into the encoder-decoder structure, transformers can process exercise information and student response information in a natural way. However, current state-of-the-art transformer-based variants still share two limitations. First, extremely long temporal features cannot be handled well, as the complexity of the self-attention mechanism is O(n²). Second, existing approaches track knowledge drift under a fixed window size, without considering different temporal ranges. To overcome these problems, we propose MUSE, which is equipped with a multi-scale temporal sensor unit that takes both local and global temporal features into consideration. The proposed model is capable of capturing the dynamic changes in users' knowledge states at different temporal ranges, and provides an efficient and powerful way to combine local and global features to make predictions. Our method won 5th place among 3,395 teams in the Riiid AIEd Challenge 2020.
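The abstract does not give MUSE's exact architecture, but the core idea of combining attention computed at several temporal scales can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the authors' implementation: it applies causal self-attention restricted to windows of different sizes (with `None` standing in for global attention) and concatenates the resulting features, so short windows capture recent knowledge drift while the global pass keeps long-range context.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, mask):
    # scaled dot-product self-attention; masked positions get -inf scores
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores = np.where(mask, scores, -1e9)
    return softmax(scores) @ v

def local_causal_mask(n, window):
    # each position attends only to itself and the previous `window - 1` steps
    idx = np.arange(n)
    diff = idx[:, None] - idx[None, :]
    return (diff >= 0) & (diff < window)

def multi_scale_features(x, windows=(4, 16, None)):
    # hypothetical multi-scale sensor: run attention at several temporal
    # ranges and concatenate the outputs along the feature dimension.
    # `None` denotes a global (full causal) range.
    n = x.shape[0]
    outs = []
    for w in windows:
        mask = local_causal_mask(n, n if w is None else w)
        outs.append(attention(x, x, x, mask))
    return np.concatenate(outs, axis=-1)  # shape (n, d * len(windows))

rng = np.random.default_rng(0)
seq = rng.normal(size=(32, 8))          # 32 interactions, 8-dim embeddings
feats = multi_scale_features(seq)       # (32, 24): local + mid + global views
```

In a real model the concatenated features would feed a prediction head; restricting most scales to small windows also sidesteps the O(n²) cost the abstract criticizes, since windowed attention is O(n·w).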

