Predicting Recall Probability to Adaptively Prioritize Study

02/28/2018
by Shane Mooney, et al.

Students have limited time to study and typically allocate it ineffectively. Machine-directed study strategies that identify which items need reinforcement and dictate the spacing of repetition have been shown to help students optimize mastery (Mozer & Lindsey 2017). The large body of research on this topic has typically been conducted in controlled experimental settings with fixed instruction, content, and scheduling; in contrast, we aim to develop methods that can accommodate any demographic, subject matter, or study schedule. We present two methods that model item-specific recall probability for use in a discrepancy-reduction instruction strategy. The first predicts item recall probability with a multiple logistic regression (MLR) model based on previous answer correctness and the temporal spacing of study. Prompted by literature suggesting that forgetting is better modeled by a power law than by exponential decay (Wickelgren 1974), we compare the MLR approach with a Recurrent Power Law (RPL) model, which adaptively fits a forgetting curve. We then evaluate both models on study datasets comprising millions of answers and show that the RPL approach is more accurate and flexible than the MLR model. Finally, we give an overview of promising future approaches to knowledge modeling.
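
As a concrete illustration of the two model families, the sketch below implements a logistic-regression recall predictor and a power-law forgetting curve. This is not the authors' implementation: the feature encoding (past accuracy and log review spacing), the power-law form (1 + t/s)^(-d), and all parameter values are illustrative assumptions.

```python
import math

def mlr_recall_probability(weights, bias, features):
    """MLR-style predictor: p(recall) = sigmoid(w . x + b).

    Here `features` encodes previous answer correctness and the temporal
    spacing of study, as in the abstract; the exact encoding is assumed.
    """
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def rpl_recall_probability(strength, decay, elapsed_days):
    """Power-law forgetting curve: p(recall) = (1 + t / s) ** (-d).

    One common parameterization (assumed here): `strength` s grows with
    successful reviews; `decay` d sets how fast recall falls with time t.
    """
    return (1.0 + elapsed_days / strength) ** (-decay)

# Hypothetical item: last reviewed 3 days ago, answered correctly 4 of 5 times.
features = [4 / 5, math.log(3.0)]  # [past accuracy, log days since review]
p_mlr = mlr_recall_probability(weights=[1.2, -0.4], bias=0.1, features=features)
p_rpl = rpl_recall_probability(strength=2.0, decay=1.0, elapsed_days=3.0)

# A discrepancy-reduction strategy would queue the items whose predicted
# recall has fallen furthest below a target threshold.
threshold = 0.9
print(f"MLR: {p_mlr:.3f}  RPL: {p_rpl:.3f}  due for study: {p_rpl < threshold}")
```

In a real system, the MLR weights would be fit to historical answer data, while the RPL parameters would be updated per item as new answers arrive, which is what would make the forgetting-curve fit adaptive.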


research
04/23/2020

Adaptive Forgetting Curves for Spaced Repetition Language Learning

The forgetting curve has been extensively explored by psychologists, edu...
research
05/14/2019

DAS3H: Modeling Student Learning and Forgetting for Optimally Scheduling Distributed Practice of Skills

Spaced repetition is among the most studied learning strategies in the c...
research
04/25/2014

Input anticipating critical reservoirs show power law forgetting of unexpected input events

Usually, reservoir computing shows an exponential memory decay. This pap...
research
07/26/2020

Deep Knowledge Tracing with Convolutions

Knowledge tracing (KT) has recently been an active research area of comp...
research
05/12/2021

Slower is Better: Revisiting the Forgetting Mechanism in LSTM for Slower Information Decay

Sequential information contains short- to long-range dependencies; howev...
research
10/16/2021

Tackling Multi-Answer Open-Domain Questions via a Recall-then-Verify Framework

Open domain questions are likely to be open-ended and ambiguous, leading...
