Automatic learner summary assessment for reading comprehension

06/18/2019
by Menglin Xia, et al.

Automating the assessment of learner summaries provides a useful tool for evaluating learner reading comprehension. We present a summarization task for evaluating non-native reading comprehension and propose three novel approaches to automatically assessing learner summaries. We evaluate our models on two datasets we created and show that they outperform traditional approaches that rely on exact word match. Our best model produces quality assessments close to those of professional examiners.
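For readers unfamiliar with the baseline the abstract alludes to, the sketch below illustrates one common form of exact word-match scoring: a unigram-overlap measure in the spirit of ROUGE-1 precision. The function name and scoring choice are illustrative assumptions for this page, not the baseline implementation used in the paper.

```python
# A minimal sketch (not the authors' method) of an exact word-match baseline:
# score a learner summary by the fraction of its tokens that also occur in
# the source text.
from collections import Counter


def exact_word_match_score(summary: str, source_text: str) -> float:
    """Fraction of summary tokens that appear verbatim in the source text."""
    summary_tokens = summary.lower().split()
    source_counts = Counter(source_text.lower().split())
    if not summary_tokens:
        return 0.0
    matched = sum(1 for token in summary_tokens if source_counts[token] > 0)
    return matched / len(summary_tokens)


# Hypothetical usage: a higher score means more literal overlap with the text,
# which need not reflect genuine comprehension.
print(exact_word_match_score("the cat sat on the mat",
                             "a cat sat quietly on a mat"))
```

Such surface-overlap baselines reward copying from the source, which is one reason models that go beyond exact word match can align better with examiner judgments.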

Related research

04/30/2022  A Survey of Machine Narrative Reading Comprehension Assessments
As the body of research on machine narrative comprehension grows, there ...

11/28/2022  Automatically generating question-answer pairs for assessing basic reading comprehension in Swedish
This paper presents an evaluation of the quality of automatically genera...

12/30/2019  The Shmoop Corpus: A Dataset of Stories with Loosely Aligned Summaries
Understanding stories is a challenging reading comprehension problem for...

12/07/2020  Semantics Altering Modifications for Evaluating Comprehension in Machine Reading
Advances in NLP have yielded impressive results for the task of machine ...

04/14/2021  Mitigating the Effects of Reading Interruptions by Providing Reviews and Previews
As reading on mobile devices is becoming more ubiquitous, content is con...

04/04/2019  Frustratingly Poor Performance of Reading Comprehension Models on Non-adversarial Examples
When humans learn to perform a difficult task (say, reading comprehensio...

10/22/2021  A Framework for Learning Assessment through Multimodal Analysis of Reading Behaviour and Language Comprehension
Reading comprehension, which has been defined as gaining an understandin...