
Automated essay scoring using efficient transformer-based language models

02/25/2021
by   Christopher M. Ormerod, et al.

Automated Essay Scoring (AES) is a cross-disciplinary effort involving Education, Linguistics, and Natural Language Processing (NLP). The efficacy of an NLP model in AES tests its ability to evaluate long-term dependencies and extrapolate meaning even when the text is poorly written. Large pretrained transformer-based language models have dominated the current state-of-the-art in many NLP tasks; however, the computational requirements of these models make them expensive to deploy in practice. The goal of this paper is to challenge the paradigm in NLP that bigger is better when it comes to AES. To do this, we evaluate the performance of several fine-tuned pretrained NLP models with a modest number of parameters on an AES dataset. By ensembling our models, we achieve excellent results with fewer parameters than most pretrained transformer-based models.
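As a rough illustration of the approach described above, the sketch below ensembles several modest-size transformer encoders by averaging their predicted score distributions. It is a minimal sketch assuming a Hugging Face Transformers setup; the model names, the number of score points, and the simple probability-averaging scheme are illustrative assumptions, not the authors' exact configuration, and the checkpoints are presumed to have already been fine-tuned on the AES data.

```python
# Minimal sketch: ensembling small fine-tuned transformers for essay scoring.
# Model names, NUM_SCORES, and the averaging scheme are illustrative assumptions,
# not the configuration reported in the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAMES = ["distilbert-base-uncased", "albert-base-v2"]  # modest-size encoders
NUM_SCORES = 4  # hypothetical number of score points on the rubric


def load(name):
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=NUM_SCORES)
    model.eval()
    return tok, model


ensemble = [load(name) for name in MODEL_NAMES]


def score_essay(text: str) -> int:
    """Average the class probabilities of each model and return the argmax score."""
    probs = []
    with torch.no_grad():
        for tok, model in ensemble:
            inputs = tok(text, truncation=True, max_length=512, return_tensors="pt")
            logits = model(**inputs).logits
            probs.append(torch.softmax(logits, dim=-1))
    return int(torch.stack(probs).mean(dim=0).argmax(dim=-1).item())


print(score_essay("The essay text to be scored goes here."))
```

Averaging probabilities rather than hard labels lets the ensemble smooth out disagreements between the individual models while keeping the total parameter count well below that of a single large transformer.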

09/06/2021

GPT-3 Models are Poor Few-Shot Learners in the Biomedical Domain

Deep neural language models have set new breakthroughs in many tasks of ...
02/23/2022

Short-answer scoring with ensembles of pretrained language models

We investigate the effectiveness of ensembles of pretrained transformer-...
06/01/2021

Comparing Test Sets with Item Response Theory

Recent years have seen numerous NLP datasets introduced to evaluate the ...
08/30/2021

The effects of data size on Automated Essay Scoring engines

We study the effects of data size and quality on the performance on Auto...
04/04/2020

Conversational Question Reformulation via Sequence-to-Sequence Architectures and Pretrained Language Models

This paper presents an empirical study of conversational question reform...
12/02/2019

EduBERT: Pretrained Deep Language Models for Learning Analytics

The use of large pretrained neural networks to create contextualized wor...