Embedding-based system for the Text part of CALL v3 shared task

08/07/2019
by Volodymyr Sokhatskyi, et al.

This paper presents a scoring system that achieved the top result on the text subset of the CALL v3 shared task. The system is based on text embeddings, namely NNLM and BERT. The distinguishing feature of the approach is that it does not rely on the reference grammar file for scoring. The model is compared against approaches that use the grammar file and shows that similar, and even higher, results can be achieved without a predefined set of correct answers. The paper describes the model itself as well as the data preparation process, which played a crucial role in training the model.
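Although the abstract does not spell out the classifier architecture, the general recipe of scoring a learner response with pre-trained sentence embeddings rather than a grammar lookup can be sketched roughly as follows. This is only an illustrative sketch: the TF-Hub NNLM module, the cosine-similarity scoring, and the example prompt/response pair are assumptions made here, not the authors' exact pipeline (the paper itself describes the real model and the data preparation behind it).

    # Minimal sketch: score a learner response with pre-trained sentence
    # embeddings instead of a reference grammar file. The NNLM module, the
    # cosine-similarity score, and the example inputs are assumptions, not
    # the exact system described in the paper.
    import numpy as np
    import tensorflow_hub as hub

    # Publicly available 128-dimensional English NNLM sentence embeddings.
    nnlm = hub.load("https://tfhub.dev/google/nnlm-en-dim128/2")

    def embed(texts):
        """Return L2-normalised NNLM embeddings for a list of strings."""
        vectors = nnlm(texts).numpy()
        return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

    def score(prompt: str, response: str) -> float:
        """Cosine similarity between prompt and response embeddings.
        Thresholding this score (the threshold is hypothetical) would stand
        in for the grammar-file lookup used by baseline systems."""
        p, r = embed([prompt, response])
        return float(np.dot(p, r))

    print(score("Ask for two tickets to Geneva.",
                "I would like two tickets to Geneva, please."))

In a shared-task setting the embeddings would more plausibly feed a classifier trained on accepted and rejected responses, but the similarity sketch above conveys the core idea: scoring without a predefined set of correct answers.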


