TorontoCL at CMCL 2021 Shared Task: RoBERTa with Multi-Stage Fine-Tuning for Eye-Tracking Prediction

04/15/2021
by Bai Li, et al.

Eye movement data during reading is a useful source of information for understanding language comprehension processes. In this paper, we describe our submission to the CMCL 2021 shared task on predicting human reading patterns. Our model uses RoBERTa with a regression layer to predict 5 eye-tracking features. We train the model in two stages: we first fine-tune on the Provo corpus (another eye-tracking dataset), then fine-tune on the task data. We compare different Transformer models and apply ensembling methods to improve performance. Our final submission achieves a MAE of 3.929, ranking 3rd out of the 13 teams that participated in the shared task.
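The architecture described above (a Transformer encoder with a regression head producing 5 eye-tracking features, fine-tuned in two stages with an MAE objective) can be sketched as follows. This is a minimal illustration, not the authors' code: it substitutes a small randomly initialized `nn.TransformerEncoder` for the pretrained RoBERTa encoder, and the `fine_tune` helper, dimensions, and hyperparameters are assumptions for the sake of a self-contained example.

```python
# Hedged sketch: per-token regression head over a Transformer encoder,
# predicting 5 eye-tracking features and trained with L1 (MAE) loss.
# A small stand-in encoder replaces the paper's pretrained RoBERTa.
import torch
import torch.nn as nn


class EyeTrackingRegressor(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, n_features=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Linear regression head: one value per eye-tracking feature.
        self.head = nn.Linear(d_model, n_features)

    def forward(self, token_ids):
        hidden = self.encoder(self.embed(token_ids))
        return self.head(hidden)  # shape: (batch, seq_len, n_features)


def fine_tune(model, batches, epochs=1, lr=1e-4):
    """One fine-tuning stage; in the two-stage setup this would be
    called first on Provo batches, then on the task-data batches."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()  # MAE, matching the shared-task metric
    for _ in range(epochs):
        for tokens, targets in batches:
            opt.zero_grad()
            loss = loss_fn(model(tokens), targets)
            loss.backward()
            opt.step()
```

In this sketch the same `fine_tune` routine is simply run twice with different data loaders, which is one straightforward way to realize the multi-stage fine-tuning the abstract describes; ensembling (also mentioned in the abstract) would then average the predictions of several such trained models.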


