Multi-Task Learning based Online Dialogic Instruction Detection with Pre-trained Language Models

07/15/2021
by   Yang Hao, et al.

In this work, we study computational approaches to detecting online dialogic instructions, which are widely used to help students understand learning materials and build effective study habits. This task is challenging because dialogic instructions vary widely in quality and pedagogical style. To address these challenges, we utilize pre-trained language models and propose a multi-task paradigm that sharpens the model's ability to distinguish instances of different classes by enlarging the margin between categories via a contrastive loss. Furthermore, we design a strategy to fully exploit misclassified examples during training. Extensive experiments on a real-world online educational dataset demonstrate that our approach achieves superior performance compared to representative baselines. To encourage reproducible results, we make our implementation publicly available at <https://github.com/AIED2021/multitask-dialogic-instruction>.
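The abstract's core idea, combining a standard classification objective with a contrastive term that enlarges the margin between categories, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the supervised contrastive formulation, the loss weight `alpha`, and the temperature are assumptions for demonstration.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean cross-entropy over a batch of raw logits (the classification task)."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def supervised_contrastive(embeddings, labels, temperature=0.1):
    """Pull same-class embeddings together and push different-class ones apart,
    which enlarges the margin between categories in embedding space."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)  # exclude self-similarity
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # average log-probability of same-class (positive) pairs per anchor
    per_anchor = np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return -per_anchor.mean()

def multi_task_loss(logits, embeddings, labels, alpha=0.5):
    """Joint objective: classification loss plus weighted contrastive loss."""
    return cross_entropy(logits, labels) + alpha * supervised_contrastive(embeddings, labels)
```

In this toy setting, embeddings that cluster by class yield a lower contrastive loss than mixed ones, which is the margin-enlarging effect the multi-task paradigm relies on.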


Related research

06/24/2022
DialogID: A Dialogic Instruction Dataset for Improving Teaching Effectiveness in Online Environments
Online dialogic instructions are a set of pedagogical instructions used ...

05/16/2020
Automatic Dialogic Instruction Detection for K-12 Online One-on-one Classes
Online one-on-one class is created for highly interactive and immersive ...

07/15/2021
Solving ESL Sentence Completion Questions via Pre-trained Neural Language Models
Sentence completion (SC) questions present a sentence with one or more b...

05/23/2023
Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding
Scientific literature understanding tasks have gained significant attent...

05/30/2021
Defending Pre-trained Language Models from Adversarial Word Substitutions Without Performance Sacrifice
Pre-trained contextualized language models (PrLMs) have led to strong pe...

05/17/2023
Evaluating Object Hallucination in Large Vision-Language Models
Inspired by the superior language abilities of large language models (LL...

07/21/2023
Making Pre-trained Language Models both Task-solvers and Self-calibrators
Pre-trained language models (PLMs) serve as backbones for various real-w...
