Trusting Language Models in Education

08/07/2023
by Jogi Suda Neto, et al.

Language models are widely used in education. Although modern deep learning models achieve strong performance on question-answering tasks, they sometimes make errors. To avoid misleading students with wrong answers, it is important to calibrate the confidence of these models, that is, their prediction probability. In our work, we propose using an XGBoost model on top of BERT to output corrected probabilities, with features based on the attention mechanism. Our hypothesis is that the level of uncertainty contained in the flow of attention is related to the quality of the model's response itself.
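The calibration idea described above can be sketched as a small post-hoc pipeline: summarize the attention distributions into per-example features, then train a boosted-tree classifier to predict answer correctness, whose predicted probability serves as the calibrated confidence. The sketch below uses synthetic features and scikit-learn's `GradientBoostingClassifier` as a stand-in for XGBoost; the feature names (mean/max attention entropy, raw answer probability) are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 200

# Hypothetical per-example features (assumptions, for illustration only):
# the QA model's raw answer probability and summary statistics of the
# entropy of its attention distributions across heads.
raw_prob = rng.uniform(0.3, 1.0, n)           # uncalibrated confidence
attn_entropy_mean = rng.uniform(0.0, 1.0, n)  # mean attention entropy
attn_entropy_max = rng.uniform(0.0, 1.5, n)   # max attention entropy
X = np.column_stack([raw_prob, attn_entropy_mean, attn_entropy_max])

# Synthetic correctness labels encoding the paper's hypothesis:
# confident answers with low attention entropy tend to be correct.
y = (raw_prob - 0.5 * attn_entropy_mean
     + rng.normal(0.0, 0.1, n) > 0.4).astype(int)

# Boosted trees (stand-in for XGBoost) re-map the attention features
# to a corrected probability of the answer being right.
clf = GradientBoostingClassifier(n_estimators=50, max_depth=2, random_state=0)
clf.fit(X, y)
calibrated = clf.predict_proba(X)[:, 1]  # calibrated confidence per example
print(calibrated[:3])
```

In a real setting, `X` would be built from BERT's attention weights (e.g. via `output_attentions=True` in Hugging Face Transformers) on held-out questions with known correct answers, and the classifier would be fit on that held-out set rather than the training data.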


