Multimodal Lecture Presentations Dataset: Understanding Multimodality in Educational Slides

08/17/2022
by Dong Won Lee, et al.

Lecture slide presentations, sequences of pages that contain text and figures accompanied by speech, are constructed and presented carefully in order to optimally transfer knowledge to students. Previous studies in multimedia and psychology attribute the effectiveness of lecture presentations to their multimodal nature. As a step toward developing AI to aid in student learning as intelligent teacher assistants, we introduce the Multimodal Lecture Presentations dataset as a large-scale benchmark testing the capabilities of machine learning models in multimodal understanding of educational content. Our dataset contains aligned slides and spoken language covering 180+ hours of video and 9,000+ slides from 10 lecturers in various subjects (e.g., computer science, dentistry, biology). We introduce two research tasks designed as stepping stones toward AI agents that can explain (by automatically captioning a lecture presentation) and illustrate (by synthesizing visual figures to accompany spoken explanations) educational content. We provide manual annotations to help implement these two research tasks and evaluate state-of-the-art models on them. Comparing baselines and human student performance, we find that current models struggle with (1) weak crossmodal alignment between slides and spoken text, (2) learning novel visual mediums, (3) technical language, and (4) long-range sequences. Towards addressing these challenges, we also introduce PolyViLT, a multimodal transformer trained with a multi-instance learning loss that is more effective than current approaches. We conclude by shedding light on the challenges and opportunities in multimodal understanding of educational presentations.
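The abstract describes PolyViLT's training objective only at a high level. As a concrete illustration, below is a minimal PyTorch sketch of a multiple-instance contrastive (MIL-NCE-style) loss for aligning slide and spoken-language embeddings. The function name mil_nce_loss, the bag_ids grouping, and the temperature value are illustrative assumptions for this sketch, not the paper's actual implementation.

import torch
import torch.nn.functional as F

def mil_nce_loss(slide_emb, text_emb, bag_ids, temperature=0.07):
    # slide_emb: (N, D) embeddings of slide figures
    # text_emb:  (N, D) embeddings of spoken-language segments
    # bag_ids:   (N,) group ids; pairs sharing an id are treated as
    #            positive candidates (a multi-instance "bag")
    # Normalize so dot products are cosine similarities.
    slide_emb = F.normalize(slide_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    sim = slide_emb @ text_emb.t() / temperature          # (N, N) similarity matrix
    # Pairs that share a bag id (e.g., segments spoken over the same slide)
    # all count as positives; everything else is a negative.
    pos_mask = bag_ids.unsqueeze(0) == bag_ids.unsqueeze(1)
    # -log( sum over positives of exp(sim) / sum over all of exp(sim) )
    pos = torch.logsumexp(sim.masked_fill(~pos_mask, float("-inf")), dim=1)
    den = torch.logsumexp(sim, dim=1)
    return (den - pos).mean()

# Hypothetical usage: two spoken segments aligned to each of four slides.
slides = torch.randn(8, 256)
speech = torch.randn(8, 256)
bags = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
loss = mil_nce_loss(slides, speech, bags)

Grouping all spoken segments that accompany a slide into one bag means the loss only requires that some segment in the bag match the slide, which is one way to cope with the weak crossmodal alignment this benchmark exposes.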


Related research

How2: A Large-scale Dataset for Multimodal Language Understanding (11/01/2018)
In this paper, we introduce How2, a multimodal collection of instruction...

Multimodal Propaganda Processing (02/17/2023)
Propaganda campaigns have long been used to influence public opinion via...

MUTLA: A Large-Scale Dataset for Multimodal Teaching and Learning Analytics (10/05/2019)
Automatic analysis of teacher and student interactions could be very imp...

Multimodal Speech Recognition for Language-Guided Embodied Agents (02/27/2023)
Benchmarks for language-guided embodied agents typically assume text-bas...

UniDoc: A Universal Large Multimodal Model for Simultaneous Text Detection, Recognition, Spotting and Understanding (08/19/2023)
In the era of Large Language Models (LLMs), tremendous strides have been...

The BEA 2023 Shared Task on Generating AI Teacher Responses in Educational Dialogues (06/12/2023)
This paper describes the results of the first shared task on the generat...
