Look Before you Speak: Visually Contextualized Utterances

12/10/2020
by Paul Hongsuck Seo, et al.

While most conversational AI systems focus on textual dialogue only, conditioning utterances on visual context (when it is available) can lead to more realistic conversations. Unfortunately, a major challenge for incorporating visual context into conversational dialogue is the lack of large-scale labeled datasets. We provide a solution in the form of a new visually conditioned Future Utterance Prediction task. Our task involves predicting the next utterance in a video, using both visual frames and transcribed speech as context. By exploiting the large number of instructional videos online, we train a model to solve this task at scale, without the need for manual annotations. Leveraging recent advances in multimodal learning, our model consists of a novel co-attentional multimodal video transformer, and when trained on both textual and visual context, it outperforms baselines that use textual inputs alone. Further, we demonstrate that our model, trained for this task on unlabeled videos, achieves state-of-the-art performance on a number of downstream VideoQA benchmarks such as MSRVTT-QA, MSVD-QA, ActivityNet-QA and How2QA.
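The abstract names a co-attentional multimodal video transformer but does not spell out its internals. As a rough illustration of the core co-attention idea only (a hypothetical, numpy-only sketch, not the paper's architecture): each modality's tokens attend over the other modality's tokens, so transcript features are enriched with visual information and vice versa.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context, d):
    """Scaled dot-product attention: queries attend over the other modality."""
    scores = queries @ context.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ context

def co_attention_block(text_feats, video_feats):
    """One co-attentional layer with residual connections:
    text tokens attend to video frames, and video frames attend to text."""
    d = text_feats.shape[-1]
    text_out = text_feats + cross_attention(text_feats, video_feats, d)
    video_out = video_feats + cross_attention(video_feats, text_feats, d)
    return text_out, video_out

# Toy example: 8 transcript tokens and 16 video frames, both 64-dimensional.
rng = np.random.default_rng(0)
text = rng.standard_normal((8, 64))
video = rng.standard_normal((16, 64))
t_out, v_out = co_attention_block(text, video)
print(t_out.shape, v_out.shape)  # (8, 64) (16, 64)
```

A real implementation would add learned query/key/value projections, multiple heads, feed-forward layers, and layer normalization; the sketch keeps only the cross-modal attention pattern that distinguishes co-attention from single-stream self-attention.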


Related research

- 09/12/2018 · Game-Based Video-Context Dialogue
  Current dialogue systems focus more on textual and speech context knowle...

- 10/05/2018 · MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations
  Emotion recognition in conversations is a challenging Artificial Intelli...

- 01/20/2022 · End-to-end Generative Pretraining for Multimodal Video Captioning
  Recent video and language pretraining frameworks lack the ability to gen...

- 12/16/2022 · Werewolf Among Us: A Multimodal Dataset for Modeling Persuasion Behaviors in Social Deduction Games
  Persuasion modeling is a key building block for conversational agents. E...

- 10/19/2021 · A non-hierarchical attention network with modality dropout for textual response generation in multimodal dialogue systems
  Existing text- and image-based multimodal dialogue systems use the tradi...

- 10/30/2019 · Time to Take Emoji Seriously: They Vastly Improve Casual Conversational Models
  Graphical emoji are ubiquitous in modern-day online conversations. So is...

- 04/30/2022 · Opponent Modeling in Negotiation Dialogues by Related Data Adaptation
  Opponent modeling is the task of inferring another party's mental state ...
