MMIU: Dataset for Visual Intent Understanding in Multimodal Assistants

10/13/2021
by Alkesh Patel, et al.

In a multimodal assistant, where vision is one of the input modalities, identifying user intent becomes a challenging task because visual input can influence the outcome. Current digital assistants take spoken input and try to determine the user intent from conversational or device context. As a result, a dataset that includes visual input (i.e., images or videos corresponding to questions targeted at multimodal assistant use cases) is not readily available. Research in visual question answering (VQA) and visual question generation (VQG) is a great step forward; however, those datasets do not capture the questions that a visually-abled person would ask a multimodal assistant. Moreover, many times the questions do not seek information from external knowledge. In this paper, we provide a new dataset, MMIU (MultiModal Intent Understanding), that contains questions and corresponding intents provided by human annotators while looking at images. We then use this dataset for the intent classification task in a multimodal digital assistant. We also experiment with various approaches for combining vision and language features, including the use of a multimodal transformer, to classify image-question pairs into 14 intents. We provide benchmark results and discuss the role of visual and text features for the intent classification task on our dataset.
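To make the feature-combination idea concrete, below is a minimal late-fusion sketch in PyTorch: pooled features from an image encoder and a question encoder are concatenated and fed to a 14-way intent classifier. This is an illustrative baseline, not the authors' exact architecture; the choice of ResNet-50 and BERT encoders, the feature dimensions, and the hyperparameters are all assumptions.

import torch
import torch.nn as nn
from torchvision.models import resnet50
from transformers import BertModel, BertTokenizer

class LateFusionIntentClassifier(nn.Module):
    """Concatenate pooled image and question features, then classify into 14 intents."""

    def __init__(self, num_intents=14):
        super().__init__()
        # Image encoder: ResNet-50 with its classification head removed.
        # (Load pretrained weights in practice; omitted here for brevity.)
        backbone = resnet50(weights=None)
        self.image_encoder = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 2048, 1, 1)
        # Question encoder: BERT; the pooled [CLS] vector is 768-dimensional.
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        self.classifier = nn.Sequential(
            nn.Linear(2048 + 768, 512),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(512, num_intents),
        )

    def forward(self, pixel_values, input_ids, attention_mask):
        img = self.image_encoder(pixel_values).flatten(1)  # (B, 2048)
        txt = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).pooler_output  # (B, 768)
        return self.classifier(torch.cat([img, txt], dim=-1))  # (B, num_intents) logits

# Usage sketch; the random tensor stands in for a preprocessed 224x224 image.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = LateFusionIntentClassifier()
enc = tokenizer("How many calories are in this dish?", return_tensors="pt")
image = torch.randn(1, 3, 224, 224)
logits = model(image, enc["input_ids"], enc["attention_mask"])
predicted_intent = logits.argmax(dim=-1)

A multimodal transformer, one of the approaches the paper evaluates, would instead attend jointly over image regions and question tokens rather than fusing a single pooled vector per modality.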

Related research

HaVQA: A Dataset for Visual Question Answering and Multimodal Research in Hausa Language (05/28/2023)
Edited Media Understanding: Reasoning About Implications of Manipulated Images (12/08/2020)
Question Embeddings Based on Shannon Entropy: Solving intent classification task in goal-oriented dialogue system (03/25/2019)
On Explaining Multimodal Hateful Meme Detection Models (04/04/2022)
Intentonomy: a Dataset and Study towards Human Intent Understanding (11/11/2020)
MIntRec: A New Dataset for Multimodal Intent Recognition (09/09/2022)
Integrating Text and Image: Determining Multimodal Document Intent in Instagram Posts (04/19/2019)