Multi-modal dialog for browsing large visual catalogs using exploration-exploitation paradigm in a joint embedding space

01/28/2019
by Arkabandhu Chowdhury, et al.

We present a multi-modal dialog system to assist online shoppers in visually browsing through large catalogs. Visual browsing differs from visual search in that it allows the user to explore the wide range of products in a catalog beyond the exact search matches. We focus on a slightly asymmetric version of the complete multi-modal dialog, in which the system understands both text and image queries but responds only with images. We formulate the problem of showing the k best images to a user, given the dialog context so far, as sampling from a Gaussian Mixture Model in a high-dimensional joint multi-modal embedding space that embeds both text and image queries. Our system remembers the context of the dialog and uses an exploration-exploitation paradigm to assist in visual browsing. We train and evaluate the system on a multi-modal dialog dataset that we generate from large catalog data. Our experiments are promising and show that the agent is capable of learning and can display relevant results with an average cosine similarity of 0.85 to the ground truth. Our preliminary human evaluation also corroborates that such a multi-modal dialog system for visual browsing is well received and is capable of engaging human users.
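To make the retrieval idea concrete, here is a minimal Python sketch (not the authors' implementation) of the core step the abstract describes: fit a Gaussian Mixture Model over the joint embeddings of the dialog context, sample k points, and return the nearest catalog images under cosine similarity. The function names (show_k_best, cosine_sim), the explore_scale knob, and the 128-dimensional embedding size are illustrative assumptions; the paper's actual embedding model and sampling procedure may differ.

```python
# Sketch of "show k best images" as GMM sampling in a joint embedding space.
# Assumes context_embeddings and catalog_embeddings already live in the same
# joint multi-modal space (the paper's embedding model is not shown here).
import numpy as np
from sklearn.mixture import GaussianMixture

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of a (n, d) and b (m, d).
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def show_k_best(context_embeddings, catalog_embeddings, k=5,
                n_components=3, explore_scale=1.0):
    """Sample k query points from a GMM fitted on the dialog context and
    retrieve one nearest catalog image per sample.

    explore_scale > 1 widens the component covariances (more exploration);
    explore_scale < 1 concentrates samples near the modes (exploitation).
    """
    gmm = GaussianMixture(
        n_components=min(n_components, len(context_embeddings)),
        covariance_type="diag")
    gmm.fit(context_embeddings)
    gmm.covariances_ *= explore_scale  # temper the spread of the samples
    samples, _ = gmm.sample(k)
    # For each sampled point, pick the most similar catalog image;
    # duplicates are collapsed, so fewer than k indices may come back.
    sims = cosine_sim(samples, catalog_embeddings)  # shape (k, catalog_size)
    return np.unique(sims.argmax(axis=1))

# Toy usage with random 128-d vectors standing in for the joint space.
rng = np.random.default_rng(0)
context = rng.normal(size=(10, 128))    # embedded text/image queries so far
catalog = rng.normal(size=(1000, 128))  # embedded catalog images
print(show_k_best(context, catalog, k=5, explore_scale=2.0))
```

In this reading, the exploration-exploitation trade-off reduces to how widely the sampler strays from the GMM modes: a large explore_scale surfaces products away from the dialog's current focus, while a small one stays close to what the user has already signaled.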

Related research

10/13/2019 · Granular Multimodal Attention Networks for Visual Dialog
Vision and language tasks have benefited from attention. There have been...

06/17/2023 · Query2GMM: Learning Representation with Gaussian Mixture Model for Reasoning over Knowledge Graphs
Logical query answering over Knowledge Graphs (KGs) is a fundamental yet...

10/26/2021 · ViDA-MAN: Visual Dialog with Digital Humans
We demonstrate ViDA-MAN, a digital-human agent for multi-modal interacti...

09/13/2023 · Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics
Multi-modal large language models (MLLMs) are trained based on large lan...

11/03/2020 · A spatial hue similarity measure for assessment of colourisation
Automatic colourisation of grey-scale images is an ill-posed multi-modal...

07/19/2023 · (Ab)using Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs
We demonstrate how images and sounds can be used for indirect prompt and...

09/30/2019 · A Dynamic Strategy Coach for Effective Negotiation
Negotiation is a complex activity involving strategic reasoning, persuas...