TRACE: Transform Aggregate and Compose Visiolinguistic Representations for Image Search with Text Feedback

09/03/2020
by   Surgan Jandial, et al.

The ability to efficiently search for images over an indexed database is the cornerstone of several user experiences. Incorporating user feedback through multi-modal inputs provides a flexible, interactive way to serve fine-grained specificity in requirements. We focus specifically on text feedback in the form of descriptive natural language queries. Given a reference image and textual user feedback, our goal is to retrieve images that satisfy the constraints specified by both input modalities. The task is challenging, as it requires understanding the semantics of the text feedback and then applying the described changes to the visual representation. To address these challenges, we propose TRACE, a novel architecture that contains a hierarchical feature aggregation module to learn composite visio-linguistic representations. TRACE achieves state-of-the-art performance on three benchmark datasets, FashionIQ, Shoes, and Birds-to-Words, with an average improvement of at least 5.7 in the R@K metric. Our extensive experiments and ablation studies show that TRACE consistently outperforms existing techniques by significant margins, both quantitatively and qualitatively.
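The abstract does not spell out how the image and text features are combined. As a rough illustration of the task setup only, and not of TRACE's actual hierarchical aggregation module, the sketch below shows one common way to compose a reference-image embedding with a text-feedback embedding and rank an indexed gallery by cosine similarity. Every name, dimension, and the gated-fusion design here is a hypothetical assumption, not taken from the paper.

```python
# Hypothetical sketch of image search with text feedback (NOT the TRACE
# architecture): compose a reference-image feature with a text-feedback
# feature, then rank candidate images by similarity to the composed query.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ComposeQuery(nn.Module):
    def __init__(self, img_dim=512, txt_dim=512, joint_dim=512):
        super().__init__()
        # Project both modalities into a shared space (assumed design).
        self.img_proj = nn.Linear(img_dim, joint_dim)
        self.txt_proj = nn.Linear(txt_dim, joint_dim)
        # Simple gated fusion: the text decides, per dimension, how much
        # to modify the image representation.
        self.gate = nn.Sequential(
            nn.Linear(2 * joint_dim, joint_dim),
            nn.Sigmoid(),
        )

    def forward(self, img_feat, txt_feat):
        v = self.img_proj(img_feat)             # (B, joint_dim)
        t = self.txt_proj(txt_feat)             # (B, joint_dim)
        g = self.gate(torch.cat([v, t], -1))    # per-dimension mixing weights
        q = g * v + (1 - g) * t                 # composed visio-linguistic query
        return F.normalize(q, dim=-1)

# Retrieval: cosine similarity against an indexed gallery of image features.
model = ComposeQuery()
ref_img = torch.randn(1, 512)    # reference-image feature (e.g., a CNN output)
feedback = torch.randn(1, 512)   # text-feedback feature (e.g., an LSTM/BERT output)
gallery = F.normalize(torch.randn(1000, 512), dim=-1)  # indexed database

query = model(ref_img, feedback)
scores = query @ gallery.T                  # (1, 1000) cosine similarities
topk = scores.topk(k=10, dim=-1).indices    # candidates scored by R@10
```

In this framing, the R@K metric reported above simply measures how often the ground-truth target image appears among the top K gallery items ranked by these similarity scores.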


Related research

03/08/2022
Image Search with Text Feedback by Additive Attention Compositional Learning
Effective image retrieval with text feedback stands to impact a range of...

04/05/2023
Calibrating Cross-modal Feature for Text-Based Person Searching
We present a novel and effective method calibrating cross-modal features...

09/21/2020
Multi-Modal Reasoning Graph for Scene-Text Based Fine-Grained Image Classification and Retrieval
Scene text instances found in natural images carry explicit semantic inf...

07/05/2023
Multi-Modal Prototypes for Open-Set Semantic Segmentation
In semantic segmentation, adapting a visual system to novel object categ...

06/30/2020
Modality-Agnostic Attention Fusion for visual search with text feedback
Image retrieval with natural language feedback offers the promise of cat...

08/16/2021
ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration
Vision-and-language pretraining (VLP) aims to learn generic multimodal r...

08/21/2023
UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language
We introduce UbiPhysio, a milestone framework that delivers fine-grained...
