- Multi-Modal Reasoning Graph for Scene-Text Based Fine-Grained Image Classification and Retrieval
  Scene text instances found in natural images carry explicit semantic inf...
- Modality-Agnostic Attention Fusion for visual search with text feedback
  Image retrieval with natural language feedback offers the promise of cat...
- Saliency-Guided Attention Network for Image-Sentence Matching
  This paper studies the task of matching image and sentence, where learni...
- Telling the What while Pointing the Where: Fine-grained Mouse Trace and Language Supervision for Improved Image Retrieval
  Existing image retrieval systems use text queries to provide a natural a...
- Aligning Visual Regions and Textual Concepts: Learning Fine-Grained Image Representations for Image Captioning
  In image-grounded text generation, fine-grained representations of the i...
- Personalized Multimodal Feedback Generation in Education
  The automatic evaluation for school assignments is an important applicat...
- What Looks Good with my Sofa: Multimodal Search Engine for Interior Design
  In this paper, we propose a multi-modal search engine for interior desig...
TRACE: Transform Aggregate and Compose Visiolinguistic Representations for Image Search with Text Feedback
The ability to efficiently search for images over an indexed database is the cornerstone of several user experiences. Incorporating user feedback through multi-modal inputs provides flexible, interactive ways to serve fine-grained specificity in requirements. We focus specifically on text feedback, given as descriptive natural language queries. Given a reference image and textual user feedback, our goal is to retrieve images that satisfy the constraints specified by both input modalities. The task is challenging, as it requires understanding the semantics of the text feedback and then applying the requested changes to the visual representation. To address these challenges, we propose TRACE, a novel architecture that contains a hierarchical feature aggregation module for learning composite visio-linguistic representations. TRACE achieves state-of-the-art performance on three benchmark datasets, FashionIQ, Shoes, and Birds-to-Words, with an average improvement of at least 5.7 in the R@K metric. Our extensive experiments and ablation studies show that TRACE consistently outperforms existing techniques by significant margins, both quantitatively and qualitatively.
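To make the setup concrete, here is a minimal sketch of the compose-then-retrieve pipeline the abstract describes: a reference-image embedding and a text-feedback embedding are fused into a single query vector, and gallery images are ranked against it by cosine similarity. The gated-residual fusion below is a generic stand-in, not TRACE's hierarchical aggregation module, and every module name and dimension here is an illustrative assumption.

```python
# Minimal sketch of composed image+text retrieval. NOT the exact TRACE
# architecture: the gated-residual fusion, names, and dimensions are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ComposeModule(nn.Module):
    """Fuses a reference-image embedding with a text-feedback embedding
    into a single query vector for retrieval."""
    def __init__(self, dim=512):
        super().__init__()
        # Gate decides how much of the original image signal to keep.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        # Residual branch produces the text-driven modification itself.
        self.residual = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, img_feat, txt_feat):
        joint = torch.cat([img_feat, txt_feat], dim=-1)
        # Gated residual: preserve what the text does not change,
        # modify what it does.
        composed = self.gate(joint) * img_feat + self.residual(joint)
        return F.normalize(composed, dim=-1)

def retrieve(composed_query, gallery_feats, k=10):
    """Rank gallery images by cosine similarity to the composed query."""
    gallery = F.normalize(gallery_feats, dim=-1)
    scores = composed_query @ gallery.t()   # (B, N) similarity matrix
    return scores.topk(k, dim=-1).indices   # indices of the top-k images

if __name__ == "__main__":
    # Random stand-ins for image/text encoder outputs.
    compose = ComposeModule(dim=512)
    img = torch.randn(4, 512)        # reference-image embeddings
    txt = torch.randn(4, 512)        # text-feedback embeddings
    gallery = torch.randn(1000, 512) # indexed database embeddings
    query = compose(img, txt)
    print(retrieve(query, gallery, k=5).shape)  # torch.Size([4, 5])
```

TRACE itself replaces this simple fusion with its hierarchical feature aggregation module, but the retrieval interface, composing a query and ranking the gallery against it, stays the same.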