Efficient Zero-shot Visual Search via Target and Context-aware Transformer

11/24/2022
by Zhiwei Ding, et al.

Visual search is a ubiquitous challenge in natural vision, underlying daily tasks such as finding a friend in a crowd or searching for a car in a parking lot. Humans rely heavily on target-relevant features to perform goal-directed visual search. Meanwhile, context is critical for locating a target object in complex scenes, as it narrows down the search area and makes the search process more efficient. However, few works have combined both target and context information in computational models of visual search. Here we propose a zero-shot deep learning architecture, TCT (Target and Context-aware Transformer), that modulates self-attention in a Vision Transformer with target-relevant and context-relevant information to enable human-like zero-shot visual search performance. Target modulation is computed as patch-wise local relevance between the target and search images, whereas contextual modulation is applied in a global fashion. We evaluate TCT against competitive visual search models on three natural scene datasets of varying difficulty. TCT demonstrates human-like search efficiency and outperforms state-of-the-art models on challenging visual search tasks. Importantly, TCT generalizes across datasets with novel objects without retraining or fine-tuning. Furthermore, we introduce a new dataset to benchmark models on invariant visual search under incongruent contexts. TCT searches flexibly via target and context modulation, even under incongruent contexts.
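
To make the modulation scheme concrete, here is a minimal PyTorch sketch of a self-attention block biased by both signals. This is an illustrative reconstruction from the abstract alone, not the authors' implementation: the class name TCTAttention, the pooled target embedding, and the per-head context gate are all assumptions. Target modulation appears as a patch-wise relevance bias on the attention logits; contextual modulation appears as a global per-head gate.

```python
# Hypothetical sketch of target- and context-modulated self-attention for a
# ViT-style backbone; names and design details are illustrative, not the
# paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TCTAttention(nn.Module):
    """Self-attention biased by target relevance (local, patch-wise)
    and a context signal (global, per head)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Global context gate: one weight per head from pooled scene features.
        self.context_gate = nn.Linear(dim, num_heads)

    def forward(self, x: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # x: (B, N, dim) search-image patch tokens; target: (B, dim) pooled
        # target-image embedding (a simplification of patch-wise matching).
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4).unbind(0)  # each (B, heads, N, head_dim)

        attn = (q @ k.transpose(-2, -1)) * self.scale   # (B, heads, N, N)

        # Target modulation: cosine relevance of each search patch to the
        # target, added as a bias steering every query toward relevant keys.
        rel = F.cosine_similarity(x, target.unsqueeze(1), dim=-1)  # (B, N)
        attn = attn + rel[:, None, None, :]

        # Context modulation: a global per-head gate from the pooled scene
        # representation rescales the attention logits before softmax.
        gate = torch.sigmoid(self.context_gate(x.mean(dim=1)))     # (B, heads)
        attn = attn * gate[:, :, None, None]

        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

# Usage: batch of 2 search images with 16 patch tokens of width 64.
block = TCTAttention(dim=64, num_heads=4)
tokens = block(torch.randn(2, 16, 64), torch.randn(2, 64))  # (2, 16, 64)
```

A single pooled target vector keeps the sketch short; the paper's patch-wise formulation would instead compare target patches against search patches to obtain the local relevance map.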

Related research

07/18/2018 | Finding any Waldo: zero-shot invariant and efficient visual search
Searching for a target object in a cluttered scene constitutes a fundame...

04/24/2019 | Context-Aware Zero-Shot Learning for Object Recognition
Zero-Shot Learning (ZSL) aims at classifying unlabeled objects by levera...

11/29/2022 | Context-Aware Robust Fine-Tuning
Contrastive Language-Image Pre-trained (CLIP) models have zero-shot abil...

04/06/2021 | When Pigs Fly: Contextual Reasoning in Synthetic and Natural Scenes
Context is of fundamental importance to both human and machine vision – ...

03/27/2023 | Gazeformer: Scalable, Effective and Fast Prediction of Goal-Directed Human Attention
Predicting human gaze is important in Human-Computer Interaction (HCI). ...

06/05/2021 | Visual Search Asymmetry: Deep Nets and Humans Share Similar Inherent Biases
Visual search is a ubiquitous and often challenging daily task, exemplif...

11/16/2016 | Fast On-Line Kernel Density Estimation for Active Object Localization
A major goal of computer vision is to enable computers to interpret visu...
