Webly Supervised Concept Expansion for General Purpose Vision Models

by Amita Kamath, et al.

General purpose vision (GPV) systems are models designed to solve a wide array of visual tasks without requiring architectural changes. Today, GPVs primarily learn both skills and concepts from large, fully supervised datasets. Scaling GPVs to tens of thousands of concepts by acquiring data to learn each concept for every skill quickly becomes prohibitive. This work presents an effective and inexpensive alternative: learn skills from fully supervised datasets, learn concepts from web image search results, and leverage a key characteristic of GPVs: the ability to transfer visual knowledge across skills. We use a dataset of 1M+ images spanning 10k+ visual concepts to demonstrate webly supervised concept expansion for two existing GPVs (GPV-1 and VL-T5) on 3 benchmarks: 5 COCO-based datasets (80 primary concepts), a newly curated series of 5 datasets based on the OpenImages and VisualGenome repositories (500 concepts), and the web-derived dataset (10k+ concepts). We also propose a new architecture, GPV-2, which supports a variety of tasks, from vision tasks like classification and localization, to vision+language tasks like QA and captioning, to more niche ones like human-object interaction recognition. GPV-2 benefits hugely from web data, outperforms GPV-1 and VL-T5 across these benchmarks, and performs well in a zero-shot setting at action and attribute recognition.
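The core recipe in the abstract, treating web image search results as weak supervision for new concepts, can be sketched in a few lines. The query templates and function names below are illustrative assumptions, not taken from the paper: the idea is simply to cross every target concept with a few search-query templates and label each returned image with the concept of the query that retrieved it.

```python
# Hypothetical sketch of webly supervised data collection: each target
# concept is crossed with search-query templates, and every image returned
# for a concept's query is treated as a noisy positive for that concept.
# Template strings and function names are illustrative, not from the paper.

def build_queries(concepts, templates):
    """Cross each concept with each query template to get (query, concept) pairs."""
    return [(t.format(concept), concept)
            for concept in concepts
            for t in templates]

def weakly_label(image_urls, concept):
    """Label every image returned for a concept's query as that concept."""
    return [{"image_url": url, "label": concept} for url in image_urls]

# Example: two concepts, two templates -> four search queries.
queries = build_queries(["zebra", "accordion"],
                        ["a photo of a {}", "{} close-up"])
examples = weakly_label(["http://example.com/img1.jpg"], "zebra")
```

In this setup the supervision is noisy by construction (search results can be off-topic), which is why the abstract pairs it with skills learned from fully supervised datasets and relies on the GPV transferring visual knowledge across skills.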
