Internet Explorer: Targeted Representation Learning on the Open Web

02/27/2023
by   Alexander C. Li, et al.

Modern vision models typically rely on fine-tuning general-purpose models pre-trained on large, static datasets. These general-purpose models only capture the knowledge within their pre-training datasets, which are tiny, out-of-date snapshots of an Internet where billions of images are uploaded each day. We suggest an alternative: rather than hoping our static datasets transfer to our desired tasks after large-scale pre-training, we dynamically use the Internet to quickly train a small-scale model that does extremely well on the task at hand. Our approach, called Internet Explorer, explores the web in a self-supervised manner to progressively find relevant examples that improve performance on a desired target dataset. It cycles between searching for images on the Internet with text queries, self-supervised training on downloaded images, determining which images were useful, and prioritizing what to search for next. We evaluate Internet Explorer across several datasets and show that it outperforms or matches CLIP oracle performance using just a single GPU desktop that actively queries the Internet for 30–40 hours. Results, visualizations, and videos at https://internet-explorer-ssl.github.io/
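The four-step cycle described above (search, train, score, re-prioritize) can be sketched as a simple bandit-style loop. Everything below is a hypothetical stand-in, not the paper's implementation: `search_images` would really call an image search engine, and `relevance_to_target` would really be derived from self-supervised training signals on the downloaded images.

```python
import random

# Hypothetical concept vocabulary; the paper samples queries from a much
# larger concept set (e.g. WordNet nouns).
CONCEPTS = ["golden retriever", "tabby cat", "fire truck", "oak tree"]

def search_images(query, n=4):
    """Stand-in for a text-to-image search: returns n fake image IDs."""
    return [f"{query}_{i}" for i in range(n)]

def relevance_to_target(image):
    """Simulated reward for a downloaded image. A real system would score
    images by how much they help self-supervised training on the target
    dataset; here, anything animal-related is 'useful'."""
    return 1.0 if "cat" in image or "retriever" in image else 0.1

def internet_explorer(steps=20, seed=0):
    rng = random.Random(seed)
    # Estimated usefulness of each concept, updated from observed rewards.
    scores = {c: 1.0 for c in CONCEPTS}
    dataset = []
    for _ in range(steps):
        # 1) Sample the next query in proportion to estimated usefulness.
        query = rng.choices(CONCEPTS, weights=[scores[c] for c in CONCEPTS])[0]
        # 2) Search the "Internet" and download images for that query.
        images = search_images(query)
        dataset.extend(images)
        # 3) Measure how useful the downloaded images were.
        reward = sum(relevance_to_target(im) for im in images) / len(images)
        # 4) Update the query prior toward concepts that yielded useful images.
        scores[query] = 0.5 * scores[query] + 0.5 * reward
    return scores, dataset

scores, dataset = internet_explorer()
```

Over time the loop shifts its query distribution toward concepts whose images were useful, which is the core exploration idea; the real system replaces the reward stub with signals computed during self-supervised training.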


