Billion-Scale Pretraining with Vision Transformers for Multi-Task Visual Representations

08/12/2021
by Josh Beal, et al.

Large-scale pretraining of visual representations has led to state-of-the-art performance on a range of benchmark computer vision tasks, yet the benefits of these techniques at extreme scale in complex production systems have been relatively unexplored. We consider the case of a popular visual discovery product, where these representations are trained with multi-task learning, from use-case specific visual understanding (e.g. skin tone classification) to general representation learning for all visual content (e.g. embeddings for retrieval). In this work, we describe how we (1) generate a dataset with over a billion images via large weakly-supervised pretraining to improve the performance of these visual representations, and (2) leverage Transformers to replace the traditional convolutional backbone, with insights into both system and performance improvements, especially at 1B+ image scale. To support this backbone model, we detail a systematic approach to deriving weakly-supervised image annotations from heterogeneous text signals, demonstrating the benefits of clustering techniques to handle the long-tail distribution of image labels. Through a comprehensive study of offline and online evaluation, we show that large-scale Transformer-based pretraining provides significant benefits to industry computer vision applications. The model is deployed in a production visual shopping system, with a 36% improvement in click-through volume. We conduct extensive experiments to better understand the empirical relationships between Transformer-based architectures, dataset scale, and the performance of production vision systems.
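To make the multi-task setup concrete, here is a minimal PyTorch sketch of a shared ViT backbone feeding several lightweight task heads. The task names, class counts, and the use of torchvision's `vit_b_16` are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a shared ViT backbone with per-task heads.
# Task names and class counts are made up for illustration.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

EMBED_DIM = 768  # width of the ViT-B/16 class-token embedding


class MultiTaskViT(nn.Module):
    def __init__(self, classes_per_task: dict[str, int]):
        super().__init__()
        self.backbone = vit_b_16(weights=None)
        self.backbone.heads = nn.Identity()  # expose the raw embedding
        # One linear head per task, all sharing the backbone features.
        self.heads = nn.ModuleDict(
            {task: nn.Linear(EMBED_DIM, n) for task, n in classes_per_task.items()}
        )

    def forward(self, images: torch.Tensor) -> dict[str, torch.Tensor]:
        features = self.backbone(images)  # (batch, EMBED_DIM) shared features
        return {task: head(features) for task, head in self.heads.items()}


model = MultiTaskViT({"skin_tone": 4, "weak_label": 20_000})
outputs = model(torch.randn(2, 3, 224, 224))  # dict of per-task logits
```

The clustering of long-tail labels can be sketched in a similar spirit: below, weakly-supervised text labels are collapsed into a tractable pseudo-class vocabulary by clustering their text embeddings, so that rare labels share a class with semantically similar ones. The random embeddings are a stand-in for whatever text encoder a real pipeline would use; this is not the paper's actual procedure.

```python
# Illustrative only: collapse a long-tail label space into cluster IDs.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
labels = [f"label_{i}" for i in range(10_000)]   # heterogeneous text signals
embeddings = rng.normal(size=(len(labels), 64))  # stand-in text embeddings

kmeans = KMeans(n_clusters=256, n_init=10, random_state=0).fit(embeddings)
label_to_class = {lbl: int(c) for lbl, c in zip(labels, kmeans.labels_)}
```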

Related research

04/27/2022 · Offline Visual Representation Learning for Embodied Navigation
How should we learn visual representations for embodied agents that must...

05/25/2022 · Primitive3D: 3D Object Dataset Synthesis from Randomly Assembled Primitives
Numerous advancements in deep learning can be attributed to the access t...

05/22/2023 · Efficient Large-Scale Vision Representation Learning
In this article, we present our approach to single-modality vision repre...

07/21/2022 · UFO: Unified Feature Optimization
This paper proposes a novel Unified Feature Optimization (UFO) paradigm ...

08/11/2022 · MILAN: Masked Image Pretraining on Language Assisted Representation
Self-attention based transformer models have been dominating many comput...

07/11/2023 · PIGEON: Predicting Image Geolocations
We introduce PIGEON, a multi-task end-to-end system for planet-scale ima...

06/10/2019 · UniDual: A Unified Model for Image and Video Understanding
Although a video is effectively a sequence of images, visual perception ...
