Improving Fractal Pre-training

10/06/2021
by Connor Anderson et al.

The deep neural networks used in modern computer vision systems require enormous image datasets to train them. These carefully-curated datasets typically contain a million or more images spanning a thousand or more distinct categories. Creating and curating such a dataset is a monumental undertaking, demanding extensive effort and labelling expense and requiring careful navigation of technical and social issues such as label accuracy, copyright ownership, and content bias. What if we could harness the power of large image datasets with few or none of these problems? This paper extends the recent work of Kataoka et al. (2020), proposing an improved pre-training dataset based on dynamically-generated fractal images. Challenging issues with large-scale image datasets become points of elegance for fractal pre-training: perfect label accuracy at zero cost; no need to store or transmit large image archives; no privacy concerns, demographic bias, or inappropriate content, since no humans are pictured; a limitless supply and diversity of images; and images that are free and open-source. Perhaps surprisingly, avoiding these difficulties imposes only a small performance penalty. Leveraging a newly-proposed pre-training task, multi-instance prediction, our experiments demonstrate that fine-tuning a network pre-trained using fractals attains 92.7-98.1% of the accuracy of an ImageNet pre-trained network.
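For readers unfamiliar with how such images can be generated on the fly, fractals in this line of work (following Kataoka et al.'s FractalDB) are typically rendered from randomly sampled iterated function systems (IFS) via the chaos game. The sketch below is a minimal illustration of that idea, not the authors' implementation: the function names (random_ifs, chaos_game) and parameter choices (contraction factor, point count, image size) are assumptions made for the example.

```python
# Minimal sketch: render a fractal image from a random affine IFS via the
# chaos game. Illustrative only; details differ from the paper's pipeline.
import numpy as np

def random_ifs(n_maps=3, rng=None):
    """Sample a small iterated function system of contractive affine maps."""
    rng = np.random.default_rng() if rng is None else rng
    maps = []
    for _ in range(n_maps):
        A = rng.uniform(-1.0, 1.0, size=(2, 2))
        # Rescale the linear part so each map is contractive (spectral norm < 1),
        # which keeps the chaos-game orbit bounded.
        A *= 0.8 / max(np.linalg.norm(A, 2), 1e-6)
        b = rng.uniform(-1.0, 1.0, size=2)
        maps.append((A, b))
    return maps

def chaos_game(maps, n_points=100_000, size=256, rng=None):
    """Render the IFS attractor into a binary image by iterating random maps."""
    rng = np.random.default_rng() if rng is None else rng
    img = np.zeros((size, size), dtype=np.uint8)
    x = np.zeros(2)
    pts = []
    for i in range(n_points):
        A, b = maps[rng.integers(len(maps))]  # pick one map uniformly at random
        x = A @ x + b
        if i > 20:  # discard burn-in iterations before the orbit settles
            pts.append(x.copy())
    pts = np.array(pts)
    # Normalize attractor coordinates into pixel space and rasterize.
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    ij = ((pts - lo) / np.maximum(hi - lo, 1e-6) * (size - 1)).astype(int)
    img[ij[:, 1], ij[:, 0]] = 255
    return img

image = chaos_game(random_ifs())  # a fresh fractal image, no storage required
```

Because every image is generated from a handful of sampled parameters, the "dataset" reduces to a random seed and a sampling procedure, which is what makes the zero-cost labels and unlimited supply described in the abstract possible.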


