SketchTransfer: A Challenging New Task for Exploring Detail-Invariance and the Abstractions Learned by Deep Networks

12/25/2019
by Alex Lamb, et al.

Deep networks have achieved excellent results in perceptual tasks, yet their ability to generalize to variations not seen during training has come under increasing scrutiny. In this work we focus on their ability to remain invariant to the presence or absence of details. For example, humans can watch cartoons, which lack many visual details, without being explicitly trained to do so. As another example, 3D rendering software is a relatively recent development, yet people understand such rendered scenes even though they are missing details (consider a film like Toy Story). The failure of machine learning algorithms to do this indicates a significant gap in generalization between human abilities and those of deep networks. To make the detail-invariance problem easier to study concretely, we propose a dataset and a corresponding task, SketchTransfer, and we show that state-of-the-art domain transfer algorithms still struggle with it. The state-of-the-art technique, which achieves over 95% accuracy on MNIST→SVHN transfer, achieves only 59% accuracy on SketchTransfer; this is much better than random guessing (11% accuracy) but falls well short of the 87% accuracy of a classifier trained directly on labeled sketches. This indicates that the task is approachable with today's best methods but has substantial room for improvement.
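As a side note on the quoted baselines, the 11% chance accuracy is consistent with uniform guessing over nine classes (1/9 ≈ 11.1%). Below is a minimal sketch, not the authors' code, of how transfer accuracy on a labeled sketch test set could be measured; the nine-class assumption, the stand-in data, and the model interface are all hypothetical, and real use would load the SketchTransfer test split instead.

```python
# Minimal sketch of evaluating transfer accuracy on a labeled sketch test set.
# Not the authors' implementation; data and model interface are stand-ins.
import numpy as np

NUM_CLASSES = 9  # assumption: 1/9 ~= 11% matches the quoted chance baseline


def evaluate_transfer_accuracy(predict_fn, images, labels):
    """Return the fraction of sketch test images classified correctly.

    predict_fn: callable mapping a batch of images to predicted class ids.
    """
    preds = predict_fn(images)
    return float(np.mean(preds == labels))


if __name__ == "__main__":
    # Stand-in data so the sketch runs end to end; a real evaluation would
    # load the SketchTransfer sketch test split here instead.
    rng = np.random.default_rng(0)
    images = rng.random((1000, 32, 32, 3))           # sketch-like inputs
    labels = rng.integers(0, NUM_CLASSES, size=1000)

    # A uniform random guesser should score about 1/NUM_CLASSES ~= 11%.
    random_guesser = lambda x: rng.integers(0, NUM_CLASSES, size=len(x))
    print("chance accuracy:", evaluate_transfer_accuracy(random_guesser, images, labels))
```

The same `evaluate_transfer_accuracy` call, applied to a classifier trained only on real images, would yield the transfer accuracy the abstract reports (59% for the best method versus 87% for training directly on labeled sketches).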


