Drawing out of Distribution with Neuro-Symbolic Generative Models

06/03/2022

by Yichao Liang, et al.

Learning general-purpose representations from perceptual inputs is a hallmark of human intelligence. For example, people can write out numbers or characters, or even draw doodles, by characterising these tasks as different instantiations of the same generic underlying process – compositional arrangements of different forms of pen strokes. Crucially, learning to do one task, say writing, implies reasonable competence at another, say drawing, on account of this shared process. We present Drawing out of Distribution (DooD), a neuro-symbolic generative model of stroke-based drawing that can learn such general-purpose representations. In contrast to prior work, DooD operates directly on images, requires no supervision or expensive test-time inference, and performs unsupervised amortised inference with a symbolic stroke model that better enables both interpretability and generalisation. We evaluate DooD on its ability to generalise across both data and tasks. We first perform zero-shot transfer from one dataset (e.g. MNIST) to another (e.g. Quickdraw), across five different datasets, and show that DooD clearly outperforms different baselines. An analysis of the learnt representations further highlights the benefits of adopting a symbolic stroke model. We then adopt a subset of the Omniglot challenge tasks, and evaluate DooD's ability to generate new exemplars (both unconditionally and conditionally) and to perform one-shot classification, showing that it matches the state of the art. Taken together, we demonstrate that DooD does indeed capture general-purpose representations across both data and tasks, and takes a further step towards building general and robust concept-learning systems.
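To make the "compositional arrangements of pen strokes" idea concrete, the following is a minimal, self-contained sketch (not the paper's implementation) of a symbolic stroke representation: each stroke is a quadratic Bézier curve given by three control points, and a glyph is simply a list of such strokes rasterised onto a shared canvas. The function name `render_strokes` and the stroke parameterisation are illustrative assumptions, not taken from DooD.

```python
import numpy as np

def render_strokes(strokes, size=28, samples=100):
    """Rasterise a list of quadratic Bezier strokes onto a size x size canvas.

    Each stroke is a triple (p0, p1, p2) of 2D control points in [0, 1]^2.
    This is a toy illustration of a symbolic stroke model, not DooD itself.
    """
    canvas = np.zeros((size, size))
    t = np.linspace(0.0, 1.0, samples)[:, None]  # curve parameter, shape (samples, 1)
    for p0, p1, p2 in strokes:
        # Quadratic Bezier: B(t) = (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2
        pts = (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2
        # Map curve points to pixel indices and ink them.
        ij = np.clip((pts * (size - 1)).astype(int), 0, size - 1)
        canvas[ij[:, 1], ij[:, 0]] = 1.0
    return canvas

# Two strokes composed into one glyph: a vertical bar plus a diagonal flick.
glyph = [
    (np.array([0.5, 0.1]), np.array([0.5, 0.5]), np.array([0.5, 0.9])),
    (np.array([0.5, 0.5]), np.array([0.7, 0.6]), np.array([0.9, 0.9])),
]
img = render_strokes(glyph)
```

Because the representation is a list of discrete stroke symbols rather than raw pixels, the same stroke vocabulary can, in principle, be recombined to render characters from one dataset (e.g. MNIST digits) or doodles from another (e.g. Quickdraw) — the kind of cross-data generalisation the abstract describes.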


