Controllable Path of Destruction

05/29/2023
by Matthew Siper et al.

Path of Destruction (PoD) is a self-supervised method for learning iterative generators. The core idea is to produce a training set by destroying a set of artifacts, and for each destructive step create a training instance based on the corresponding repair action. A generator trained on this dataset can then generate new artifacts by repairing from arbitrary states. The PoD method is very data-efficient in terms of original training examples and well-suited to functional artifacts composed of categorical data, such as game levels and discrete 3D structures. In this paper, we extend the Path of Destruction method to allow designer control over aspects of the generated artifacts. Controllability is introduced by adding conditional inputs to the state-action pairs that make up the repair trajectories. We test the controllable PoD method in a 2D dungeon setting, as well as in the domain of small 3D Lego cars.
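The paper's actual pipeline is not reproduced here, but the core destroy-and-record idea can be sketched in a few lines. The following is a minimal illustration (all names hypothetical, assuming levels are grids of categorical tile ids): each destructive step overwrites one tile, and the training instance pairs the damaged state with the repair action that undoes that step.

```python
import random

def path_of_destruction(artifact, num_steps, empty=0, rng=None):
    """Build (state, repair-action) training pairs by destroying an artifact.

    artifact: list of lists of categorical tile ids (e.g. a game level).
    Each destructive step replaces one tile with `empty`; the recorded
    training instance is the damaged state plus the action that repairs it.
    """
    rng = rng or random.Random()
    state = [row[:] for row in artifact]
    cells = [(r, c) for r in range(len(state)) for c in range(len(state[0]))]
    rng.shuffle(cells)
    pairs = []
    for (r, c) in cells[:num_steps]:
        original = state[r][c]
        state[r][c] = empty  # destructive step
        # repair action: put `original` back at (r, c)
        pairs.append(([row[:] for row in state], (r, c, original)))
    # reverse so the trajectory reads as a repair: most-damaged state first
    pairs.reverse()
    return pairs
```

For the controllable variant described above, one would additionally attach a conditional input (for instance, a desired tile count or style label derived from the original artifact) to every pair, so the trained generator can be steered at generation time.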


