Towards creativity characterization of generative models via group-based subset scanning

04/01/2021
by   Celia Cintas, et al.

Deep generative models, such as Variational Autoencoders (VAEs), have been employed widely in computational creativity research. However, such models discourage out-of-distribution generation to avoid spurious sample generation, which limits their creativity. Incorporating research on human creativity into generative deep learning techniques therefore presents an opportunity to make their outputs more compelling and human-like. As generative models directed at creativity research emerge, there is an imperative need for machine-learning-based surrogate metrics that characterize the creative output of these models. We propose group-based subset scanning to quantify, detect, and characterize creative processes by detecting a subset of anomalous node activations in the hidden layers of generative models. Our experiments on original, typically decoded, and "creatively decoded" (Das et al., 2020) image datasets reveal that the distribution of the proposed subset scores is more useful for detecting creative processes in the activation space than in the pixel space. Further, we found that creative samples generate larger subsets of anomalies than normal or non-creative samples across datasets, and that the node activations highlighted during creative decoding differ from those responsible for normal sample generation.
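To make the idea of scanning for anomalous subsets of node activations concrete, the following is a minimal sketch of nonparametric subset scanning using a Berk-Jones-style scan statistic: each hidden node gets an empirical p-value relative to background (normal) activations, and the scan searches over significance thresholds for the subset of nodes with an excess of small p-values. Function names, thresholds, and the one-sided p-value convention are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def empirical_pvalues(background, sample):
    """One-sided empirical p-value per node: the fraction of background
    activations at least as large as the observed activation.
    background: (n_background, n_nodes); sample: (n_nodes,)."""
    n = background.shape[0]
    # +1 smoothing keeps p-values strictly inside (0, 1]
    return (1.0 + (background >= sample).sum(axis=0)) / (n + 1.0)

def berk_jones_scan(pvalues):
    """Scan over significance thresholds alpha (the sorted p-values) and
    return the maximum Berk-Jones score plus the node subset achieving it."""
    pvalues = np.asarray(pvalues)
    sorted_p = np.sort(pvalues)
    n = len(sorted_p)
    best_score, best_alpha = 0.0, None
    for k, alpha in enumerate(sorted_p, start=1):
        if alpha >= 1.0:
            continue
        obs = k / n  # observed fraction of p-values <= alpha
        if obs <= alpha:
            continue  # no excess of small p-values at this threshold
        # Berk-Jones: n * KL divergence between observed and expected rates
        score = obs * np.log(obs / alpha)
        if obs < 1.0:
            score += (1.0 - obs) * np.log((1.0 - obs) / (1.0 - alpha))
        score *= n
        if score > best_score:
            best_score, best_alpha = score, alpha
    if best_alpha is None:
        return 0.0, np.array([], dtype=int)
    # The anomalous subset: nodes whose p-values fall below the best alpha
    return best_score, np.where(pvalues <= best_alpha)[0]
```

In this framing, a "creative" sample would be expected to yield a larger high-scoring subset of nodes than a typically decoded sample, matching the paper's observation that creative samples produce larger anomalous subsets in activation space.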


