Hangul Fonts Dataset: a Hierarchical and Compositional Dataset for Interrogating Learned Representations

05/23/2019
by Jesse A. Livezey, et al.

Interpretable representations of data are useful for testing a hypothesis or for distinguishing between multiple potential hypotheses about the data. In contrast, applied machine learning, and specifically deep learning (DL), is often used in contexts where performance is valued over interpretability. Indeed, deep networks (DNs) are often treated as "black boxes", and it is not well understood what and how they learn from a given dataset. This lack of understanding seriously hinders the adoption of DNs as data analysis tools in science and poses numerous research questions. One problem is that current deep learning research datasets either have very little hierarchical structure or are too complex for their structure to be analyzed, which impedes precise predictions about hierarchical representations. To address this gap, we present a benchmark dataset with known hierarchical and compositional structure, along with a set of methods for performing hypothesis-driven data analysis using DNs. The Hangul Fonts Dataset comprises 35 fonts, each containing 11,172 written syllable blocks formed from 19 initial consonants, 21 medial vowels, and 28 final consonants. The rules for combining and modifying individual Hangul characters into blocks can be encoded, with translation, scaling, and style variation that depend on the precise block content, as well as naturalistic variation across fonts. The Hangul Fonts Dataset thus provides an intermediate-complexity dataset with well-defined hierarchical features for interrogating learned representations. We first present a summary of the structure of the dataset. Then, using a set of unsupervised and supervised methods, we find that deep network representations contain structure related to the geometrical hierarchy of the characters. Our results lay the foundation for a better understanding of what deep networks learn from complex, structured datasets.
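As a concrete illustration of this compositional structure, the 11,172 block count factors exactly as 19 × 21 × 28, and the standard Unicode Hangul syllable arithmetic maps each (initial, medial, final) index triple to a single block code point, where the 28 final slots include an "empty final". The following Python sketch illustrates that arithmetic; it is an assumed example, not code released with the paper.

```python
# Sketch of the initial/medial/final decomposition behind the 11,172 Hangul
# syllable blocks (19 initial x 21 medial x 28 final, final index 0 = none).
# This follows the standard Unicode Hangul composition rule and is an
# illustration of the dataset's compositional labels, not the authors' code.

HANGUL_BASE = 0xAC00   # code point of the first syllable block (가)
NUM_MEDIAL = 21
NUM_FINAL = 28         # 27 final consonants + 1 "no final" slot

def compose(initial: int, medial: int, final: int) -> str:
    """Map (initial, medial, final) indices to a syllable block character."""
    assert 0 <= initial < 19 and 0 <= medial < NUM_MEDIAL and 0 <= final < NUM_FINAL
    return chr(HANGUL_BASE + (initial * NUM_MEDIAL + medial) * NUM_FINAL + final)

def decompose(block: str) -> tuple:
    """Recover the (initial, medial, final) indices from a syllable block."""
    offset = ord(block) - HANGUL_BASE
    initial, rest = divmod(offset, NUM_MEDIAL * NUM_FINAL)
    medial, final = divmod(rest, NUM_FINAL)
    return initial, medial, final

# Example: 한 = initial ㅎ (18), medial ㅏ (0), final ㄴ (4)
assert compose(18, 0, 4) == "한"
assert decompose("한") == (18, 0, 4)
print(19 * 21 * 28)  # 11172 possible blocks
```

Under these assumptions, enumerating all (initial, medial, final) triples reproduces exactly the 11,172 blocks per font described above, so each rendered glyph can be labeled by its font identity together with these three indices.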

