Quantifying the effect of representations on task complexity

12/19/2019
by Julian Zilly, et al.

We examine the influence of input data representations on learning complexity. We posit that each model implicitly uses a candidate distribution for the unexplained variations in the data: its noise model. If this model distribution is not well aligned with the true distribution, even relevant variations are treated as noise. Crucially, however, the alignment between model and true distribution can be changed, albeit implicitly, by changing the data representation. "Better" representations align the model more closely with the true distribution, making it easier to approximate the input-output relationship in the data without discarding useful variations. To quantify this alignment effect of data representations on the difficulty of a learning task, we use an existing task complexity score and show its connection to the representation-dependent information coding length of the input. Empirically, we extract the necessary statistics from a linear regression approximation and show that they suffice to predict the relative learning performance of different data representations and neural network types obtained from an extensive neural network architecture search. We conclude that, to ensure better learning outcomes, representations may need to be tailored to both task and model, so that the model's implicit distribution aligns with that of the task.
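To make the alignment idea concrete, here is a minimal sketch of the kind of comparison the abstract describes: the same task encoded in two different representations, each scored by the unexplained variation left after a linear regression fit. The Gaussian coding-length proxy below is an illustrative stand-in, not the paper's actual complexity score, and the synthetic data and feature maps are assumptions chosen purely for demonstration.

```python
# Sketch: score two representations of the same task by the residual
# "coding length" left after a linear fit. A representation aligned with
# the generative structure leaves less variation unexplained, so it
# receives a lower score. This Gaussian proxy is NOT the paper's metric.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic task: targets depend nonlinearly on latent factors z.
z = rng.normal(size=(1000, 5))
y = np.sin(z[:, 0]) + 0.5 * z[:, 1] ** 2 + 0.1 * rng.normal(size=1000)

# Two representations of the same inputs:
#   rep_a: a random linear mixing of the latents (poorly aligned)
#   rep_b: features matching the generative structure (well aligned)
rep_a = z @ rng.normal(size=(5, 5))
rep_b = np.column_stack([np.sin(z[:, 0]), z[:, 1] ** 2, z[:, 2:]])

def gaussian_coding_length(X, y):
    """Bits/sample needed to encode the residuals of a linear model,
    assuming Gaussian noise: 0.5 * log2(2*pi*e*var)."""
    resid = y - LinearRegression().fit(X, y).predict(X)
    var = resid.var() + 1e-12  # guard against log(0)
    return 0.5 * np.log2(2 * np.pi * np.e * var)

for name, X in [("random mix", rep_a), ("aligned features", rep_b)]:
    print(f"{name}: {gaussian_coding_length(X, y):.2f} bits/sample")
```

Under these assumptions, the aligned representation yields a markedly lower score: the linear noise model absorbs only the true observation noise, whereas with the random mixing it must also absorb the nonlinear structure it cannot express.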

Related research

Complexity of Representations in Deep Learning (09/01/2022)
Deep neural networks use multiple layers of functions to map an object r...

Gradient Alignment in Deep Neural Networks (06/16/2020)
One cornerstone of interpretable deep learning is the high degree of vis...

Neural Network-based Word Alignment through Score Aggregation (06/30/2016)
We present a simple neural network for word alignment that builds source...

Neural Latent Aligner: Cross-trial Alignment for Learning Representations of Complex, Naturalistic Neural Data (08/12/2023)
Understanding the neural implementation of complex human behaviors is on...

Convolutional Dynamic Alignment Networks for Interpretable Classifications (03/31/2021)
We introduce a new family of neural network models called Convolutional ...

Alignment with human representations supports robust few-shot learning (01/27/2023)
Should we care whether AI systems have representations of the world that...

Revealing the Invisible with Model and Data Shrinking for Composite-database Micro-expression Recognition (06/17/2020)
Composite-database micro-expression recognition is attracting increasing...
