Neural Coarse-Graining: Extracting slowly-varying latent degrees of freedom with neural networks

09/01/2016
by Nicholas Guttenberg, et al.

We present a loss function for neural networks that encodes a notion of trivial versus non-trivial prediction, so that the network jointly determines its own prediction goals and learns to satisfy them. This permits the network to focus on solving the subsets of a problem most amenable to its abilities, while discarding 'distracting' elements that interfere with its learning. To do this, the network first transforms the raw data into a higher-level categorical representation, and then trains a predictor from that new time series to its future. To prevent the trivial solution of mapping the signal to zero, we introduce a measure of non-triviality by contrasting the prediction error of the learned model with that of a naive model based on the overall signal statistics. The transform can thus learn to discard uninformative and unpredictable components of the signal in favor of features that are both highly predictive and highly predictable. This yields a coarse-grained model of the time-series dynamics that focuses on predicting the slowly varying latent parameters controlling the statistics of the time series, rather than predicting the fast details directly. The result is a semi-supervised algorithm capable of extracting latent parameters, segmenting sections of a time series with differing statistics, and building a higher-level representation of the underlying dynamics from unlabeled data.
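The core quantity in the abstract is the non-triviality measure: the prediction error of a learned model on the transformed signal, contrasted with that of a naive model which knows only the overall signal statistics. Below is a minimal NumPy sketch of that ratio on a toy time series, using a sliding-window mean as a stand-in for the learned coarse-graining transform and a persistence predictor as a stand-in for the learned model. All function names and modeling choices here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time series: a slowly varying latent (a sine) buried in fast noise.
T = 500
latent = np.sin(np.linspace(0, 4 * np.pi, T))     # slow latent parameter
signal = latent + 0.5 * rng.standard_normal(T)    # fast, noisy observations

def encode(x, window=10):
    """Hypothetical coarse-graining transform: a sliding-window mean,
    standing in for the learned higher-level representation."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def nontriviality_ratio(z):
    """Contrast the prediction error of a simple model (persistence:
    z[t+1] ~ z[t]) with a naive model that knows only the overall
    signal statistics (its mean). Ratios well below 1 indicate the
    signal is non-trivially predictable."""
    model_err = np.mean((z[1:] - z[:-1]) ** 2)    # learned-model stand-in
    naive_err = np.mean((z - z.mean()) ** 2)      # variance baseline
    return model_err / naive_err

raw_ratio = nontriviality_ratio(signal)
coarse_ratio = nontriviality_ratio(encode(signal))
print(raw_ratio, coarse_ratio)
```

In this toy setup the coarse-grained signal scores a much lower ratio than the raw one: averaging suppresses the unpredictable fast noise while preserving the slow latent, which is exactly the kind of component the proposed loss is meant to favor.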


Related research

02/09/2021
Meta-Learning for Koopman Spectral Analysis with Short Time-series
Koopman spectral analysis has attracted attention for nonlinear dynamica...

05/05/2015
Autoencoding Time Series for Visualisation
We present an algorithm for the visualisation of time series. To that en...

02/16/2018
Pattern Localization in Time Series through Signal-To-Model Alignment in Latent Space
In this paper, we study the problem of locating a predefined sequence of...

07/08/2020
Accuracy of neural networks for the simulation of chaotic dynamics: precision of training data vs precision of the algorithm
We explore the influence of precision of the data and the algorithm for ...

06/08/2020
Liquid Time-constant Networks
We introduce a new class of time-continuous recurrent neural network mod...

06/28/2023
Trend patterns statistics for assessing irreversibility in cryptocurrencies: time-asymmetry versus inefficiency
In this paper, we present a measure of time irreversibility using trend ...
