
Data-driven Reconstruction of Nonlinear Dynamics from Sparse Observation
We present a data-driven model to reconstruct nonlinear dynamics from a ...
06/10/2019 ∙ by Kyongmin Yeo, et al.

Herding Dynamic Weights for Partially Observed Random Field Models
Learning the parameters of a (potentially partially observable) random f...
05/09/2012 ∙ by Max Welling, et al.

A State Space Approach for Piecewise-Linear Recurrent Neural Networks for Reconstructing Nonlinear Dynamics from Neural Measurements
The computational properties of neural systems are often thought to be i...
12/23/2016 ∙ by Daniel Durstewitz, et al.

Exponentiated Inverse Power Lindley Distribution and its Applications
In this article, a three-parameter generalisation of the inverse Lindley dist...
08/22/2018 ∙ by Rameesa Jan, et al.

Convergence Rates of Gradient Descent and MM Algorithms for Generalized Bradley-Terry Models
We show tight convergence rate bounds for gradient descent and MM algori...
01/01/2019 ∙ by Milan Vojnovic, et al.

Linear Range in Gradient Descent
This paper defines linear range as the range of parameter perturbations ...
05/11/2019 ∙ by Angxiu Ni, et al.

Herding as a Learning System with Edge-of-Chaos Dynamics
Herding defines a deterministic dynamical system at the edge of chaos. It generates a sequence of model states and parameters by alternating parameter perturbations with state maximizations, where the sequence of states can be interpreted as "samples" from an associated MRF model. Herding differs from maximum likelihood estimation in that the sequence of parameters does not converge to a fixed point, and from an MCMC posterior sampling approach in that the sequence of states is generated deterministically. Herding may be interpreted as a "perturb and map" method in which the parameter perturbations are generated by a deterministic nonlinear dynamical system rather than drawn randomly from a Gumbel distribution. This chapter studies the distinct statistical characteristics of the herding algorithm and shows that the fast convergence rate of the controlled moments may be attributed to edge-of-chaos dynamics. The herding algorithm can also be generalized to models with latent variables and to a discriminative learning setting. The perceptron cycling theorem ensures that the fast moment-matching property is preserved in this more general framework.