
Improved Estimation in Time Varying Models

by Doina Precup et al.

Locally adapted parameterizations of a model (such as locally weighted regression) are expressive but often suffer from high variance. We describe an approach for reducing this variance, based on the idea of simultaneously estimating a transformed space for the model and locally adapted parameterizations in this new space. We present a new problem formulation that captures this idea and illustrate it in the important context of time-varying models. We develop an algorithm for learning a set of bases for approximating a time-varying sparse network; each learned basis constitutes an archetypal sparse network structure. We also provide an extension for learning task-driven bases. We present empirical results on synthetic data sets, as well as on a BCI EEG classification task.
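The basis-learning idea in the abstract can be illustrated with a small dictionary-learning loop: each observed network is approximated as a weighted combination of a few sparse basis networks, alternating between a least-squares weight update and an L1-regularized (proximal-gradient) basis update. This is a minimal sketch under those assumptions, not the authors' actual algorithm; the function names and hyperparameters are hypothetical.

```python
import numpy as np

def soft_threshold(X, lam):
    # Proximal operator of the L1 penalty: shrink entries toward zero.
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def learn_sparse_bases(networks, k=2, lam=0.05, n_iter=300, seed=0):
    """Alternating-minimization sketch (hypothetical, not the paper's method).

    networks: array of shape (T, d, d), a noisy network estimate per time step.
    Approximately minimizes
        sum_t ||A_t - sum_j W[t, j] * B_j||_F^2  +  lam * sum_j ||B_j||_1,
    returning k sparse basis networks B (k, d, d) and time-varying
    combination weights W (T, k).
    """
    rng = np.random.default_rng(seed)
    T, d, _ = networks.shape
    A = networks.reshape(T, -1)                  # flatten each network
    B = rng.normal(scale=0.1, size=(k, d * d))   # random initial bases
    W = np.full((T, k), 1.0 / k)
    for _ in range(n_iter):
        # Weight update: exact least squares given the current bases.
        W, *_ = np.linalg.lstsq(B.T, A.T, rcond=None)
        W = W.T
        # Basis update: one ISTA step (gradient + L1 shrinkage),
        # with step size 1 / ||W||_2^2 for guaranteed descent.
        lr = 1.0 / (np.linalg.norm(W, 2) ** 2 + 1e-8)
        grad = W.T @ (W @ B - A)
        B = soft_threshold(B - lr * grad, lr * lam)
    return B.reshape(k, d, d), W
```

With synthetic data generated as a time-varying mixture of two sparse archetypes (e.g. a chain and a star graph), the two learned bases span the same space as the archetypes and the per-time weights trace the mixing schedule, matching the "archetypal sparse network structure" interpretation in the abstract.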



