
Improved Estimation in Time Varying Models

06/27/2012
by Doina Precup, et al.

Locally adapted parameterizations of a model (such as locally weighted regression) are expressive but often suffer from high variance. We describe an approach for reducing the variance, based on the idea of simultaneously estimating a transformed space for the model as well as locally adapted parameterizations in this new space. We present a new problem formulation that captures this idea and illustrate it in the important context of time-varying models. We develop an algorithm for learning a set of bases for approximating a time-varying sparse network; each learned basis constitutes an archetypal sparse network structure. We also provide an extension for learning task-driven bases. We present empirical results on synthetic data sets, as well as on a BCI EEG classification task.
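The core idea of approximating a time-varying sparse network with a small set of archetypal bases can be sketched as follows. This is an illustrative toy, not the paper's algorithm: the synthetic networks, the use of a rank-K SVD as the basis learner, and all variable names are assumptions made here for demonstration.

```python
import numpy as np

# Illustrative setup: suppose a sparse p x p network has been
# estimated at each of T time points (here generated synthetically
# as a time-varying mixture of two archetypal structures).
rng = np.random.default_rng(1)
p, T, K = 6, 50, 2

B1 = np.triu(rng.random((p, p)) < 0.3, 1).astype(float)   # archetype 1
B2 = np.tril(rng.random((p, p)) < 0.3, -1).astype(float)  # archetype 2
mix = np.linspace(0, 1, T)
nets = np.array([c * B1 + (1 - c) * B2 for c in mix])     # (T, p, p)

# Flatten each network and take a rank-K SVD: the top right singular
# vectors play the role of basis networks, and the corresponding
# left singular vectors give time-varying mixing coefficients.
M = nets.reshape(T, p * p)
U, S, Vt = np.linalg.svd(M, full_matrices=False)
bases = Vt[:K].reshape(K, p, p)   # learned "archetypal" structures
coeffs = U[:, :K] * S[:K]         # per-time mixing weights

# Because the toy data is an exact rank-2 mixture, the rank-K
# reconstruction recovers the networks almost perfectly.
recon = (coeffs @ Vt[:K]).reshape(T, p, p)
err = np.linalg.norm(recon - nets) / np.linalg.norm(nets)
```

A real time-varying sparse network would not be exactly low-rank, and the paper additionally enforces sparsity on the bases; the SVD here only shows why a few shared bases plus per-time coefficients can drastically reduce the variance of fully local estimates.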

