Learning from Censored and Dependent Data: The case of Linear Dynamics

04/11/2021
by Orestis Plevrakis, et al.

Observations from dynamical systems often exhibit irregularities such as censoring, where values are recorded only if they fall within a certain range. Censoring is ubiquitous in practice, due to saturating sensors, limit-of-detection effects, and image-frame effects. In light of recent developments on learning linear dynamical systems (LDSs) and on censored statistics with independent data, we revisit the decades-old problem of learning an LDS from censored observations (Lee and Maddala (1985); Zeger and Brookmeyer (1986)). Here, the learner observes the state x_t ∈ ℝ^d if and only if x_t belongs to some set S_t ⊆ ℝ^d. We develop the first computationally and statistically efficient algorithm for learning the system, assuming only oracle access to the sets S_t. Our algorithm, Stochastic Online Newton with Switching Gradients, is a novel second-order method that builds on the Online Newton Step (ONS) of Hazan et al. (2007). Our Switching-Gradient scheme does not always use (stochastic) gradients of the function we want to optimize, which we call the "censor-aware" function. Instead, in each iteration it performs a simple test to decide whether to use the censor-aware function or another "censor-oblivious" function to obtain a stochastic gradient. In our analysis, we consider a "generic" Online Newton method that uses arbitrary vectors instead of gradients, and we prove an error bound for it. This bound can then be used to design these vectors appropriately, leading to our Switching-Gradient scheme. This framework significantly deviates from the recent long line of work on censored statistics (e.g., Daskalakis et al. (2018); Kontonis et al. (2019); Daskalakis et al. (2019)), which applies Stochastic Gradient Descent (SGD) and whose analysis reduces to establishing conditions for off-the-shelf SGD bounds.
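
To make the switching-gradient idea concrete, below is a minimal sketch in Python/NumPy of a generic Online-Newton-style loop with a per-iteration gradient switch. The callables grad_aware, grad_oblivious, and switch_test are hypothetical placeholders standing in for the censor-aware stochastic gradient, the censor-oblivious stochastic gradient, and the switching test; their precise definitions, as well as the step size, the preconditioner initialization, and the use of the averaged iterate, are illustrative assumptions rather than the paper's exact choices.

import numpy as np

def ons_with_switching_gradients(samples, d, grad_aware, grad_oblivious,
                                 switch_test, eta=0.1, eps=1.0):
    """Sketch of a generic Online-Newton loop with a switching-gradient rule.

    grad_aware(theta, s), grad_oblivious(theta, s), and switch_test(theta, s)
    are illustrative placeholders; the paper specifies the actual losses and test.
    """
    theta = np.zeros(d)      # current parameter estimate (flattened, for illustration)
    A = eps * np.eye(d)      # preconditioner matrix, as in Online Newton Step
    avg = np.zeros(d)        # running average of the iterates

    for t, sample in enumerate(samples, start=1):
        # Simple per-iteration test: use the censor-aware or censor-oblivious gradient?
        if switch_test(theta, sample):
            g = grad_aware(theta, sample)
        else:
            g = grad_oblivious(theta, sample)

        A += np.outer(g, g)                          # rank-one preconditioner update
        theta = theta - eta * np.linalg.solve(A, g)  # Newton-style step
        # (A generalized projection onto a bounded set, used by ONS, is omitted here.)
        avg += (theta - avg) / t                     # averaged iterate returned as the estimate

    return avg

The rank-one update of A and the A^{-1}-preconditioned step mirror the Online Newton Step of Hazan et al. (2007); the only structural change is that the vector fed into the update is selected by the test at each iteration, in the spirit of the "generic" Online Newton analysis described above.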


