Dynamic Least-Squares Regression
A common challenge in large-scale supervised learning is how to incorporate new incremental data into a pre-trained model without re-training it from scratch. Motivated by this problem, we revisit the canonical problem of dynamic least-squares regression (LSR), where the goal is to learn a linear model over incremental training data. In this setup, data and labels (𝐀^(t), 𝐛^(t)) ∈ ℝ^{t × d} × ℝ^t evolve in an online fashion (t ≫ d), and the goal is to efficiently maintain an (approximate) solution to min_{𝐱^(t)} ‖𝐀^(t)𝐱^(t) - 𝐛^(t)‖_2 for all t ∈ [T]. Our main result is a dynamic data structure which maintains an approximate solution to dynamic LSR, to within an arbitrarily small constant error, with amortized update time O(d^{1+o(1)}), almost matching the running time of the static (sketching-based) solution. By contrast, for exact (or even 1/poly(n)-accurate) solutions, we show a separation between the static and dynamic settings: dynamic LSR requires Ω(d^{2-o(1)}) amortized update time under the OMv Conjecture (Henzinger et al., STOC'15). Our data structure is conceptually simple, easy to implement, and fast both in theory and practice, as corroborated by experiments over both synthetic and real-world datasets.
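To make the problem setup concrete, the sketch below maintains the dynamic LSR objective min_{𝐱^(t)} ‖𝐀^(t)𝐱^(t) - 𝐛^(t)‖_2 under row arrivals by updating the normal equations 𝐀^⊤𝐀 and 𝐀^⊤𝐛. This is a naive exact baseline with Θ(d^2) update time, not the paper's O(d^{1+o(1)}) data structure; the class name, the small ridge term for numerical stability, and the synthetic usage example are all illustrative assumptions.

```python
import numpy as np

class NaiveDynamicLSR:
    """Illustrative baseline (not the paper's data structure): maintains the
    normal equations A^T A x = A^T b under row insertions, giving an exact
    least-squares solution in O(d^2) time per update and O(d^3) per solve."""

    def __init__(self, d, reg=1e-8):
        self.gram = reg * np.eye(d)   # running A^T A (tiny ridge for stability)
        self.atb = np.zeros(d)        # running A^T b

    def insert_row(self, a, b):
        # New labeled example (a, b): rank-one update of the normal equations.
        self.gram += np.outer(a, a)
        self.atb += b * a

    def solve(self):
        # Current solution of argmin_x ||A^(t) x - b^(t)||_2.
        return np.linalg.solve(self.gram, self.atb)

# Usage sketch: stream T rows of a synthetic instance, then query the solution.
rng = np.random.default_rng(0)
d, T = 5, 100
x_true = rng.normal(size=d)
lsr = NaiveDynamicLSR(d)
for t in range(T):
    a = rng.normal(size=d)
    lsr.insert_row(a, a @ x_true + 0.01 * rng.normal())
print(np.linalg.norm(lsr.solve() - x_true))
```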