Optimal Dynamic Sensor Subset Selection for Tracking a Time-Varying Stochastic Process
Motivated by the Internet of Things and sensor networks for cyber-physical systems, the problem of dynamic sensor activation for tracking a time-varying process is examined. The trade-off is between energy efficiency, which decreases with the number of active sensors, and tracking fidelity, which increases with the number of active sensors. The problem of minimizing the time-averaged mean-squared error over an infinite horizon is examined under a constraint on the mean number of active sensors. The proposed methods artfully combine three key ingredients: Gibbs sampling, stochastic approximation for learning, and modifications to consensus algorithms, to create a high-performance, energy-efficient tracking mechanism with active sensor selection. The following progression of scenarios is considered: centralized tracking of an i.i.d. process, distributed tracking of an i.i.d. process, and finally distributed tracking of a Markov chain. The challenge of the i.i.d. case is that the process distribution is parameterized by a known or unknown parameter which must be learned. The key theoretical results prove that the proposed algorithms converge to local optima in the two i.i.d. process cases; numerical results suggest that global optimality is in fact achieved. The proposed distributed tracking algorithm for a Markov chain, based on Kalman-consensus filtering and stochastic approximation, is seen to offer error performance comparable to that of a competitive centralized Kalman filter.
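For concreteness, the constrained tracking problem described above can be sketched as follows; the notation is illustrative and not taken from the paper, with x_t the process, \hat{x}_t its estimate, b_t the sensor-activation vector, and \bar{N} the allowed mean number of active sensors:

% Hedged sketch of a standard constrained formulation consistent with the abstract.
\[
  \min_{\{b_t\}} \ \limsup_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T}
      \mathbb{E}\big[\lVert x_t - \hat{x}_t(b_t) \rVert^2\big]
  \quad \text{subject to} \quad
  \limsup_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T}
      \mathbb{E}\big[\lVert b_t \rVert_1\big] \le \bar{N}.
\]
% One plausible way the listed ingredients fit together: relax the constraint
% with a multiplier \lambda, sample activation subsets b_t by Gibbs sampling
% from a Boltzmann distribution over the relaxed cost
% \mathbb{E}\lVert x_t - \hat{x}_t(b) \rVert^2 + \lambda \lVert b \rVert_1,
% and adapt \lambda via stochastic approximation so the mean-activation
% constraint is met on average.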