Generalising Cost-Optimal Particle Filtering

by Andrew Warrington, et al.

We present an instance of the optimal sensor scheduling problem with the additional relaxation that the observer actively chooses whether, and how, to observe. We mask the observable nodes in the directed acyclic graph of the model, effectively optimising whether an observation should be made at each time step. The motivation is simple: sensing resources (e.g. hardware, personnel and time) are finite, so it is prudent to reduce sensor costs. Consequently, rather than treating the plant as if it had infinite sensing resources, we seek to maximise the utility of each perception. This reduces resource expenditure by explicitly minimising an observation-associated cost (e.g. battery use), while also facilitating better state estimates by concentrating perceptions in noisy or unpredictable regions of state space (e.g. a busy traffic junction). We present a general formalisation and notation for this problem, capable of encompassing much of the prior art. To illustrate the formulation, we pose and solve two example problems in this domain. Finally, we suggest active areas of research to improve and further generalise this approach.
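The idea of deciding, per time step, whether an observation is worth its cost can be sketched with a toy bootstrap particle filter. The decision rule below (observe only when the predictive particle spread exceeds a threshold), the linear-Gaussian model, and all parameter values are illustrative assumptions, not the paper's method:

```python
# A minimal sketch of cost-aware observation selection in a bootstrap
# particle filter. Model, costs, and the spread-based decision rule are
# all hypothetical choices for illustration only.
import numpy as np

rng = np.random.default_rng(0)

T = 50               # time steps
N = 500              # particles
obs_cost = 1.0       # cost charged per observation (e.g. battery use)
spread_thresh = 0.8  # observe when predictive std-dev exceeds this

# Latent dynamics x_t = x_{t-1} + w_t; observation y_t = x_t + v_t.
proc_std, obs_std = 0.5, 0.3
x_true = np.cumsum(proc_std * rng.standard_normal(T))

particles = np.zeros(N)
weights = np.full(N, 1.0 / N)
total_cost, estimates = 0.0, []

for t in range(T):
    # Propagate particles through the transition model.
    particles = particles + proc_std * rng.standard_normal(N)

    # Active sensing decision: pay for an observation only when the
    # predictive uncertainty (weighted particle std-dev) is large.
    mean = np.sum(weights * particles)
    std = np.sqrt(np.sum(weights * (particles - mean) ** 2))
    if std > spread_thresh:
        y = x_true[t] + obs_std * rng.standard_normal()
        total_cost += obs_cost
        # Weight update under the Gaussian observation model.
        weights = weights * np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Multinomial resampling to combat weight degeneracy.
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

    estimates.append(np.sum(weights * particles))

print(f"observations paid for: {int(total_cost)} / {T}")
```

Raising `spread_thresh` trades estimation accuracy for lower total sensing cost; the paper's formulation replaces this heuristic with a principled joint objective over cost and utility.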
