Task-Driven Estimation and Control via Information Bottlenecks

09/20/2018 ∙ by Vincent Pacelli, et al. ∙ Princeton University

Our goal is to develop a principled and general algorithmic framework for task-driven estimation and control for robotic systems. State-of-the-art approaches for controlling robotic systems typically rely heavily on accurately estimating the full state of the robot (e.g., a running robot might estimate joint angles and velocities, torso state, and position relative to a goal). However, full state representations are often excessively rich for the specific task at hand and can lead to significant computational inefficiency and brittleness to errors in state estimation. In contrast, we present an approach that eschews such rich representations and seeks to create task-driven representations. The key technical insight is to leverage the theory of information bottlenecks to formalize the notion of a "task-driven representation" in terms of information-theoretic quantities that measure the minimality of a representation. We propose novel iterative algorithms for automatically synthesizing (offline) a task-driven representation (given in terms of a set of task-relevant variables (TRVs)) and a performant control policy that is a function of the TRVs. We present online algorithms for estimating the TRVs in order to apply the control policy. We demonstrate that our approach results in significant robustness to unmodeled measurement uncertainty both theoretically and via thorough simulation experiments including a spring-loaded inverted pendulum running to a goal location.




I Introduction

State-of-the-art techniques for controlling robotic systems typically rely heavily on accurately estimating the full state of the system and maintaining rich geometric representations of their environment. For example, a common approach to navigation is to build a dense occupancy map produced by scanning the environment and to use this map for planning and control. Similarly, control for walking or running robots typically involves estimating the full state of the robot (e.g., joint angles, velocities, and position relative to a goal location). However, such representations are often overly detailed when compared to a task-driven representation.

One example of a task-driven representation is the “gaze heuristic” from cognitive psychology

[1, 2, 3]. When attempting to catch a ball, an agent can estimate the ball’s position and velocity, model how it will evolve in conjunction with environmental factors like wind, integrate the pertinent differential equations, and plan a trajectory in order to arrive at the ball’s final location. In contrast, cognitive psychology studies have shown that humans use a dramatically simpler strategy that entails maintaining the angle that the human’s gaze makes with the ball at a constant value. This method reduces a number of hard-to-monitor variables (e.g., wind speed) into a single easily-estimated variable. Modulating this variable alone results in accomplishing the task.

Fig. 1: A schematic of our technical approach. We seek to synthesize (offline) a minimalistic set of task-relevant variables (TRVs) x̃_t that create a bottleneck between the full state x_t and the control input u_t. These TRVs are estimated online in order to apply the policy π_t(u_t | x̃_t). We demonstrate our approach on a spring-loaded inverted pendulum model whose goal is to run to a target location. Our approach automatically synthesizes a one-dimensional TRV sufficient for achieving this task.

The gaze heuristic example highlights the two primary advantages of using a task-driven representation. First, a control policy that uses such a representation is more efficient to employ online since fewer variables need to be estimated. Second, since only a few prominent variables need to be estimated, fewer sources of measurement uncertainty result in a more robust policy. While one can sometimes manually design task-driven representations for a given task, we currently lack a principled theoretical and algorithmic framework for automatically synthesizing such representations. The goal of this paper is to develop precisely such an algorithmic approach.

Statement of Contributions. The main technical contribution of this paper is to formulate the synthesis of task-driven representations as an optimization problem using information bottleneck theory [4]. We present offline algorithms that encode the full state of the system into a set of task-relevant variables (TRVs) and simultaneously identify a performant policy (restricted to be a function of the TRVs) using novel iterative algorithms that exploit the structure of this optimization problem in a number of dynamical settings including discrete-state, linear-Gaussian, and nonlinear systems. We present online algorithms for estimating the TRVs in order to apply the control policy. We demonstrate that our approach yields policies that are robust to unmodeled measurement uncertainty both theoretically (using results from the theory of risk metrics) and in a number of simulation experiments including running using a spring-loaded inverted pendulum model (Figure 1).

I-A Related Work

By far the most common approach in practice for controlling robotic systems with nonlinear dynamics and partially observable state is to independently design an estimator for the full

state (e.g., a Kalman filter

[5]) and a controller that assumes perfect state information (e.g., designed using robust control techniques such as H-infinity control [6], sliding mode control [7, 8], passivity-based control [9, 10], or Lyapunov-based control [8, 10]). While this strategy is optimal for Linear-Quadratic-Gaussian (LQG) problems due to the separation principle [11], it can produce a brittle system due to errors in the state estimate for nonlinear systems (since the separation principle does not generally hold in this setting). We demonstrate that our task-driven approach affords significant robustness when compared to approaches that perform full state estimation and control assuming the separation principle (see Section V for numerical examples). Moreover, in contrast to traditional robust estimation and control techniques, our approach does not rely on explicit models of measurement uncertainty. We demonstrate that robustness can be achieved implicitly as a by-product of task-driven representations.

Historically, the work on designing information-constrained controllers has been pursued within the networked control theory literature [12, 13, 14]. Recently, the optimal co-design of data-efficient sensors and performant controllers has also been explored beyond network applications. One set of approaches — inspired by the cognitive psychology concept of bounded rationality [15, 16] — is to limit the information content of a control policy measured with respect to a default stochastic policy [17, 18, 19, 20, 21]. Another set of examples comes from the sensor selection problem in robotics, which involves selecting a minimal number of sensors or features to use for estimating the robot’s state [22, 23, 24]. While the work highlighted above shares our goal of designing “minimalistic” (i.e., informationally frugal) controllers, our approach here is fundamentally different. In particular, our goal is to design task-driven representations that form abstractions which are sufficient for the purpose of control. Online estimation and control is performed purely based on this representation without resorting to estimating the full state of the system (in contrast to the work highlighted above, which either assumes access to the full state or designs estimators for it).

A number of previous authors consider the construction of minimal-information representations. In information theory, this approach is typically referred to as the Information Bottleneck (IB) [4]. Recently, these ideas have been co-opted for designing control policies. In [25], a learning-based approach is suggested to find minimal-information state representations and control policies which use them. Our work differs in that we provide analytic (i.e., model-based) methods for finding such representations and policies and we explicitly characterize the resulting robustness. Another branch of work considers the construction of LQG policies that achieve a performance goal while minimizing an information-theoretic quantity such as the mutual information between inputs and outputs [26, 27] or Massey’s directed information [28, 29]. In contrast to this work, we handle nonlinear systems and also present robustness results for the resulting controllers.

The work on actionable information in vision [30, 31] attempts to find invariant and task-relevant representations for visual tasks in the form of minimal complete representations (minimal sufficient statistics of a full or “complete” representation). While highly ambitious in scope, this work has largely been devoted to studying visual decision tasks (e.g., recognition, detection, categorization). The algorithmic approach taken in [30, 31] is thus tied to the specifics of visual problems (e.g., designing visual feature detectors that are invariant to nuisance factors such as contrast, scale, and translation). Our goals and technical approach here are complementary. We do not specifically target visual decision problems; instead, we seek to develop a general framework that is applicable to a broad range of robotic control problems and allows us to automatically synthesize task-driven representations (without manually designing feature detectors).

II Task-Driven Representations and Controllers

Our goal in this section is to formally define the notion of a “task-driven representation” and formulate an optimization problem that automatically synthesizes such representations. We focus on control tasks in this paper and assume that a task is defined in terms of a (partially observable) Markov decision process ((PO)MDP). Let x_t ∈ X denote the full state of the system and u_t ∈ U denote the control input at time step t. Here the state space X and input space U may either be discrete or continuous. We assume that the dynamics of the system are known and given by potentially time-varying transition distributions p_t(x_{t+1} | x_t, u_t). Let c_t(x_t, u_t) be a sequence of cost functions that encode the robot’s desired behavior. The robot’s goal is to identify a control policy that minimizes the expected value of these cost functions:

min_π E[ ∑_{t=1}^{T} c_t(x_t, u_t) ].

The key idea behind our technical approach is to define a principled information-theoretic notion of minimality of representations for tasks. Figure 1 illustrates the main idea for doing this. Here x̃_t are task-relevant variables (TRVs) that constitute a representation; one can think of the representation as a “sketch” of the full state x_t that the robot uses for computing control inputs via a policy π_t(u_t | x̃_t). Ideally, such a representation filters out all the information from the state that is not relevant for computing control inputs. A minimalistic representation should thus create a bottleneck between the full state and the control input. We make this notion precise by leveraging the theory of information bottlenecks [4] and finding a stochastic mapping q_t(x̃_t | x_t) that minimizes the mutual information between x_t and x̃_t:

I(x_t; x̃_t) = D( p(x_t, x̃_t) ‖ p(x_t) p(x̃_t) ),

while ensuring that the resulting control policy performs well (i.e., achieves low expected cost). Here, D(·‖·) represents the Kullback-Leibler divergence between two distributions. Intuitively, minimizing the mutual information thus corresponds to designing TRVs x̃_t that are as independent of the state x_t as possible. The map q_t is thus “squeezing out” all the irrelevant information from the state while only maintaining enough information for choosing good control inputs. Such a representation therefore formalizes our notion of a task-driven representation.

Formally, we pose the problem of finding task-driven representations as the following optimization problem, which we refer to as Problem P:

min_{q_t, π_t}  ∑_{t=1}^{T} I(x_t; x̃_t) + β E[ ∑_{t=1}^{T} c_t(x_t, u_t) ],        (P)

where β > 0 trades off the informational minimality of the representation against the performance of the policy. We note that the unconstrained problem P is equivalent to a constrained version where the mutual information is minimized subject to a constraint on the expected cost.

So far, we have limited our discussion to the case where the robot has access to the full state of the system. However, the real benefit of the task-driven perspective is evident in the partially observable setting, where the robot only indirectly observes the state of the system via sensor measurements y_t ∈ Y. We denote the probability of observing a particular measurement in a given state by p(y_t | x_t). The prevalent approach for handling such settings is to design an estimator for the full state x_t. Our key idea here is to perform estimation for the TRVs x̃_t instead of x_t. Specifically, our overall approach has two phases:

Offline. Synthesize the maps {q_t, π_t} by solving Problem P.

Online. Estimate the current TRV x̃_t using sensor measurements and use this estimate to compute control inputs via π_t.

Perhaps the clearest benefit of our approach is the fact that the representation formed by x̃_t may be significantly lower-dimensional than the full state of the system; this can thus lead to significant reductions in online computation. Another advantage of the task-driven approach is robustness to estimation errors. To see this, let p_t denote the joint distribution over the state, representation, and inputs at time t that results when we apply the policy obtained by solving Problem P in the fully observable setting. Now, let p̃_t denote the corresponding distribution for the partially observable setting, i.e., when we estimate x̃_t online using the robot’s sensor measurements and use this estimate to compute control inputs.

Theorem II.1

Let p̃_t be the distribution resulting from any estimator that satisfies the following condition:

D( p̃_t ‖ p_t ) ≤ β I(x_t; x̃_t),        (2)

where the mutual information is computed under p_t. Then, we have the following upper bound on the total expected cost:

E_{p̃}[ ∑_{t=1}^{T} c_t(x_t, u_t) ] ≤ ∑_{t=1}^{T} [ ρ_t( c_t(x_t, u_t) ) + I(x_t; x̃_t) ],        (3)

where ρ_t is the entropic risk metric [32, Example 6.20]:

ρ_t(Z) = (1/β) log E_{p_t}[ exp(β Z) ].
By the well-known Donsker-Varadhan change of measure formula [33, Theorem 2.3.2], we have:

E_{p̃_t}[ β c_t(x_t, u_t) ] ≤ log E_{p_t}[ exp( β c_t(x_t, u_t) ) ] + D( p̃_t ‖ p_t ),

i.e., E_{p̃_t}[ c_t ] ≤ ρ_t(c_t) + (1/β) D( p̃_t ‖ p_t ). Then, using condition (2) and this inequality, we obtain:

E_{p̃_t}[ c_t(x_t, u_t) ] ≤ ρ_t( c_t(x_t, u_t) ) + I(x_t; x̃_t).

Summing over time gives us the desired result.

Intuitively, this theorem shows that for any estimator of x̃_t (in the partially observable setting) whose resulting distribution p̃_t is “close enough” to the distribution p_t in the fully observable case (i.e., their KL divergence is less than β times the mutual information I(x_t; x̃_t), which is itself the KL divergence between p_t and the joint distribution over x_t and x̃_t that results when x̃_t is assumed to be independent of x_t), the expected cost of the controller in the partially observable case is guaranteed to be bounded by the right-hand side (RHS) of (3). Notice that this RHS is almost identical to the cost function of Problem P. In particular, the expected value operator is a linearization of the entropic risk metric ρ_t. Thus, by solving Problem P, we are minimizing (a linear approximation of) an upper bound on the expected cost even when our state is only partially observable (as long as our estimator for x̃_t ensures condition (2)). Once Problem P has been solved, we can use Theorem II.1 to obtain a robustness bound by evaluating the RHS of (3).
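To make the remark about linearization concrete, the entropic risk metric admits the standard small-β cumulant expansion (a textbook identity, restated here for the definition of ρ_t above):

```latex
\rho_t(Z) \;=\; \frac{1}{\beta}\,\log \mathbb{E}_{p_t}\!\left[e^{\beta Z}\right]
\;=\; \mathbb{E}_{p_t}[Z] \;+\; \frac{\beta}{2}\,\operatorname{Var}_{p_t}(Z) \;+\; \mathcal{O}(\beta^{2}),
```

so replacing ρ_t by the expected value discards the variance penalty and all higher-order cumulants.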

III Algorithms for Synthesizing Representations

In this section, we outline our approach for solving Problem P offline. We note that P is non-convex in general. While one could potentially apply general non-convex optimization techniques such as gradient-based methods, computing gradients quickly becomes computationally expensive due to the large number of decision variables involved (even in the setting with finite state and action spaces, we have decision variables corresponding to q_t(x̃_t | x_t) and π_t(u_t | x̃_t) for every possible value of x_t, x̃_t, and u_t at every time step). Our key insight here is to exploit the structure of P to propose an efficient iterative algorithm in three different dynamical settings: discrete, linear-Gaussian, and nonlinear-Gaussian. In each setting, the algorithm iterates over the following three steps:

  1. Fix {q_t, π_t} and solve for the state and TRV marginals using the forward dynamical equations.

  2. Fix {π_t} and the marginals and solve for {q_t} by satisfying necessary conditions for optimality.

  3. Fix {q_t} and the marginals and solve for {π_t} by solving a convex optimization problem.

In our implementation, we iterate over these steps until convergence (or until an iteration limit is reached). While we cannot currently guarantee convergence, our iterative procedure is extremely efficient (since all the computations above can be performed either in closed-form or via a convex optimization problem) and produces good solutions in practice (see Section V). We describe instantiations of each step for three different dynamical settings below.
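To make the alternating structure concrete, the sketch below applies the three steps to a hypothetical single-time-step discrete problem (a deliberate simplification of the trajectory-level algorithm; the prior, cost matrix, and all numbers are invented for illustration). Step 1 computes the TRV marginal, Step 2 is a Boltzmann update for the encoder, and Step 3 solves the policy subproblem, which here reduces to a per-TRV argmin:

```python
import numpy as np

def solve_one_step_ib(p_x, cost, n_trv, beta, iters=200, seed=0):
    """Coordinate descent for a single-time-step information-bottleneck
    control problem: min_{q, pi}  I(x; x_tilde) + beta * E[c(x, u)].
    p_x: (nx,) prior over states; cost: (nx, nu) cost matrix."""
    rng = np.random.default_rng(seed)
    nx, nu = cost.shape
    # q[x, z] = q(x_tilde = z | x);  pi[z, u] = pi(u | x_tilde = z)
    q = rng.random((nx, n_trv)); q /= q.sum(axis=1, keepdims=True)
    pi = rng.random((n_trv, nu)); pi /= pi.sum(axis=1, keepdims=True)
    for _ in range(iters):
        # Step 1: forward pass -- marginal over the TRV.
        p_z = p_x @ q
        # Step 2: Boltzmann update for the encoder (first-order condition).
        exp_cost = pi @ cost.T                 # (n_trv, nx): E_pi[c | z, x]
        q = p_z[None, :] * np.exp(-beta * exp_cost.T)
        q /= q.sum(axis=1, keepdims=True)
        # Step 3: policy update -- for fixed q this is a linear program in pi
        # whose solution puts all mass on the minimizing input per TRV value.
        p_z = p_x @ q
        p_x_given_z = (q * p_x[:, None]).T / np.maximum(p_z[:, None], 1e-12)
        avg_cost = p_x_given_z @ cost          # (n_trv, nu)
        pi = np.eye(nu)[np.argmin(avg_cost, axis=1)]
    return q, pi

# Hypothetical instance: four states, two inputs, two TRV values.
p_x = np.full(4, 0.25)
cost = np.array([[0., 1.], [1., 0.], [0., 1.], [1., 0.]])
q, pi = solve_one_step_ib(p_x, cost, n_trv=2, beta=5.0)
```

In the full algorithm the same three steps are applied at every time step, with the Boltzmann update driven by a backward cost-to-go recursion rather than the immediate cost alone.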

III-A Discrete Systems

In order to solve Step 1, note that the forward dynamics of the system for fixed {q_t, π_t} are given by:

p_t(x̃_t) = ∑_{x_t} q_t(x̃_t | x_t) p_t(x_t),
p_{t+1}(x_{t+1}) = ∑_{x_t, x̃_t, u_t} p_t(x_{t+1} | x_t, u_t) π_t(u_t | x̃_t) q_t(x̃_t | x_t) p_t(x_t).

The Lagrangian functional for Problem P augments the objective with these forward-dynamics constraints, where λ_t denote the corresponding Lagrange multipliers. The Lagrange multipliers for the normalization constraints on the distribution variables are omitted since they do not contribute to the analysis. The following proposition demonstrates the structure of q_t using the first-order necessary condition (FONC) for optimality [34].

Theorem III.1

A necessary condition for q_t(x̃_t | x_t) to be optimal for Problem P is that

q_t(x̃_t | x_t) = (1 / Z_t(x_t)) p_t(x̃_t) exp( −β Q_t(x_t, x̃_t) ),        (9)

where Z_t(x_t) = ∑_{x̃_t} p_t(x̃_t) exp( −β Q_t(x_t, x̃_t) ) and Q_t(x_t, x̃_t) is the expected cost-to-go, under the policy π_t, of using the TRV value x̃_t in state x_t.
See Section -A.

This proposition demonstrates that q_t is a Boltzmann distribution with Z_t(x_t) as the partition function and β playing the role of inverse temperature. In order to solve Step 2, we simply evaluate the closed-form expression (9).

It is easily verified that the function V_t is the cost-to-go function for the policy. Thus Problem P can be written as a dynamic programming problem using V_t. This allows us to solve Step 3. In particular, when {q_t} and the marginals are fixed, P is a linear programming problem in the policy variables {π_t} and can thus be solved efficiently.
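To illustrate the linear-programming structure of the policy step, the sketch below solves a single-time-step policy update with made-up numbers (the TRV marginal `p_z` and conditional costs `c_hat` are hypothetical; in the full algorithm they come from the forward pass and the cost-to-go function):

```python
import numpy as np
from scipy.optimize import linprog

# Fixed encoder induces a TRV marginal p(z) and conditional costs c_hat(z, u).
p_z = np.array([0.6, 0.4])
c_hat = np.array([[2.0, 0.5, 1.0],
                  [0.3, 1.2, 0.9]])          # c_hat[z, u], hypothetical values
nz, nu = c_hat.shape

# Decision variable: pi flattened row-major, pi[z * nu + u] = pi(u | z).
c_obj = (p_z[:, None] * c_hat).ravel()        # linear objective: E[c]
A_eq = np.zeros((nz, nz * nu))                # normalization: sum_u pi(u|z) = 1
for z in range(nz):
    A_eq[z, z * nu:(z + 1) * nu] = 1.0

res = linprog(c_obj, A_eq=A_eq, b_eq=np.ones(nz), bounds=(0, 1))
pi = res.x.reshape(nz, nu)
```

The optimal vertex places all probability mass on the minimizing input for each TRV value, which is why this LP can also be solved by a per-TRV argmin.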

III-B Linear-Gaussian Systems with Quadratic Costs

A discrete-time linear-Gaussian (LG) system is defined by the transition system

x_{t+1} = A_t x_t + B_t u_t + w_t,        (12)

where A_t and B_t are the system matrices and w_t ∼ N(0, W_t) is Gaussian process noise. We assume that the cost function is quadratic:

c_t(x_t, u_t) = x_tᵀ Q_t x_t + u_tᵀ R_t u_t,

with Q_t ⪰ 0 and R_t ≻ 0. We explicitly parameterize the TRVs and control policy as:

x̃_t = C_t x_t + v_t,    u_t = K_t x̃_t,

where the random variable v_t ∼ N(0, V_t) is additive noise. This structure dictates that the state, TRV, and input distributions are all Gaussian. This allows for both the forward dynamics (Step 1) and Eq. 9 (Step 2) to be computed in closed form. The latter is derived in the following theorem.

Theorem III.2

For the LG system, the necessary condition (9) is equivalent to a set of coupled linear-algebraic conditions on the parameters C_t and V_t, where the cost-to-go function is a recursively defined quadratic function of the state.
See Section -B.

Finally, when {q_t} are fixed, Problem P reduces to an unconstrained convex quadratic program in the controller gains {K_t}. This program can be solved very efficiently (e.g., using active-set or interior-point methods) [35].
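The following sketch is not the synthesis algorithm itself, but it illustrates why a low-dimensional TRV is natural in the LG setting: for a scalar input, any linear feedback u_t = −K x_t factors through the one-dimensional TRV x̃_t = C x_t with C = −K and the identity policy. The double-integrator matrices and weights below are invented for illustration:

```python
import numpy as np

# Double integrator x_{t+1} = A x_t + B u_t with a scalar input.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)           # state cost
R = np.array([[0.1]])   # input cost

# Value iteration on the Riccati equation to obtain a stabilizing LQR gain K.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K

# Task-driven reading: u = -K x factors through the 1-D TRV x_tilde = C x
# with C = -K; online, the controller only ever needs this single scalar.
C = -K
x = np.array([[1.0], [0.5]])
for _ in range(300):
    x_tilde = C @ x     # the single task-relevant variable
    u = x_tilde         # policy is the identity in the TRV
    x = A @ x + B @ u
```

The closed loop drives the 2-D state to the origin while the controller consumes only the scalar x̃_t, mirroring the one-dimensional TRV found for the SLIP model in Section V.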

III-C Nonlinear-Gaussian Systems

When the dynamics are nonlinear-Gaussian (NLG), i.e., when Eq. 12 is changed to

x_{t+1} = f_t(x_t, u_t) + w_t,    w_t ∼ N(0, W_t),

minimizing Problem P is challenging because the state distributions are no longer Gaussian. We tackle this challenge by leveraging our results for the LG setting and adapting the iterative Linear Quadratic Regulator (iLQR) algorithm [36, 37, 38].

Given an initial nominal trajectory {(x̄_t, ū_t)}, the matrices A_t and B_t are produced by linearizing f_t along the trajectory. The pair (A_t, B_t) describes the dynamics of a perturbation δx_t = x_t − x̄_t in the neighborhood of x̄_t for a perturbed control input u_t = ū_t + δu_t:

δx_{t+1} = A_t δx_t + B_t δu_t + w_t.        (18)

We compute (a quadratic approximation of) the perturbation costs subject to Eq. 18. We can then apply the solution method outlined in Section III-B to search for an optimal encoder and policy for the perturbation system. We then update the nominal state and input trajectories and repeat the entire process until the nominal trajectory converges.
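The linearization step can be carried out numerically when analytic Jacobians are unavailable (as with the SLIP return map in Section V). The sketch below uses central differences on a hypothetical Euler-discretized pendulum; the dynamics, constants, and nominal point are our own stand-ins:

```python
import numpy as np

def f(x, u):
    """Hypothetical nonlinear dynamics: a damped pendulum, Euler-discretized."""
    dt, g, l, b = 0.05, 9.81, 1.0, 0.1
    theta, omega = x
    return np.array([theta + dt * omega,
                     omega + dt * (-(g / l) * np.sin(theta) - b * omega + u[0])])

def linearize(f, x_bar, u_bar, eps=1e-6):
    """Central-difference Jacobians A = df/dx and B = df/du at a nominal point."""
    n, m = len(x_bar), len(u_bar)
    A = np.zeros((n, n)); B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x_bar + dx, u_bar) - f(x_bar - dx, u_bar)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x_bar, u_bar + du) - f(x_bar, u_bar - du)) / (2 * eps)
    return A, B

# Linearize about a nominal point (x_bar, u_bar) on some nominal trajectory.
A, B = linearize(f, np.array([0.3, 0.0]), np.array([0.0]))
```

The resulting (A, B) define the perturbation dynamics in Eq. 18; in the full algorithm this is repeated at every point along the nominal trajectory.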

IV Online Estimation and Control

Once the task-driven representation q_t and policy π_t have been synthesized offline, we can leverage them for computationally efficient and robust online control. The key idea behind our online approach is to use the robot’s sensor measurements to estimate only the TRVs x̃_t. Once x̃_t has been estimated, the control policy π_t(u_t | x̃_t) can be applied. This is in stark contrast to most prevalent approaches for controlling robotic systems today, which aim to accurately estimate the full state x_t. We describe our online estimation approach in more detail below.

We maintain a belief distribution b_t(x̃_t) over the TRV space and update it at each time step using a Bayes filter [5]. Specifically, we perform two steps at every t:

  1. Process Update. The model p(x̃_t | x̃_{t−1}, u_{t−1}) is used to propagate the belief to the current time step: b_t⁻(x̃_t) = ∑_{x̃_{t−1}} p(x̃_t | x̃_{t−1}, u_{t−1}) b_{t−1}(x̃_{t−1}).

  2. Measurement Update. The measurement model p(y_t | x̃_t) is used to integrate the observation y_t into the belief: b_t(x̃_t) ∝ p(y_t | x̃_t) b_t⁻(x̃_t).
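For a discrete TRV space, the two updates above reduce to matrix-vector products. A minimal sketch, with hypothetical precomputed process and measurement models T and M:

```python
import numpy as np

# Hypothetical precomputed models on a 3-value discrete TRV space:
# T[i, j] = p(z_t = j | z_{t-1} = i)   (process model, input marginalized out)
# M[j, y] = p(y_t = y | z_t = j)       (measurement model on the TRVs)
T = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
M = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])

def bayes_filter_step(belief, y):
    predicted = belief @ T               # 1. process update
    posterior = predicted * M[:, y]      # 2. measurement update (unnormalized)
    return posterior / posterior.sum()   # renormalize

belief = np.array([1.0, 0.0, 0.0])       # initially certain that z = 0
for y in [0, 0, 1, 1, 1]:
    belief = bayes_filter_step(belief, y)
```

In the LG case this same two-step structure specializes to the Kalman filter updates discussed below.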

To apply this filter, the distributions p(x̃_t | x̃_{t−1}, u_{t−1}) and p(y_t | x̃_t) are precomputed offline. Bayes’ theorem states that p(x_t | x̃_t) = q_t(x̃_t | x_t) p_t(x_t) / p_t(x̃_t). Consequently, the measurement model on the TRVs is obtained by marginalizing over the state: p(y_t | x̃_t) = ∑_{x_t} p(y_t | x_t) p(x_t | x̃_t). In the discrete case, the above equations can be evaluated directly. In the LG case, the measurement and update steps take the form of the traditional Kalman updates applied to an LG system induced on the TRV space by q_t and the system dynamics. This structure is elucidated in the following theorem.

Theorem IV.1

The measurement and process updates for a Bayesian filter on the TRVs in the LG case are the Kalman filter measurement and process updates for the LG system induced on the TRV space.
See Section -C.

In the NLG case, we use an extended Kalman filter that maintains a belief over δx̃_t (the TRV corresponding to the perturbed system). Specifically, we use the linearized dynamics of the perturbation and apply our approach for LG systems.

Given a belief b_t(x̃_t), we compute the control input by sampling u_t ∼ π_t(u_t | x̃_t*), where x̃_t* is the maximum-likelihood TRV under b_t.

V Examples

In this section, we demonstrate the efficacy of our task-driven control approach on a discrete-state scenario as well as on a spring-loaded inverted pendulum (SLIP) model. To select the value of β used in each scenario, the algorithm is run 10 times with β set to evenly spaced values in a listed interval. The controller selected is the one with the lowest total mutual information ∑_t I(x_t; x̃_t) subject to the expected cost of the controller being below a particular value. The intuition behind this choice is that it selects the controller that has removed the most state information from the TRVs while maintaining a prescribed level of performance.

V-A Lava Problem

The first example (Fig. 2), adapted from [39], demonstrates a simple setting where the separation principle does not hold. If the robot holds a sufficiently diffuse belief distribution while residing in state 4, the optimal action corresponding to the maximum-likelihood estimate (MLE) of the state is to move right, while the true optimal action is to move left.

Our algorithm was run with a three-dimensional TRV space for 30 iterations, and the smallest value of β satisfying the constraint that the expected cost was negative was selected. In each of the trials conducted, the robot’s initial condition was sampled from the aforementioned belief distribution. Online, the robot was modeled as having a faulty sensor that localizes to the correct state only with probability 0.5, with a uniformly random incorrect state returned the rest of the time.

Fig. 3 depicts our algorithm’s performance compared with an approach that assumes the separation principle, i.e., solves the MDP (assuming perfect state information) and then performs MLE for the state online. Interestingly, our algorithm produces a deterministic policy that moves the robot left three times (which ensures it is in state 1), then moves right twice and remains in the goal. With this policy, it is impossible for the robot to end up in the lava state, producing a more performant, lower-variance trajectory than the separation-principle-based solution under measurement noise. The algorithm arrives at a deterministic solution because, at low values of β, the Boltzmann distribution in Eq. (9) is almost uniform, thereby requiring π_t to be essentially open-loop in order to provide an effective policy. Note that despite the policy being deterministic, there is still a distribution over the costs due to the random initial condition.

Fig. 2: The lava scenario [39] consists of five states connected in a line. The robot is allowed to move one step in either direction unless it enters the lava pit on the far right, which is an absorbing state. The robot receives a reward of 5 points every time step it ends in the goal state and a penalty of -1 otherwise. A terminal reward of 10 points is given if the robot is in the goal and a terminal penalty of -10 points if it is in the lava.

V-B SLIP Model Problems

Next, we apply the NLG variant of our algorithm to the SLIP model [40, 41, 42], which is depicted in Fig. 1. The SLIP model is a common model in robotics for prototyping legged locomotion control strategies. It consists of a single leg whose lateral motion is derived from a spring/piston combination. At each touchdown, the state of the robot is given by (x, θ, ṙ, θ̇), where x is the displacement of the head from the origin, θ is the current touchdown angle, and ṙ and θ̇ are the radial and angular velocities, respectively. The control input provided to the system is the change in touchdown angle at the next touchdown. The additional parameters are the head mass m, the spring constant k, gravitational acceleration g, and leg length l. Despite its simplicity, the return map at touchdown eludes a closed-form description, so numerical integration by MATLAB’s ode45 is used to compute and linearize the return map.

The algorithm’s goal is to find a trajectory that, after three hops, places the head of the robot at a specified goal position. This experiment is reminiscent of a set of classical experiments in cognitive psychology that examined the cognitive information used by humans for foot placement while running [43, 44]. Our NLG algorithm was run with quadratic control cost matrices for all t and a terminal state cost given by the squared distance of the robot’s head from the goal position. The initial state distribution was Gaussian with mean in the vicinity of a fixed point of the return map, and the value of β was selected by the procedure described at the beginning of this section.

The results for our simulation are shown in Fig. 4. The algorithm is compared with two iLQG solutions, one given the correct measurement model and one given an incorrect measurement model. The former is a locally optimal solution to the problem due to the separation principle. However, when modeling error is introduced, the iLQG solution’s performance degrades rapidly. Meanwhile, the TRV-based control policy is notably more performant, with notably lower variance, despite this modeling error. In addition, the solution found by our algorithm confines the TRVs to a one-dimensional subspace for all t. Therefore, the online estimator need only track a single TRV corresponding to this subspace.

Fig. 3: This figure summarizes the outcome of 500 simulations of the Lava Problem with different control strategies. Each controller used a Bayesian filter to track the current belief distribution. The exact MDP solution (blue) applied the control input corresponding to its maximum-likelihood estimate (MLE) of the state. The TRV solutions sampled from the conditional distributions corresponding to their stochastic control policies, given the MLE from the current belief distribution over either the state space (red) or the TRV space (green).
Fig. 4: This figure summarizes the outcome of 500 simulations of the SLIP Problem with different control strategies. Measurement covariance matrices were randomly sampled. For the iLQG control policy, a Kalman filter was used to track the current state estimate, and the control corresponding to the MLE state was applied. For the TRV policy, a Kalman filter was maintained on the TRVs, and the control input corresponding to the MLE TRV state was applied.

VI Conclusion

We presented an algorithmic approach for task-driven estimation and control for robotic systems. In contrast to prevalent approaches for control that rely on accurately estimating the full state of the system, our approach automatically synthesizes a set of task-relevant variables (TRVs) that are sufficient for achieving the task. Our key insight is to pose the problem of finding TRVs as an optimization problem using the theory of information bottlenecks. We solve this problem offline in order to identify TRVs and a control policy based on the TRVs. Our online approach estimates the TRVs in order to apply the policy. Our theoretical results suggest that the task-driven approach affords robustness to unmodeled measurement uncertainty. This is validated using thorough numerical simulations, which include a SLIP model running to a target location. Our simulation results also demonstrate the ability of our approach to find highly compressed TRVs (e.g., a one-dimensional TRV for the SLIP model).

Challenges and Future Work. On the algorithmic front, we plan to develop approaches that directly minimize the RHS of (3) instead of a linear approximation of it. We expect that this may lead to improved robustness (as suggested by Theorem II.1). On the practical front, we plan to implement our approach on a hardware platform that mimics the gaze heuristic example and on other examples including navigation problems (where the full state representation includes a description of the environment, e.g., in terms of an occupancy map). Perhaps the most exciting future direction is to explore active versions of our approach where the control policy seeks to actively minimize task-relevant uncertainty in contrast to current approaches to active perception (e.g., based on belief space planning) that attempt to minimize uncertainty associated with the full state.

We believe that the approach presented in this paper along with the indicated future directions represent an important step towards developing a principled and general algorithmic framework for task-driven estimation and control.

This appendix provides proofs for the main theorems stated in this article.

-A Proof of Theorem III.1

The first step of this derivation is to determine the structure of the value function V_t. Taking the functional derivative of the Lagrangian with respect to the state marginal p_t and invoking the FONC identifies the Lagrange multipliers λ_t with the cost-to-go function. Next, we repeat the process for the decision variable q_t(x̃_t | x_t). The functional derivative of the Lagrangian with respect to q_t involves an additional multiplier μ_t(x_t) corresponding to the normalization constraint on q_t. Invoking the FONC again and solving for q_t(x̃_t | x_t) yields the Boltzmann form in Eq. (9). Since q_t(x̃_t | x_t) is a probability distribution and must be normalized, the multiplier μ_t(x_t) is determined by the partition function Z_t(x_t).