I Introduction
Stateoftheart techniques for controlling robotic systems typically rely heavily on accurately estimating the full state of the system and maintaining rich geometric representations of their environment. For example, a common approach to navigation is to build a dense occupancy map produced by scanning the environment and to use this map for planning and control. Similarly, control for walking or running robots typically involves estimating the full state of the robot (e.g., joint angles, velocities, and position relative to a goal location). However, such representations are often overly detailed when compared to a taskdriven representation.
One example of a taskdriven representation is the “gaze heuristic” from cognitive psychology
[1, 2, 3]. When attempting to catch a ball, an agent can estimate the ball’s position and velocity, model how it will evolve in conjunction with environmental factors like wind, integrate the pertinent differential equations, and plan a trajectory in order to arrive at the ball’s final location. In contrast, cognitive psychology studies have shown that humans use a dramatically simpler strategy that entails maintaining the angle that the human’s gaze makes with the ball at a constant value. This method reduces a number of hardtomonitor variables (e.g., wind speed) into a single easilyestimated variable. Modulating this variable alone results in accomplishing the task.The gaze heuristic example highlights the two primary advantages of using a taskdriven representation. First, a control policy that uses such a representation is more efficient to employ online since fewer variables need to be estimated. Second, since only a few prominent variables need to be estimated, fewer sources of measurement uncertainty result in a more robust policy. While one can sometimes manually design taskdriven representations for a given task, we currently lack a principled theoretical and algorithmic framework for automatically synthesizing such representations. The goal of this paper is to develop precisely such an algorithmic approach.
Statement of Contributions. The main technical contribution of this paper is to formulate the synthesis of taskdriven representations as an optimization problem using information bottleneck theory [4]. We present offline algorithms that encode the full state of the system into a set of taskrelevant variables (TRVs) and simultaneously identify a performant policy (restricted to be a function of the TRVs) using novel iterative algorithms that exploit the structure of this optimization problem in a number of dynamical settings including discretestate, linearGaussian, and nonlinear systems. We present online algorithms for estimating the TRVs in order to apply the control policy. We demonstrate that our approach yields policies that are robust to unmodeled measurement uncertainty both theoretically (using results from the theory of risk metrics) and in a number of simulation experiments including running using a springloaded inverted pendulum model (Figure 1).
Ia Related Work
By far the most common approach in practice for controlling robotic systems with nonlinear dynamics and partially observable state is to independently design an estimator for the full
state (e.g., a Kalman filter
[5]) and a controller that assumes perfect state information (e.g., designed using robust control techniques such as Hinfinity control [6], sliding mode control [7, 8], passivitybased control [9, 10], or Lyapunovbased control [8, 10]). While this strategy is optimal for LinearQuadraticGaussian (LQG) problems due to the separation principle [11], it can produce a brittle system due to errors in the state estimate for nonlinear systems (since the separation principle does not generally hold in this setting). We demonstrate that our taskdriven approach affords significant robustness when compared to approaches that perform full state estimation and control assuming the separation principle (see Section V for numerical examples). Moreover, in contrast to traditional robust estimation and control techniques, our approach does not rely on explicit models of measurement uncertainty. We demonstrate that robustness can be achieved implicitly as a byproduct of taskdriven representations.Historically, the work on designing information constrained controllers has been pursued within the networked control theory literature [12, 13, 14]. Recently, the optimal codesign of dataefficient sensors and performant controllers has also been explored beyond network applications. One set of approaches — inspired by the cognitive psychology concept of bounded rationality [15, 16] — is to limit the information content of a control policy measured with respect to a default stochastic policy [17, 18, 19, 20, 21]. Another set of examples comes from the sensor selection problem in robotics, which involves selecting a minimal number of sensors or features to use for estimating the robot’s state [22, 23, 24]. While the work highlighted above shares our goal of designing “minimalistic” (i.e., informationally frugal) controllers, our approach here is fundamentally different. In particular, our goal is to design taskdriven representations that form abstractions which are sufficient for the purpose of control. Online estimation and control is performed purely based on this representation without resorting to estimating the full state of the system (in contrast to the work highlighted above, which either assumes access to the full state or designs estimators for it).
A number of previous authors consider the construction of minimalinformation representations. In information theory, this approach is typically referred to as the Information Bottleneck (IB) [4]. Recently, these ideas have been coopted for designing control policies. In [25], a learningbased approach is suggested to find minimalinformation state representations and control policies which use them. Our work differs in that we provide analytic (i.e., modelbased) methods for finding such representations and policies and we explicitly characterize the resulting robustness. Another branch of work considers the construction of LQG policies that achieve a performance goal while minimizing an informationtheoretic quantity such as the mutual information between inputs and outputs [26, 27] or Massey’s directed information [28, 29]. In contrast to this work, we handle nonlinear systems and also present robustness results for the resulting controllers.
The work on actionable information in vision [30, 31] attempts to find invariant and taskrelevant representations for visual tasks in the form of minimal complete representations (minimal sufficient statistics of a full or “complete” representation). While highly ambitious in scope, this work has largely been devoted to studying visual decision tasks (e.g., recognition, detection, categorization). The algorithmic approach taken in [30, 31] is thus tied to the specifics of visual problems (e.g., designing visual feature detectors that are invariant to nuisance factors such as contrast, scale, and translation). Our goals and technical approach here are complementary. We do not specifically target visual decision problems; instead, we seek to develop a general framework that is applicable to a broad range of robotic control problems and allows us to automatically synthesize taskdriven representations (without manually designing feature detectors).
Ii TaskDriven Representations and Controllers
Our goal in this section is to formally define the notion of a “taskdriven representation” and formulate an optimization problem that automatically synthesizes such representations. We focus on control tasks in this paper and assume that a task is defined in terms of a (partially observable) Markov decision process ((PO)MDP). Let
denote the full state of the system and denote the control input at time step . Here the state space and input space may either be discrete or continuous. We assume that the dynamics of the system are known and given by potentially timevarying transitions . Let be a sequence of cost functions that encode the robot’s desired behavior. The robot’s goal is to identify a control policy that minimizes the expected value of these cost functions:The key idea behind our technical approach is to define a principled informationtheoretic notion of minimality of representations for tasks. Figure 1 illustrates the main idea for doing this. Here are taskrelevant variables (TRVs) that constitute a representation; one can think of the representation as a “sketch” of the full state that the robot uses for computing control inputs via . Ideally, such a representation filters out all the information from the state that is not relevant for computing control inputs. A minimalistic representation should thus create a bottleneck between the full state and the control input. We make this notion precise by leveraging the theory of information bottlenecks [4] and finding a stochastic mapping that minimizes the mutual information between and
(1) 
while ensuring that the resulting control policy performs well (i.e., achieves low expected cost). Here,
represents the KullbackLeibler divergence between two distributions. Intuitively, minimizing the mutual information thus corresponds to designing TRVs
that are as independent of the state as possible. The map is thus “squeezing out” all the irrelevant information from the state while only maintaining enough information for choosing good control inputs. Such a representation therefore formalizes our notion of a taskdriven representation.Formally, we pose the problem of finding taskdriven representations as the following optimization problem, which we refer to as :
We note that the unconstrained problem is equivalent to a constrained version where the mutual information is minimized subject to a constraint on the expected cost.
So far, we have limited our discussion to the case where the robot has access to the full state of the system. However, the real benefit of the taskdriven perspective is evident in the partially observable setting, where the robot only indirectly observes the state of the system via sensor measurements , where
. We denote the probability of observing a particular measurement in a given state by
. The prevalent approach for handling such settings is to design an estimator for the full state . Our key idea here is to perform estimation for the TRVs instead of . Specifically, our overall approach has two phases:Offline. Synthesize the maps by solving .
Online. Estimate TRVs current using sensor measurements and use this estimate to compute control inputs via .
Perhaps the clearest benefit of our approach is the fact that the representation formed by may be significantly lowerdimensional than the full state of the system; this can thus lead to significant reductions in online computations. Another advantage of the taskdriven approach is robustness to estimation errors. To see this, let
denote the joint distribution over the state, representation, and inputs at time
that results when we apply the policy obtained by solving Problem in the fully observable setting. Now, let denote the distribution for the partially observable setting, i.e., when we estimate online using the robot’s sensor measurements and use this estimate to compute control inputs.Theorem II.1
Let be the distribution resulting from any estimator that satisfies the following condition:
(2)  
Then, we have the following upper bound on the total expected cost:
(3) 
where is the entropic risk metric [32, Example 6.20]:
(4) 
By the wellknown DonskerVaradhan change of measure formula [33, Theorem 2.3.2], we have:
(5) 
Then, using condition (2) and inequality (II), we obtain:
(6)  
(7) 
Summing over time gives us the desired result.
Intuitively, this theorem shows that any estimator for (in the partially observable setting) that results in a distribution that is “close enough” to the distribution in the fully observable case (i.e., when their KL divergence is less than times the KL divergence between and the joint distribution over and that results when is assumed to be independent of ), the expected cost of the controller in the partially observable case is guaranteed to be bounded by the right hand side (RHS) of (3). Notice that this RHS is almost identical to the cost function of . In particular, the expected value operator is a linearization of the entropic risk metric . Thus, by solving , we are minimizing (a linear approximation of) an upper bound on the expected cost even when our state is only partially observable (as long as our estimator for ensures condition (2)). Once has been solved, we can use Theorem II.1 to obtain a robustness bound by evaluating the RHS of (3).
Iii Algorithms for Synthesizing Representations
In this section, we outline our approach for solving offline. We note that is nonconvex in general. While one could potentially apply general nonconvex optimization techniques such as gradientbased methods, computing gradients quickly becomes computationally expensive due to the large number of decision variables involved (even in the setting with finite state and action spaces, we have decision variables corresponding to and for every possible value of and at every time step). Our key insight here is to exploit the structure of to propose an efficient iterative algorithm in three different dynamical settings: discrete, linearGaussian, and nonlinearGaussian. In each setting, the algorithm iterates over the following three steps:

Fix and solve for using the forward dynamical equations.

Fix and solve for by satisfying necessary conditions for optimality.

Fix and solve for by solving a convex optimization problem.
In our implementation, we iterate over these steps until convergence (or until an iteration limit is reached). While we cannot currently guarantee convergence, our iterative procedure is extremely efficient (since all the computations above can be performed either in closedform or via a convex optimization problem) and produces good solutions in practice (see Section V). We describe instantiations of each step for three different dynamical settings below.
Iiia Discrete Systems
In order to solve Step 1, note that the forward dynamics of the system for fixed are given by:
(8) 
The Lagrangian functional for is where
where are Lagrange multipliers. The Lagrange multipliers for the normalization constraints on the distribution variables are omitted since they do not contribute to the analysis. The following proposition demonstrates the structure of using the firstorder necessary condition (FONC) for optimality [34].
Theorem III.1
A necessary condition for to be optimal for is that
(9) 
where and
(10)  
(11) 
See Section A.
This proposition demonstrates that is a Boltzmann distribution with as the partition function and playing the role of inverse temperature. In order to solve Step 2, we simply evaluate the closedform expression (9).
It is easily verified that the function is the costtogo function for . Thus can be written as a dynamic programing problem using :
This allows us to solve Step 3. In particular, when are fixed,
is a linear programming problem in
and can thus be solved efficiently.IiiB LinearGaussian Systems with Quadratic Costs
A discretetime linearGaussian (LG) system is defined by the transition system
(12) 
where , and . We assume that the cost function is quadratic:
with . We explicitly parameterize the TRVs and control policy as:
(13) 
where the random variable
is additive process noise. This structure dictates that are Gaussians respectively, with . This allows for both the forward dynamics (Step 1) and Eq. 9 (Step 2) to be computed in closed form. The latter is derived in the following theorem.Theorem III.2
Define the notational shorthand . For the LG system, the necessary condition Eq. 9 is equivalent to the conditions
(14)  
where the costtogo function is the recursively defined quadratic function with values and
(15) 
See Section B.
Finally, when are fixed, is the unconstrained convex quadratic program where
(16)  
This program can be solved very efficiently (e.g., using activeset or interior point methods) [35].
IiiC NonlinearGaussian Systems
When the dynamics are nonlinearGaussian (NLG), i.e. when Eq. 12 is changed to
(17) 
minimizing is challenging due to no longer being Gaussian. We tackle this challenge by leveraging our results for the LG setting and adapting the iterative Linear Quadratic Regulator (iLQR) algorithm [36, 37, 38].
Given an initial nominal trajectory , the matrices are produced by linearizing along the trajectory. The pair describes the dynamics of a perturbation in the neighborhood of for a perturbed control input in the neighborhood of :
(18) 
We compute (a quadratic approximation of) the perturbation costs subject to Eq. 18. We can then apply the solution method outlined in Section IIIB to search for an optimal . We then update the nominal state and input trajectories to and repeat the entire process until the nominal trajectory converges.
Iv Online Estimation and Control
Once the taskdriven representation and policy have been synthesized offline, we can leverage it for computationallyefficient and robust online control. The key idea behind our online approach is to use the robot’s sensor measurements to only estimate the TRVs . Once has been estimated, the control policy can be applied. This is in stark contrast to most prevalent approaches for controlling robotic systems today, which aim to accurately estimate the full state . We describe our online estimation approach in more detail below.
We maintain a belief distribution over the TRVspace and update it at each time using a Bayes filter [5]. Specifically, we perform two steps every :

Process Update. The system model is used to update the beliefstate to the current time step: .

Measurement Update. The measurement model is used to integrate the observation into the beliefstate: .
To apply this filter, the distributions and are precomputed offline
. Bayes’ theorem states
. Consequently,In the discrete case, the above equations can be evaluated directly. In the LG case, the measurement and update steps take the form of the traditional Kalman updates applied to an LG system induced on the TRVspace by and the system dynamics. This structure is elucidated in the following theorem.
Theorem IV.1
The measurement and process updates for a Bayesian filter on the TRVs in the LG case are the Kalman filter measurement and process updates for the LG system
(19)  
(20) 
where
See Section C.
In the NLG case, we use an extended Kalman filter that maintains a belief over (the TRV corresponding to the perturbed system). Specifically, we use the linearized dynamics of and apply our approach for LG systems.
Given a belief , we compute the control input by sampling , where is the maximum likelihood TRV .
V Examples
In this section, we demonstrate the efficacy of our taskdriven control approach on a discretestate scenario as well as on a springloaded inverted pendulum (SLIP) model. To select the value of used in each scenario, the algorithm is run 10 times with set to evenly spaced values in a listed interval. The controller selected is the one with the lowest value of subject to the expected performance of the controller being below a particular value. The intuition behind this choice is that it selects the controller that has removed the most state information from the TRVs while maintaining a prescribed level of performance.
Va Lava Problem
The first example (Fig. 2), adapted from [39] demonstrates a simple setting where the separation principle does not hold. If the robot’s belief distribution is while residing in state 4, the optimal action corresponding to the maximumlikelihood estimate (MLE) of the state is to move right, while the true optimal action is to move left.
Our algorithm was run with a threedimensional TRV space and for 30 iterations. The smallest value of found to satisfy the constraint that the expected cost was negative was . In each of the trials conducted, the robot’s initial condition was sampled from the aforementioned belief distribution. Online, the robot was modeled to have a faulty sensor that only localizes to the correct state with a probability of 0.5, with a uniformly random incorrect state returned the rest of the time.
Fig. 3 depicts our algorithm’s performance compared with an approach that assumes the separation principle, i.e., solves the MDP (assuming perfect state information) and then performs MLE for the state online. Interestingly, our algorithm produces a deterministic policy that moves the robot left three times — which ensures it is in state 1 — then moves right twice and remains in the goal. With this policy, it is impossible for the robot to end up in the lava state, producing
a more performant, lower variance trajectory than the separation principlebased solution under measurement noise
. The algorithm arrives at a deterministic solution because, at low values of , the Boltzmann distribution in Eq. 9 is almost uniform — thereby requiring to be essentially openloop to provide an effective policy. Note that despite the policy being deterministic, there is still a distribution over the costs due to the random initial condition.VB SLIP Model Problems
Next, we apply the NLG variant of our algorithm to the SLIP model [40, 41, 42], which is depicted in Fig. 1. The SLIP model is a common model in robotics for prototyping legged locomotion control strategies. It consists of a single leg whose lateral motion is derived from a spring/piston combination. At each touchdown, the state of the robot is given by where is displacement of the head from the origin, is the current touchdown angle, and are the radial and angular velocities respectively. The control input provided to the system is , the change in touchdown angle at the next touchdown. The additional parameters are the head mass, , the spring constant, , gravity , and leg length . Despite its simplicity, the return map at touchdown eludes a closedform description, so numerical integration by MATLAB’s ode45 is used to compute and linearize the return map numerically.
The algorithm’s goal is to find a trajectory that, after three hops, places the head of the robot at . This experiment is reminiscent of a set of classical experiments in cognitive psychology that examined the cognitive information used by humans for foot placement while running [43, 44]. Our NLG algorithm was run with , control cost matrices for all , and a terminal state cost as the squared distance of the robot from . The initial state distribution was Gaussian with mean , which is in the vicinity of a fixed point of the return map, and covariance , and the process covariance matrix was . The selected value was .
The results for our simulation are shown in Fig. 4. The algorithm is compared with two iLQG solutions, one presented with the correct measurement model and one presented with an incorrect measurement model. The former is a locally optimal solution to the problem due to the separation principle. However, when modeling error is introduced, the iLQG solution’s performance degrades rapidly. Meanwhile, the TRVbased control policy is notably more performant at a notably lower variance despite this modeling error. In addition, the solution found by our algorithm satisfies for all . Therefore, the online estimator needs to only track a single TRV corresponding to this subspace.
Vi Conclusion
We presented an algorithmic approach for taskdriven estimation and control for robotic systems. In contrast to prevalent approaches for control that rely on accurately estimating the full state of the system, our approach automatically synthesizes a set of taskrelevant variables (TRVs) that are sufficient for achieving the task. Our key insight is to pose the problem of finding TRVs as an optimization problem using the theory of information bottlenecks. We solve this problem offline in order to identify TRVs and a control policy based on the TRVs. Our online approach estimates the TRVs in order to apply the policy. Our theoretical results suggest that the taskdriven approach affords robustness to unmodeled measurement uncertainty. This is validated using thorough numerical simulations, which include a SLIP model running to a target location. Our simulation results also demonstrate the ability of our approach to find highly compressed TRVs (e.g., a onedimensional TRV for the SLIP model).
Challenges and Future Work. On the algorithmic front, we plan to develop approaches that directly minimize the RHS of (3) instead of a linear approximation of it. We expect that this may lead to improved robustness (as suggested by Theorem II.1). On the practical front, we plan to implement our approach on a hardware platform that mimics the gaze heuristic example and on other examples including navigation problems (where the full state representation includes a description of the environment, e.g., in terms of an occupancy map). Perhaps the most exciting future direction is to explore active versions of our approach where the control policy seeks to actively minimize taskrelevant uncertainty in contrast to current approaches to active perception (e.g., based on belief space planning) that attempt to minimize uncertainty associated with the full state.
We believe that the approach presented in this paper along with the indicated future directions represent an important step towards developing a principled and general algorithmic framework for taskdriven estimation and control.
This appendix provides proofs for the main theorems stated in this article.
a Proof of Theorem iii.1
The first step of this derivation is to determine the structure of the value function, . The functional derivative of with respect to is
(21)  
Invoking the FONC yields
(22) 
Next, we repeat the process for the decision variable . The functional derivative of is
(23) 
where is the Lagrange multiplier corresponding to the normalization constraint of . Invoking the FONC again and solving for yields
(24) 
Since
is a probability distribution and must be normalized, it is the case that
where
Comments
There are no comments yet.