Efficient Reinforcement Learning for High Dimensional Linear Quadratic Systems

03/24/2013 ∙ by Morteza Ibrahimi, et al. ∙ Stanford University 0

We study the problem of adaptive control of a high dimensional linear quadratic (LQ) system. Previous work established the asymptotic convergence to an optimal controller for various adaptive control schemes. More recently, for the average cost LQ problem, a regret bound of O(√(T)) was shown, apart form logarithmic factors. However, this bound scales exponentially with p, the dimension of the state space. In this work we consider the case where the matrices describing the dynamic of the LQ system are sparse and their dimensions are large. We present an adaptive control scheme that achieves a regret bound of O(p √(T)), apart from logarithmic factors. In particular, our algorithm has an average cost of (1+) times the optimum cost after T = (p) O(1/^2). This is in comparison to previous work on the dense dynamics where the algorithm requires time that scales exponentially with dimension in order to achieve regret of times the optimal cost. We believe that our result has prominent applications in the emerging area of computational advertising, in particular targeted online advertising and advertising in social networks.

READ FULL TEXT VIEW PDF
POST COMMENT

Comments

There are no comments yet.

Authors

page 1

page 2

page 3

page 4

This week in AI

Get the week's most popular data science and artificial intelligence research sent straight to your inbox every Saturday.

1 Introduction

In this paper we address the problem of adaptive control of a high dimensional linear quadratic (LQ) system. Formally, the dynamics of a linear quadratic system are given by

(1)

where is the control (action) at time , is the state at time , is the cost at time , and

is a sequence of random vectors in

with i.i.d. standard Normal entries. The matrices and are positive semi-definite (PSD) matrices that determine the cost at each step. The evolution of the system is described through the matrices and . Finally by high dimensional system we mean the case where .

A celebrated fundamental theorem in control theory asserts that the above LQ system can be optimally controlled by a simple linear feedback if the pair is controllable and the pair is observable. The optimal controller can be explicitly computed from the matrices describing the dynamics and the cost. Throughout this paper we assume that controllability and observability conditions hold.

When the matrix is unknown, the task is that of adaptive control, where the system is to be learned and controlled at the same time. Early works on the adaptive control of LQ systems relied on the certainty equivalence principle [2]. In this scheme at each time the unknown parameter

is estimated based on the observations collected so far and the optimal controller for the estimated system is applied. Such controllers are shown to converge to an optimal controller in the case of minimum variance cost, however, in general they may converge to a suboptimal controller

[11]. Subsequently, it has been shown that introducing random exploration by adding noise to the control signal, e.g., [14], solves the problem of converging to suboptimal estimates.

All the aforementioned work have been concerned with the asymptotic convergence of the controller to an optimal controller. In order to achieve regret bounds, cost-biased parameter estimation [12, 8, 1], in particular the optimism in the face of uncertainty (OFU) principle [13] has been shown to be effective. In this method a confidence set is found such that

with high probability. The system is then controlled using the

most optimistic parameter estimates, i.e., with the smallest optimum cost. The asymptotic convergence of the average cost of OFU for the LQR problem was shown in [6]. This asymptotic result was extended in [1] by providing a bound for the cumulative regret. Assume and for a control policy define the average cost

(2)

Further, define the cumulative regret as

(3)

where is the cost of control policy at time and is the optimal average cost. The algorithm proposed in [1] is shown to have cumulative regret where is hiding the logarithmic factors. While no lower bound was provided for the regret, comparison with the multi-armed bandit problem where a lower bound of was shown for the general case [9], suggests that this scaling with time for the cumulative regret is optimal.

The focus of [1] was on scaling of the regret with time horizon . However, the regret of the proposed algorithm scales poorly with dimension. More specifically, the analysis in [1] proves a regret bound of . The current paper focuses on (many) applications where the state and control dimensions are much larger than the time horizon of interest. A powerful reinforcement learning algorithm for these applications should have regret which depends gracefully on dimension. In general, there is little to be achieved when as the number of degrees of freedom () is larger than the number of observations () and any estimator can be arbitrary inaccurate. However, when there is prior knowledge about the unknown parameters , e.g., when are sparse, accurate estimation can be feasible. In particular, [3] proved that under suitable conditions the unknown parameters of a noise driven system (i.e., no control) whose dynamics are modeled by linear stochastic differential equations can be estimated accurately with as few as samples. However, the result of [3] is not directly applicable here since for a general feedback gain even if and are sparse, the closed loop gain need not be sparse. Furthermore, system dynamics would be correlated with past observations through the estimated gain matrix . Finally, there is no notion of cost in [3] while here we have to obtain bounds on cost and its scaling with p. In this work we extend the result of [3] by showing that under suitable conditions, unknown parameters of sparse high dimensional LQ systems can be accurately estimated with as few as observations. Equipped with this efficient learning method, we show that sparse high dimensional LQ systems can be adaptively controlled with regret .

To put this result in perspective note that even when , the expected cost at time is due to the noise. Therefore, the cumulative cost at time is bounded as . Comparing this to our regret bound, we see that for , the cumulative cost of our algorithm is bounded by times the optimum cumulative cost. In other words, our algorithm performs close to optimal after steps. This is in contrast with the result of [1] where the algorithm needs steps in order to achieve regret of times the optimal cost.

Sparse high dimensional LQ systems appear in many engineering applications. Here we are particularly motivated by an emerging field of applications in marketing and advertising. The use of dynamical optimal control models in advertising has a history of at least four decades, cf. [17, 10]

for a survey. In these models, often a partial differential equation is used to describe how advertising expenditure over time translates into sales. The basic problem is to find the advertising expenditure that maximizes the net profit. The focus of these works is to model the temporal dynamics of the advertising expenditure (the control variable) and the variables of interest (sales, goodwill level, etc.). There also exists a rich literature studying the

spatial interdependence of consumers’ and firms’ behavior to devise marketing schemes [7]. In these models space can be generalized beyond geographies to include notions like demographies and psychometry.

Combination of spatial interdependence and temporal dynamics models for optimal advertising was also considered [16, 15]. A simple temporal dynamics model is extended in [15] by allowing state and control variables to have spatial dependence and introducing a diffusive component in the controlled PDE which describes the spatial dynamics. The controlled PDE is then showed to be equivalent to an abstract linear control system of the form

(4)

Both [15] and [7] are concerned with the optimal control and the interactions are either dictated by the model or assumed known. Our work deals with a discrete and noisy version of (4) where the dynamics is to be estimated but is known to be sparse. In the model considered in [15] the state variable lives in an infinite dimensional space. Spatial models in marketing [7] usually consider state variables which have a large number of dimensions, e.g., number of zip codes in the US (K). High dimensional state space and control is a recurring theme in these applications.

In particular, with the modern social networks customers are classified in a highly granular way, potentially with each customer representing his own class. With the number of classes and complexity of their interactions, its unlikely that we could formulate an effective model a priori for how classes interact. Further, the nature of these interactions change over time with the changing landscape of Internet services and information available to customers. This makes it important to efficiently learn from real-time data about the nature of these interactions.

Notation: We bundle the unknown parameters into one variable where and call it the interaction matrix. For , and , we denote by the standard p-norm and by the corresponding operator norm. For , represents the row of matrix . For , is the submatrix of formed by the rows in and columns in . For a set denote by its cardinality. For an integer denote by the set .

2 Algorithm

Our algorithm employs the Optimism in the Face of Uncertainty (OFU) principle in an episodic fashion. At the beginning of episode the algorithm constructs a confidence set which is guaranteed to include the unknown parameter with high probability. The algorithm then chooses that has the smallest expected cost as the estimated parameter for episode and applies the optimal control for the estimated parameter during episode .

The confidence set is constructed using observations from the last episode only but the length of episodes are chosen to increase geometrically allowing for more accurate estimates and shrinkage of the confidence set by a constant factor at each episode. The details of each step and the pseudo code for the algorithm follows.

Constructing confidence set: Define to be the start of episode with . Let be the controller that has been chosen for episode . For the system is controlled by and the system dynamics can be written as At the beginning of episode , first an initial estimate is obtained by solving the following convex optimization problem for each row separately:

(5)

where

(6)

and . The estimator is known as the LASSO estimator. The first term in the cost function is the normalized negative log likelihood which measures the fidelity to the observations while the second term imposes the sparsity constraint on . is the regularization parameter.

For define the distance as

(7)

where is the row of the matrix . It is worth noting that for -sparse matrices with constant, this distance does not scale with or . In particular, if the absolute value of the elements of and are bounded by then .

Having the estimator the algorithm constructs the confidence set for episode as

(8)

where is an input parameter to the algorithm. For any fixed , by choosing judiciously we ensure that with probability at least , , for all . (see Theorem 3.2).

0:  Precision , failure probability , initial identifiable controller ,
0:  Series of estimates , confidence sets and controllers
1:  Let , and
Let , for , and .
2:  for  do
3:     Apply the control until and observe the trace .
4:     Calculate the estimate from (5) and construct the confidence set .
5:     Calculate from (9) and set .
Algorithm: Reinforcement learning algorithm for LQ systems.

Design of the controller: Let be the minimum expected cost if the interaction matrix is and denote by the optimal controller that achieves the expected cost . The algorithm implements OFU principle by choosing, at the beginning of episode , an estimate such that

(9)

The optimal control corresponding to is then applied during episode , i.e., for . Recall that for , the optimal controller is given through the following relations

The pseudo code for the algorithm is summarized in the table.

3 Main Results

In this section we present performance guarantees in terms of cumulative regret and learning accuracy for the presented algorithm. In order to state the theorems, we first need to present some assumptions on the system.

Given and , define and let be a solution to the following Lyapunov equation

(10)

If the closed loop system is stable then the solution to the above equation exists and the state vector has a Normal stationary distribution with covariance .

We proceed by introducing an identifiable regulator.

Definition 3.1.

For a -sparse matrix and , define and let where is the solution of Eq. (10) with . Define to be identifiable (with respect to ) if it satisfies the following conditions for all .

The first condition simply states that if the system is controlled using the regulator then the closed loop autonomous system is asymptotically stable. The second and third conditions are similar to what is referred to in the sparse signal recovery literature as the mutual incoherence or irreprepresentable conditions. Various examples and results exist for the matrix families that satisfy these conditions [18]. Let be the set of indices of the nonzero entries in a specific row of . The second condition states that the corresponding entries in the extended state variable are sufficiently distinguishable from each other. In other words, if the trajectories corresponding to this group of state variables are observed, non of them can be well approximated as a linear combination of the others. The third condition can be thought of as a quantification of the first vs. higher order dependencies. Consider entry in the extended state variable. Then, the dynamic of is directly influenced by entries . However they are also influenced indirectly by other entries of . The third condition roughly states that the indirect influences are sufficiently weaker than the direct influences. There exists a vast literature on the applicability of these conditions and scenarios in which they are known to hold. These conditions are almost necessary for the successful recovery by relaxation. For a discussion on these and other similar conditions imposed for sparse signal recovery we refer the reader to [19] and [20] and the references therein.

Define . Our first result states that the system can be learned efficiently from its trajectory observations when it is controlled by an identifiable regulator.

Theorem 3.2.

Consider the LQ system of Eq. (1) and assume is -sparse. Let where is a identifiable regulator with respect to and define . Let denote the number of samples of the trajectory that is observed. For any , there exists such that, if

(11)

then the -regularized least squares solution of Eq. (5) satisfies with probability larger than . In particular, this is achieved by taking .

Our second result states that equipped with an efficient learning algorithm, the LQ system of Eq. (1) can be controlled with regret under suitable assumptions.

Define an -neighborhood of as . Our assumption asserts the identifiably of for close to .

Assumption: There exist such that for all , is identifiable w.r.t. and

Also define

Note that , since .

Theorem 3.3.

Consider the LQ system of Eq. (1). For some constants and , assume that an initial identifiable regulator is given. Further, assume that for any , is identifiable. Then, with probability at least the cumulative regret of Algorithm (cf. the table) is bounded as

(12)

where is hiding the logarithmic factors.

4 Analysis

4.1 Proof of Theorem 3.2

To prove theorem 3.2 we first state a set of sufficient conditions for the solution of the -regularized least squares to be within some distance, as defined by , of the true parameter. Subsequently, we prove that these conditions hold with high probability.

Define and let be the matrix containing the Gaussian noise realization. Further let the denote the row of .

Define the normalized gradient and Hessian of the likelihood function (6) as

(13)

The following proposition, a proof of which can be found in [20], provides a set of sufficient conditions for the accuracy of the -regularized least squares solution.

Proposition 4.1.

Let be the support of with , and be defined per Definition 3.1. Assume there exist and such that

(14)

For any if the following conditions hold

(15)
(16)

the -regularized least square solution (5) satisfies .

In the sequel, we prove that the conditions in Proposition 4.1 hold with high probability given that the assumptions of Theorem 3.2 are satisfied. A few lemmas are in order proofs of which are deferred to the Appendix.

The first lemma states that concentrates in infinity norm around its mean of zero.

Lemma 4.2.

Assume and let . Then, for any and

(17)

To prove the conditions in Eq. (16) we first bound in the following lemma the absolute deviations of the elements of from their mean , i.e., .

Lemma 4.3.

Let , , and  . Then,

(18)

The following corollary of Lemma 4.3 bounds for .

Corollary 4.4.

Let , , , and . Then,

(19)

The proof of Corollary 4.4 is by applying union bound as

(20)
Proof of Theorem 3.2.

We show that the conditions given by Proposition 4.1 hold. The conditions in Eq. (14) are true by the assumption of identifiability of with respect to . In order to make the first constraint on imply the second constraint on , we assume that , which is ensured to hold if . By Lemma 4.2, if

(21)

Requiring , we obtain

(22)

The conditions on can also be aggregated as . By Corollary 4.4, if

(23)

Merging the conditions in Eq. (22) and (23) we conclude that the conditions in Proposition 4.1 hold with probability at least if

(24)

Which finishes the proof of Theorem 3.2. ∎

4.2 Proof of Theorem 3.3

The high-level idea of the proof is similar to the proof of main Theorem in [1]. First, we give a decomposition for the gap between the cost obtained by the algorithm and the optimal cost. We then upper bound each term of the decomposition separately.

4.2.1 Cost Decomposition

Writing the Bellman optimality equations [5, 4] for average cost dynamic programming, we get

where is the estimate used at time , , and is the -field generated by the variables . Notice that the left-hand side is the average cost occurred with initial state  [5, 4]. Therefore,

Consequently

(25)

where

(26)
(27)
(28)

4.2.2 Good events

We proceed by defining the following two events in the probability space under which we can bound the terms . We then provide a lower bound on the probability of these events.

4.2.3 Technical lemmas

The following lemmas establish upper bounds on .

Lemma 4.5.

Under the event , the following holds with probability at least .

(29)
Lemma 4.6.

Under the event , the following holds.

(30)
Lemma 4.7.

Under the event , the following holds with probability at least .

(31)
Lemma 4.8.

The following holds true.

(32)

Therefore, .

We are now in position to prove Theorem 3.3.

Proof (Theorem 3.3).

Using cost decomposition (Eq. (25)), under the event , we have

where the last inequality stems from the choice of by the algorithm (cf. Eq (9)) and the fact that , for all under the event . Hence,  . Now using the bounds on , we get the desired result. ∎

Acknowledgments

The authors thank the anonymous reviewers for their insightful comments. A.J. is supported by a Caroline and Fabian Pease Stanford Graduate Fellowship.

References

  • [1] Y. Abbasi-Yadkori and C. Szepesvári. Regret bounds for the adaptive control of linear quadratic systems. Proceeding of the 24th Annual Conference on Learning Theory, pages 1–26, 2011.
  • [2] Y. Bar-Shalom and E. Tse. Dual effect, certainty equivalence, and separation in stochastic control. Automatic Control, IEEE Transactions on, 19(5):494–500, 1974.
  • [3] J. Bento, M. Ibrahimi, and A. Montanari. Learning networks of stochastic differential equations. Advances in Neural Information Processing Systems 23, pages 172–180, 2010.
  • [4] D. Bertsekas. Dynamic Programming: Deterministic and Stochastic Models. Prentice-Hall, 1987.
  • [5] D. P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, 3rd edition, 2007.
  • [6] S. Bittanti and M. Campi. Adaptive control of linear time invariant systems: the “bet on the best” principle. Communications in Information and Systems, 6(4):299–320, 2006.
  • [7] E. Bradlow, B. Bronnenberg, G. Russell, N. Arora, D. Bell, S. Duvvuri, F. Hofstede, C. Sismeiro, R. Thomadsen, and S. Yang. Spatial models in marketing. Marketing Letters, 16(3):267–278, 2005.
  • [8] M. Campi. Achieving optimality in adaptive control: the “bet on the best” approach. In Decision and Control, 1997., Proceedings of the 36th IEEE Conference on, volume 5, pages 4671–4676. IEEE, 1997.
  • [9] V. Dani, T. Hayes, and S. Kakade. Stochastic linear optimization under bandit feedback. In Proceedings of the 21st Annual Conference on Learning Theory (COLT), 2008.
  • [10] G. Feichtinger, R. Hartl, and S. Sethi. Dynamic optimal control models in advertising: recent developments. Management Science, pages 195–226, 1994.
  • [11] L. Guo and H. Chen. The åstrom-wittenmark self-tuning regulator revisited and els-based adaptive trackers. Automatic Control, IEEE Transactions on, 36(7):802–812, 1991.
  • [12] P. Kumar and A. Becker.

    A new family of optimal adaptive controllers for markov chains.

    Automatic Control, IEEE Transactions on, 27(1):137–146, 1982.
  • [13] T. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in applied mathematics, 6(1):4–22, 1985.
  • [14] T. Lai and C. Wei. Least squares estimates in stochastic regression models with applications to identification and control of dynamic systems. The Annals of Statistics, 10(1):154–166, 1982.
  • [15] C. Marinelli and S. Savin. Optimal distributed dynamic advertising. Journal of Optimization Theory and Applications, 137(3):569–591, 2008.
  • [16] T. Seidman, S. Sethi, and N. Derzko. Dynamics and optimization of a distributed sales-advertising model. Journal of Optimization Theory and Applications, 52(3):443–462, 1987.
  • [17] S. Sethi. Dynamic optimal control models in advertising: a survey. SIAM review, pages 685–725, 1977.
  • [18] J. Tropp. Just relax: Convex programming methods for identifying sparse signals in noise. Information Theory, IEEE Transactions on, 52(3):1030–1051, 2006.
  • [19] M. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using-constrained quadratic programming (lasso). Information Theory, IEEE Transactions on, 55(5):2183–2202, 2009.
  • [20] P. Zhao and B. Yu. On model selection consistency of Lasso.

    The Journal of Machine Learning Research

    , 7:2541–2563, 2006.

Appendix A Proof of technical lemmas

a.1 Proof of Lemma 4.2

As before let , and . Further, for , , define to have all rows equal to zero except the row which is equal to . Define as,

(33)

and let

(34)
Lemma A.1.

Let denote the eigenvalue of and assume . Then,

(35)
(36)
(37)

We do not prove this lemma here and refer the reader to Lemma A.3 in [3].

Proof (Lemma 4.2).

The proof of this lemma follows closely the proof of Proposition 4.2 in [3] which we provide here for the reader’s convenience. Let be the vector obtained by stacking all the noise vectors up to time , i.e.,

Then we have that