Team decision theory was introduced by Marschak  to study the decisions of agents that act collectively, based on their private information, to optimize a common cost function. Radner  proved fundamental results for static teams and, in particular, established connections between Nash equilibria and team optimality. Witsenhausen's seminal papers [3, 4, 5, 6, 7, 8] on the characterization and classification of information structures have been crucial to the progress of our understanding of teams. In particular, the celebrated counterexample of Witsenhausen  demonstrated the challenges that arise from a decentralized information structure in teams. We refer the reader to  for a more comprehensive overview of team decision theory and a detailed literature review.
In teams, due to their decentralized nature, establishing the existence and structure of optimal policies is a challenging problem. The existence of optimal policies for static teams and a class of sequential dynamic teams has been shown recently in [10, 11, 12]. More specific setups and nonexistence results have been studied in , . For a class of convex teams, one can reduce the search space to a smaller parametric class of policies (see [2, 14, 15], and for a comprehensive review, see ).
In this paper, we consider extendible non-signaling approximations of finite teams. We first introduce three relaxed versions of classical policies. These sets of policies can be ordered by inclusion as randomized policies, quantum-correlated policies, and non-signaling policies. It is known that randomized policies do not improve the optimal value of the team, whereas quantum-correlated and non-signaling policies in general do. Moreover, the optimization problem associated with non-signaling policies can be written as a linear program and can therefore be solved in polynomial time. After introducing these classes of policies, we consider an extendible non-signaling approximation of teams obtained by appending auxiliary, identical agents to the team. We show that the non-signaling optimal value of the extended team converges to the optimal value of the original team at a rate depending on the number of extra agents added. Since the non-signaling optimal value of any team can be computed via a linear program whose size is proportional to the cardinalities of the observation and action spaces, this gives a computable approximation to the original team.
In the literature, relatively few results are available on the approximation of teams; we can only refer the reader to [17, 18, 19, 20, 21, 22] and a few references therein. With the exception of [20, 21, 22, 23], these works generally study a specific setup (Witsenhausen's counterexample) and are mostly experimental; as such, they do not rigorously prove the convergence of approximate solutions.
In [20, 22], a class of static teams is studied, and the existence of smooth optimal strategies is shown under quite restrictive assumptions. Using this result, a rate-of-convergence result for near-optimal solutions is established, where near-optimal policies are constructed as linear combinations of basis functions with adjustable parameters. In , the same authors considered an approximation of Witsenhausen's counterexample which does not satisfy the restrictive conditions in  and . An analogous error bound on the accuracy of near-optimal solutions is derived for this problem. In this result, both the error bound and the near-optimal solutions depend on knowledge of the optimal strategy for Witsenhausen's counterexample, which remains unknown to this date. Reference  showed that finite models obtained through discretization of the observation and action spaces converge asymptotically to the true model, in the sense that the optimal policies obtained by solving such finite models lead to cost values that converge to the optimal value of the original model. In all these works, although one can construct nearly optimal policies by solving a simpler problem for a large class of teams, finding optimal solutions for the simpler models is shown to be NP-hard . Therefore, these results do not give computable approximations to the original team.
Contributions. (i) We introduce a hierarchy of policies for teams: randomized policies, quantum-correlated policies, and non-signaling policies. The last two classes are largely new to team decision theory and have received little attention in the prior literature. Among these classes, non-signaling policies are particularly important, as their optimal value can be computed by solving a linear program, in contrast with the classical case. (ii) We establish an extended non-signaling approximation of teams by augmenting the team with auxiliary, identical agents. We show that the non-signaling optimal value of the extended team converges to the optimal value of the original team and quantify the rate. As the non-signaling optimal value can be computed via a linear program, this result gives a computable approximation with an explicit error bound for the original team problem. Therefore, this approach provides the first rigorously established computable approximation result with explicit error bounds for a general class of team problems. (iii) Finally, we state an open problem on the computation of the optimal value of quantum-correlated policies. A solution to this problem would be quite significant for team decision theory, as it would give an admissible relaxation of classical team problems in view of recent advances in quantum technology.
The rest of the paper is organized as follows. In Section II we review Witsenhausen's intrinsic model for sequential team problems. In Section III we introduce, respectively, randomized policies, quantum-correlated policies, and non-signaling policies. In Section IV we discuss the approximation of randomized policies by extendible non-signaling policies. In Section V, a linear programming approximation of the team problem is established. In Section VI we state the open problem. Section VII concludes the paper.
Notation. Let  be a finite product space. For each  with , we denote . A similar convention also applies to elements of these sets, which will be denoted by bold lower-case letters, and to random variables taking their values in these sets. The notation  means that the random variable  has distribution . For any operator on some Hilbert space, let  and  denote its trace and operator norm, respectively. For Hilbert spaces  and , we write ;  means that  is component-wise non-negative. For any , let  denote the permutation group of . For random variables ,  denotes the mutual information between  and , and  denotes the entropy of . For probability measures  and ,  denotes the product measure.
II Intrinsic Model for Sequential Teams
Witsenhausen’s intrinsic model  for sequential team problems has the following components:
where the finite sets , , and  () denote the state space, the action space, and the observation space of Agent , respectively. Here  is the number of available actions, and each of these actions is taken by an individual agent (hence, an agent with perfect recall can also be regarded as a separate decision maker each time it acts). For each , the observations and actions of Agent  are denoted by  and , respectively. The -valued observation variable for Agent  is given by
where is a conditional probability on given . A probability measure on describes the uncertainty on the state variable .
A joint control strategy , also called policy, is an -tuple of functions
such that . Let  denote the set of all admissible policies for Agent ; that is, the set of all functions from  to , and let .
Under this intrinsic model, a sequential team problem is dynamic if the information available to at least one agent is affected by the action of at least one other agent . A decentralized problem is static if the information available to every decision maker is affected only by the state of nature; that is, no other decision maker can affect the information available to any given decision maker.
For any , we let the (expected) cost of the team problem be defined by
for some cost function
where and .
For a given stochastic team problem, a policy (strategy) is an optimal team decision rule if
The cost level achieved by this strategy is the optimal value of the team.
In the literature, it is known that computing the value of  is NP-hard . Therefore, it is of interest to find an approximately optimal value with reduced complexity. To that end, in Section V we establish a linear programming approximation of team decision problems based on a symmetric and non-signaling extension of the original problem.
In what follows, the terms policy, measurement, and agent are used synonymously with strategy, observation, and decision maker, respectively.
II-A Static Reduction of Dynamic Team Problems
In this section, we review the equivalence between dynamic teams and their static reduction (this is called the equivalent model ). Consider a dynamic team setting with  decision epochs, where Agent  observes , and the decisions are generated as . The resulting cost under a given team policy  is
This dynamic team can be converted to a static team as follows.
Note that, for a fixed choice of , the joint distribution of  is given by
where . The cost function can then be written as
where  is the uniform distribution on . Now the observations can be regarded as independent, and by incorporating the terms  into , we obtain an equivalent static team problem. Hence, the essential step is to appropriately change the probability measure of the observations and the cost function. This method is a discrete-time version of Girsanov's change-of-measure method; indeed, a continuous-time generalization of static reduction via Girsanov's method has been presented by Charalambous and Ahmed . In the remainder of this paper, we consider the static reduction of a dynamic team problem.
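As a brief worked sketch of this change of measure (in generic notation, which may differ from the paper's displayed symbols): fix a policy $\gamma=(\gamma_1,\dots,\gamma_N)$ with $u_i=\gamma_i(y_i)$, and let each $Q_i$ be a fixed reference distribution with full support on the $i$-th observation space (e.g., uniform). Then

```latex
% Change-of-measure (static reduction) sketch; generic notation.
\begin{align*}
J(\gamma)
 &= \sum_{\omega,\mathbf{y}} c(\omega,\mathbf{y},\mathbf{u})\,
    \mathbb{P}(\omega)\prod_{i=1}^{N} p_i\bigl(y_i \mid \omega, u_{1:i-1}\bigr)\\
 &= \sum_{\omega,\mathbf{y}}
    \underbrace{c(\omega,\mathbf{y},\mathbf{u})
    \prod_{i=1}^{N}\frac{p_i\bigl(y_i \mid \omega, u_{1:i-1}\bigr)}{Q_i(y_i)}}_{=:\,\tilde{c}(\omega,\mathbf{y},\mathbf{u})}\;
    \mathbb{P}(\omega)\prod_{i=1}^{N} Q_i(y_i).
\end{align*}
```

Under the product measure $\mathbb{P}\otimes\prod_i Q_i$, the observations are independent of the state and of each other, and the likelihood ratios are absorbed into the new cost $\tilde{c}$, which yields the static team.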
III Hierarchy of Policies for Team Problems
In this section, we introduce three relaxed versions of team decision policies . These sets of policies can be ordered by inclusion as randomized policies, quantum-correlated policies, and non-signaling policies. As we will see, randomized policies do not improve the optimal value, whereas quantum-correlated and non-signaling policies in general improve the optimal value of the team. For some classes of teams, the optimal value over non-signaling policies is strictly better than the optimal value over quantum-correlated policies. Moreover, the optimization problem associated with non-signaling policies can be cast as a linear program whose size scales with the product of the cardinalities of the observation and action spaces.
A similar hierarchy of policies was introduced in  to study games. In , the advantage of quantum-correlated and non-signaling equilibria over classical ones was discussed, and it was established that quantum-correlated and non-signaling equilibria are socially more beneficial. Indeed, we have been partly inspired by  to study the same classes of policies for teams instead of games. However, our aim is not to show the benefits of quantum-correlated and non-signaling policies over classical ones; instead, we want to obtain an approximation to the classical team problem by using these new classes of policies.
In stochastic control, the joint distribution of the state, observations, and actions under a given policy is called the strategic measure. In , a hierarchy of strategic measures for teams was established, and many of their properties, such as convexity and compactness, were shown. The strategic-measure version of the set of non-signaling policies for team problems was also introduced in , where it was proved that the set of strategic measures corresponding to the extreme points of non-signaling policies is a strict superset of the set of strategic measures corresponding to deterministic policies. The main motivation of  for introducing such a hierarchy of strategic measures is to establish the existence and structure of optimal team policies, whereas we are mainly interested in computable approximations of optimal team policies.
III-A Randomized Policies
Note that one can consider any policy  as given by , where
Let denote the set of all conditional probability distributions on given . Therefore, we can view any policy as an element of . In view of this, we define the set of randomized policies as the following subset of :
In this definition,  represents independent common randomness shared among the agents. In addition to common randomness, agents can also randomly generate their actions depending on the observation  and the common randomness . Here, we use the notation  to denote randomized policies in order to make a connection with the 'local hidden variable' concept from quantum mechanics .
Note that  is a convex set whose extreme points are deterministic product policies . Since the cost function is linear on , it attains its optimal value at an extreme point. Therefore, common and individual randomization of policies does not improve the cost; that is, an optimal strategy can be chosen deterministically, and without loss of generality we can treat  as the set of classical team policies. In this setting, the cost of the team can be written as
Therefore, we have
Note that computing the value of is NP-hard , and, in general, the optimization problem above cannot be solved in polynomial time.
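To make the combinatorial nature of this search concrete, the following sketch enumerates all deterministic policy profiles of a small two-agent static team. The team itself (state, observation kernel, and cost) is an illustrative assumption, not taken from the paper; the point is that the number of profiles grows as the product over agents of |U_i|^{|Y_i|}, which is the source of the hardness.

```python
import itertools

# Illustrative toy team (assumed for this sketch, not from the paper):
# uniform state w in {0,1,2,3}; each agent observes the parity bit of w
# flipped with probability 0.2; cost is 0 iff u1 XOR u2 equals the parity.
W, Y, U = [0, 1, 2, 3], [0, 1], [0, 1]

def p_obs(y, w):
    # observation kernel: correct parity bit with probability 0.8
    return 0.8 if y == (w % 2) else 0.2

def cost(w, u1, u2):
    return 0.0 if (u1 ^ u2) == (w % 2) else 1.0

def expected_cost(g1, g2):
    # g1, g2 are deterministic policies: tuples mapping observation -> action
    return sum(0.25 * p_obs(y1, w) * p_obs(y2, w) * cost(w, g1[y1], g2[y2])
               for w in W for y1 in Y for y2 in Y)

# Enumerate all |U|^|Y| = 4 policies per agent (16 profiles here, but
# prod_i |U_i|^{|Y_i|} in general -- exponential in the problem data).
policies = list(itertools.product(U, repeat=len(Y)))
best = min(expected_cost(g1, g2) for g1 in policies for g2 in policies)
print(best)  # 0.2: one agent reports its noisy parity bit, the other plays 0
```

Even in this tiny instance the search is exhaustive over all policy profiles; there is no known polynomial-time shortcut in general, consistent with the NP-hardness noted above.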
III-B Quantum-correlated Policies
To introduce quantum-correlated policies, we briefly review the mathematical formalism necessary to discuss quantum operations. We refer the reader to the books [29, 30] for the basics of quantum information and computation.
For a finite-dimensional Hilbert space , let  denote the set of positive semi-definite operators with unit trace. In this paper, Hilbert spaces are assumed to be defined over the complex scalars. A state of a quantum system living in  is an element of . A measurement on this quantum system is given by a collection of positive semi-definite operators whose sum is the identity. When one applies the measurement  to a quantum system in state , the probability of obtaining the outcome  is given by
In quantum physics, a compound of quantum systems with the underlying Hilbert spaces is represented by the tensor product of the individual Hilbert spaces. Therefore, any state (called the compound state) of this compound quantum system is an element of .
With these definitions, we can now define quantum-correlated policies. An element is a quantum-correlated policy if agents have access to a part of a quantum compound state , where is a collection of arbitrary finite-dimensional Hilbert spaces, and, for each , Agent makes measurements on part of the state depending on its observations to generate its action as the output of the measurement; that is, the conditional distribution is of the following form:
Let denote the set of quantum-correlated policies in . The following result states that randomized policies are included in the set of quantum-correlated policies.
We have .
Let ; that is,
Let be a Hilbert space with dimension . Fix some orthonormal basis for . We define
for all , , and . Then, we have
This completes the proof. ∎
As opposed to the randomized case, it is known that
for certain sequential team problems. One such team problem, called the XOR team , is given below. For this problem, we show that quantum correlations improve the optimal value of the team.
Note that [31, 32] give evidence that quantum-correlated teams can be computationally tractable, as opposed to their randomized counterparts. Namely, in these papers, the optimization problems associated with quantum-correlated policies are written as (or approximated by) semi-definite programs whose sizes scale with the cardinalities of the observation and action spaces. As a result, they can be solved exactly or approximately in polynomial time. In particular,  computes the optimal value of the XOR team and  approximates the optimal value of unique teams via semi-definite programs.
However, there are other instances of team problems [33, 34, 35] where exact or approximate computation of the optimal value of quantum-correlated policies is NP-hard and therefore cannot be cast as a semi-definite program. It is thus an interesting research direction to study the approximation of the optimal value of quantum-correlated policies via semi-definite programs; indeed, we state this as an open problem in Section VI.
Example 1 (XOR team).
In the XOR team, we have two agents with binary action spaces . Observations are generated independently and uniformly over some finite sets , . Hence, there is no state variable in the problem, and so the problem is automatically static. The reward function (all results in this paper also hold for the maximization of a reward function) is defined as
where is some arbitrary binary-valued function. This team problem with quantum-correlated policies can be written as a semi-definite program due to Tsirelson’s Theorem [30, Theorem 6.62]. Indeed, let us define
Given any  for some finite-dimensional Hilbert spaces  and given any two collections of measurements , , the corresponding policy is
and its expected reward can be written as
Note that an operator is Hermitian with if and only if it can be written as , where () are positive semi-definite operators with . Therefore, for any pair , and are Hermitian with operator norms less than . Conversely, for any pair , any Hermitian operators with operator norms less than can be decomposed as above.
Let be a real matrix. Tsirelson’s Theorem states that the following assertions are equivalent:
There exist Hilbert spaces and , a state , and two collections of Hermitian operators
whose operator norms are less than , and
for all , .
There exist two collections ,  of unit vectors such that
for all , .
Therefore, Tsirelson’s Theorem and the above fact about Hermitian operators imply that
This optimization problem is a semi-definite program. Therefore, the optimal value of the XOR team with quantum-correlated policies can be computed in polynomial time, as opposed to its classical counterpart.
A special case of the XOR team is the celebrated CHSH (Clauser-Horne-Shimony-Holt) team . In the CHSH team, we have binary observation and action spaces, and the reward function is defined as
For this problem, the optimal value over randomized policies is  [30, Section 6.3.2]. However, quantum-correlated policies can achieve the maximum reward of , which is obtained by solving the corresponding semi-definite program. Therefore, for the CHSH team, we have
that is, quantum-correlated policies improve the optimal value of the original team as opposed to randomized policies.
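These two values can be checked numerically. The sketch below uses the standard textbook construction: a maximally entangled state with measurement observables Z, X for Agent 1 and (Z ± X)/√2 for Agent 2 (the well-known Tsirelson-optimal settings, assumed here rather than taken from the paper). The classical value 3/4 is obtained by enumerating deterministic local strategies, and the quantum value (2 + √2)/4 ≈ 0.8536 by direct computation of the measurement probabilities.

```python
import itertools
import math
import numpy as np

I2 = np.eye(2)
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Randomized (classical) value: enumerate deterministic local strategies.
def win_prob(a, b):
    # a[x], b[y] are the agents' actions; reward 1 iff a XOR b == x AND y
    return sum((a[x] ^ b[y]) == (x & y) for x in (0, 1) for y in (0, 1)) / 4.0

classical = max(win_prob(a, b)
                for a in itertools.product((0, 1), repeat=2)
                for b in itertools.product((0, 1), repeat=2))

# Quantum value: maximally entangled state with Tsirelson-optimal observables.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / math.sqrt(2)  # (|00> + |11>)/sqrt(2)
rho = np.outer(phi, phi)
A = [Z, X]
B = [(Z + X) / math.sqrt(2), (Z - X) / math.sqrt(2)]

def proj(obs, outcome):
    # spectral projector of a +/-1-valued observable; outcome 0 <-> eigenvalue +1
    return (I2 + (-1) ** outcome * obs) / 2.0

quantum = sum(np.trace(rho @ np.kron(proj(A[x], a), proj(B[y], b)))
              for x in (0, 1) for y in (0, 1)
              for a in (0, 1) for b in (0, 1)
              if (a ^ b) == (x & y)) / 4.0

print(classical)  # 0.75
print(quantum)    # (2 + sqrt(2))/4, approximately 0.8536
```

The gap between the two printed values is exactly the quantum advantage discussed above.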
III-C Non-signaling Policies
A joint conditional distribution of actions given observations is non-signaling if, for any , the marginal distribution of the action of Agent  given its observation gives no information about the observations of the other agents [36, 37]. Non-signaling has been investigated in quantum mechanics due to its close connection to the foundations of quantum mechanics and relativity . Indeed, it describes the largest class of correlations that obey relativistic causality (which dictates that it is impossible to communicate any information faster than the speed of light). Using non-signaling policies, it is possible to correlate the actions of distant agents without revealing their local information to other agents in the team. To do so, however, the agents must communicate their local observations to a mediator, who then implements a correlation without violating the non-signaling constraint and directs the agents to apply certain actions. Therefore, implementing such policies requires communication with a mediator, which is in general prohibitive in classical team problems due to communication constraints. As a result, it is in general not realistic to assume that agents can implement non-signaling policies when designing their strategies.
Note that, for randomized and quantum-correlated policies, there is no need for a mediator to implement the strategies. Therefore, quantum-correlated policies and randomized policies are indeed admissible in team decision theory. Moreover, in contrast with randomized policies, quantum-correlated policies might improve the optimal value of the team and, for certain cases, this improved optimal value can be computed via a semi-definite program.
Formally, non-signaling policies are defined as follows. An element is a non-signaling policy if, for any subset of , the actions of the agents in given their observations are independent of observations of agents in ; that is, for any , we have
At first sight, it is tempting to claim that  is the same as the set of non-signaling policies described by condition (1). Indeed,  first claimed that  is equivalent to the set of all non-signaling policies, and gave a counterexample establishing that the set of extreme points of the non-signaling policies is not , which would imply that non-signaling policies are more general than randomized ones, since  is the set of extreme points of . It appears that the authors of  were unaware of the quantum information literature, where this result has long been known.
It turns out that the non-signaling condition (1) can be derived from fewer linear constraints, which will be described below (see also [36, Section II-A]). These constraints indeed enable us to write the optimization problem associated with non-signaling policies as a linear program that scales with the size of the observation and action spaces.
An element is a non-signaling policy if it satisfies the following condition:
For each , the marginal distribution of actions excluding is independent of the observation :
for all values of and .
Note that each constraint (2) is linear in and the number of such linear constraints is
Therefore, if denotes the set of non-signaling policies in , then the optimal value of the team with non-signaling policies can be written as a linear program as follows:
Thus, the optimal value of the team with non-signaling policies can be found in polynomial time. This is not possible for teams with randomized policies , nor for a class of teams with quantum-correlated policies [33, 34, 35].
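As a concrete illustration of such a linear program, the sketch below computes the non-signaling value of the CHSH team with `scipy.optimize.linprog`. The encoding (a flat vector of the 16 conditional probabilities q(a, b | x, y), with normalization and marginal-equality rows) is one illustrative formulation, not the paper's; the optimum is 1, attained by the Popescu-Rohrlich box.

```python
import numpy as np
from scipy.optimize import linprog

# Flat index for the conditional probability q(a, b | x, y).
def idx(a, b, x, y):
    return ((a * 2 + b) * 2 + x) * 2 + y

n = 16
c = np.zeros(n)  # linprog minimizes, so negate the expected reward
for x in (0, 1):
    for y in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                if (a ^ b) == (x & y):     # reward 1 iff a XOR b == x AND y
                    c[idx(a, b, x, y)] -= 0.25  # uniform observations

A_eq, b_eq = [], []
# Normalization: q(. , . | x, y) sums to 1 for each observation pair.
for x in (0, 1):
    for y in (0, 1):
        row = np.zeros(n)
        for a in (0, 1):
            for b in (0, 1):
                row[idx(a, b, x, y)] = 1.0
        A_eq.append(row); b_eq.append(1.0)
# Non-signaling: Agent 1's marginal is independent of y.
for a in (0, 1):
    for x in (0, 1):
        row = np.zeros(n)
        for b in (0, 1):
            row[idx(a, b, x, 0)] += 1.0
            row[idx(a, b, x, 1)] -= 1.0
        A_eq.append(row); b_eq.append(0.0)
# Non-signaling: Agent 2's marginal is independent of x.
for b in (0, 1):
    for y in (0, 1):
        row = np.zeros(n)
        for a in (0, 1):
            row[idx(a, b, 0, y)] += 1.0
            row[idx(a, b, 1, y)] -= 1.0
        A_eq.append(row); b_eq.append(0.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0.0, 1.0)] * n, method="highs")
ns_value = -res.fun
print(ns_value)  # 1.0
```

The program has one variable per (action profile, observation profile) pair and one constraint per marginal equality, so its size scales with the cardinalities of the observation and action spaces, as claimed.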
The following result states that quantum-correlated policies are included in the set of non-signaling policies.
We have .
Let ; that is, agents have access to a part of a compound quantum state , where is a collection of arbitrary finite-dimensional Hilbert spaces, and, for each , Agent makes measurements on the part of the state depending on its observations to generate its action as the output of the following measurement:
We prove that satisfies the condition (N). Fix any . Then, we have
for all values of and . Here, (a) follows from the fact that for any . Therefore, satisfies the condition (N), and so, . ∎
In the literature, it is known that
for certain sequential team problems. One such team problem is the CHSH team, which was introduced in Example 1 and is discussed further below. In this example, we also explicitly show that communication between the agents and a mediator is necessary to achieve the maximum non-signaling correlation.
Example 2 (CHSH team).
Recall that the CHSH team is a special case of the XOR team. In the CHSH team, we have binary observation and action spaces, and the observations are generated independently and uniformly. The reward function is defined as
For this problem, quantum-correlated policies can achieve the maximum reward of  [30, Section 6.3.2]. However, non-signaling policies can achieve the maximum reward of  using the following policy, called the Popescu-Rohrlich (PR) box in the literature :
It is straightforward to prove that  is non-signaling; that is,  is independent of  for  and . The reward of  is , which is the maximum reward achievable by any policy, since . Hence,  is the optimal non-signaling policy. Therefore, for the CHSH team, we have
that is, non-signaling policies improve the optimal value of quantum-correlated policies.
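Both claims about the PR box can be verified directly, as in the short self-contained check below (the uniform observation distribution is as in the example).

```python
# PR box: q(a, b | x, y) = 1/2 if a XOR b == x AND y, and 0 otherwise.
def q(a, b, x, y):
    return 0.5 if (a ^ b) == (x & y) else 0.0

# Non-signaling: Agent 1's marginal is independent of y (and, by the
# symmetric computation with the roles swapped, Agent 2's of x).
for a in (0, 1):
    for x in (0, 1):
        m0 = sum(q(a, b, x, 0) for b in (0, 1))
        m1 = sum(q(a, b, x, 1) for b in (0, 1))
        assert m0 == m1 == 0.5

# Expected reward under uniform observations: reward 1 iff a XOR b == x AND y.
reward = sum(q(a, b, x, y) * ((a ^ b) == (x & y)) / 4.0
             for x in (0, 1) for y in (0, 1)
             for a in (0, 1) for b in (0, 1))
print(reward)  # 1.0
```

Since the reward is bounded by 1, this confirms that the PR box is an optimal non-signaling policy for the CHSH team.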
Note that to implement the policy above, the agents must communicate their observations to a mediator, who then directs them to apply either the same or different actions based on the product of their observations. This kind of communication is in general infeasible for team decision problems. Therefore, although allowing non-signaling correlations among the agents' actions enables us to formulate the team problem as a linear program (solvable in polynomial time), it is in general not realistic to assume that agents can apply such policies in real-life applications, due to the communication constraints dictated by the decentralized information structure.
IV Approximation of Randomized Policies via Extendible Non-signaling Policies
As explained in the previous section, non-signaling policies are in general not feasible in teams due to the intrinsic communication constraints of the problem. However, if we impose further properties on non-signaling policies, such as symmetric extendibility (defined below), then we can prove that such policies are almost equivalent to randomized policies, and so they can be used to approximately compute the optimal value over randomized policies. Indeed, after proving this result, we establish in Section V a linear programming approximation for computing .
To that end, we first give the definition of -extendible non-signaling policies. For any , where each is a positive integer, we define the set of -extendible non-signaling policies , denoted by , as the set of all such that there exists a non-signaling -extension which is permutation symmetric in actions and observations of each agent; that is, for any and
where and for , and whose marginals are ; that is,
for any and .
The theorem below states that randomized policies can be approximated by -extendible non-signaling policies if is sufficiently large. This result is the key to proving the linear programming approximation of classical team problems.
Note that -extendible non-signaling policies were first introduced in  for the case  and . In , the authors proved the theorem below for this case; hence, our result extends theirs to arbitrary  and . We note that the technique used here is very similar to the proof of [28, Theorem 1]. However, there is a crucial difference in the details between this problem and the case . In that case, the key step is to use the chain rule for the mutual information between actions and then employ Pinsker's inequality. When  is arbitrary, we must use multipartite mutual information, for which a Pinsker inequality is available but, unfortunately, no chain rule. This complicates the proof considerably.
Let . Then, we have
V LP Approximation of Teams
In this section, we show that the optimal value of the classical team problem can be approximated within an additive error by a linear program. The key idea is to extend the original team by adding extra identical agents. Namely, for Agent , we add  more agents to the team which are identical to Agent . Then, after the observations are generated from the distribution of the original team problem, the -th observation is sent to one of the  identical agents chosen at random. We do not send any observations to the remaining agents, and we do not expect any actions from them; we call these the null observation and null action, denoted by the symbol . Finally, we use the actions and observations of the chosen agents to compute the cost function .
It can be shown that the classical optimal value of the extended team is the same as the classical optimal value of the original team. Moreover, the non-signaling optimal value of the extended team, which can be computed using a linear program, is the same as the -extendible non-signaling value of the original team. With these observations, one can easily obtain an approximation result using Theorem 1.
In the next subsection, we first give a precise mathematical description of the extended team, and then prove the main result of this paper.
V-A Extended Team Problem
Fix any , where  is a positive integer for all . In the -extended team, we have  agents, labelled by the pairs  with  and, given , . For Agent , the observation space is  and the action space is . The observations are generated according to , which is defined as
Given observations and the corresponding actions , the cost function is given by
Note that both and are symmetric in observations and actions of identical agents; that is, for any and , we have
The cost of the -extended team under policy is given by
Let  denote the set of all non-signaling policies of the extended team, and let  be the set of -extensions of the -extendible non-signaling policies of the original team.
Since both  and  are symmetric in the observations and actions of identical agents, any non-signaling policy can be replaced, without increasing the cost, by some policy in  [28, Lemma 9]. It is also straightforward to prove that the optimal value of the extended team with randomized policies is the same as that of the original team with randomized policies. Moreover, the non-signaling optimal value of the extended team equals the -extendible non-signaling value of the original team. These observations imply that