I Introduction
Team decision theory was introduced by Marschak [1] to study the decisions of agents acting collectively, based on their private information, to optimize a common cost function. Radner [2] proved fundamental results for static teams and, in particular, established connections between Nash equilibrium and team optimality. Witsenhausen’s seminal papers [3, 4, 5, 6, 7, 8] on the characterization and classification of information structures have been crucial to the progress of our understanding of teams. In particular, the celebrated counterexample of Witsenhausen [8] demonstrated the challenges that arise from a decentralized information structure in teams. We refer the reader to [9] for a more comprehensive overview of team decision theory and a detailed literature review.
In teams, due to their decentralized nature, establishing the existence and structure of optimal policies is a challenging problem. Existence of optimal policies for static teams and for a class of sequential dynamic teams has recently been shown in [10, 11, 12]. More specific setups and nonexistence results have been studied in [13, 8]. For a class of teams which are convex, one can reduce the search space to a smaller parametric class of policies (see [2, 14, 15] and, for a comprehensive review, [9]).
In this paper, we consider an extendible nonsignaling approximation of finite teams. We first introduce three relaxed versions of classical policies. These sets of policies can be ordered by inclusion as randomized policies, quantum-correlated policies, and nonsignaling policies. It is known that randomized policies do not improve the optimal value of the team, whereas quantum-correlated and nonsignaling policies in general do. Moreover, the optimization problem associated with nonsignaling policies can be written as a linear program and solved in polynomial time. After introducing these classes of policies, we consider an extendible nonsignaling approximation of teams obtained by appending auxiliary and identical agents to the team. We show that the nonsignaling optimal value of the extended team converges to the optimal value of the original team at a rate depending on the number of extra agents added. Since the nonsignaling optimal value of any team can be computed via a linear program whose size is proportional to the cardinality of the observation and action spaces, this gives a computable approximation to the original team.
In the literature, relatively few results are available on the approximation of teams. We can only refer the reader to [16, 17, 18, 19, 20, 21, 22] and a few references therein. With the exception of [20, 21, 22, 23], these works generally study a specific setup (Witsenhausen’s counterexample) and are mostly experimental; as such, they do not rigorously prove the convergence of approximate solutions.
In [20, 22], a class of static teams is studied, and the existence of smooth optimal strategies is shown under quite restrictive assumptions. Using this result, a rate-of-convergence result for near-optimal solutions is established, where near-optimal policies are constructed as linear combinations of basis functions with adjustable parameters. In [21], the same authors considered an approximation of Witsenhausen’s counterexample which does not satisfy the restrictive conditions in [20] and [22]. An analogous error bound on the accuracy of near-optimal solutions is derived for this problem. In this result, both the error bound and the near-optimal solutions depend on knowledge of the optimal strategy for Witsenhausen’s counterexample, which is still unknown to date. Reference [23] showed that finite models obtained through discretization of the observation and action spaces converge asymptotically to the true model, in the sense that the optimal policies obtained by solving such finite models lead to cost values that converge to the optimal value of the original model. In all these works, although one can construct nearly optimal policies by solving a simpler problem for a large class of teams, finding optimal solutions for the simpler models is itself NP-hard [24]. Therefore, these results do not give computable approximations to the original team.
Contributions. (i) We introduce a hierarchy of policies for teams: randomized policies, quantum-correlated policies, and nonsignaling policies. The last two classes are largely new to team decision theory and have received little attention in the prior literature. Among these classes, nonsignaling policies are particularly important, as their optimal value can be computed by solving a linear program, in contrast with the classical case. (ii) We establish an extended nonsignaling approximation of teams by augmenting the team with auxiliary and identical agents. We show that the nonsignaling optimal value of the extended team converges to the optimal value of the original team, and we quantify the rate. As the nonsignaling optimal value can be computed via a linear program, this result gives a computable approximation, with an explicit error bound, to the original team problem. Therefore, this approach provides the first rigorously established computable approximation result with explicit error bounds for a general class of team problems. (iii) Finally, we state an open problem on the computation of the optimal value of quantum-correlated policies. A solution to this problem would be quite significant for team decision theory, as it would give an admissible relaxation of classical team problems in view of recent advances in quantum technology.
The rest of the paper is organized as follows. In Section II we review the definition of Witsenhausen’s intrinsic model for sequential team problems. In Section III we introduce, respectively, randomized policies, quantum-correlated policies, and nonsignaling policies. In Section IV we discuss the approximation of randomized policies by extendible nonsignaling policies. In Section V, a linear programming approximation of the team problem is established. In Section VI we state the open problem. Section VII concludes the paper.
Notation. Let be a finite product space. For each with , we denote . A similar convention also applies to elements of these sets, which will be denoted by bold lowercase letters, and to random variables taking values in these sets. The notation means that the random variable has distribution . For any operator on some Hilbert space, let and denote its trace and operator norm, respectively. For Hilbert spaces and , let denote their tensor product. For any real vector , means that is componentwise nonnegative. For any , let denote the permutation group of . For random variables , denotes the mutual information between and , and denotes the entropy of . For probability measures and , denotes the product measure.

II Intrinsic Model for Sequential Teams
Witsenhausen’s intrinsic model [4] for sequential team problems has the following components:
where the finite sets , , and () denote the state space, the action space and the observation space of Agent , respectively. Here is the number of available actions, and each of these actions is supposed to be taken by an individual agent (hence, an agent with perfect recall can also be regarded as a separate decision maker every time it acts). For each , the observations and actions of Agent are denoted by and , respectively. The valued observation variable for Agent is given by
where is a conditional probability on given . A probability measure on describes the uncertainty on the state variable .
A joint control strategy , also called a policy, is a tuple of functions
such that . Let denote the set of all admissible policies for Agent ; that is, the set of all functions from to and let .
Under this intrinsic model, a sequential team problem is dynamic if the information available to at least one agent is affected by the action of at least one other agent . A decentralized problem is static if the information available to every decision maker is affected only by the state of nature; that is, no other decision maker can affect the information available to any given decision maker.
For any , we let the (expected) cost of the team problem be defined by
for some cost function
where and .
Definition 1.
For a given stochastic team problem, a policy (strategy) is an optimal team decision rule if
The cost level achieved by this strategy is the optimal value of the team.
In the literature, it is known that computing the value of is NP-hard [24]. Therefore, it is of interest to find an approximate optimal value with reduced complexity. To that end, we establish a linear programming approximation of team decision problems, based on a symmetric and nonsignaling extension of the original problem, in Section V.
In what follows, the terms policy, measurement, and agent are used synonymously with strategy, observation, and decision maker, respectively.
II-A Static Reduction of Dynamic Team Problems
In this section, we review the equivalence between dynamic teams and their static reduction (this is called the equivalent model [5]). Consider a dynamic team setting where there are
decision epochs, and Agent
observes , and the decisions are generated as . The resulting cost under a given team policy is

This dynamic team can be converted to a static team as follows.
Note that, for a fixed choice of
, the joint distribution of
is given by

where . The cost function can then be written as
where
and  
where
is the uniform distribution on
. Now, the observations can be regarded as independent, and by incorporating the terms into , we can obtain an equivalent static team problem. Hence, the essential step is to appropriately change the probability measure of the observations and the cost function. This method is a discrete-time version of Girsanov’s change-of-measure method. Indeed, a continuous-time generalization of static reduction via Girsanov’s method has been presented by Charalambous and Ahmed [25]. In the remainder of this paper, we consider the static reduction of a dynamic team problem.

III Hierarchy of Policies for Team Problems
In this section, we introduce three relaxed versions of team decision policies. These sets of policies can be ordered by inclusion as randomized policies, quantum-correlated policies, and nonsignaling policies. As we will see, randomized policies do not improve the optimal value, whereas quantum-correlated and nonsignaling policies in general improve the optimal value of the team. Moreover, for some classes of teams, the optimal value over nonsignaling policies is strictly better than the optimal value over quantum-correlated policies. Furthermore, the optimization problem associated with nonsignaling policies can be cast as a linear program whose size scales with the product of the cardinalities of the observation and action spaces.
A similar hierarchy of policies was introduced in [26] to study games. In [26], the advantage of quantum-correlated and nonsignaling equilibria over classical ones was discussed, and it was established that quantum-correlated and nonsignaling equilibria are socially more beneficial. Indeed, we have been partly inspired by [26] to study the same classes of policies for teams instead of games. However, our aim is not to show the benefits of quantum-correlated and nonsignaling policies over classical ones; instead, we want to obtain an approximation to the classical team problem by using these new classes of policies.
In stochastic control, the joint distribution of state, observations, and actions under a given policy is called the strategic measure. In [27], a hierarchy of strategic measures for teams was established, and many of their properties, such as convexity and compactness, were shown. The strategic-measure version of the set of nonsignaling policies for team problems was also introduced in [27], where it was proved that the set of strategic measures corresponding to the extreme points of nonsignaling policies is a strict superset of the set of strategic measures corresponding to deterministic policies. The main motivation of [27] for introducing such a hierarchy on the set of strategic measures is to establish the existence and structure of optimal team policies, whereas we are mainly interested in computable approximations of optimal team policies.
III-A Randomized Policies
Note that one can consider any policy
as a conditional probability distribution on
given , where

Let denote the set of all conditional probability distributions on given . Therefore, we can view any policy as an element of . In view of this, we define the set of randomized policies as the following subset of :
In this definition, represents independent common randomness shared among the agents. In addition to common randomness, agents can also randomly generate their actions depending on the observation and the common randomness . Here, we use the notation to denote randomized policies in order to make a connection with the ‘local hidden variable’ concept from quantum mechanics [28].
Note that is a convex set whose extreme points are the deterministic product policies . Since the cost function is linear on , it attains its optimal value at an extreme point. Therefore, common and individual randomization of policies does not improve the cost function; that is, an optimal strategy can be chosen to be deterministic. Hence, without loss of generality, we can treat as the set of classical team policies. In this setting, the cost of the team can be written as
Therefore, we have
Note that computing the value of is NP-hard [24], and, in general, the above optimization problem cannot be solved in polynomial time.
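The exhaustive search over deterministic policies underlying this hardness can be illustrated on a toy instance. The following sketch is purely illustrative (the alphabets, uniform prior, and cost function are assumptions, not taken from the paper): it enumerates every deterministic policy pair of a two-agent static team, a search space that grows exponentially with the alphabet sizes.

```python
import itertools

# Toy two-agent static team: binary observations and actions, uniform
# prior over (y1, y2). The cost below (rewarding matching actions exactly
# when the observations match) is illustrative only.
Y = U = (0, 1)

def cost(y1, y2, u1, u2):
    return 0.0 if (u1 == u2) == (y1 == y2) else 1.0

best = float("inf")
# each agent has |U|^|Y| = 4 deterministic maps, hence 16 joint policies;
# in general the count is exponential, consistent with NP-hardness
for g1 in itertools.product(U, repeat=len(Y)):
    for g2 in itertools.product(U, repeat=len(Y)):
        J = sum(0.25 * cost(y1, y2, g1[y1], g2[y2]) for y1 in Y for y2 in Y)
        best = min(best, J)
print(best)   # 0.0: for this toy cost the identity maps are optimal
```

For this particular cost the optimum happens to be zero; the point of the sketch is the enumeration itself, whose size is what makes the general problem intractable.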
III-B Quantum-correlated Policies
To introduce quantum-correlated policies, we briefly review the mathematical formalism necessary to discuss quantum operations. We refer the reader to the books [29, 30] for the basics of quantum information and computation.
For a finite-dimensional Hilbert space , let denote the set of positive semidefinite operators with unit trace. In this paper, Hilbert spaces are assumed to be defined over the complex scalars. A state of a quantum system, living in , is an element of . A measurement on this quantum system is given by a collection of positive semidefinite operators that sum to the identity. When one applies the measurement to the quantum system in state , the probability of obtaining outcome is given by
In quantum physics, a compound of quantum systems with the underlying Hilbert spaces is represented by the tensor product of the individual Hilbert spaces. Therefore, any state (called the compound state) of this compound quantum system is an element of .
With these definitions, we can now define quantum-correlated policies. An element is a quantum-correlated policy if the agents have access to a part of a quantum compound state , where is a collection of arbitrary finite-dimensional Hilbert spaces, and, for each , Agent performs measurements on its part of the state, depending on its observations, to generate its action as the output of the measurement; that is, the conditional distribution is of the following form:
Let denote the set of quantum-correlated policies in . The following result states that randomized policies are included in the set of quantum-correlated policies.
Lemma 1.
We have .
Proof.
Let ; that is,
Let be a Hilbert space with dimension . Fix some orthonormal basis for . We define
for all , , and . Then, we have
This completes the proof. ∎
As opposed to the randomized case, it is known that
for certain sequential team problems. One such team problem, called the XOR team [31], is given below. For this problem, we show that quantum correlations improve the optimal value of the team.
Note that [31, 32] give evidence that quantum-correlated teams can be computationally tractable, as opposed to their randomized counterparts. Namely, in these papers, the optimization problems associated with quantum-correlated policies can be written as (or approximated by) semidefinite programs whose sizes scale with the cardinality of the observation and action spaces. As a result, they can be solved exactly or approximately in polynomial time. In particular, [31] computes the optimal value of the XOR team and [32] approximates the optimal value of unique teams via semidefinite programs.
However, there are other instances of team problems [33, 34, 35] where exact or approximate computation of the optimal value of quantum-correlated policies is NP-hard, and which therefore cannot, in general, be cast as tractable semidefinite programs. It is thus an interesting research direction to study the approximation of the optimal value of quantum-correlated policies via semidefinite programs. Indeed, we state this as an open problem in Section VI.
Example 1 (XOR team).
In the XOR team, we have two agents with binary action spaces . Observations are generated independently and uniformly over some finite sets , . Hence, there is no state variable in the problem, and so the problem is automatically static. The reward function (all results in this paper also hold for the maximization of a reward function) is defined as
where is some arbitrary binary-valued function. This team problem with quantum-correlated policies can be written as a semidefinite program due to Tsirelson’s Theorem [30, Theorem 6.62]. Indeed, let us define
Given any for some finite-dimensional Hilbert spaces and given any two collections of measurements , , the corresponding policy is
and its expected reward function can be written as  
Note that an operator is Hermitian with if and only if it can be written as , where () are positive semidefinite operators with . Therefore, for any pair , and are Hermitian with operator norms less than . Conversely, for any pair , any Hermitian operators with operator norms less than can be decomposed as above.
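The correspondence above between two-outcome measurements and Hermitian operators of norm at most one can be checked numerically. The following sketch is illustrative (the dimension, the random construction, and the use of NumPy are assumptions): it builds a random measurement pair, forms the difference operator, and verifies both directions of the decomposition.

```python
import numpy as np

# If {M0, M1} is a two-outcome measurement (PSD operators summing to the
# identity), then A = M0 - M1 is Hermitian with operator norm at most 1;
# conversely, (I + A)/2 and (I - A)/2 recover PSD measurement operators.
rng = np.random.default_rng(0)
d = 4                                              # illustrative dimension
G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
P = G @ G.conj().T                                 # random PSD matrix
M0 = P / (np.linalg.eigvalsh(P).max() + 1e-9)      # scaled so 0 <= M0 <= I
M1 = np.eye(d) - M0                                # completes the measurement
A = M0 - M1

assert np.allclose(A, A.conj().T)                  # A is Hermitian
assert np.linalg.norm(A, ord=2) <= 1 + 1e-9        # operator norm <= 1
# converse direction: recover PSD measurement operators from A
N0, N1 = (np.eye(d) + A) / 2, (np.eye(d) - A) / 2
assert np.linalg.eigvalsh(N0).min() >= -1e-9       # N0 is PSD
assert np.linalg.eigvalsh(N1).min() >= -1e-9       # N1 is PSD
print("decomposition verified")
```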
Let be a real matrix. Tsirelson’s Theorem states that the following assertions are equivalent:

There exist Hilbert spaces and , a state , and two collections of Hermitian operators
whose operator norms are less than , and
for all , .

There exist two collections , of unit vectors such that
for all , .
Therefore, Tsirelson’s Theorem and the above fact about Hermitian operators imply that
subject to
This optimization problem is indeed a semidefinite program. Therefore, the optimal value of the XOR team with quantumcorrelated policies can be computed in polynomial time as opposed to its classical counterpart.
A special case of the XOR team is the celebrated CHSH (Clauser-Horne-Shimony-Holt) team [30]. In the CHSH team, we have binary observation and action spaces, and the reward function is defined as
For this problem, the optimal value over randomized policies is [30, Section 6.3.2]. However, quantum-correlated policies can achieve the maximum reward of , which is obtained by solving the corresponding semidefinite program. Therefore, for the CHSH team, we have
that is, quantum-correlated policies improve the optimal value of the original team, as opposed to randomized policies.
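The classical and quantum values of the CHSH team can be reproduced numerically without solving the semidefinite program explicitly: the classical value by enumerating deterministic policies, and the quantum value by plugging in a standard choice of optimal unit vectors from Tsirelson's theorem. The sketch below is a sanity check under those assumptions, not the SDP itself; the reward is 1 iff u1 XOR u2 = y1 AND y2, with uniform binary observations.

```python
import itertools
import math

B = (0, 1)

# classical value: enumerate all 16 deterministic policy pairs
best_classical = max(
    sum(0.25 * ((g1[y1] ^ g2[y2]) == (y1 & y2)) for y1 in B for y2 in B)
    for g1 in itertools.product(B, repeat=2)
    for g2 in itertools.product(B, repeat=2)
)

# quantum value: with correlators C_{y1 y2} = <a_{y1}, b_{y2}>, the
# expected reward is 1/2 + (1/8)(C00 + C01 + C10 - C11); the vectors
# below are a standard optimal choice
a = {0: (1.0, 0.0), 1: (0.0, 1.0)}
s = 1.0 / math.sqrt(2.0)
b = {0: (s, s), 1: (s, -s)}
dot = lambda v, w: v[0] * w[0] + v[1] * w[1]
quantum = 0.5 + 0.125 * (dot(a[0], b[0]) + dot(a[0], b[1])
                         + dot(a[1], b[0]) - dot(a[1], b[1]))

print(best_classical)      # 0.75
print(round(quantum, 4))   # 0.8536, i.e. (2 + sqrt(2)) / 4
```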
III-C Nonsignaling Policies
A joint conditional distribution of actions given observations is nonsignaling if, for any , the marginal distribution of the action of Agent given its observation does not give any information about the observations of the other agents [36, 37]. Nonsignaling has been investigated in quantum mechanics due to its close connection to the foundations of quantum mechanics and relativity [38]. Indeed, it describes the largest class of correlations that obey relativistic causality (relativistic causality dictates that it is impossible to communicate any information faster than the speed of light). Using nonsignaling policies, it is possible to correlate the actions of distant agents without revealing their local information to the other agents in the team. However, to do so, the agents must communicate their local observations to a mediator, and the mediator then implements a correlation, without violating the nonsignaling constraint, and directs the agents to apply certain actions. Therefore, to implement such policies, agents need to communicate with a mediator, which is in general infeasible in classical team problems due to communication constraints. As a result, it is in general unrealistic to assume that agents can implement nonsignaling policies to design their strategies.
Note that, for randomized and quantumcorrelated policies, there is no need for a mediator to implement the strategies. Therefore, quantumcorrelated policies and randomized policies are indeed admissible in team decision theory. Moreover, in contrast with randomized policies, quantumcorrelated policies might improve the optimal value of the team and, for certain cases, this improved optimal value can be computed via a semidefinite program.
Formally, nonsignaling policies are defined as follows. An element is a nonsignaling policy if, for any subset of , the actions of the agents in , given their observations, are independent of the observations of the agents in ; that is, for any , we have
(1) 
At first sight, it is tempting to claim that is the same as the set of nonsignaling policies described by condition (1). Indeed, in [39] it was first claimed that is equivalent to the set of all nonsignaling policies, and a counterexample was given to establish that the set of extreme points of the nonsignaling policies is not , which would imply that nonsignaling policies are more general than randomized ones, as is the set of extreme points of . It is evident that the authors of [39] were unaware of the quantum information literature, where this result had long been known.
It turns out that the nonsignaling condition (1) can be derived from fewer linear constraints, which will be described below (see also [36, Section IIA]). These constraints indeed enable us to write the optimization problem associated with nonsignaling policies as a linear program that scales with the size of the observation and action spaces.
An element is a nonsignaling policy if it satisfies the following condition:

For each , the marginal distribution of actions excluding is independent of the observation :
(2) for all values of and .
Note that each constraint (2) is linear in and the number of such linear constraints is
Therefore, if denotes the set of nonsignaling policies in , then the optimal value of the team with nonsignaling policies can be written as a linear program as follows:
Thus, the optimal value of the team with nonsignaling policies can be found in polynomial time. This is not possible for teams with randomized policies [24], nor for a class of teams with quantum-correlated policies [33, 34, 35].
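For the smallest nontrivial case, the linear program can be written down explicitly. The sketch below is illustrative (two agents, binary observations and actions, uniform observations, the CHSH-type cost, and SciPy's `linprog` solver are all assumptions): the variables are the 16 values of p(u1, u2 | y1, y2), with normalization and nonsignaling equality constraints.

```python
import numpy as np
from scipy.optimize import linprog

# flat index for p(u1, u2 | y1, y2)
idx = lambda y1, y2, u1, u2: ((y1 * 2 + y2) * 2 + u1) * 2 + u2
n, B = 16, (0, 1)

# objective: expected cost with uniform observations and the CHSH-type
# cost c = 1[u1 XOR u2 != y1 AND y2]
c_vec = np.zeros(n)
for y1 in B:
    for y2 in B:
        for u1 in B:
            for u2 in B:
                c_vec[idx(y1, y2, u1, u2)] = 0.25 * ((u1 ^ u2) != (y1 & y2))

A_eq, b_eq = [], []
# normalization: sum over (u1, u2) of p(. | y1, y2) = 1
for y1 in B:
    for y2 in B:
        row = np.zeros(n)
        for u1 in B:
            for u2 in B:
                row[idx(y1, y2, u1, u2)] = 1.0
        A_eq.append(row); b_eq.append(1.0)
# nonsignaling: Agent 1's marginal must not depend on y2, and vice versa
for y1 in B:
    for u1 in B:
        row = np.zeros(n)
        for u2 in B:
            row[idx(y1, 0, u1, u2)] += 1.0
            row[idx(y1, 1, u1, u2)] -= 1.0
        A_eq.append(row); b_eq.append(0.0)
for y2 in B:
    for u2 in B:
        row = np.zeros(n)
        for u1 in B:
            row[idx(0, y2, u1, u2)] += 1.0
            row[idx(1, y2, u1, u2)] -= 1.0
        A_eq.append(row); b_eq.append(0.0)

res = linprog(c_vec, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, 1)] * n, method="highs")
print(round(abs(res.fun), 6))   # 0.0: the PR box attains zero cost
```

The LP size (16 variables, 12 equality constraints) scales with the product of the observation and action alphabets, as stated above; the optimal cost of zero is attained by the PR box discussed in the next example.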
The following result states that quantumcorrelated policies are included in the set of nonsignaling policies.
Lemma 2.
We have .
Proof.
Let ; that is, agents have access to a part of a compound quantum state , where is a collection of arbitrary finitedimensional Hilbert spaces, and, for each , Agent makes measurements on the part of the state depending on its observations to generate its action as the output of the following measurement:
We prove that satisfies the condition (N). Fix any . Then, we have
for all values of and . Here, (a) follows from the fact that for any . Therefore, satisfies the condition (N), and so, . ∎
In the literature, it is known that
for certain sequential team problems. One such team problem is the CHSH team, which was introduced in Example 1 and is discussed further below. In this example, we also show explicitly the necessity of communication between the agents and a mediator in order to achieve the maximum nonsignaling correlation.
Example 2 (CHSH team).
Recall that the CHSH team is a special case of the XOR team. In the CHSH team, we have binary observation and action spaces, and the observations are generated independently and uniformly. The reward function is defined as
For this problem, quantum-correlated policies can achieve the maximum reward of [30, Section 6.3.2]. However, nonsignaling policies can achieve the maximum reward of using the following policy, known in the literature as the Popescu-Rohrlich (PR) box [38]:
It is straightforward to verify that is nonsignaling; that is, is independent of for and . The reward of is , which is the maximum achievable reward by any policy, as . Hence, is the optimal nonsignaling policy. Therefore, for the CHSH team, we have
that is, nonsignaling policies improve upon the optimal value of quantum-correlated policies.
Note that to implement the above policy, the agents must communicate their observations to a mediator, and the mediator then directs them to apply either the same or different actions based on the product of their observations. This kind of communication is, in general, infeasible in team decision problems. Therefore, although allowing nonsignaling correlations among the agents’ actions enables us to formulate the team problem as a linear program (solvable in polynomial time), it is in general unrealistic to assume that agents can apply such policies in real-life applications, owing to the communication constraints dictated by the decentralized information structure.
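Both claims about the PR box (it attains the maximum reward of 1, and it is nonsignaling) can be verified directly by a short computation; the binary encoding below matches the CHSH setup.

```python
import itertools

# PR box: p(u1, u2 | y1, y2) puts probability 1/2 on each of the two
# pairs (u1, u2) with u1 XOR u2 == y1 AND y2, and 0 elsewhere.
def pr_box(u1, u2, y1, y2):
    return 0.5 if (u1 ^ u2) == (y1 & y2) else 0.0

B = (0, 1)
# expected CHSH reward under the PR box, with uniform observations
reward = sum(0.25 * pr_box(u1, u2, y1, y2) * ((u1 ^ u2) == (y1 & y2))
             for y1 in B for y2 in B for u1 in B for u2 in B)
print(reward)   # 1.0, the maximum achievable by any policy

# nonsignaling check: Agent 1's marginal p(u1 | y1, y2) is independent
# of y2 (the symmetric check for Agent 2 is identical)
for y1, u1 in itertools.product(B, B):
    marginals = {y2: sum(pr_box(u1, u2, y1, y2) for u2 in B) for y2 in B}
    assert marginals[0] == marginals[1] == 0.5
print("PR box is nonsignaling")
```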
IV Approximation of Randomized Policies via Extendible Nonsignaling Policies
As we have explained in the previous section, nonsignaling policies are in general infeasible to implement in teams due to the intrinsic communication constraints of the problem. However, if we impose further properties on nonsignaling policies, such as symmetric extendibility (defined below), then we can prove that such policies are almost equivalent to randomized policies, and hence can be used to approximately compute the optimal value over randomized policies. Indeed, after proving this result, in Section V we establish a linear programming approximation for computing .
To that end, we first give the definition of extendible nonsignaling policies. For any , where each is a positive integer, we define the set of extendible nonsignaling policies [28], denoted by , as the set of all such that there exists a nonsignaling extension which is permutation symmetric in actions and observations of each agent; that is, for any and
where and for , and whose marginals are ; that is,
for any and .
The theorem below states that randomized policies can be approximated by extendible nonsignaling policies if is sufficiently large. This result is the key to proving the linear programming approximation of classical team problems.
Note that extendible nonsignaling policies were first introduced in [28] for the case and . There, the authors proved the theorem below for this case; our result is thus an extension of theirs to arbitrary and . The technique used here is very similar to the proof of [28, Theorem 1]. However, there is a crucial difference in the details between the general problem and the case : for the case, the key step is to use the chain rule for mutual information between actions and then apply Pinsker’s inequality, whereas for arbitrary we must use multipartite mutual information, for which a Pinsker inequality is available but, unfortunately, no chain rule. This complicates the proof considerably.

Theorem 1.
Let . Then, we have
V LP Approximation of Teams
In this section, we show that the optimal value of the classical team problem can be approximated within an additive error by a linear program. The key idea is to extend the original team by adding extra identical agents. Namely, for Agent , we add to the team more agents that are identical to Agent . Then, after the observations are generated from the distribution of the original team problem, observation is sent to one of the agents, chosen at random. We send no observation to the remaining agents and expect no action from them; we refer to these as the null observation and null action, denoted by the symbol . Finally, we use the actions and observations of the chosen agents to compute the cost function .
It can be shown that the classical optimal value of the extended team is the same as the classical optimal value of the original team. Moreover, the nonsignaling optimal value of the extended team, which can be computed using a linear program, is the same as the extendible nonsignaling value of the original team. With these observations, one can easily obtain an approximation result using Theorem 1.
In the next subsection, we first give a precise mathematical description of the extended team, and then prove the main result of this paper.
V-A Extended Team Problem
Fix any , where is a positive integer for all . In the extended team, we have agents labelled by the pair , where and, given , . For Agent , the observation space is and the action space is . The observations are generated according to , which is defined as
Given observations and the corresponding actions , the cost function is given by
Note that both and are symmetric in observations and actions of identical agents; that is, for any and , we have
and  
The cost of the extended team under policy is given by
Let denote the set of all nonsignaling policies, and let be the set of extensions of the extendible nonsignaling policies of the original team.
Since both and are symmetric in observations and actions of identical agents, any nonsignaling policy can be replaced without increasing the cost by some policy in [28, Lemma 9]. It is also straightforward to prove that the optimal value of the extended team with randomized policies is the same as the optimal value of the original team with randomized policies. Moreover, the nonsignaling optimal value of the extended team is the same as the extendible nonsignaling value of the original team. These observations imply that