All warfare is based on deception. Hence, when we are able to attack, we must seem unable; when using our forces, we must appear inactive; when we are near, we must make the enemy believe we are far away; when far away, we must make him believe we are near.
- Sun Tzu, The Art of War (Sunzi and Wee, 2003)
As the quote above illustrates, even the earliest known work on military strategy and war, The Art of War, emphasizes the importance of deception in security. Deception can serve as a defense strategy by making the opponent/adversary perceive certain information of interest in an engineered way. Indeed, deception is not limited to hostile environments. In any non-cooperative multi-agent environment, as long as there is asymmetry of information and one agent is informed about the information of interest while another is not, the informed agent has power over the uninformed one: by sharing that information strategically, he/she can manipulate the uninformed agent's decisions or perceptions.
Especially with the introduction of cyber connectedness in physical systems, certain communication and control systems can be viewed as multi-agent environments, where each agent makes rational decisions to fulfill certain objectives. As an example, we can view transmitters (or sensors) and receivers (or controllers) as individual agents in communication (or control) systems. Classical communication and control theory is based on cooperation between these agents to meet certain challenges together, such as mitigating the impact of a noisy channel in communication, or stabilizing the underlying state of a system around an equilibrium through feedback in control. However, cyber connectedness makes these multi-agent environments vulnerable to adversarial interventions, and there is an inherent asymmetry of information as the information flows from transmitters (or sensors) to receivers (or controllers). (In control systems, we can also view the control input as information that flows implicitly from the controllers to the sensors, since it impacts the underlying state and correspondingly the sensors' measurements.) Therefore, if these agents are not cooperating, e.g., due to adversarial intervention, then the informed agents, i.e., transmitters or sensors, could seek to deceive the uninformed ones, i.e., receivers or controllers, so that they would perceive the underlying information of interest in the way the deceiver desires, and correspondingly take the manipulated actions.
Our goal here is to craft the information that could be available to an adversary in order to control his/her perception of the underlying state of the system, as a defensive measure. The malicious objective and the normal operation of the system may not be completely opposite of each other, as they would be in a zero-sum game; this implies that part of the malicious objective is benign, and with respect to that aligned part of the objectives the adversary would be acting in line with the system's interest. If we can somehow restrain the adversarial actions to fulfill only the aligned part, then the adversarial actions, i.e., the attack, could inadvertently end up helping the system toward its goal. Since a rational adversary makes decisions based on the information available to him/her, strategically crafting the signal that is shared with the adversary, or that the adversary can access, can be effective in that respect. Therefore, our goal is to design the information flowing from the informed agents, e.g., sensors, to the uninformed ones, e.g., controllers, in view of the possibility of adversarial intervention, so as to control the adversaries' perception of the underlying system, and correspondingly to persuade them (without any explicit enforcement) to fulfill the aligned parts of the objectives as much as possible without fulfilling the misaligned parts.
In this chapter, we provide an overview of the recent results Sayin et al. (2017); Sayin and Başar (2017, 2019) addressing certain aspects of this challenge in non-cooperative communication and control settings. For a discrete-time Gauss-Markov process, and when the sender and the receiver in a non-cooperative communication setting have misaligned quadratic objectives, in Sayin et al. (2017) we have shown the optimality of linear signaling rules (we use the terms "strategy", "signaling/decision rule", and "policy" interchangeably) within the general class of measurable policies, and provided an algorithm to compute the optimal policies numerically. Also in Sayin et al. (2017), we have formulated the optimal linear signaling rule in a non-cooperative linear-quadratic-Gaussian (LQG) control setting when the sensor and the controller have known misaligned control objectives. In Sayin and Başar (2017), we have introduced a secure sensor design framework, where we have addressed the optimal linear signaling rule again in a non-cooperative LQG setting when the sensor and the private-type controller have misaligned control objectives in a Bayesian setting, i.e., when the distribution over the private type of the controller is known. In Sayin and Başar (2019), we have addressed optimal linear robust signaling in a non-Bayesian setting, where the distribution over the private type of the controller is not known, and provided a comprehensive formulation by considering also the cases where the sensor could have partial or noisy information on the signal of interest and relevance. We elaborate further on these results in some detail throughout the chapter.
In Section 2, we review the related literature in economics and engineering. In Sections 3 and 4, we introduce the framework and formulate the deception-as-defense game, respectively. In Section 5, we elaborate on Gaussian information of interest in detail. In Sections 6 and 7, we address the optimal signaling rules in non-cooperative communication and control systems. In Section 8, we provide the optimal signaling rule against the worst possible distribution over the private types of the uninformed agent. In Section 9, we extend the results to partial or noisy measurements of the underlying information of interest. Finally, we conclude the chapter in Section 10 with several remarks and possible research directions.
Notation: Random variables are denoted by bold lower case letters, e.g., . For a random vector , denotes the corresponding covariance matrix. For an ordered set of parameters, e.g., , we use the notation , where . denotes the multivariate Gaussian distribution with zero mean and designated covariance. For a vector and a matrix , and denote their transposes, and denotes the Euclidean norm of the vector . For a matrix , denotes its trace. We denote the identity and zero matrices with the associated dimensions by and , respectively. denotes the set of -by- symmetric matrices. For positive semi-definite matrices and , means that is also positive semi-definite.
2 Deception Theory in Literature
There are various definitions of deception, and depending on the specific definition at hand, the analysis and the related applications vary. Commonly in signaling-based definitions of deception, there is information of interest private to an informed agent, whereas an uninformed agent may benefit from that information in making a certain decision. If the informed and uninformed agents are strategic while, respectively, sharing information and making a decision, then the interaction turns into a game where each agent selects his/her strategy according to his/her own objective, while taking into account that the other agent would also have selected his/her strategy according to a different objective. Correspondingly, such an interaction between the informed and uninformed agents can be analyzed under a game-theoretic solution concept. Note that there is a key distinction between the incentive-compatible deception model and the deception model with policy commitment.
We say that a deception model is incentive compatible if neither the informed nor the uninformed agent has an incentive to deviate from his/her strategy unilaterally.
The associated solution concept here is the Nash equilibrium Başar and Olsder (1999). Existence of a Nash equilibrium is not guaranteed in general, and even when one exists, there may be multiple Nash equilibria. Without certain commitments, none of the equilibria may be realized; and even if one is realized, which of them it will be is not certain beforehand, since different equilibria could be favorable for different players.
We say that in a deception model, there is policy commitment if either the informed or the uninformed agent commits to play a certain strategy beforehand and the other agent reacts being aware of the committed strategy.
The associated solution concept is the Stackelberg equilibrium, where one of the players leads the game by announcing his/her committed strategy Başar and Olsder (1999). Existence of a Stackelberg equilibrium is not guaranteed in general over unbounded strategy spaces. However, if it exists, all the equilibria lead to the same game outcome for the leader, since the leader could always have selected the most favorable one among them. We also note that if there is a favorable outcome for the leader in the incentive-compatible model, the leader has the freedom to commit to that policy in the latter model. Correspondingly, the leader gains an advantage by acting first to commit to a certain strategy, even though the result may not be incentive compatible.
Game-theoretic analysis of deception has attracted substantial interest in various disciplines, including economics and engineering. In the following subsections, we review the literature in these disciplines with respect to models involving incentive compatibility and policy commitment.
2.1 Economics Literature
The scheme of the type introduced above, called strategic information transmission, was introduced in a seminal paper by V. Crawford and J. Sobel Crawford and Sobel (1982). It has attracted significant attention in the economics literature due to the wide range of relevant applications, from advertising to expert advice sharing. In the model adopted in Crawford and Sobel (1982), the informed agent's objective function includes a commonly known bias term, different from the uninformed agent's objective. That bias term can be viewed as the misalignment factor between the two objectives. For the incentive-compatible model, the authors have shown that all equilibria are partition equilibria, where the informed agent controls the resolution of the shared information via certain quantization schemes, under certain assumptions on the objective functions (satisfied by quadratic objectives), and the assumption that the information of interest is drawn from a bounded support.
Following this inaugural introduction of the strategic information transmission framework, also called cheap talk due to the costless communication over an ideal channel, various settings, such as repeated games Morris (2001), have been studied extensively; however, all have considered scenarios where the underlying information is one-dimensional, e.g., a real number. Multi-dimensional information, in contrast, can lead to interesting results, such as full revelation of the information, even when the misalignment between the objectives is arbitrarily large, if there are multiple senders with different bias terms, i.e., misalignment factors Battaglini (2002). Furthermore, if there is only one sender yet multi-dimensional information, there can be full revelation of information along certain dimensions, while along the other dimensions the sender signals partially in a partition equilibrium, depending on the misalignment between the objectives Battaglini (2002).
The ensuing studies Farrell and Gibbons (1986); Gilligan and Krehbiel (1989); Farrell and Rabin (1996); Krishna and Morgan (2000); Morris (2001); Battaglini (2002) on cheap talk Crawford and Sobel (1982) have analyzed the incentive compatibility of the players. More recently, in Kamenica and Gentzkow (2011), the authors have proposed to use a deception model with policy commitment. They call it “sender-preferred sub-game perfect equilibrium” since the sender cannot distort or conceal information once the signal realization is known, which can be viewed as the sender revealing and committing to the signaling rule in addition to the corresponding signal realization. For information of interest drawn from a compact metric space, the authors have provided necessary and sufficient conditions for the existence of a strategic signal that can benefit the informed agent, and characterized the corresponding optimal signaling rule. Furthermore, in Tamura (2014), the author has shown the optimality of linear signaling rules for multivariate Gaussian information of interest and with quadratic objective functions.
2.2 Engineering Literature
There exist various engineering applications depending on the definition of deception. Reference Pawlick et al. (2017) provides a taxonomy of these studies with a specific focus on security. Obfuscation techniques to hide valuable information, e.g., via externally introduced noise Howe and Nissenbaum (2009); Clark et al. (2012); Zhu et al. (2012), can also be viewed as deception-based defense. As an example, in Howe and Nissenbaum (2009), the authors have provided a browser extension that can obfuscate a user's real queries by including automatically-fabricated queries to preserve privacy. Here, however, we specifically focus on signaling-based deception applications, in which we craft the information available to adversaries to control their perception rather than corrupting it. In line with the browser extension example, our goal is to persuade the query trackers to perceive the user behavior in a certain fabricated way, rather than limiting their ability to learn the actual user behavior.
In computer security, various (heuristic) deception techniques, e.g., honeypots and honeynets, are prevalent to make the adversary perceive a honey-system as the real one, or a real system as a honey one Spitzner (2002). Several studies, e.g., Carroll and Grosu (2011), have analyzed honeypots within the framework of binary signaling games, abstracting the complexity of crafting a real system to be perceived as a honeypot (or a honeypot to be perceived as a real system) into binary signals. Here, however, our goal is to address the optimal way to craft underlying information of interest with a continuum support, e.g., a Gaussian state.
The recent study Sarıtaş et al. (2017) addresses strategic information transmission of multivariate Gaussian information over an additive Gaussian noise channel for quadratic misaligned cost functions, and identifies the conditions under which a signaling rule attaining a Nash equilibrium can be a linear function. Recall that for the scalar case, when there is no noisy channel in-between, all the equilibria are partition equilibria, implying that all signaling rules attaining a Nash equilibrium are nonlinear, except for the babbling equilibrium, in which the informed agent discloses no information Crawford and Sobel (1982). Two other recent studies, Akyol et al. (2017) and Farokhi et al. (2017), address strategic information transmission for scenarios where the bias term is not common knowledge of the players and the solution concept is the Stackelberg equilibrium rather than the Nash equilibrium. They have shown that the Stackelberg equilibrium can be attained by linear signaling rules under certain conditions, different from the partition equilibria in the incentive-compatible cheap talk model Crawford and Sobel (1982). In Farokhi et al. (2017), the authors have studied strategic sensor networks for multivariate Gaussian information of interest, with myopic quadratic objective functions, in dynamic environments, and by restricting the receiver's strategies to affine functions. In Akyol et al. (2017), for jointly Gaussian scalar private information and bias variable, the authors have shown that optimal sender strategies are linear functions within the general class of measurable policies for misaligned quadratic cost functions when there is an additive Gaussian noise channel and a hard power constraint on the signal, i.e., when it is no longer cheap talk.
3 Deception-As-Defense Framework
Consider a multi-agent environment with asymmetry of information, where each agent is a selfish decision maker taking action or actions to fulfill his/her own objective only while actions of any agent could impact the objectives of the others. As an example, Fig. 1 illustrates a scenario with two agents: Sender (S) and Receiver (R), where S has access to (possibly partial or noisy version of) certain information valuable to R, and S sends a signal or signals related to the information of interest to R.
We say that an informed agent (or the signal the agent crafts) is deceptive if he/she shapes the information of interest private to him/her strategically in order to control the perception of the uninformed agent by removing, changing, or adding contents.
Deceptive signaling can play a key role in non-cooperative multi-agent environments, as well as in cooperative ones where certain (uninformed) agents could have been compromised by adversaries. In such scenarios, informed agents can signal strategically to the uninformed ones in view of the possibility that they have been compromised. Deceiving an adversary to act, or attack the system, in a way aligned with the system's goals might seem too optimistic given the very definition of an adversary. However, an adversary can also be viewed as a selfish decision maker seeking to satisfy a certain malicious objective, which may not necessarily be in complete conflict with the system's objective. This leads to the following notion of "deception-as-defense".
We say that an informed agent engages in a deception-as-defense mode of operation if he/she crafts the information of interest strategically to persuade the uninformed malicious agent (without any explicit enforcement) to act in line with the aligned part of the objective as much as possible without taking into account the misaligned part.
We re-emphasize that this approach differs from approaches that seek to raise suspicion about the information of interest in order to sabotage the adversaries' malicious objectives. Sabotaging the adversaries' malicious objectives may not be the best option for the informed agent unless the objectives are completely opposite of each other; in that special case, the deception-as-defense framework indeed ends up seeking to sabotage the adversaries' malicious objectives.
We also note that this approach differs from lying, i.e., the scenario where the informed agent provides totally different information (correlated or not) as if it were the information of interest. Lying could be effective, as expected, as long as the uninformed agent trusts the legitimacy of the provided information. However, in non-cooperative environments, this could turn into a game where the uninformed agent becomes aware of the possibility of lying. This correspondingly raises suspicion about the legitimacy of the shared information, and could end up sabotaging the adversaries' malicious objectives rather than controlling their perception of the information of interest.
Once a defense mechanism has been widely deployed, advanced adversaries can learn the defense policy in the course of time. The solution concept of the policy-commitment model addresses this possibility in the deception-as-defense framework in a robust way: the defender commits to a policy that takes into account the best reaction of adversaries who are aware of that policy. Furthermore, the transparency of the signal sent via the committed policy creates a trust-based relationship between S and R, which is powerful enough to persuade R to make certain decisions inadvertently, without any explicit enforcement by S.
4 Game Formulation
The information of interest is considered to be a realization of a known, continuous random variable in static settings, or a known (discrete-time) random process in dynamic settings. Since the static setting is a special case of the dynamic setting, we formulate the game in a dynamic, i.e., multi-stage, environment. We denote the information of interest by , where denotes its support. Let have zero mean and (finite) second-order moment. We consider scenarios where each agent has perfect recall and constructs his/her strategy accordingly. S has access to a possibly partial or noisy version of the information of interest. We denote the noisy measurement of by , where denotes its support. For each instance of the information of interest, S selects his/her signal as a second-order random variable correlated with , but not necessarily determined through a deterministic transformation on (i.e., is in general a random mapping). Let us denote the set of all signaling rules by . As we will show later, when we allow for such randomness in the signaling rule, under certain conditions the solution turns out to be a linear function of the underlying information plus an additive independent noise term. Due to the policy commitment by S, at each instant, with perfect recall, R selects a Borel-measurable decision rule , where , from a certain policy space in order to make a decision knowing the signaling rules and observing the signals sent.
Let denote the length of the horizon. We consider agents that have cost functions to minimize, instead of utility functions to maximize; the framework could be formulated for utility maximization rather straightforwardly. Furthermore, we specifically consider agents with quadratic cost functions, denoted by and .
An Example in Non-cooperative Communication Systems Over a finite horizon with length , S seeks to minimize over
by taking into account that R seeks to minimize over
where the weight matrices are arbitrary (but fixed). The following special case illustrates the applicability of this general structure of misaligned objectives (3) and (4). Suppose that the information of interest consists of two separate processes and , e.g., . Then (3) and (4) cover the scenarios where R
seeks to estimate by minimizing
whereas S wants R to perceive as , and end up minimizing
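To make this misalignment concrete, the following is a minimal Monte Carlo sketch of a hypothetical scalar instance (the variables z and theta and the three candidate signaling rules are illustrative assumptions, not the chapter's model): R wants to estimate z, S wants R's decision to track an independent process theta, and a partially revealing signal serves S better than either extreme.

```python
import random

# Hypothetical scalar instance of the misaligned objectives:
# z and theta are independent standard Gaussians, R makes decision u,
# R's cost is E[(u - z)^2] while S's cost is E[(u - theta)^2].
random.seed(0)
N = 200_000
samples = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

def s_cost(decision):
    """Monte Carlo estimate of S's cost E[(u - theta)^2]."""
    return sum((decision(z, th) - th) ** 2 for z, th in samples) / N

# Full disclosure s = z: R's best response is u = z.
cost_full = s_cost(lambda z, th: z)            # analytically E[(z - th)^2] = 2
# No disclosure: R falls back to the prior mean, u = 0.
cost_none = s_cost(lambda z, th: 0.0)          # analytically E[th^2] = 1
# Mixed signal s = z + theta: R's MMSE response is u = s / 2.
cost_mix = s_cost(lambda z, th: (z + th) / 2)  # analytically E[((z - th)/2)^2] = 0.5

assert cost_mix < cost_none < cost_full
```

Analytically, the three costs are 2, 1, and 0.5, respectively; the mixed signal illustrates why S can prefer crafting the information over both full disclosure and silence.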
An Example in Non-cooperative Control Systems Consider a controlled Markov process, e.g.,
where is a white Gaussian noise process. S seeks to minimize over
by taking into account that R seeks to minimize over
with arbitrary (but fixed) positive semi-definite matrices and , and positive-definite matrices and . Similar to the example in communication systems, this general structure of misaligned objectives (8) and (9) can bring in interesting applications. Suppose the information of interest consists of two separate processes and , e.g., , where is an exogenous process that does not depend on R's decision . For certain weight matrices, (8) and (9) cover the scenarios where R seeks to regulate around the zero vector by minimizing
whereas S wants R to regulate around the exogenous process by minimizing
We define the deception-as-defense game as follows:
The deception-as-defense game is a Stackelberg game between S and R, where
denotes the information of interest,
denotes S’s (possibly noisy) measurements of the information of interest,
Under the deception model with policy commitment, S is the leader, who announces (and commits to) his/her strategies beforehand, while R is the follower, reacting to the leader's announced strategies. Since R is the follower and takes actions knowing S's strategy , we let denote R's best-reaction set to S's strategy . Then, the strategy and best-reaction pair attains the Stackelberg equilibrium provided that
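The commitment structure can be sketched numerically in a scalar toy problem (an illustrative assumption, not the chapter's general model): S commits to a linear signaling rule, R best-responds with the minimum mean-square-error estimate, and S optimizes its commitment while anticipating that reaction.

```python
# Scalar Gaussian toy Stackelberg problem (illustrative assumption):
# S commits to a linear signal s = z + lam * theta, with z, theta
# independent standard Gaussians; R, knowing lam, best-responds with
# the MMSE estimate u = E[z | s] = s / (1 + lam^2).
# S's resulting cost E[(u - theta)^2] then has the closed form below.

def leader_cost(lam):
    # E[((z + lam*theta)/(1 + lam^2) - theta)^2] for unit-variance z, theta
    return (1 + (lam - 1 - lam ** 2) ** 2) / (1 + lam ** 2) ** 2

# The leader optimizes while anticipating the follower's best response.
grid = [i / 1000 for i in range(5001)]          # lam in [0, 5]
best_lam = min(grid, key=leader_cost)
best_cost = leader_cost(best_lam)

# Committing to a partially revealing signal beats full disclosure (lam = 0).
assert best_cost < leader_cost(0.0)
```

The grid search lands near lam ≈ 1.6, with a leader cost well below the full-disclosure cost of 2, illustrating the leader's first-mover advantage under commitment.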
5 Quadratic Costs and Information of Interest
Misaligned quadratic cost functions, in addition to their various applications, play an essential role in the analysis of the game . One advantage is that a quadratic cost function can be written as a linear function of the covariance of the posterior estimate of the underlying information of interest. Furthermore, when the information of interest is Gaussian, we can formulate a necessary and sufficient condition on the covariance of the posterior estimate, which turns out to consist of just semi-definite matrix inequalities. This leads to an equivalent semi-definite programming (SDP) problem over a finite-dimensional space, instead of a search for the best signaling rule over an infinite-dimensional policy space. In the following, we elaborate on these observations in further detail.
Due to the policy commitment, S needs to anticipate R's reaction to the selected signaling rule . Here we focus on the non-cooperative communication system; later, in Section 7, we show how a non-cooperative control setting can be transformed into a non-cooperative communication setting under certain conditions. Since the information flow is in only one direction, R faces a least mean-square-error estimation problem for any given . Suppose that is invertible. Then, the best reaction of R is given by
almost everywhere over . Note that the best reaction set is a singleton and the best reaction is linear in the posterior estimate , i.e., the conditional expectation of with respect to the random variables . When we substitute the best reaction by R into S’s cost function, we obtain
where . Since for arbitrary random variables and ,
the objective function to be minimized by S, (15), can be written as
where denotes the covariance of the posterior estimate,
and the constant is given by
We emphasize that is not the posterior covariance, i.e., in general.
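For completeness, the standard identity underlying this step is the orthogonality principle of MMSE estimation; a sketch in reconstructed notation (the symbols below are our reconstruction and may differ from the chapter's), with $\hat{\mathbf{x}} = \mathrm{E}[\mathbf{x}\mid\mathbf{s}]$ the posterior estimate and $H$ its covariance:

```latex
% Orthogonality of the MMSE error and the estimate:
\Sigma_{\mathbf{x}}
  \;=\; \mathrm{E}\!\left[(\mathbf{x}-\hat{\mathbf{x}})(\mathbf{x}-\hat{\mathbf{x}})'\right]
  \;+\; \underbrace{\mathrm{E}\!\left[\hat{\mathbf{x}}\hat{\mathbf{x}}'\right]}_{=:\,H},
% hence, for any symmetric weight matrix Q,
\mathrm{E}\!\left[(\mathbf{x}-\hat{\mathbf{x}})'\,Q\,(\mathbf{x}-\hat{\mathbf{x}})\right]
  \;=\; \mathrm{tr}\!\left(Q\,\Sigma_{\mathbf{x}}\right) - \mathrm{tr}\!\left(Q\,H\right).
```

This is why the quadratic cost is affine in the covariance of the posterior estimate, and also why $H$ (the covariance of the estimate) differs from the posterior covariance (the covariance of the error).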
The cost function depends on the signaling rule only through the covariance matrices , and the cost is an affine function of . By formulating this relation, we can obtain an equivalent finite-dimensional optimization problem over the space of symmetric matrices, as an alternative to the infinite-dimensional problem over the policy space . Next, we address the following question.
Relation between and What is the relation between the signaling rule and the covariance of the posterior estimate ?
Here, we only consider the scenario where S has access to the underlying information of interest perfectly. We will address the scenarios with partial or noisy measurements in Section 9 by transforming that setting to the setting of perfect measurements.
There are two extreme cases for the shared information: sharing the information fully without any crafting, or sharing no information at all. The former implies that the covariance of the posterior estimate would be , whereas the latter implies that it would be , since R has perfect memory.
In-between the extremes of sharing everything and sharing nothing What would be if S has shared the information only partially?
To address this, if we consider the positive semi-definite matrix , by (16) we obtain
Furthermore, if we consider the positive semi-definite matrix , by (16) we obtain
which is independent of the distribution of the underlying information and the policy space of S.
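As a sanity check of this necessary condition, the following sketch (a hypothetical 2-D Gaussian source with a deliberately nonlinear, lossy signaling rule; all names and values are illustrative) empirically verifies that the covariance of the posterior estimate is sandwiched between the zero matrix and the prior covariance in the positive semi-definite order.

```python
import numpy as np

# Empirical check of 0 <= H <= Sigma (in the positive semi-definite
# order) for an arbitrary signaling rule; 2-D Gaussian setup is illustrative.
rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])
n = 400_000
x = rng.multivariate_normal([0.0, 0.0], Sigma, size=n)

# A deliberately nonlinear, lossy signal: the sign of the first coordinate.
s = np.sign(x[:, 0])

# Empirical posterior estimate E[x | s] and its covariance H.
xhat = np.where(s[:, None] > 0, x[s > 0].mean(axis=0), x[s <= 0].mean(axis=0))
H = (xhat - xhat.mean(axis=0)).T @ (xhat - xhat.mean(axis=0)) / n

# Both H and Sigma - H should be positive semi-definite (up to sampling error).
tol = 1e-3
assert np.linalg.eigvalsh(H).min() > -tol
assert np.linalg.eigvalsh(Sigma - H).min() > -tol
```

For this particular rule, the (1,1) entry of H is analytically 4/π ≈ 1.273, strictly between 0 and the prior variance 2, matching the sandwich condition.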
Sufficient Condition Is the necessary condition (22) also sufficient?
A sufficient condition for arbitrary distributions is an open problem. However, in the following subsection, we show that when the information of interest is Gaussian, we can address the challenge: the necessary condition turns out to be sufficient.
5.1 Gaussian Information of Interest
In addition to its use in modeling various uncertain phenomena based on the central limit theorem, the Gaussian distribution has special characteristics that make it versatile in various engineering applications, e.g., in communication and control. The deception-as-defense framework is no exception. For example, if the information of interest is Gaussian, the optimal signaling rule turns out to be a linear function within the general class of measurable policies, as will be shown in different settings throughout this chapter.
Let us first focus on the single-stage setting, where the necessary condition (22) is given as
The convention here is that for arbitrary symmetric matrices , means that is positive semi-definite. We further note that the set of positive semi-definite matrices is a convex cone Wolkowicz et al. (2000). Correspondingly, Fig. 2 provides a figurative illustration of (23), where is bounded both below and above by certain cones in the space of symmetric matrices.
With a certain linear transformation that is bijective over (23), denoted by , where is not necessarily the same as , the necessary condition (23) can be written as
As an example of such a linear mapping when is invertible, we can consider and . If is singular, then the following lemma from Sayin and Başar (2018b) plays an important role in computing such a linear mapping.
Provided that a given positive semi-definite matrix can be partitioned into blocks such that a block at the diagonal is a zero matrix, then certain off-diagonal blocks must also be zero matrices, i.e.,
Let the singular with rank have the eigen-decomposition
where . Then, (23) can be written as
where we let
be the corresponding partitioning, i.e., . Since , the diagonal block must be positive semi-definite Horn and Johnson (1985). Further, (27) yields that , which implies that . Invoking Lemma 1, we obtain . Therefore, a linear mapping bijective over (23) is given by
where the unitary matrix and the diagonal matrix are as defined in (26).
The eigenvalues of are in the closed interval , since the eigenvalues of weakly majorize the eigenvalues of the positive semi-definite from below Horn and Johnson (1985).
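The effect of the mapping can be checked numerically; in this sketch (an illustrative 3x3 example, with the simplifying assumption that the covariance is invertible), the whitening transform sends any admissible covariance of the posterior estimate to a matrix whose eigenvalues lie in [0, 1].

```python
import numpy as np

# With T = Lambda^{-1/2} U' from the eigen-decomposition Sigma = U Lambda U',
# the condition 0 <= H <= Sigma becomes 0 <= T H T' <= I, i.e., all
# eigenvalues of T H T' lie in [0, 1]. (3x3 example is illustrative.)
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
Sigma = A @ A.T                      # a generic positive-definite covariance

lam, U = np.linalg.eigh(Sigma)
T = np.diag(lam ** -0.5) @ U.T       # bijective linear mapping

# Build an admissible H between the two extremes: H = Sigma^{1/2} P0 Sigma^{1/2}
# with 0 <= P0 <= I (P0 diagonal here for simplicity).
Sig_half = U @ np.diag(lam ** 0.5) @ U.T
P0 = np.diag([0.0, 0.4, 1.0])
H = Sig_half @ P0 @ Sig_half

eigs = np.linalg.eigvalsh(T @ H @ T.T)
assert eigs.min() > -1e-9 and eigs.max() < 1 + 1e-9
```

Here T H T' equals U' P0 U up to rounding, so its eigenvalues are exactly those of P0, namely {0, 0.4, 1}; the two extremes of the sandwich map to the eigenvalue endpoints 0 and 1.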
Up to this point, the specific distribution of the information of interest did not play any role. However, for the sufficiency of the condition (23), Gaussianness of the information of interest plays a crucial role as shown in the following theorem Sayin and Başar (2019).
Consider -variate Gaussian information of interest . Given any stochastic kernel , we have
Furthermore, given any covariance matrix satisfying
we have that there exists a probabilistic linear-in- signaling rule
where and is an independent -variate Gaussian random variable, such that . Let have the eigen-decomposition and . Then, the corresponding matrix and the covariance are given by
where the unitary matrix and the diagonal matrix are as defined in (26), , , and
Implication of Theorem 5.1 If the underlying information of interest is Gaussian, instead of the functional optimization problem
we can consider the equivalent finite-dimensional problem
Without any need to solve the functional optimization problem (36), Theorem 5.1 shows the optimality of the “linear plus a random variable” signaling rule within the general class of stochastic kernels when the information of interest is Gaussian.
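The achievability direction of the theorem can also be sketched numerically. Assuming an illustrative 2x2 covariance and a target matrix strictly between the two extremes, the construction below (a linear signal in whitened coordinates plus independent Gaussian noise, following the spirit of the theorem rather than its exact formulas) attains the target covariance of the posterior estimate exactly.

```python
import numpy as np

# Illustrative 2x2 instance (all names and values are assumptions).
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
w, U = np.linalg.eigh(Sigma)
Sig_half = U @ np.diag(np.sqrt(w)) @ U.T
Sig_half_inv = U @ np.diag(1 / np.sqrt(w)) @ U.T

# Target posterior-estimate covariance H with 0 <= H <= Sigma:
theta = 0.7
V = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
D = np.diag([0.9, 0.3])              # eigenvalues strictly inside (0, 1)
H = Sig_half @ V @ D @ V.T @ Sig_half

# Construction: signal s = L x + n with L = V' Sigma^{-1/2} and
# noise covariance N = D^{-1} - I (finite since 0 < d_i < 1).
L = V.T @ Sig_half_inv
N = np.diag(1 / np.diag(D) - 1)

# Covariance of the posterior estimate E[x | s] for jointly Gaussian (x, s):
M = L @ Sigma @ L.T + N
H_achieved = Sigma @ L.T @ np.linalg.solve(M, L @ Sigma)

assert np.allclose(H_achieved, H)
```

Since L Sigma L' = I here, the Gaussian MMSE formula collapses to Sigma^{1/2} V D V' Sigma^{1/2}, which is exactly the target H; the noise variances blow up as the target eigenvalues approach 0, matching the no-disclosure extreme.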
Versatility of the Equivalence Furthermore, a linear signaling rule would still be optimal even if we introduced additional constraints on the covariance of the posterior estimate, since the equivalence between (36) and (37) is not limited to equivalence in optimality.
Recall that the distribution of the underlying information plays a role only in proving sufficiency of the necessary condition. Therefore, in general, based only on the necessary condition, we have
The equality holds when the information of interest is Gaussian.
Therefore, for a fixed covariance , the Gaussian distribution is the best one for S to persuade R in accordance with his/her deceptive objective, since it yields complete freedom to attain any covariance of the posterior estimate between the two extremes .
The following counterexample shows that the sufficiency of the necessary condition (47) holds only for the Gaussian distribution.
A Counterexample for Arbitrary Distributions For a clear demonstration, suppose that and , and correspondingly . The covariance matrix satisfies the necessary condition (47) since
which implies that the signal must be fully informative about without giving any information about . Note that only implies that and are uncorrelated, but not necessarily independent, for arbitrary distributions. Therefore, if and are uncorrelated but dependent, then no signaling rule can attain that covariance of the posterior estimate even though it satisfies the necessary condition.
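A concrete instance of "uncorrelated yet dependent" (illustrative, with hypothetical scalar components): take x1 standard Gaussian and x2 = x1^2 - 1. Their correlation vanishes since E[x1^3] = 0, yet x2 is a deterministic function of x1, so fully revealing x1 leaks everything about x2:

```python
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.standard_normal(1_000_000)
x2 = x1**2 - 1.0   # zero mean, uncorrelated with x1 (E[x1^3] = 0), yet dependent

# uncorrelated: sample covariance is near zero
assert abs(np.mean(x1 * x2)) < 0.01
# dependent: E[x1^2 * x2] = E[x1^4] - E[x1^2] = 3 - 1 = 2 != 0
assert abs(np.mean((x1**2) * x2) - 2.0) < 0.05
```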
Let us now consider a Gauss-Markov process, which evolves according to the first-order auto-regressive recursion
where and . For this model, the necessary condition (22) is given by
for . Given , let have the eigen-decomposition
where , i.e., has rank . The linear transformation given by
which correspondingly yields that has eigenvalues in the closed interval . Then, the following theorem extends the equivalence result from the single-stage setting to multi-stage ones Sayin and Başar (2019).
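For intuition on the state recursion (40), the following sketch (the state matrix `A` and noise covariance `Sw` are placeholder choices, not values from the text) iterates the covariance recursion of a vector auto-regressive process to its fixed point, the discrete Lyapunov equation:

```python
import numpy as np

m = 2
A = 0.9 * np.eye(m)            # hypothetical stable state matrix (spectral radius < 1)
Sw = np.eye(m)                 # process-noise covariance

# covariance recursion S_{k+1} = A S_k A^T + Sw for x_{k+1} = A x_k + w_k
S = np.zeros((m, m))
for _ in range(500):
    S = A @ S @ A.T + Sw

# the limit satisfies the discrete Lyapunov equation S = A S A^T + Sw
assert np.allclose(S, A @ S @ A.T + Sw, atol=1e-6)
```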
Consider the -variate Gauss-Markov process following the state recursion (40). Given any stochastic kernel for , we have
Furthermore, given any covariance matrices satisfying
where , then there exists a probabilistic linear-in-, i.e., memoryless, signaling rule
where and is an independently distributed -variate Gaussian process such that for all . Given , let have the eigen-decomposition and . Then, the corresponding matrix and the covariance are given by
where the unitary matrix and the diagonal matrix are defined in (42), , , and
Without any need to solve the functional optimization problem
Theorem 5.2 shows the optimality of the “linear plus a random variable” signaling rule within the general class of stochastic kernels also in dynamic environments, when the information of interest is Gaussian.
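To see mechanically how a memoryless linear signaling rule interacts with Gauss-Markov dynamics, the following sketch (all gains and covariances are our own illustrative values) runs the standard Kalman-filter covariance recursion for a signal s_k = L x_k + n_k; the MMSE error covariance remains positive semi-definite at every stage:

```python
import numpy as np

m = 2
A = 0.8 * np.eye(m)            # hypothetical state matrix
Sw = np.eye(m)                 # process-noise covariance
L = np.array([[1.0, 0.0]])     # hypothetical memoryless linear gain
Sn = np.array([[0.5]])         # covariance of the independent signaling noise

P = np.eye(m)                  # error covariance of the MMSE estimate of x_0
for _ in range(10):
    # measurement update for s_k = L x_k + n_k
    K = P @ L.T @ np.linalg.inv(L @ P @ L.T + Sn)
    P = P - K @ L @ P
    # time update through x_{k+1} = A x_k + w_k
    P = A @ P @ A.T + Sw

# the error covariance stays positive semi-definite throughout
assert np.all(np.linalg.eigvalsh(P) >= -1e-9)
```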
6 Communication Systems
In this section, we elaborate further on the deception-as-defense framework in non-cooperative communication systems, with a specific focus on Gaussian information of interest. We first note that in this case the optimal signaling rule turns out to be linear and deterministic, i.e., S does not need to introduce additional independent noise into the transmitted signal. Furthermore, the optimal signaling rule can be computed analytically for the single-stage game Tamura (2014). We also extend the result on the optimality of linear signaling rules to multi-stage settings Sayin et al. (2017).
If we multiply both sides of the inequalities in the constraint set of (53) from left and right by unitary matrices such that the resulting matrices remain symmetric, the semi-definiteness inequalities still hold. Therefore, let the symmetric matrix have the eigen-decomposition
where and are positive semi-definite matrices with dimensions and . Then (53) can be written as
and there exists a such that
satisfies the constraint in (53). Then, the following lemma shows that an optimal solution for (56) is given by , , and . Therefore, in (56), the second (negative semi-definite) term can be viewed as the aligned part of the objectives whereas the remaining first (positive semi-definite) term is the misaligned part.
For arbitrary and diagonal positive semi-definite , we have
The left inequality follows since while is positive semi-definite. The right inequality follows since the diagonal entries of are majorized from below by its eigenvalues by Schur's theorem Horn and Johnson (1985), while the eigenvalues of are weakly majorized from below by the eigenvalues of since Horn and Johnson (1985).
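The Schur majorization fact used in this proof can be checked numerically (a sketch with a generic symmetric matrix `S` of our own making): the eigenvalues of a symmetric matrix majorize its diagonal entries, with equal totals since both sum to the trace:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
M = rng.standard_normal((n, n))
S = (M + M.T) / 2                           # arbitrary symmetric matrix

d = np.sort(np.diag(S))[::-1]               # diagonal entries, decreasing
lam = np.sort(np.linalg.eigvalsh(S))[::-1]  # eigenvalues, decreasing

# Schur's theorem: the eigenvalues majorize the diagonal entries
assert all(np.cumsum(lam)[k] >= np.cumsum(d)[k] - 1e-9 for k in range(n))
# with equality of the full sums (both equal the trace)
assert abs(np.sum(lam) - np.sum(d)) < 1e-9
```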
Note that the optimal signaling rule (60) does not include any additional noise term. The following corollary shows that the optimal signaling rule does not include additional noise when as well (versions of this result can be found in Sayin et al. (2017) and Sayin and Başar (2018b)).
Consider a deception-as-defense game , where the exogenous Gaussian information of interest follows the first-order autoregressive model (40), and the players S and R have the cost functions (3) and (4), respectively. Then, for the optimal solution of the equivalent problem, is a symmetric idempotent matrix, which implies that the eigenvalues of are either 0 or 1. Let denote the rank of , and have the eigen-decomposition
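The structural claim in this corollary is easy to verify numerically: a symmetric idempotent matrix is an orthogonal projection, so its eigenvalues are 0 or 1, and its rank equals its trace. A sketch with a rank-2 projection built from an orthonormal basis (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
n, r = 5, 2
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthonormal columns
P = Q[:, :r] @ Q[:, :r].T     # rank-r orthogonal projection: symmetric, idempotent

assert np.allclose(P @ P, P)  # idempotent
assert np.allclose(P, P.T)    # symmetric
lam = np.linalg.eigvalsh(P)   # ascending eigenvalues
# eigenvalues of a symmetric idempotent matrix are 0 or 1; rank = trace = r
assert np.allclose(lam, [0, 0, 0, 1, 1], atol=1e-9)
```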