1 Introduction
The Increasing Importance of Private-Information Sharing & Collusion Encoding.
Sharing information over private channels is a key issue in many ICT applications, from robotics to information systems [44]. Safeguarding privacy is also of utmost concern, not least in the context of the EU's newly-enforced General Data Protection Regulation (GDPR) [1]. So, being able to explicitly model data-sharing over private channels in multi-agent systems (MAS) or concurrent systems is crucial.
On the other hand, there are numerous ICT applications, such as identity schemes and distributed ledgers, where the threat of adversaries colluding privately with inside accomplices is greater than that of classical, outsider-type attackers. That said, verifying security against collusion-based attacks, such as terrorist frauds [10] in identification schemes, is a notoriously difficult problem in security verification [17]. Even the most recent formalisms attempting this [19] fall short of encapsulating the strategic nature of this property or of the threat model it envisages; instead, [19] looks at correspondence of events in applied calculi (i.e., if agents performed a series of actions with one result, then later another fact must be the case). This is an arguably crude approximation (based on causalities/correspondence of events in process calculi [15]) of what a collusive attack is. Indeed, such attacks, resting not on one but on two interdependent strategies, are notoriously hard to model [17]. Meanwhile, expressive logics for strategic reasoning have been shown effective in capturing intricate security requirements, such as coercion-resistance in e-voting [45, 4, 36]. Thus, the further study of strategic abilities under collusion is of timely interest.
Looking into this, we observe that the typical formalisms for encoding multi-agent systems (MAS), such as interpreted systems [24], support the analysis of strategy-oriented properties. Yet, these systems are generally inspired by concurrent action models and, so, explicit expression of data-sharing between agents therein is difficult to achieve naturally. Other frameworks, such as dynamic epistemic logic (DEL) [22, 39] or propositional assignments [34], provide agents with more flexibility on data sharing, but do not allow for the full power of strategic reasoning. To fill this gap, the formalism of vCGS was recently proposed in [7]. vCGS supports the explicit expression of private data-sharing in MAS. That is, a vCGS encodes syntactically and with a natural semantics a "MAS with 1-to-1 private channels": two agents have an explicit syntactic/semantic endowment to "see" some of each other's variables, without other agents partaking in this. Unfortunately, however, verifying strategy-based logic specifications on vCGS has been shown to be undecidable. So, it would be interesting and timely to see if we can maintain the needed feature of private-information sharing that vCGS have and recover the decidability of model checking logics that express strategic abilities.
To sum up, in this paper, we look at tractable verification of logics of strategic abilities in systems capable of expressing private data-sharing between processes; we show an important decidability result in this space. We do this with the view of applying it to ICT systems where the analysis of strategic behaviour is crucial: e.g., collusion-based attacks in security settings.
2 Context & Contribution
Context on Alternating-Time Temporal Logic. Alternating-time temporal logic (ATL) [2] is a powerful extension of branching-time logics which provides a framework for reasoning about strategic behaviours in concurrent and multi-agent systems. From the very beginning, ATL was proposed also for agents with imperfect information. In ATL semantics, agents can have memoryless strategies or recall-based/memoryful strategies; in the latter, they "remember" their entire history of moves, whereas in the former they recall none. Also, ATL with imperfect information comes with several semantic flavours [35] (subjective, objective, or common-knowledge interpretation), created by different views on the intricate relationships between its temporal and epistemic aspects.
Context on ATL Decidability. There have been several proposals for decidable fragments of ATL with imperfect information, which arose via two alternative avenues: (a) by imposing a memoryless semantics for the ATL operators [38], an approach implemented in the MCMAS tool; (b) by making structural restrictions on the indistinguishability relations inside the game structures, so that one looks at ATL just with distributed knowledge [20], ATL with hierarchical knowledge [11], or ATL over broadcasting systems [6, 9]. Some other decidable fragments can be further obtained by adapting decidable cases of the distributed-synthesis problem, or of the existence of winning strategies in multi-agent games with imperfect information. For instance, one may generalise the decidability of the distributed-synthesis problem in architectures without information forks [25], or utilise the decidability of the problem of the existence of a winning strategy in multi-player games with finite knowledge gaps [14].
Contribution
I. Our Private Data-Sharing Systems. In this paper, we propose a new class of game structures called A-cast game structures, where A is a set of processes/agents. In A-cast game structures, when some outsider (an agent not in A) sends some information to some agent in A, the same information is seen by all agents in the coalition A.
A-cast game structures are introduced using the formalism of vCGS recently proposed in [7].^1 We could introduce A-cast systems directly as game structures, yet we chose to use vCGS simply because it allows for an elegant formalisation of the information flow from outsiders to the coalition A. So, as a consequence of this presentation choice, A-cast game structures can be seen as a specialisation of the vCGS in [7].

^1 vCGS can be seen as a generalisation of Reactive Modules Games with imperfect information [29], in that each agent may dynamically modify the visibility of the atoms she "owns", by either disclosing or hiding the value of that atom to some other agent.
A-cast game structures are in fact a strict generalisation of both architectures without information forks [25] and broadcast systems [42]. On the one hand, the extension of [25] comes from the fact that, in A-cast game structures, the set of initial states is arbitrary, as are the epistemic relations defining the information available to agents in initial states. On the other hand, modelling these features in the setting of distributed architectures of [25], where a single initial state is imposed, requires the Environment agent to send the initial information to each agent by some private channel, hence creating an information fork.
II. Our Decidable Fragment of ATL. We now describe the class of formulas for which the model-checking problem is decidable on our A-cast game structures, as well as some details linked to this. This ATL fragment is composed of formulas which utilise only coalition operators involving sets of agents B ⊇ A, where A is the set of agents/processes describing the A-cast system.
To obtain this result, the key point is that two action histories starting in the same initial state, both generated by the same joint strategy for coalition A and lying in the same common-knowledge indistinguishability class for A, are in fact in the same distributed-knowledge indistinguishability class for A. This property allows for the design of a finitary information-set construction for the appropriate multi-player game with imperfect information [13], needed to decide each coalition operator.
III. Our Case Study and Its Need of Verification against Formulae with Nested ATL Modalities. We provide a case study which shows the relevance of our new decidable class of ATL model-checking. This case study lives in the cybersecurity domain: identity schemes and the threats of collusion-based attacks therein (i.e., terrorist-fraud attacks). Concretely, we model the distance-bounding (DB) protocol by Hancke and Kuhn [31] as an A-cast vCGS, and the existence of a terrorist-fraud attack on this protocol as an ATL formula involving a non-singleton coalition operator. In fact, we also note that the formula which specifies the existence of a terrorist-fraud attack in our case study is the first type of formula requiring nesting of coalition operators.
Hence, the model-checking algorithm proposed in this paper can be applied to this case study, while other existing decidable model-checking or distributed-synthesis frameworks cannot treat it (due to the formula it requires). The only exception would be the utilisation of the memoryless semantics of [38], which would be applicable since our case study is in fact a model in which there exists a finite number of lasso-type infinite runs. Yet, specifying our case study in a formalism like that of [38] would require an explicit encoding/"massaging" of the agent memory into the model. In other words, our algorithm synthesises memoryful strategies and hence allows working with a given model without explicitly encoding agent memory into the model.
Structure. In Section 3, we present general preliminaries on Alternating-time Temporal Logic (ATL) and concurrent game structures with imperfect information (iCGS) [2]. In Section 4, we recall MAS with private channels, called vCGS [7]. On top of vCGS, we introduce our main formalism, A-cast systems, and give ATL a semantics under the assumptions of imperfect information and perfect recall, with the subjective interpretation [35]. In Section 5, we present the main decidability result for model checking ATL on top of A-cast systems. In Section 6, we show how to use the logic and results herein to check the existence of collusion-based attacks in secure systems. Section 7 discusses related work and future research.
3 Background on Alternating-time Temporal Logic
We here recall background notions on concurrent game structures with imperfect information (iCGS) and Alternating-time Temporal Logic (ATL) [2]. We denote the length of a tuple v as |v|, and its i-th element as v_i (or v[i]). For i ≤ |v|, let v_{≥i} be the suffix of v starting at v_i and v_{≤i} the prefix of v ending at v_i. Further, we denote by last(v) the last element of v. Hereafter, we assume a finite set Ag of agents and an infinite set AP of atomic propositions (atoms).
Definition 1 (iCGS)
Given the sets Ag of agents and AP of atoms, a CGS with imperfect information is a tuple M = ⟨S, I, {Act_a}_{a∈Ag}, {∼_a}_{a∈Ag}, {P_a}_{a∈Ag}, τ, π⟩ where:

S is the set of states, with I ⊆ S the set of initial states.

For every agent a ∈ Ag, Act_a is the set of actions for a. Let Act = ⋃_{a∈Ag} Act_a be the set of all actions, and ACT = ∏_{a∈Ag} Act_a the set of all joint actions.

For every agent a ∈ Ag, the indistinguishability relation ∼_a is an equivalence relation on S.

For every a ∈ Ag and s ∈ S, the protocol function P_a returns the non-empty set P_a(s) ⊆ Act_a of actions enabled at s for a, s.t. s ∼_a s' implies P_a(s) = P_a(s').

τ : S × ACT → S is the (partial) transition function s.t. τ(s, α) is defined iff α_a ∈ P_a(s) for every a ∈ Ag.

π : S → 2^{AP} is the labelling function.
An iCGS describes the interactions of a set of agents. Every agent a has imperfect information about the global state of the iCGS: in every state s, she considers any state s' with s ∼_a s' as (epistemically) possible [24, 35].
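To make Definition 1 concrete, the following is a minimal sketch of a two-agent iCGS in Python. All concrete names (states s0..s2, actions "a"/"b") are our own illustrative choices, not from the paper; the sanity check at the end exercises the uniformity requirement on protocols.

```python
# A toy two-agent iCGS in the spirit of Definition 1: states, initial states,
# per-agent actions, indistinguishability relations, protocols, transitions.

states = {"s0", "s1", "s2"}
initial = {"s0"}
acts = {"1": {"a", "b"}, "2": {"a"}}                 # actions per agent

# Indistinguishability: agent 2 cannot tell s1 and s2 apart; agent 1 can.
indist = {"1": {(s, s) for s in states},
          "2": {(s, s) for s in states} | {("s1", "s2"), ("s2", "s1")}}

def protocol(agent, state):
    # Enabled actions; Definition 1 requires s ~_a s' => P_a(s) = P_a(s').
    return acts[agent]

def transition(state, joint):
    # A deterministic transition function on joint actions (one per agent).
    return "s1" if joint["1"] == "a" else "s2"

# Sanity-check the uniformity condition on protocols.
for agent, rel in indist.items():
    for (s, t) in rel:
        assert protocol(agent, s) == protocol(agent, t)

assert transition("s0", {"1": "a", "2": "a"}) == "s1"
```

A full model checker would of course work on a symbolic presentation; this sketch only fixes the vocabulary used in the rest of the section.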
To reason about the strategic abilities of agents in iCGS, we adopt Alternating-time Temporal Logic (ATL).
Definition 2
Formulas φ in ATL are defined as follows, for p ∈ AP and A ⊆ Ag: φ ::= p | ¬φ | φ ∧ φ | ⟨⟨A⟩⟩Xφ | ⟨⟨A⟩⟩(φ U φ) | ⟨⟨A⟩⟩(φ R φ).
The strategy operator ⟨⟨A⟩⟩φ is read as "coalition A can achieve φ".
The subjective interpretation of ATL with imperfect information and perfect recall [35] is defined on iCGS as follows. Given an iCGS M, a path is a (finite or infinite) sequence λ = s_0 s_1 … of states such that for every i ≥ 0 there exists some joint action α ∈ ACT such that s_{i+1} = τ(s_i, α). A finite, non-empty path is called a history. Hereafter, we extend the indistinguishability relations to histories: h ∼_a h' iff |h| = |h'| and h_i ∼_a h'_i for every i ≤ |h|.
Definition 3 (Strategy)
A uniform, memoryful strategy for agent a is a function σ_a : S^+ → Act_a such that for all histories h, h': (i) σ_a(h) ∈ P_a(last(h)); and (ii) if h ∼_a h' then σ_a(h) = σ_a(h').
Given a joint strategy σ_A = {σ_a}_{a∈A} for coalition A and a history h, let out(h, σ_A) be the set of all infinite paths whose initial segment of length |h| is indistinguishable from h for some agent in A and which are consistent with σ_A.
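The uniformity condition (ii) of Definition 3 can be checked mechanically on a finite set of histories; the following hedged sketch (all names ours) makes the pointwise extension of the indistinguishability relation to histories explicit.

```python
# A sketch of checking uniformity (Definition 3): a memoryful strategy must
# prescribe the same action on any two indistinguishable histories. Histories
# are tuples of states; `indist` relates single states for a fixed agent.

def hist_indist(h1, h2, indist):
    """Pointwise extension of ~_a to histories of equal length."""
    return len(h1) == len(h2) and all((s, t) in indist for s, t in zip(h1, h2))

def is_uniform(strategy, histories, indist):
    """strategy: a dict mapping histories to actions."""
    return all(strategy[h1] == strategy[h2]
               for h1 in histories for h2 in histories
               if hist_indist(h1, h2, indist))

# The agent cannot distinguish s1 from s2, so a uniform strategy must agree
# on the histories ("s0", "s1") and ("s0", "s2").
rel = {("s0", "s0"), ("s1", "s1"), ("s2", "s2"), ("s1", "s2"), ("s2", "s1")}
sigma = {("s0", "s1"): "a", ("s0", "s2"): "a"}
assert is_uniform(sigma, list(sigma), rel)
```

A strategy assigning different actions to those two histories would fail this check, which is exactly the constraint that makes synthesis under imperfect information hard.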
We now assign a meaning to ATL formulas on iCGS.
Definition 4 (Satisfaction)
The satisfaction relation ⊨ for an iCGS M, path λ, index i ≥ 0, and formula φ is defined as follows (clauses for Boolean operators are immediate and thus omitted):

(M, λ, i) ⊨ p iff p ∈ π(λ_i);

(M, λ, i) ⊨ ⟨⟨A⟩⟩Xφ iff for some strategy σ_A, for all λ' ∈ out(λ_{≤i}, σ_A), (M, λ', i+1) ⊨ φ;

(M, λ, i) ⊨ ⟨⟨A⟩⟩(φ U ψ) iff for some strategy σ_A, for all λ' ∈ out(λ_{≤i}, σ_A), there is some j ≥ i s.t. (M, λ', j) ⊨ ψ and (M, λ', k) ⊨ φ for all i ≤ k < j;

(M, λ, i) ⊨ ⟨⟨A⟩⟩(φ R ψ) iff for some strategy σ_A, for all λ' ∈ out(λ_{≤i}, σ_A) and all j ≥ i, (M, λ', j) ⊨ ψ, or there is some i ≤ k ≤ j s.t. (M, λ', k) ⊨ φ.
Further operators, such as 'eventually' Fφ = ⊤ U φ and 'globally' Gφ = ⊥ R φ, can be introduced as usual.
A formula φ is true at state s, or (M, s) ⊨ φ, iff for all paths λ starting in s, (M, λ, 0) ⊨ φ. A formula φ is true in an iCGS M, or M ⊨ φ, iff (M, s_0) ⊨ φ for all initial states s_0 ∈ I.
Our choice for the subjective interpretation of ATL is motivated by the fact that it allows us to talk about the strategic abilities of agents as depending on their knowledge, which is essential in the analysis of security protocols. We illustrate this point in Section 6.
Hereafter we tackle the following major decision problem.
Definition 5 (Model checking problem)
Given an iCGS M and an ATL formula φ, the model checking problem amounts to determining whether M ⊨ φ.
4 Agents with Visibility Control
We now provide details on the notion of agent presented in [7]. Such an agent can change the truth value of the atoms she controls, and make atoms visible to other agents.
Definition 6 (Visibility Atom)
Given an atom p ∈ AP and an agent a ∈ Ag, v_{a,p} denotes a visibility atom expressing, intuitively, that the truth value of p is visible to a. By Vis we denote the set of all visibility atoms v_{a,p}, for a ∈ Ag and p ∈ AP. By Vis_a = {v_{a,p} | p ∈ AP} we denote the set of visibility atoms for agent a.
Importantly, the notion of visibility in Def. 6 is dynamic, rather than static, as it can change at run time. That is, agent a can make an atom p she controls visible (resp. invisible) to agent b by setting the truth value of the visibility atom v_{b,p} to true (resp. false). This intuition motivates the following definition.
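As a small illustration (the encoding of visibility atoms as tagged tuples is our own choice, not the paper's), states can simply be sets containing both propositional and visibility atoms:

```python
# Visibility atoms (Definition 6) as tagged tuples: ("vis", a, p) plays the
# role of the visibility atom making p's truth value visible to agent a.
# A state is simply a set of propositional and visibility atoms.

def visible(state, agent):
    """The set of atoms whose truth value the agent can see in this state."""
    return {p for atom in state if isinstance(atom, tuple)
            for (tag, a, p) in [atom] if tag == "vis" and a == agent}

q = {"p", ("vis", "1", "p"), ("vis", "2", "q")}
assert visible(q, "1") == {"p"}     # agent 1 sees p (true in q)
assert visible(q, "2") == {"q"}     # agent 2 sees q (false, as q is absent)
```

Because visibility is encoded in the state itself, an agent's update commands can toggle it at run time, which is precisely the dynamic aspect discussed above.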
Definition 7 (VisibilityControlling Agent: Syntax)
Given a set AP of atoms, an agent a is a tuple ⟨V_a, C_a⟩ such that

V_a ⊆ AP is the set of atoms controlled by agent a;

C_a is a finite set of update commands of the form:
g ↦ p_1 := t_1; … ; p_k := t_k; v_{b_1,q_1} := t'_1; … ; v_{b_m,q_m} := t'_m
where each p_j and q_j is an atom controlled by a that occurs at most once, the guard g is a Boolean formula over AP ∪ Vis, all b_j are agents in Ag different from a, and each t_j, t'_j is a truth value in {tt, ff}.
We denote by grd(c) and asg(c) the guard (here, g) and the assignment of a command c, respectively.
The intuitive reading of a guarded command is: if the guard is true, then the agent executes the assignment. By Def. 7 every agent a controls the truth value of every atom in V_a. Moreover, differently from [3, 32], she can switch the visibility of her own atoms for some other agent b, by means of assignments v_{b,p} := tt/ff. Lastly, by requiring that all agents b appearing in visibility assignments are different from a, we guarantee that agent a cannot lose visibility of her own atoms. Hereafter, we assume that control is exclusive: for any two distinct agents a and b, V_a ∩ V_b = ∅, i.e., the sets of controlled atoms are disjoint. Then, we often talk about the owner of an atom.
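The execution of one round of guarded commands can be sketched as follows. This is a hedged, minimal model under our own encoding (assignments as Python dicts, visibility atoms as tagged tuples); the key point it illustrates is inertia: atoms not mentioned in any assignment keep their value.

```python
# A minimal sketch of one joint transition step: each agent contributes a
# command (guard, assignment), where the assignment maps propositional atoms
# or visibility atoms ("vis", b, p) to True/False. Unassigned atoms persist.

def guard_enabled(state, visible_atoms, guard_atoms, holds):
    # Protocol condition: the guard's atoms are visible and the guard holds.
    return guard_atoms <= visible_atoms and holds(state)

def step(state, commands):
    """Successor state after executing one command per agent."""
    new = set(state)
    for _guard, assignment in commands:
        for atom, value in assignment.items():
            (new.add if value else new.discard)(atom)
    return frozenset(new)

# Agent 1 sets p to false and discloses p to agent 2 in the same command.
q = {"p", ("vis", "1", "p")}
q2 = step(q, [(None, {"p": False, ("vis", "2", "p"): True})])
assert q2 == frozenset({("vis", "1", "p"), ("vis", "2", "p")})
```

Note how the single command both updates a controlled atom and changes its visibility, matching the dual role of assignments in Definition 7.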
The agents in Def. 7 can be thought of as compact, program-like specifications of a multi-agent system. The actual transitions in the system are described by a specific class of concurrent game structures with imperfect information, that we call iCGS with visibility control (vCGS). In particular, a state q in a vCGS is a set of propositional and visibility atoms such that for every a ∈ Ag and p ∈ V_a, v_{a,p} ∈ q, that is, every agent can see the atoms she controls. Then, given a state q and an agent a, we define vis(q, a) = {p ∈ AP | v_{a,p} ∈ q} as the set of atoms visible to agent a in q. Notice that, by definition, V_a ⊆ vis(q, a) for every state q.
Definition 8 (VisibilityControlling Agents: Semantics)
Given a set Ag of agents as in Def. 7 over a set AP of atoms, and a set I of initial states, an iCGS with visibility control (vCGS) is an iCGS where:

S and I are the set of all states and the set of initial states, respectively;

For every agent a, Act_a = C_a, i.e., the available actions are the agent's update commands.

The indistinguishability relation is defined so that q ∼_a q' iff vis(q, a) = vis(q', a) and, for every p ∈ vis(q, a), p ∈ q iff p ∈ q'.

For every a ∈ Ag and q ∈ S, the protocol function returns the set P_a(q) of update commands g ∈ C_a s.t. atoms(grd(g)) ⊆ vis(q, a) and q ⊨ grd(g), where atoms(grd(g)) is the set of atoms occurring in grd(g). That is, all atoms appearing in the guard are visible to the agent and the guard is true. We can enforce P_a(q) ≠ ∅ for all a and q by introducing a null command, with trivial guard and empty assignment. One can check that if q ∼_a q' then P_a(q) = P_a(q').

The transition function τ is s.t. q' = τ(q, (g_a)_{a∈Ag}) holds iff:

For every a ∈ Ag, g_a ∈ P_a(q).

For every a ∈ Ag and p ∈ V_a: p ∈ q' iff either asg(g_a) contains the assignment p := tt, or p ∈ q and asg(g_a) does not contain the assignment p := ff.

For every a, b ∈ Ag and p ∈ V_a: v_{b,p} ∈ q' iff either asg(g_a) contains the assignment v_{b,p} := tt, or v_{b,p} ∈ q and asg(g_a) does not contain the assignment v_{b,p} := ff.


The labelling function is the identity restricted to AP, that is, every state is labelled with the propositional atoms belonging to it.
Definitions 7 and 8 implicitly state that agents in a vCGS interact by means of strategies, defined, as for iCGS, as suitable functions from histories to commands. This means that the set of guarded commands in an agent specification describes all possible interactions, from which agents choose a particular one. The chosen command can be seen as a refinement of the corresponding guarded command, in the form of a (possibly infinite) set of guarded commands, each obtained from the unique original command by strengthening its guard with an additional formula uniquely identifying the agent's observations of the history.
Clearly, vCGS are a subclass of iCGS, but still expressive enough to capture the whole of iCGS. Indeed, [8] provides a polynomial-time reduction of the ATL model checking problem from iCGS to vCGS. Intuitively, all components of an iCGS can be encoded by using propositional and visibility atoms only. In particular, since ATL model checking under imperfect information and perfect recall is undecidable on iCGS [21], it follows that this is also the case for vCGS. This also means that, in general, the "implementation" of a strategy accomplishing some ATL objective along the above idea of "refining" the guarded commands may yield an infinite set of guarded commands. The possibility of obtaining a finite, refined set is therefore intimately related to the identification of a subclass of vCGS in which the model checking problem for ATL is decidable. We address this problem in the next section and provide a relevant case study for this subclass in Section 6.
5 vCGS with Coalition Broadcast
Given the undecidability results in [7], it is of interest to find restrictions on the class of vCGS or on the specification language that allow for a decidable model checking problem. Hereafter, we introduce one such restriction that is also relevant for the security scenario presented in Section 6, thus allowing us to verify it against collusion. In what follows we fix a coalition A of agents.
Definition 9 (cast vCGS)
A vCGS with broadcast towards coalition A (or A-cast vCGS) satisfies the following conditions for every agent b ∉ A, command g ∈ C_b, atom p ∈ V_b, and agents i, j ∈ A:

(Cmp) asg(g) contains an assignment of the type v_{i,p} := t, for some truth value t.

asg(g) contains v_{i,p} := tt iff it contains v_{j,p} := tt also.
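The two conditions above can be checked syntactically on each command of each outsider. The following is a hedged sketch under our own encoding of assignments (a dict from visibility atoms ("vis", i, p) to truth values); the names are ours, reconstructed from the text of Definition 9.

```python
# A sketch of the A-cast conditions on a command of an agent b outside
# coalition A: (Cmp) the assignment explicitly sets the visibility atom
# ("vis", i, p) for every member i of A and every atom p owned by b, and the
# second condition forces the same visibility value for all members of A.

def is_Acast_command(assignment, owned_atoms, coalition):
    for p in owned_atoms:
        values = {assignment.get(("vis", i, p), None) for i in coalition}
        if None in values:       # (Cmp) violated: visibility left implicit
            return False
        if len(values) != 1:     # members of A would observe p differently
            return False
    return True

ok = {("vis", "i", "p"): True, ("vis", "j", "p"): True}
bad = {("vis", "i", "p"): True, ("vis", "j", "p"): False}
assert is_Acast_command(ok, {"p"}, {"i", "j"})
assert not is_Acast_command(bad, {"p"}, {"i", "j"})
```

Since both conditions are per-command and purely syntactic, membership in the A-cast class can be decided in time linear in the size of the agent specifications.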
Remark 1
Condition (Cmp) ensures that each command of an agent outside the coalition determines explicitly the visibility of each variable in V_b, which thus cannot just be carried over implicitly by some transition. Hence, the two conditions together ensure that, in all states along a run except the initial state, all agents inside coalition A observe the same atoms owned by adversaries outside A. This statement, formalized in Lemma 1 below, is a generalization of the "absence of information forks" from [25], which disallows private one-way communication channels from agents outside the coalition to agents inside the coalition, in order to avoid undecidability of their synthesis problem.
The restriction in (Cmp), however, does not apply at initial states: the indistinguishability relation is arbitrary on initial states. This makes systems satisfying Def. 9 strictly more general than systems without "information forks" in [25]. To see that, note that the notion of strategy in [25] amounts to the following: a strategy for an agent is a function of the sequence of her observations so far, subject to the same conditions (i) and (ii) as in Def. 3. The difference between that definition and ours (where a strategy is a function from histories of states to actions) is that the former implies a unique initial state. This is not the case in our setting.
Indeed, a CGS with multiple initial states and an arbitrary indistinguishability relation on these may be simulated by a CGS with a unique initial state, by allowing the environment (or the adversaries) to make an initial transition to any of the original initial states. But this additional initial transition must set the indistinguishability of the resulting state, which means that if the original CGS had two initial states that some agent cannot distinguish but another agent can, this creates an information fork in the sense of [25]. So our setting allows "information forks only at initial states", contrary to [25].
Note that Def. 9 still allows an agent to have a "private one-way communication channel" to some other agent, in the sense that, for some atom p owned by the former and some reachable state, the visibility atom for p holds for the latter but for no other agent, so that this agent alone sees the value of p.
Remark 2
vCGS constrained by Def. 9 are more general than the "broadcast" game structures from [6] or [42]. To see this, note that the setting from [6] amounts to a restriction under which distinct joint actions leading to distinct outcomes are distinguishable to every agent. But this restriction disallows actions by an agent which update some variable that is invisible to every other agent, a fact which is permitted by Def. 9. Our setting is also more general than the "broadcast" systems in [42]: in such broadcast systems, the broadcast condition is to be satisfied by all agents, both inside and outside the coalition, which is not the case here.
In what follows, the notation E_A stands for the group knowledge relation for coalition A, C_A for the common knowledge relation for A, and D_A for the distributed knowledge relation for A [24]. All three notations are overloaded over states and histories: letting R be any of E_A, C_A, or D_A, then for all histories h, h', we set h R h' iff (i) |h| = |h'|, and (ii) h_i R h'_i for every i ≤ |h|.
The following lemma is essential in proving the decidability result in Theorem 5.1. Intuitively, it states that histories that are related by the common knowledge relation of coalition A, are compatible with the same strategy, and share the same initial state, are in fact related by the distributed knowledge relation of that coalition. This can be seen as a generalization of Lemma 2 from [42].
Lemma 1
Let M be an A-cast vCGS. Assume a joint strategy σ_A on M. Consider histories h and h' such that h_0 = h'_0, h C_A h', and both h and h' are compatible with σ_A. Then h D_A h', and in particular h_i D_A h'_i for all i ≤ |h|.
Proof
We will actually prove the following properties by induction on the position i in the histories:

For every and , iff .

For every , .

For each , and , .
Note that all these properties imply that h D_A h'.
The base case is trivial by the assumption that h_0 = h'_0, so let us assume the properties hold for some i. Since, as noted, this implies that the two prefixes are related by distributed knowledge, uniformity of the joint strategy implies that the same tuple of coalition actions is played on both. But then the effect of this unique tuple of actions on each atom is the same (actions are deterministic), which ensures property (1).
To prove property (2), assume there exist two joint actions such that for each , . Fix some and denote for both . Also take some and two agents . We will give the proof for , the general case following by induction on the length of the path of indistinguishabilities relating . And, for this, we assume, without loss of generality, that .
Assume now that . The first assumption, together with , implies that . So, by condition (Cmp) in Def. 9, for each , contains assignment , and, by condition , both contain . But, in this way, we get that . So . A similar argument holds if we start with or which proves that condition (2) holds.
For proving property (3), assume . By property (2) just proved, for every , hence for every . By assumption that and by definition of , there must exist some such that , and for every , iff . Hence this property must be met by , which ends our proof.
Remark 3
Lemma 1 still allows A-cast vCGS in which, for some histories h and h', h C_A h' holds but h D_A h' does not, namely when h_0 ≠ h'_0. That is, this lemma does not imply that common knowledge and distributed knowledge coincide on histories in general.
ATL_A formulas. Let A be a set of agents. We denote the subclass of ATL formulas whose coalition operators only involve sets of agents B ⊇ A as ATL_A formulas.
Theorem 5.1
The model checking problem for A-cast vCGS and ATL_A formulas is 2EXPTIME-complete.
Proof
The proof proceeds by structural induction on the given formula. We illustrate the technique when the formula is of type ⟨⟨A⟩⟩(φ U ψ) or ⟨⟨A⟩⟩(φ R ψ); this can be easily generalized to coalition operators ⟨⟨B⟩⟩ with B ⊇ A, since then any A-cast vCGS is also a B-cast vCGS. The general case follows by the usual structural induction technique: for each subformula of the type ⟨⟨A⟩⟩(φ U ψ), we append a new atomic proposition which labels each state where coalition A has a uniform strategy to achieve φ U ψ.
We construct a two-player game that generalises the information-set construction for solving two-player zero-sum games with imperfect information in [42]. The difficult part is to show that the information sets do not "grow" unboundedly. To this end, it suffices that, for each history compatible with a joint strategy, we keep track of just the first and last states in the history. This is so since, by Lemma 1, every uniform strategy behaves identically on two histories that share their first and last states and are related by the common knowledge relation for A.
More technically, we build a turn-based game G with perfect information in which the protagonist simulates each winning strategy for coalition A on finite abstractions of information sets. The state set of G contains macro-states defined as tuples whose most important components are triples (s, q, b), where s and q represent the initial and the final state of a history, while b is a bookkeeping bit remembering whether the objective has been reached during the encoded history. Any two triples in a macro-state are connected via the common knowledge relation of M extended to tuples. Hence, intuitively, the macro-states along a run in G abstract a set of histories of equal length related by the common knowledge relation in M.
The essential step in the construction of G is to show that this abstraction is compatible with any uniform strategy. That is, for any two histories of equal length, starting in the same initial state, ending in the same final state, and related by the common knowledge relation (on tuples of states), any uniform strategy prescribes the same action to every agent in coalition A. So, an agent does not need to remember all intermediate steps for deciding which action to choose; the initial and final states of each history suffice. By doing so, all strategies in the original game M are simulated by strategies in the resulting game G, and vice versa.
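The finite abstraction at the heart of this argument can be sketched as follows. This is a simplified, hedged rendering under our own names: an information set is represented only by its (initial state, last state) pairs, which by Lemma 1 is all a uniform coalition strategy can depend on, so the representation never grows with the length of the histories.

```python
# A sketch of the finitary information-set idea behind Theorem 5.1: a set of
# equal-length histories related by common knowledge is abstracted to the set
# of (initial state, last state) pairs.

def successors(macro, strategy, moves, ck_classes):
    """macro: frozenset of (init, last) pairs; strategy picks the coalition's
    joint action per pair; moves(last, act) gives the possible next states
    (over all adversary responses); ck_classes splits the outcome set into
    common-knowledge classes, one of which the antagonist picks."""
    outcome = {(init, nxt)
               for (init, last) in macro
               for nxt in moves(last, strategy((init, last)))}
    return [frozenset(c) for c in ck_classes(outcome)]

# Toy instance: one initial state, action "a" may land in s1 or s2, and the
# two outcomes happen to fall into a single common-knowledge class.
succ = successors(frozenset({("s0", "s0")}),
                  lambda pair: "a",
                  lambda last, act: {"s1", "s2"},
                  lambda pairs: [pairs])
assert succ == [frozenset({("s0", "s1"), ("s0", "s2")})]
```

The bookkeeping bit of the actual construction is omitted here; the point is only that the state space of the abstraction is bounded by the number of state pairs, independently of history length.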
We now formalise the construction of the game G. To this end, given a set of triples, we denote the indistinguishability relation extended to tuples as follows: two tuples are related iff their state components are indistinguishable, for any bookkeeping bits. We then take the common knowledge equivalence induced by this relation. Since the bit does not play a role in the relation, we often omit it.
The set of states in G is partitioned into protagonist states and antagonist states.
The game proceeds as follows: in a macro-state, the protagonist chooses a one-step strategy profile for the coalition that is uniform w.r.t. the indistinguishability of triples. Control then passes to the antagonist, which computes a new state as explained below. First, in the resulting state, one computes the set of outcomes:
Then, for each tuple in this outcome set, one computes its class under the common knowledge relation. Additionally, a labelling is associated to each tuple, updating the bookkeeping bit according to whether the objective has been reached.
The antagonist then picks a tuple and designates the new protagonist state accordingly.
The objective of the protagonist is to reach macro-states in which the bookkeeping bit is set in all triples. We can show that coalition A has a strategy in M to enforce φ U ψ iff the protagonist has a winning strategy in G. The easy part is the inverse implication, since each winning strategy for the protagonist in G gives enough information to construct a joint strategy for coalition A in M achieving φ U ψ. As for the difficult part, consider a strategy σ_A for coalition A in M achieving φ U ψ. In order to build a strategy for the protagonist to win the game G, we need to decide what action the protagonist chooses at a macro-state. Now, each triple (s, q, b) may represent a whole family of histories in M starting in s and ending in q, all of them related by the common knowledge relation in M. So, one might face the problem of consistently choosing one of the actions for playing at (s, q, b). Fortunately, Lemma 1 ensures that whenever we have histories with the same initial state and the same final state, related by the common knowledge relation, and both consistent with a uniform strategy σ_A, then σ_A prescribes the same action to each coalition agent on both. Then, denoting by ρ the history of macro-states that leads to the current macro-state, the protagonist's strategy can be chosen as follows: for each triple (s, q, b), choose any history of length |ρ| in M from s to q which is compatible with σ_A and play the action σ_A prescribes on it. It is not difficult to see that, whenever σ_A achieves the objective φ U ψ in M, the resulting strategy is winning for the protagonist in G.
The case of ⟨⟨A⟩⟩(φ R ψ) is treated similarly, with the reachability objective replaced by a safety objective avoiding macro-states containing a triple whose bookkeeping bit signals a violation.
Finally, the nesting of formulas is handled by creating a new atom p_θ for each subformula θ of the given formula. This atom is used to relabel the second component of each triple belonging to each macro-state as follows:

If θ is a Boolean combination of atoms, then append p_θ to the second component iff it satisfies θ.

If θ is of the form ⟨⟨A⟩⟩(φ U ψ), then append p_θ iff, in the two-player game constructed as above for coalition A and starting in the appropriately labelled set of initial states, the protagonist has a strategy for ensuring the corresponding objective. A similar construction gives the labelling for ⟨⟨A⟩⟩(φ R ψ).

If θ is of the form ⟨⟨A⟩⟩Xφ, then append p_θ iff there exists a tuple of coalition actions (with the labels of the previous item already computed) such that all the macro-states directly reachable by joint actions extending this tuple satisfy p_φ in all their triples.
To end the proof, we note that the above construction is exponential in the size of the given vCGS. Since we have a symbolic presentation of the vCGS, this gives the 2EXPTIME upper bound. The 2EXPTIME lower bound follows from the results on model-checking reactive modules with imperfect information [29] (p. 398, Table 1, general case for two-player reactive modules), which can be encoded as a model-checking problem for singleton coalitions on vCGS.
6 Coalition-cast vCGS in Modelling Security Problems
We now apply coalition-cast vCGS to the modelling of secure systems and their threats. We start by recalling the setting of terrorist-fraud attacks [19] and the distance-bounding protocol by Hancke and Kuhn [31]; then we give an A-cast vCGS model for this protocol and an ATL formula expressing the existence of an attack against the protocol; finally, we argue why the results from Section 5 are relevant for this case study.
Identity Schemes.
Distance-bounding (DB) protocols are identity schemes which can be summarised as follows: (1) via timed exchanges, a prover P demonstrates that it is physically situated within a given distance bound from a verifier V; (2) via these exchanges, the prover also authenticates himself to the verifier. Herein, we consider the distance-bounding (DB) protocol by Hancke and Kuhn [31]. A version of this protocol is summarised in Fig. 1. In this protocol, a verifier will finish successfully if it authenticates a prover and if it is convinced that this prover is physically situated no further than a given bound.
Figure 1: A version of the Hancke-Kuhn protocol.

Verifier V (secret: key k, bound) — Prover P (secret: key k)

Initialisation phase: V picks a nonce N_V and sends it to P; P picks a nonce N_P and sends it to V; both compute h = PRF_k(N_V · N_P) = h^1 · h^2.

Distance-bounding phase: V picks random bits c_1, …, c_n forming the challenge c, records its local time t_1, and sends c; P replies with the response r, with r_i = h^1_i if c_i = 0 and r_i = h^2_i if c_i = 1; on receiving r, V records its local time t_2.

Verification: V checks that r is correct and that the round-trip time t_2 − t_1 is within the bound.
In this protocol, the prover P and the verifier V share a long-term key k. In the initialisation phase, each party produces a pseudorandom bitstring h = PRF_k(N_V · N_P) = h^1 · h^2, where "·" is concatenation. In the distance-bounding phase, V randomly picks bits c_1, …, c_n and forms the challenge bitstring/vector c = (c_1, …, c_n). It sends this to the prover and times how long it takes it to respond. P's response vector r is formed from bits r_i, where r_i is the i-th bit of h^1 if c_i is 0, or the i-th bit of h^2 if c_i is 1. Finally, the verifier checks if the response was correct and if it did not take too long to arrive.

HK Versions.
In the original HK protocol [31], the challenges and responses are sent not in one bitstring c and r respectively, but rather bit by bit. The single timing step is likewise broken into n separate timings. W.r.t. terrorist fraud (see description below), it can be easily proven that there is an attack on the version of the protocol presented in Figure 1 iff there is an attack on the original protocol in [31]. Formal symbolic verification methodologies [16], such as the recent [18, 40, 19], are only able to capture the HK version presented in Figure 1 (i.e., modelling a one-bitstring challenge and not one-bit challenges); and this is true w.r.t. checking all attacks/properties, not just terrorist frauds. Unlike these, in our vCGS we are able to naturally encode both the original HK protocol and the depiction in Figure 1. Yet, in what follows, we present a vCGS encoding of the HK version in Figure 1, as it yields a model less tedious to follow step by step. If the verifier's final checks pass, the protocol run is accepted; otherwise it is rejected.
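The response computation of Figure 1 can be sketched as follows. Instantiating the PRF with HMAC-SHA256 is our illustrative choice, not mandated by the protocol, which only assumes some pseudorandom function keyed with the shared key k.

```python
import hashlib
import hmac

# A toy model of the Hancke-Kuhn exchange of Figure 1 (one-bitstring version).

def hk_halves(key, n_v, n_p, n=32):
    """Both parties derive h = PRF_k(N_V . N_P) and split it into h^1, h^2."""
    digest = hmac.new(key, n_v + n_p, hashlib.sha256).digest()
    bits = [(byte >> j) & 1 for byte in digest for j in range(8)]
    return bits[:n], bits[n:2 * n]

def hk_response(h1, h2, challenge):
    # r_i is the i-th bit of h^1 if c_i = 0, and the i-th bit of h^2 if c_i = 1.
    return [h2[i] if c else h1[i] for i, c in enumerate(challenge)]

key, n_v, n_p = b"shared-key-k", b"nonce-V", b"nonce-P"
h1, h2 = hk_halves(key, n_v, n_p)
challenge = [0, 1] * 16                  # a 32-bit challenge vector

# The verifier recomputes the halves and accepts iff the responses match
# (and, in the real protocol, arrive within the time bound).
assert hk_response(h1, h2, challenge) == hk_response(*hk_halves(key, n_v, n_p), challenge)
```

Note that timing, which is essential to distance bounding, is not modelled here; the sketch only fixes the bit-selection rule that the vCGS encoding of the protocol has to reproduce.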
Identityfraud Attacks.
A possible threat is that of a collusion-based attack called terrorist fraud (TF). In this, a dishonest prover, found far away from the honest verifier, helps an adversary pass the protocol as if the adversary were close to the verifier. However, a protocol is considered to suffer from such a security problem if there is a strategy for a dishonest