1. Introduction
Presently, an estimated 1 billion people worldwide are without any identity documents (worldbankundocumented, ). In addition, some estimates project up to 1 billion displaced people within a generation (geisler2017impediments, ), many of whom may find any documents they still have to be worthless (humantide, ). Combined, these numbers suggest that a trustworthy and usable notion of global identity that can serve the basic rights of the world population in general, and of displaced populations in particular, is essential. The size, growth rate, and dispersion around the globe of the displaced population suggest that only a decentralized, grassroots solution might be able to cope with granting the world population as a whole an adequate form of identity.

Granting an identity by a state is normally a complex process, as it requires careful verification of the credentials of a person; e.g., to grant French citizenship, the state must check that the person has, say, French roots. Granting a global identity might seem even more daunting, except for the following fundamental premise: every person deserves an identity; thus, there are no specific credentials to be checked, except for the existence of the person. As a result, a solution for granting global identities may focus solely on ensuring a one-to-one correspondence between humans and their global digital identities. With this fundamental premise, we develop a foundation for a decentralized, distributed, grassroots, bottom-up, self-sovereign process in which every human being may create and own a trustworthy identity. We follow the approach and the spirit of self-sovereign identities (ssi, ) and the W3C Decentralized Identifiers (DID) standard (did, ) in letting people freely create and own identities. But we augment this freedom with the desire that each person declares exactly one identity as its genuine global identity (and as many identities of other types as one wishes).
The process by which a person becomes the owner of a genuine global identity is simple and straightforward. With a suitable app, it could be done literally with the click of a button (global identities will later become vertices in a graph, hence the letter $v$):

Choose a new cryptographic keypair $(v, s)$, with $v$ being the public key and $s$ the private key. Keep $s$ secure and secret!

Declare $v$ to be a global identity by publicly posting a declaration that $v$ is a global identity, signed with $s$.
Lo and behold! You have become the proud rightful owner of a genuine global identity. Note that such a public declaration does not necessarily expose the person making the declaration. It only reveals to all that someone who knows the secret key $s$ for the public key $v$ has declared $v$ as a global identity. Depending on personality and habit, the person may or may not publicly associate themselves with $v$. For example, a person with truthful social media accounts will most probably wish to associate all these accounts with their newly-minted genuine global identity.
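The two steps above can be sketched as follows. This is a toy stand-in, not real cryptography: the keypair derivation and `sign` below are hypothetical placeholders (a real app would use, e.g., RSA or Ed25519 signatures, where a signature is verifiable with the public key alone), and the public bulletin stands in for the broadcast mechanism introduced later.

```python
import hashlib
import secrets

def new_keypair():
    """Step 1 (toy stand-in): choose a new keypair (v, s)."""
    s = secrets.token_hex(32)                   # private key: keep secure and secret!
    v = hashlib.sha256(s.encode()).hexdigest()  # public key, derived from s
    return v, s

def sign(message: str, s: str) -> str:
    """Toy signature placeholder; a real signature is verifiable using v alone."""
    return hashlib.sha256((s + message).encode()).hexdigest()

PUBLIC_BULLETIN = []  # all agents see the same sequence of declarations

def declare_global_identity(v: str, s: str) -> dict:
    """Step 2: publicly post a signed declaration that v is a global identity."""
    declaration = {"identity": v,
                   "statement": f"{v} is a global identity",
                   "signature": sign(f"{v} is a global identity", s)}
    PUBLIC_BULLETIN.append(declaration)
    return declaration

v, s = new_keypair()
declare_global_identity(v, s)
# the declaration names only v -- never the person behind it
assert PUBLIC_BULLETIN[0]["identity"] == v
```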
More fundamentally, genuine global identities may provide the necessary foundation for a notion of global citizenship and, subsequently, for democratic global governance, or global democracy (shapiro2018point, ).
If becoming the rightful owner of a genuine global identity is so simple, what could go wrong? In fact, so many things can go wrong that this paper is but an initial investigation into describing, analyzing, and preventing them. Let us enumerate and name some of the things that can go wrong; in what follows, let $a$ denote the agent under consideration:

The keypair $(v, s)$ is not new, or else someone got hold of it between Step 1 and Step 2 above. Either way, someone else has declared $v$ to be a public key prior to the declaration by $a$, in which case $a$ cannot declare $v$.

Agent $a$ failed to keep $s$ secret, so that other people, e.g., $b$, know $s$, in which case $b$ is also an owner of $v$ and, thus, $v$ is compromised. Figure 1 illustrates a compromised identity.

Agent $a$ declared $v$, but also declared another global identity $v'$, in which case $v$ and $v'$ are duplicates, the identity declared in the latter of the two declarations is a sybil, and agent $a$ is corrupt. An honest agent does not declare sybil identities. Figure 2 illustrates an honest and a corrupt agent.

The private key $s$ has been lost, stolen, robbed, or otherwise compromised, requiring agent $a$ to replace its genuine global identity with a new identity.
To develop a foundation for genuine global identities with which we can describe, characterize, and aim to prevent all the problems listed above, we drew upon concepts and tools from multiple disciplines, including epistemic logic, philosophy of language, mathematics, and computer science. Specifically, we were inspired by epistemic logic (meyer2004epistemic, ) in designing the public knowledge and the individual agent knowledge, which are updated throughout the process. This knowledge is updated by agents acquiring knowledge from other agents and through public actions of agents that become known to all other agents. The actions of agents are best understood as illocutionary acts (speech acts), and hence the study of illocutionary acts and forces (illocutionarybook, ) was conducive to our design. Naturally, we rely on public-key cryptography, as we use public keys as identities and private keys for authenticating illocutionary acts. Finally, as we show later in the paper, our solution employs graph-theoretic analysis of the graphs induced by the public illocutionary acts of the agents. Thus, graph theory (graphtheorybook, ), and notions of connectivity in graphs, specifically graph conductance (conductancepaper, ), are instrumental.
Nevertheless, the final result is sufficiently simple that we do not need to call upon these disciplines explicitly. Our aim is to allow people to create genuine global identities in a way that is simple yet resilient to malicious antagonists (so-called byzantines and sybils). Our approach to achieving it is described below using basic concepts of public-key cryptography and graph theory.
Related Work. Digital identity is a subject of extensive study, with many organizations aiming at providing solutions. The purpose of many of these solutions, including those of Self-Sovereign Identity (ssi, ), is to allow for the creation of such identities, but without the uniqueness requirement. As here we aim at global identities that have a one-to-one correspondence with their owners, our requirements are different. There are business initiatives such as the Decentralized Identity Foundation (https://identity.foundation/), bundling some of the tech giants, as well as smaller organizations such as the Global Identity Foundation (gif, ) and Sovrin (sovrin, ). We aim at developing foundations for a bottom-up solution.
There are some high-profile projects that provide nationwide digital identities, such as India's Aadhaar system (aadhaar, ). Here we are concerned with global identities, not bound by any national boundaries, and have argued that top-down solutions fail to provide them. In this context, we mention the concept of Proof of Personhood (borge2017proof, ), which aims at providing unique identities by means of conducting face-to-face encounters, an approach suitable only for small communities.
Our solution is based upon the notion of trust. We thus mention the work of Andersen et al. (andersen2008trust, ), which studies axiomatizations of trust systems. They are not concerned, however, with sybils, but with the quality of recommendations.
Finally, we mention our previous works regarding sybil-resilient community expansion (poupko2019sybil, ), in which we considered algorithms for the expansion of an online community while keeping the fraction of sybils in it small, and regarding sybil-resilient social choice (SRSC, ), in which we developed aggregation methods to be applied in situations where sybils have infiltrated the electorate. In these two papers, we assumed a notion of genuine and sybil identities without specifying what they are; here, we define a concrete notion of genuine identity, and derive from it a formal definition of a sybil and related notions of honest and corrupt agents and byzantine identities.
2. Formal Model
2.1. Ingredients
We aim to provide precise and formal foundations for genuine global identities, but at the same time to have these foundations enable an actual solution to the real-world problem of people without genuine identity. Thus, we aim for the foundations to be readily amenable to implementation. Here we list the ingredients needed for a realization of our solution.

A set of agents. It is important to note that, mathematically, the agents form a set (of unique entities) not a multiset (with duplicates). Intuitively, it is best to think of agents as people (or other physical beings with unique personal characteristics, unique personal history, and agency, such as intelligent aliens), which cannot be duplicated, but not as software agents, which can be.

A way for agents to create cryptographic keypairs. This can be realized, e.g., using the RSA standard (rsapaper, ). Note that our solution does not require a global standard or a uniform implementation for public key encryption. Different agents can use different technologies for creating and using such keypairs, as long as the methods needed to verify signatures are declared. By “cryptographic” we naturally assume standard cryptographic computational hardness.

A way for agents to sign strings using their keypairs. As we assume cryptographic hardness, it is infeasible for an agent that does not know a certain keypair to sign strings with it.

A message broadcasting mechanism or, alternatively, a public ledger mechanism, so that agents can be aware of messages sent by other agents. A critical requirement of such a mechanism is that all agents observe the same order of messages. Future work may allow weakening this requirement so that the same order is observed only eventually, as well as allowing partial orders.
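A minimal sketch of this last ingredient, under the simplifying assumption that the broadcast mechanism is realized as a single shared append-only ledger (the class and method names below are our own, not part of the formal model):

```python
class Ledger:
    """A sequential public ledger: append-only and totally ordered.
    A real realization would be a broadcast protocol or a distributed
    ledger; here a single shared list stands in for it."""

    def __init__(self):
        self._messages = []

    def post(self, message) -> int:
        """Append a message; its index is its position in the global order."""
        self._messages.append(message)
        return len(self._messages) - 1

    def view(self):
        """Every agent observes the same sequence of messages, in the same order."""
        return tuple(self._messages)

ledger = Ledger()
ledger.post("declaration A")
ledger.post("declaration B")
# all agents observe the identical order:
assert ledger.view() == ("declaration A", "declaration B")
```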
2.2. Agents and their Global Identities
Below we provide the basic definitions regarding agents and identities. We assume a set of agents (e.g., people) that is fixed over time (we stress again that this is a set of unique entities that cannot be duplicated, not a multiset; birth of additional unique agents and agent death will be addressed in future work). Agents can create new keypairs $(v, s)$, and we assume that an agent that has a keypair $(v, s)$ can sign any string with $s$. Intuitively, each agent corresponds to a human being. Importantly, the set of agents (containing all human beings) is, in a sense, external to our model; specifically, the messages broadcast via the broadcasting mechanism never contain identifiers for agents; indeed, having no such identifiers is the main motivation for this work.
In the spirit of self-sovereign identities and the W3C Decentralized Identifiers (DID) standard, we let agents determine and own their global identities. An agent can publicly declare a global identity using a public key $v$ for which it knows the private key $s$. A global identity declaration is such a statement, signed with $s$, and can be realized by sending a message with the declaration to all agents or, interchangeably, by posting the declaration to a sequential public ledger shared by all agents. In either realization, we require that every agent can make a global identity declaration and that all agents have the same view of the sequence of all declarations made. We make the simplifying assumption that each global identity is declared at most once (alternatively, subsequent declarations of the same global identity are simply ignored).
We record the event of agent $a$ declaring global identity $v$ as a declaration event. We denote by $E$ a sequence of declaration events and by $E_i$ its prefix consisting of the first $i$ declaration events. Note that a declaration event records the agent making the declaration, while the declaration itself only carries a signature and does not name the agent, as the whole point of our endeavor is to develop a solution for naming agents. (Specifically, this means that we cannot, say, write $a$ in the public ledger or use it in the assumed broadcasting mechanism.)
Definition 2.1 (Global Identity).
Let $E$ be a sequence of global identity declaration events containing a declaration event for agent $a$ and public key $v$. Then $v$ is a global identity and $a$ is the rightful owner of $v$, given $E$.
Definition 2.2 (Genuine Global Identity, Sybil, Honest Agent).
Let $E$ be a sequence of global identity declaration events and let $a$ be the rightful owner of global identity $v$ in $E$. Then $v$ is genuine if it is the first global identity declared in $E$ by $a$; else $v$ is a sybil. An agent is corrupt if it declares any sybils; else it is honest. All notions are relative to $E$.
Remark 1.
Note that an agent is the rightful owner of its genuine identity as well as of any subsequent sybils that it declares. Note that if $a$, the rightful owner of $v$, is corrupt, then its first declared identity is genuine and the rest of its declared identities are all sybils. Note that an honest agent may create many keypairs, yet remain honest as long as it has declared at most one public key as a global identity.
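The classification of identities and agents above (genuine vs. sybil, honest vs. corrupt) can be computed directly from a declaration history. Below is a sketch with our own encoding of declaration events as `(agent, identity)` pairs; recall that the agent field exists only in the model, never on the public ledger.

```python
def classify(events):
    """Given declaration events as (agent, identity) pairs in ledger order,
    return (genuine, sybils, corrupt): the first identity declared by each
    agent is genuine, every later one is a sybil, and an agent that has
    declared any sybil is corrupt."""
    genuine, sybils, corrupt = set(), set(), set()
    seen = set()  # agents that have already made their first declaration
    for agent, identity in events:
        if agent in seen:
            sybils.add(identity)
            corrupt.add(agent)
        else:
            genuine.add(identity)
            seen.add(agent)
    return genuine, sybils, corrupt

events = [("a", "v1"), ("b", "u1"), ("a", "v2")]  # agent a declares twice
genuine, sybils, corrupt = classify(events)
assert genuine == {"v1", "u1"} and sybils == {"v2"} and corrupt == {"a"}
```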
2.3. A Naive Community Growth Model
So far we have described a way for agents to declare global identities, including sybils. The basic problem we aim at solving is to allow a community to grow in the face of sybils. Thus, below we first discuss the sybil penetration rate in a community of identities.
Definition 2.3 (Community, Sybil Penetration Rate).
Let $E$ be a sequence of events and let $S$ denote the sybil identities wrt. $E$. A community in $E$ is a subset $C$ of its identities. The sybil penetration rate of the community is given by $|C \cap S| / |C|$.
Consider a small community $C$ comprised of genuine global identities. Our main aim is to allow $C$ to admit new identities while retaining a low sybil penetration rate (under reasonable assumptions). For that reason, we allow communities to declare their existence and to declare the admission of a new member. With these declarations, the sequence of claims induces a community history $C_0, C_1, \ldots$, where each $C_{i+1}$ is obtained from $C_i$ by admitting or removing a single identity.
The following observation is immediate.
Observation 1.
Let $C_0, C_1, \ldots$ be the community history wrt. a sequence of claims $E$. Assume that $C_0$ contains no sybils and that, whenever $C_{i+1} = C_i \cup \{v\}$ for some identity $v$, it holds that $\Pr[v \in S] \le \sigma$ for some fixed $\sigma$. Then, the expected sybil penetration rate of every $C_i$ is at most $\sigma$.
That is, the observation above states that a sybil-free community can keep its sybil penetration rate below $\sigma$ as long as the probability of admitting a sybil to it is at most $\sigma$. While the simplicity of Observation 1 might seem promising, its premise is naively optimistic. Due to the ease with which sybils can be created and the benefits of owning sybils in a democratic community, we wish to cater to the realistic setting of a horde of sybils, compared to a modest number of genuine global identities, hoping to join the community. Furthermore, once a fraction of sybils has already been admitted, it is reasonable to assume that all of them would support the admission of further sybils (e.g., owned by the same agent). Thus, there is no reason to assume either the independence of candidates being sybils or a constant upper bound on the probability of sybil admission to the community.
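The admission process behind the observation above can be checked with a small Monte Carlo sketch. This is our own toy model, matching the observation's premise: starting from a sybil-free community, each admitted candidate is independently a sybil with probability at most sigma.

```python
import random

def simulate(sigma: float, admissions: int, seed: int = 0) -> float:
    """Grow a sybil-free community by `admissions` members, each of which is
    independently a sybil with probability `sigma`; return the final sybil
    penetration rate."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    sybils, members = 0, 0
    for _ in range(admissions):
        members += 1
        if rng.random() < sigma:
            sybils += 1
    return sybils / members

rate = simulate(sigma=0.1, admissions=100_000)
assert rate <= 0.11  # concentrates near (and is in expectation at most) sigma
```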
In the following, we explore several approaches to address the problem of safe community growth under more realistic assumptions.
2.4. Mutual Sureties and Their Graphs
A key element of our approach is allowing agents to pledge mutual sureties to each other. Intuitively, using these mutual surety pledges we will be able to establish some notion of trust (to be formalized below) between the global identities; later we discuss how to use different notions of trust to allow for predominantly-honest community building.
Specifically, we aim to capture the notion that two agents that know each other, and know the global identities declared by each other, are each willing to pledge surety to the other regarding the good standing of those global identities.
For agent $a$ to provide surety to agent $b$ regarding its global identity, $a$ first has to know $b$. How this knowledge is established is not specified in this formal framework, but it is quite an onerous requirement that cannot be taken lightly or satisfied casually. For example, we may assume that one knows one's family, friends, and colleagues, and may diligently get to know new people if one so chooses.
We can envision at least four types of sureties of increasing strength, in which each agent $a$ with global identity $v$ makes a pledge regarding the global identity $u$ of the other agent $b$. All assume that agent $a$ knows agent $b$. The four surety types are cumulative, each including the previous ones; we explain on what basis one may choose to pledge each of them:

Surety of Type 1 (Ownership of public key):
Agent $a$ pledges that $b$ owns public key $u$. Agent $b$ can prove to $a$ that it owns $u$ without disclosing its private key to $a$. This can be done, for example, by asking $b$ to sign a novel string and verifying the signature using $u$.

Surety of Type 2 (Rightful ownership of a global identity):
Agent $a$ pledges Surety Type 1 and that $b$ is the rightful owner of global identity $u$. In addition to proving to $a$ that it owns $u$, $b$ must provide evidence that it itself, and not some other agent, has declared $u$. A selfie video of $b$ pressing the declare button for $u$, signed with a certified timestamp promptly after the video was taken, and then signed by $b$, may constitute such evidence. A suitable app may record, timestamp, and sign such a selfie video automatically during the creation of a genuine global identity.

Surety of Type 3 (Rightful ownership of a genuine global identity):
Agent $a$ pledges Surety Type 2 and that $u$ is the genuine global identity of $b$. Here a leap of faith is required: in addition to obtaining from $b$ a proof of rightful ownership of $u$, $a$ must also trust $b$ not to have declared any other identity prior to declaring $u$. There is no reasonable way for $b$ to prove this to $a$.

Surety of Type 4 (Rightful ownership of a genuine global identity by an honest agent):
Agent $a$ pledges Surety Type 3 and that $u$ is the genuine global identity of an honest agent $b$. Here $a$ has to put even greater trust in $b$: not only does $a$ have to trust that $b$'s past actions resulted in $u$ being its genuine global identity; it also has to take on faith that $b$ has not declared any sybils since and, furthermore, that $b$ will not do so in the future.
Mutual surety between two agents with two global identities is pledged by both agents pledging a surety to the global identity of the other agent (notice that we consider undirected graphs, as we require surety to be symmetric; indeed, one might consider directed sureties as well). A surety pledge of type X by the owner of $v$ to the owner of $u$ is a statement of the corresponding assertion regarding $u$, signed with the private key of $v$.

The corresponding surety event records the pledge, and the surety enters into effect once both parties have made the mutual pledges. We now take $E$ to be a record of both declaration events and pledge events.
For a given surety type, we say that the surety is violated if its assertion does not hold. Naturally, the specifics of surety violation depend on the surety type:

Surety Type 1 is violated if $b$ in fact does not know the secret key for the public key $u$.

Surety Type 2 is violated if either Surety Type 1 is violated or $b$ in fact did not declare $u$ as a global identity.

Surety Type 3 is violated if either Surety Type 2 is violated or $b$ has declared some other identity as a global identity before declaring $u$ as a global identity.

Surety Type 4 is violated if Surety Type 3 is violated or $b$ ever declares some other identity as a global identity after declaring $u$ as a global identity.
See Figure 3 for an illustration of violations of sureties of Type 3 and Type 4.
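Violations of Types 2-4 depend only on the declaration history, so they can be checked mechanically from the ledger (Type 1 concerns knowledge of a secret key and is not checkable from declarations alone). The sketch below is illustrative only; the function names and the `(agent, identity)` event encoding are our own, not part of the formal model.

```python
def type2_violated(declarations, b, u):
    """Type 2 is violated if b in fact never declared u as a global identity."""
    return (b, u) not in declarations

def type3_violated(declarations, b, u):
    """Type 3 is additionally violated if b declared some identity before u."""
    for agent, identity in declarations:  # declarations are in ledger order
        if agent == b:
            return identity != u          # b's first declaration must be u itself
    return True                           # b never declared anything at all

def type4_violated(declarations, b, u):
    """Type 4 is additionally violated if b ever declares anything other than u."""
    declared = [identity for agent, identity in declarations if agent == b]
    return declared != [u] * len(declared) if declared else True

decls = [("b", "u"), ("b", "u2")]  # b declares u first, then a sybil u2
assert not type2_violated(decls, "b", "u")
assert not type3_violated(decls, "b", "u")  # u was b's first declaration
assert type4_violated(decls, "b", "u")      # the later declaration violates Type 4
```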
Remark 2.
Observe that, technically, mutual sureties can be easily pledged by agents. However, we wish agents to be prudent and sincere in their mutual surety pledges. Thus, we expect a mechanism that, on the one hand, rewards the pledging of sureties but, on the other hand, punishes surety violations. Both parties to a violated surety should be punished, but not necessarily in the same way. For example, if a mutual surety with a sybil is discovered, the sybil should be removed and all other agents pledging sureties with this sybil should be prosecuted. The punishment of an agent pledging for a sybil may include a fine and a temporary or permanent ban on pledging further mutual sureties. In addition, all existing sureties of the pledging agent should be examined for possible further violations.
While the specifics of such a mechanism are beyond the scope of the current paper, note that with such a mechanism in place, the commissive illocutionary force (illocutionarybook, ) of a surety pledge will come to bear.
Definition 2.4 (Mutual Surety).
The global identities $v$ and $u$ have mutual surety of type X if there are agents $a$ and $b$ for which $E$ contains a Type-X surety pledge for $u$ made by $a$ on behalf of $v$, and a Type-X surety pledge for $v$ made by $b$ on behalf of $u$, in which case $a$ and $b$ are the witnesses for the mutual surety between $v$ and $u$.
A sequence of events $E$ induces a sequence of surety graphs in which the vertices are global identities that correspond to global identity declarations and the edges correspond to mutual surety pledges, as follows.
Definition 2.5 (Surety Graph).
Let $E$ be a sequence of events and let $E_i$ denote its first $i$ events. Then, for each $i$, $E_i$ induces a surety graph of type X, $G^X_i$, whose vertices are the global identities declared in $E_i$ and whose edges are the pairs $\{v, u\}$ of identities that have mutual surety of type X in $E_i$.
Remark 3.
Observe that we allow surety pledges to be made before the corresponding global identity declarations. We do not see a reason to enforce an order.

Note that the difference between a surety graph $G^X_i$ and its successor $G^X_{i+1}$ is either the introduction of a new global identity (i.e., a vertex) or of a mutual surety edge, or else the two graphs are equal (this last case happens when the event is a surety pledge that is the first of the two symmetric pledges). While the removal of vertices and/or edges is important, we do not treat it here.
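The induction of a surety graph from an event sequence can be sketched as follows (the names and the event encoding are our own); an edge appears only once both symmetric pledges are present, matching the definitions above.

```python
def surety_graph(events):
    """Build the surety graph induced by a sequence of events.
    Events are ("declare", v) or ("pledge", v, u), the latter being a pledge
    by the owner of v for u. An edge {v, u} is added only once both symmetric
    pledges have been made; pledges may precede declarations."""
    vertices, edges, pending = set(), set(), set()
    for event in events:
        if event[0] == "declare":
            vertices.add(event[1])
        else:
            _, v, u = event
            if (u, v) in pending:            # the symmetric pledge already exists
                edges.add(frozenset((v, u)))
            else:
                pending.add((v, u))
    return vertices, edges

events = [("declare", "v"), ("declare", "u"),
          ("pledge", "v", "u"),   # one-sided pledge: no edge yet
          ("pledge", "u", "v")]   # the symmetric pledge completes the edge
vertices, edges = surety_graph(events)
assert edges == {frozenset(("v", "u"))}
```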
2.5. Local Properties of Identities in Surety Graphs
In the following, we provide some local properties of identities in surety graphs and relate them to the notions presented in Section 1. As Type 1 surety is the weakest notion of surety, a property that holds in Type 1 surety graphs holds in stronger surety graphs as well. (Indeed, generally, properties that hold for graphs of Type X hold for graphs of Type Y, for $Y \ge X$.)
Recall that a public key is compromised if it has more than one owner. We begin by relating this notion to a local notion in a Type 1 surety graph, and say that a vertex in a Type 1 surety graph is compromised if one may infer that it has two owners just by examining the mutual surety edges incident to it.
Definition 2.6 (Compromised Vertex).
Let $G$ be a surety graph. A global identity $v$ with rightful owner $a$ is a compromised vertex in $G$ if there is a surety edge $\{v, u\}$ with witness $b \ne a$ for $v$.
The next two observations relate the notion of a compromised public key to the graph-relativized notion of a compromised vertex.
Observation 2 (Global Identity Compromised in a Graph).
If a global identity is a compromised vertex in a surety graph, then it is a compromised public key.
Proof.
Assume $v$ with rightful owner $a$ is compromised in $G$ with witness $b \ne a$ for $v$. To pronounce a mutual surety on behalf of $v$, $b$ must know the secret key of $v$; thus $b$ owns $v$, in addition to $a$ owning $v$. Hence, $v$ is a compromised public key. ∎
Observation 3 (Knowledge of Compromised Self).
If a global identity is a compromised vertex in a surety graph, then its rightful owner knows that it is a compromised public key.
Proof.
Assume $v$ has rightful owner $a$ and is a compromised vertex with witness $b \ne a$ for $v$. This means that $b$ has made a public claim on behalf of the global identity $v$. Agent $a$ can see a public statement signed with the secret key of $v$, which it did not sign; hence it must know that someone else knows the private key for $v$; hence, it knows that $v$ is a compromised public key. ∎
The observation below, corresponding to the definition below, does not apply to Type 1 graphs, but only to Type 2 and stronger surety graphs. This is so as, contrary to the observations above, which relate to public keys only, the definition below addresses the genuineness of global identities.
Definition 2.7 (Adjacent Duplicates).
Let $E$ be a sequence of claims and $G$ its induced Type 2 surety graph. Let $v$ be a genuine global identity with owner $a$, and let $u_1, u_2$ be neighbors of $v$ in $G$. Then $u_1$ and $u_2$ are adjacent duplicates if they have the same witness in their mutual sureties with $v$.
Observation 4 (Knowledge of Adjacent Duplicates).
Let $E$ be a sequence of claims and $G$ its induced Type 2 surety graph. If $v$ is a genuine global identity with owner $a$ and with adjacent duplicates $u_1, u_2$ in $G$, then either $v$ is compromised in $G$ or $a$ knows that $u_1$ and $u_2$ are duplicates.
Proof.
By assumption, some agent $b$ signed the mutual sureties with $v$ on behalf of both $u_1$ and $u_2$. If $v$ is not compromised, then $a$ must be the witness for $v$ in both sureties. Since $a$ knows the witness for the two claims to be $b$, it follows that $a$ knows that $b$ possesses the secret keys of both $u_1$ and $u_2$; hence $a$ knows that $u_1$ and $u_2$ are duplicates. ∎
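Detecting adjacent duplicates per the definition above is a purely local computation on the surety edges incident to a vertex. Below is a sketch with our own encoding of mutual sureties as `(identity, neighbor, witness_for_neighbor)` triples.

```python
from collections import defaultdict
from itertools import combinations

def adjacent_duplicates(sureties, v):
    """Given mutual sureties as (identity, neighbor, witness_for_neighbor)
    triples, return the pairs of neighbors of v that share the same witness
    in their mutual sureties with v -- the adjacent duplicates of v."""
    by_witness = defaultdict(list)
    for identity, neighbor, witness in sureties:
        if identity == v:
            by_witness[witness].append(neighbor)
    return [set(pair)
            for neighbors in by_witness.values()
            for pair in combinations(neighbors, 2)]

# b is the witness for both u1 and u2 in their sureties with v,
# so u1 and u2 are adjacent duplicates of v:
sureties = [("v", "u1", "b"), ("v", "u2", "b"), ("v", "w", "c")]
assert adjacent_duplicates(sureties, "v") == [{"u1", "u2"}]
```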
3. SybilResilient Community Growth
Here we use the formal machinery developed in the previous sections, together with our earlier results (poupko2019sybil, ), to present a solution for growing a community of global identities. In particular, we demonstrate how the strong premise of Observation 1 can be relaxed when mutual sureties are used. Our basic premise is that we cannot prevent sybils from being declared and, furthermore, that perfect detection and eradication of sybils is out of reach. Thus, we wish to allow a community of agents that are the rightful owners of genuine global identities to grow (i.e., admit new members) while retaining a supermajority of genuine global identities throughout the process.
Our earlier work (poupko2019sybil, ) addressed the task of community growth based on a less mature notion of identity types, which lacked the foundation provided herein. We refer the reader to that work for a comprehensive discussion of conductance-based community growth; its essence is that graph conductance is a measure of graph connectivity and, as such, by assuming some bound on the power of an attacker, the topology of the surety graph provides sufficient assurance for safely expanding the community.
Community history. Our aim is to find conditions under which a community may grow safely, i.e., while retaining upper bounds on the sybil penetration rate. We consider elementary transitions, the non-trivial ones obtained by either adding a single member to the community or removing a community member:
Definition 3.1 (Elementary Community Transition).
Let $C, C'$ denote two communities in $E$. We say that $C'$ is obtained from $C$ by an elementary community transition, denoted $C \to C'$, if:

$C' = C$, or

$C' = C \cup \{v\}$ for some identity $v \notin C$, or

$C' = C \setminus \{v\}$ for some identity $v \in C$.
In order to formally realize these transitions, we allow a subset of identities to declare itself as a community, and to declare the admission/removal of identities to and from the community. Throughout the paper, we assume that the sole goal of the community is to grow with the lowest sybil penetration possible, and thus only sybil identities are expelled from the community. We leave possible relaxations of this assumption for future research.
Definition 3.2 (Community History).
Let $E$ be a sequence of events. A community history wrt. $E$ is a sequence of communities $C_0, C_1, \ldots$ such that each $C_i$ is a community in $E$ and $C_i \to C_{i+1}$ holds for every $i$.
The idea is that while the surety graph evolves, new members might be admitted to the evolving community (or removed from it). In the following, we utilize properties of the surety graph restricted to the vertices of the community. We thus provide the needed graph-theoretic definitions.
Graph notations and terminology. Let $G = (V, \mathcal{E})$ be an undirected graph. The degree of a vertex $v \in V$ is $\deg(v) = |\{u : \{v, u\} \in \mathcal{E}\}|$. The volume of a subset $U \subseteq V$ is the sum of the degrees of its vertices, $vol(U) = \sum_{v \in U} \deg(v)$. Additionally, we denote by $G[U]$ the subgraph induced on the set of vertices $U$, by $\deg_U(v)$ the degree of vertex $v$ in $G[U]$, and by $vol_U(W)$ the volume of a set $W$ in $G[U]$. Given two disjoint subsets $U, W \subseteq V$, the size of the cut between $U$ and $W$ is $cut(U, W) = |\{\{v, u\} \in \mathcal{E} : v \in U, u \in W\}|$.
We are now ready to introduce the notion of graph conductance, which plays a substantial role in safe community growth.
Definition 3.3 (Conductance).
Let $G = (V, \mathcal{E})$ be a graph. The conductance of $G$ is defined by $\Phi(G) = \min_{\emptyset \neq U \subseteq V,\ vol(U) \le vol(V)/2} \frac{cut(U, \bar{U})}{vol(U)}$, where $\bar{U} = V \setminus U$ is the complement of $U$.
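For small graphs, conductance can be computed by brute force directly from the definition above. The helper below is our own sketch; it enumerates all vertex subsets, so it is exponential and for illustration only.

```python
from itertools import combinations

def conductance(vertices, edges):
    """Brute-force conductance: minimize cut(U, complement)/vol(U) over all
    subsets U with 0 < vol(U) <= vol(V)/2. Exponential; small graphs only."""
    deg = {v: sum(1 for e in edges if v in e) for v in vertices}
    total = sum(deg.values())  # vol(V)
    best = float("inf")
    for k in range(1, len(vertices)):
        for U in combinations(vertices, k):
            U = set(U)
            vol = sum(deg[v] for v in U)
            if 0 < vol <= total / 2:
                # edges crossing the cut have exactly one endpoint in U
                cut = sum(1 for e in edges if len(U & e) == 1)
                best = min(best, cut / vol)
    return best

# a 4-cycle: every vertex has degree 2; the sparsest cut takes two adjacent
# vertices (cut = 2, volume = 4), so the conductance is 1/2
cycle = [frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]]
assert conductance({0, 1, 2, 3}, cycle) == 0.5
```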
The following definition provides an algebraic measure of connectivity.

Definition 3.4 (Algebraic Expansion).
Let $G$ be a graph, and let $1 = \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ be the eigenvalues of its random walk matrix. Then, $G$ is said to be a $\lambda$-expander if its generalized second eigenvalue $\hat{\lambda}(G) = \max(|\lambda_2|, |\lambda_n|)$ satisfies $\hat{\lambda}(G) \le \lambda$.

We note that the notions of conductance and algebraic expansion are tightly related via the celebrated Cheeger inequality (cheeger1969lower, ). We refer the reader to (hoory2006expander, ) for a thorough exposition and an elaborate discussion of graph expansion.
3.1. Expected Sybil Penetration in Surety Graphs
Next we aim to allow a community to safely grow under the following assumptions:

There exists some sybil detection mechanism (say, via random sampling or random triangle closures). We assume that once a sybil is examined, it is detected with some constant probability $p$.

The detection of a single sybil implies the detection of its entire connected component in the graph restricted to the sybil identities.

Once a sybil within the community is detected, it is immediately expelled.
Some elaboration is in order. We first note that assumption (1) is far weaker than the premise of Observation 1, as it presumes nothing about the candidates, but only an ability to verify their authentic owners. Assumption (2) aims to utilize the natural cooperation between sybil identities owned by the same agent. Intuitively, while honest agents in a surety graph may refrain from admitting duplicate neighbors, it is realistic to expect sybil identities not to do so. This strong assumption relies on the premise that, once a sybil is detected, a thorough examination would take place among its neighbors, repeating iteratively for their neighbors in case they are sybils as well. We leave the relaxation of these coarse assumptions for future research.
In the following, we analyze a simple stochastic model where every admission of a new member is preceded by a random detection process in the community $C$. Formally, following assumptions (1), (2), and (3), we assume that a sybil component is detected and expelled with probability $p$. We assume that the sybil identities are operated from $k$ disjoint, possibly empty, sybil components in $C$, where the choice of both the parameter $k$ and the locations of the components is adversarial: the attacker may choose how to operate. While realistic attackers may also choose which component the new (sybil) member joins, for simplicity we assume that the sybil component is chosen uniformly at random, i.e., with probability $1/k$.
Our main result in this setting provides an upper bound on the expected sybil penetration, assuming bounded computational resources of the attacker.
Theorem 3.5.
In the stochastic model described above, mounting an attack that approximates the maximal expected sybil penetration to within any constant factor is NP-hard.
Proof.
(sketch) The expected sybil penetration in this model is obtained in a steady state, i.e., in a state in which the expected growth of a sybil component equals its expected loss to detection and expulsion. Solving the resulting quadratic equation yields the steady-state size of a single sybil component, and thus the number of sybil identities in the community at a steady state, as a function of the number $k$ of sybil components. We now note that operating from $k$ non-empty sybil components corresponds to obtaining an independent set of size $k$ (at least; e.g., by choosing a single vertex in each component). The theorem follows from the fact that approximating a maximum independent set to within a constant factor is known to be NP-hard (see, e.g., (arora2009computational, )). ∎
The following corollary establishes an upper bound on the sybil penetration rate regardless of the attacker’s computational power.
Corollary 3.6.
Let $C_0, C_1, \ldots$ be a community history wrt. a sequence of events $E$. If every community $C_i$ satisfies that $G[C_i]$ is a $\lambda$-expander, then the expected sybil penetration in every $C_i$, under the stochastic model depicted above, is bounded as a function of $\lambda$.
Proof.
(sketch) Recall that the size of a maximum independent set is a trivial upper bound on the number $k$ of non-empty sybil components. As the cardinality of an independent set in a $\lambda$-expander is known to be bounded by a function of $\lambda$ and the number of vertices (see, e.g., (hoory2006expander, )), we conclude that $k$ is bounded accordingly. It follows that the number of sybil identities in the community at a steady state is bounded as well. ∎
3.2. Community Growth with Sureties of Type 3 and Type 4
Next, we show how to obtain sybil-resilient community growth with sureties of Type 4 and a slightly stronger Type 3 surety. Two common concepts we use are byzantine identities and attack edges. Our key assumption is that the fraction of attack edges among all edges can be bounded; note that some assumption regarding the power of the attacker must be made. Together with a lower bound on the conductance of the surety graph and an upper bound on the fraction of surety edges devoted to identities outside the community, we obtain a bound on the fraction of sybils in a growing community.
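The conductance lower bound referred to here can be computed by brute force on small graphs. The sketch below uses the volume-based definition of conductance common in the expander-graph literature; whether this matches the paper's elided formula exactly is an assumption on our part.

```python
from itertools import combinations

def conductance(vertices, edges):
    """Brute-force conductance of a small undirected graph:
    phi(G) = min over cuts S with vol(S) <= vol(V)/2 of
             |edges crossing S| / vol(S),
    where vol(S) is the sum of degrees in S. Exponential in |V|,
    for illustration on small graphs only."""
    deg = {v: 0 for v in vertices}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    total_vol = sum(deg.values())
    best = float("inf")
    vs = list(vertices)
    for r in range(1, len(vs)):
        for sub in combinations(vs, r):
            s = set(sub)
            vol = sum(deg[v] for v in s)
            if vol == 0 or vol > total_vol / 2:
                continue  # only consider the smaller side of the cut
            crossing = sum(1 for u, v in edges if (u in s) != (v in s))
            best = min(best, crossing / vol)
    return best
```

For example, the complete graph on four vertices has conductance 2/3, while a path on four vertices has conductance 1/3, reflecting its bottleneck cut.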
Byzantine identities. A byzantine identity is owned by an agent that is acting maliciously, possibly in collaboration with other malicious agents. In our setting, these acts include declaring sybil identities and pledging violated sureties. Consequently, we define byzantines differently when considering different types of surety graphs. A sybil is always byzantine, and we refer to non-byzantine identities as harmless. We thus have, for each surety graph, , where is the set of byzantine identities and its sybil subset, and , where denote the harmless identities. The set is induced by the sequence of events , and is defined wrt. the surety type considered.
Remark 4 ().
Note that being honest is a property of an agent while being harmless is a property of a global identity, a vertex in a graph. We would like to assume that all genuine global identities of honest agents are harmless. Normally, a corrupt agent that creates sybils would also be byzantine, as it would create surety edges with its sybils in order to introduce them into the community. Worse, corrupt agents may organize and provide sureties to each other’s sybils in order to get them into the community. In either case, the genuine identities (as well as the sybils, of course) of corrupt agents would normally be byzantine.
Hence, the genuine global identities of honest agents would normally be harmless. However, it is technically possible for an honest agent to create sureties with sybils owned by others, on purpose or by mistake, in which case its sole genuine global identity would indeed be byzantine. Similarly, it is technically possible for a corrupt agent not to engage its sole genuine identity with sureties to sybil identities, owned by itself or by others, thus rendering its genuine identity harmless even though it also owns (possibly hidden in its closet) sybil identities.
Definition 0 (Byzantine penetration).
Let denote the byzantine identities in . The Byzantine penetration of a community is given by .
Note that, since a sybil is always byzantine, the sybil penetration of a community never exceeds its byzantine penetration; hence an upper bound on byzantine penetration also provides an upper bound on sybil penetration.
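As a concrete illustration, both penetration measures are simple set ratios; the helper functions below (hypothetical names, not from the paper) compute them directly from the definitions:

```python
def sybil_penetration(community, sybils):
    """Fraction of identities in the community that are sybils: |S ∩ C| / |C|."""
    c = set(community)
    return len(c & set(sybils)) / len(c)

def byzantine_penetration(community, byzantines):
    """Fraction of identities in the community that are byzantine: |B ∩ C| / |C|."""
    c = set(community)
    return len(c & set(byzantines)) / len(c)
```

Since the sybils form a subset of the byzantines, the first value never exceeds the second on the same community.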
Attack edges. If all that byzantine agents do is create sybil identities and provide sureties to each other’s identities, their induced surety graph would form a disconnected component that could be easily identified and eradicated. Hence, in order to have their sybil identities be admitted into the community of genuine identities, the key weapon corrupt agents employ in their attack is to obtain sureties for their global identities from the genuine global identities of honest agents.
This might be difficult to achieve for sybils, and indeed an honest person providing a surety for a sybil would instantly turn its own global identity byzantine. Hence, corrupt people may try to obtain sureties from honest people for their (sole) genuine global identities. We term such an edge, connecting a harmless genuine identity and a byzantine genuine identity, an attack edge.
Our key assumption is that attack edges are scarce, as honest people tend to trust honest people and distrust corrupt ones. While we believe this is true in general, we expect an appropriate mechanism to further encourage this tendency, as mentioned above. We capture the degree to which this premise holds by the following parameter that measures the percentage of attack edges among the total number of sureties pledged by harmless identities.
Definition 0 (bounded attack).
Let be a sequence of events and let be the induced surety graph. A community has a bounded attack if:
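Since the displayed condition is elided, the sketch below encodes the textual description only: the fraction of attack edges among all surety edges pledged by harmless identities, to be compared against a hypothetical bound `sigma` (our name; the paper's parameter symbol did not survive extraction).

```python
def attack_fraction(edges, harmless, byzantine):
    """Fraction of attack edges among all surety edges incident to a
    harmless identity. An attack edge joins a harmless identity and a
    byzantine identity. A community has a sigma-bounded attack if this
    fraction is at most sigma (hypothetical parameter name)."""
    harmless, byzantine = set(harmless), set(byzantine)
    pledged = [e for e in edges if e[0] in harmless or e[1] in harmless]
    attacks = [e for e in pledged
               if (e[0] in harmless and e[1] in byzantine)
               or (e[1] in harmless and e[0] in byzantine)]
    return len(attacks) / len(pledged) if pledged else 0.0
```

For instance, with harmless identities 1–3 and byzantine identity 4, the edge (3, 4) is the single attack edge among three pledged sureties.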
The next definition, of solidarity, measures a certain aspect of the connectivity within the community; roughly speaking, it requires that each vertex in the community has many neighbors inside the community:
Definition 0 (solidarity).
Given a community in a surety graph , a vertex satisfies solidarity if:
where is the maximal degree of . The community satisfies solidarity if every satisfies solidarity.
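The threshold in the elided formula is unknown to us, so the sketch below exposes it as a hypothetical parameter `alpha`: a vertex satisfies solidarity if at least `alpha` times the maximal degree of the graph of its neighbours lie inside the community.

```python
def satisfies_solidarity(graph, community, v, alpha):
    """Check the solidarity condition for vertex v. The exact constant is
    given by the paper's (elided) formula; alpha is our stand-in: v must
    have at least alpha * Delta neighbours inside the community, where
    Delta is the maximal degree of the graph (an adjacency map)."""
    delta = max(len(nbrs) for nbrs in graph.values())
    inside = sum(1 for u in graph[v] if u in community)
    return inside >= alpha * delta

def community_solidarity(graph, community, alpha):
    """The community satisfies solidarity iff every member does."""
    return all(satisfies_solidarity(graph, community, v, alpha)
               for v in community)
```

A triangle community trivially satisfies solidarity, whereas the leaves of a star do not, matching the intuition that solidarity penalizes thinly connected members.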
The following definition utilizes a combination of desired graph properties, and is applied as a necessary condition on the community for nontrivial growth.
Definition 0 (Community resilience).
Let be a sequence of events, be the induced surety graph, and let , and . We say that a community is resilient if the following hold: 1) satisfies solidarity; 2) has a bounded attack; and 3) .
We can now state our main theorem in this context:
Theorem 3.11 ().
Let be a community history wrt. a sequence of events . Let , , , and assume that . If every community with is resilient, then every has Byzantine penetration .
The proof below follows the analogous theorem stated in our earlier work (poupko2019sybil, ). This is possible since the proof there holds with respect to any definition of identity types, as long as the community satisfies resilience. We begin with the following lemma:
Lemma 3.12 ().
Let be a community history wrt. a sequence of claims , and let be two consecutive elements with . Let , , and assume that .
If is resilient, for some parameter , then .
Proof.
Let denote two such consecutive elements in , and let be the maximal degree of the vertices in . Notice first that
where the second inequality follows from the Byzantine penetration in . It therefore follows that . Also, from solidarity we get that and . Using the conductance of , it follows specifically that:
(1) 
Since has a bounded attack, we can write:
(2) 
Combining the last two equations, we get:
where the first equality holds as , the second inequality stems from Equation 2, and the third inequality stems from Equation 1. Inverting the numerator and the denominator then gives . ∎
We are now ready to prove the main theorem.
Proof.
(of Theorem 3.11) Towards a contradiction, let be the first community for which . In particular, (by the assumption), and hence . By the premise of the theorem, is resilient. In addition, as only sybils are expelled from the community and , it follows that for all . Hence,
It now follows from Lemma 3.12 that . ∎
3.3. SybilResilient Community Growth with a G4 Surety Graph
We apply Theorem 3.11 to G4 graphs, i.e., surety graphs of Type 4. To be commensurate with this surety, we define an identity to be byzantine if it is a sybil or the genuine identity of a corrupt agent. Hence, in G4, a harmless identity is simply a genuine global identity of an honest agent. Correspondingly, we define an attack edge to be an edge between the genuine identity of an honest agent and a byzantine identity.
As discussed above, we assume that honest agents tend to pledge sureties of Type 4 only with honest agents. Then, is an upper bound on the fraction of violated sureties pledged by honest agents in , wrt. the total number of sureties pledged by honest agents in .
The corollary below is then a direct application of Theorem 3.11 to this setting.
Corollary 3.13 (G4 Community Growth).
For Type 4 sureties, relying on resilience upper-bounds the byzantine penetration, where a byzantine is either a sybil or the genuine identity of a corrupt agent, and an attack edge is a surety edge between a genuine identity of an honest agent and a byzantine identity.
3.4. SybilResilient Community Growth with a G3 Surety Graph
Here we provide conditions for sybilresilient community growth for mutual sureties of Type 3. In fact, we rely on a slightly stronger surety that composes two Type 3 sureties, which we term Type :
 Surety of Type :

Genuine global identity at distance 2:
Agent pledges that is the genuine global identity of , and that each neighbour of is the genuine global identity of . Namely, the surety is not only regarding an immediate neighbour in the graph, but also the immediate neighbours of such a neighbour. The basis of such a surety could be a direct investigation of each of the sureties made by , or else trusting to have done its due diligence for each of its sureties.
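The distance-2 coverage of this composed surety can be sketched as follows; the graph is a plain adjacency map, and the function name is ours rather than the paper's:

```python
def distance2_surety_targets(graph, u, v):
    """For the composed (distance-2) surety pledged by u on its neighbour
    v, return the set of identities the pledge covers: v itself plus each
    neighbour w of v other than u — i.e., the genuine identities within
    distance 2 of u via v."""
    assert v in graph[u], "this surety is pledged on a direct neighbour"
    return {v} | {w for w in graph[v] if w != u}
```

For example, if v has neighbours u, w1, and w2, then u's pledge on v covers v, w1, and w2.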
With this definition, we adapt the notions of byzantine identity and attack edge. An identity is byzantine if it is a sybil or has a mutual surety edge with a sybil. A vertex is harmless if it is a genuine global identity with no surety to a sybil.
Definition 0 (Byzantines and Attack Edges in a G3 surety graph).
Let be a sequence of events and let be the Type surety graph induced by . An identity is byzantine in if is a sybil or if there is an edge where is a sybil; otherwise it is harmless. An edge is an attack edge if is byzantine in and is harmless.
Remark 5 ().
Note that, as in (poupko2019sybil, ), an attack edge may be introduced into a graph in the sequence of Type surety graphs induced by , and be defined as such, even if the byzantine nature of is still latent, namely before a surety edge between and a sybil is introduced.
Also note that, in contrast to the G4 byzantine definition, where the notions of honest agent and harmless global identity coincide, in G3 they do not. Whether a genuine identity is harmless is determined by whether it pledged a surety to a sybil, not by whether its rightful owner has declared a sybil. In particular, if a corrupt agent keeps its sybils “in the closet” and does not pledge surety to them or to any other sybil, then its genuine global identity would remain harmless. On the other hand, if an honest agent pledges surety to someone else’s sybil, intentionally or by mistake, then its genuine global identity would be considered byzantine.
First, we can apply Theorem 3.11 wrt. the G3 byzantine definition, resulting in an upper bound on the byzantine penetration.
Corollary 3.15 ().
(G3 Community Growth) For Type 3 sureties, relying on resilience upper-bounds the byzantine penetration, where a byzantine is either a sybil or has a surety edge to a sybil, and an attack edge is a surety edge between a byzantine and a non-byzantine.
For G3 surety graphs, we can also provide a tighter upper bound on the sybil penetration.
Proposition 3.16 ().
Let be a community history wrt. a sequence of events . Let , , , and assume that .
If every community with is resilient, then every has sybil penetration .
Proof.
We refer the reader to the analogous result stated in our earlier work (poupko2019sybil, ), noting that the proof is completely analogous. ∎
4. Updating a Global Identity
Once the creation of a genuine global identity is provided for, one must also consider the many circumstances under which a person may wish to update their global identity:

Identity loss: The private key was lost.

Identity theft: The private key was stolen, robbed, or otherwise compromised.

Identity disclosure: The global identity was disclosed with unwarranted consequences.

Identity refresh: Proactive identity update to protect against all the above.
The global identity declaration event establishes as a global identity. To support updating a global identity, we add the global identity update event , which declares that is a new global identity that replaces the global identity. A public declaration of identity update has the form , i.e., it is signed with the new identity. We refer to declarations of both types as global identity declarations, and extend the assumption that a new identity can be declared at most once to this broader definition of identity declaration. The validity of an identity update claim is defined inductively, as follows.
Definition 0 (Valid Identity Update Claim).
Let be a sequence of claims, the set of global identities claimed in , and . A global identity update event over has the form , , in which case we say that is the rightful owner of .
An identity update event is valid if it is made by the same agent, namely, if equals the agent which declared to be an identity.
Valid global identity claims should form linear chains, one for each agent, each starting from and ending with the currently valid global identity of the agent, as follows.
Definition 0 (Identity Provenance Chain).
Let be a sequence of claims and the claimed set of global identities. An identity provenance chain (identity chain for short) is a subsequence of of the form (starting from the bottom):
Such an identity chain is valid if the declarations in it are valid. Such an identity chain is maximal if there is no claim
for any and . A global identity is current in if it is the last identity in a maximal identity chain in .
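The chain-tracking logic above can be sketched as follows, under the assumptions (ours, for concreteness) that events carry the pledging agent explicitly and that an identity is declared at most once, as assumed in the text:

```python
def current_identities(events):
    """Follow identity provenance chains through a sequence of claims.

    events: list of ('declare', agent, v) and ('update', agent, old, new).
    An update is valid only if made by the agent that rightfully owns the
    old identity; valid chains are extended, invalid updates are ignored.
    Returns a map from agent to its current global identity (the last
    identity in its maximal valid chain)."""
    owner = {}    # identity -> agent that rightfully owns it
    current = {}  # agent -> current (last) identity in its chain
    for ev in events:
        if ev[0] == "declare":
            _, agent, v = ev
            if v not in owner:            # an identity is declared at most once
                owner[v] = agent
                current.setdefault(agent, v)
        else:
            _, agent, old, new = ev
            if owner.get(old) == agent and new not in owner:
                owner[new] = agent        # valid update: extend the chain
                current[agent] = new
    return current
```

Note that an adversary's update claim on someone else's identity is simply ignored here; the social mechanism discussed next is what lets honest agents detect such claims in practice.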
Note that it is very easy for an agent to make an update claim for its identity. However, it is just as easy for an adversarial agent wishing to steal the identity to make such a claim. Hence, this ability must be coupled with a mechanism that protects the rightful owner of an identity from losing it due to invalid identity update claims. Thus, crucially, the main idea here is to use mutual sureties to protect valid identity update declarations and to help distinguish them from invalid declarations.
In particular, immediately after an identity update, the new identity would not have any surety edges incident to it. Thus, as a crude measure, we require that the identity update come to bear only after all the neighbors of the old identity have updated their mutual sureties to be with the new identity. So, the agent wishing to update his identity would have to approach his neighbors, and they would have to create mutual surety pledges with his new identity.
Example 4.3 ().
Consider two friends, agent and agent , having a mutual surety pledge between them. If were to lose his identity, he would create a new key pair, make an identity update claim, and ask for a new mutual surety pledge between ’s identity and his new identity.
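The re-pledging step in this example can be sketched as follows; the out-of-band confirmation flag stands in for the social verification the neighbour performs, and all names are ours:

```python
def repledge_surety(sureties, neighbor, old_id, new_id, confirmed_by_owner):
    """Move the mutual surety neighbor<->old_id to neighbor<->new_id, but
    only after the neighbour has confirmed out of band that the rightful
    owner indeed made the update claim. If the claim is not confirmed
    (suspected identity theft), the old surety is kept unchanged."""
    if not confirmed_by_owner:
        return sureties
    edge = frozenset((neighbor, old_id))
    return (sureties - {edge}) | {frozenset((neighbor, new_id))}
```

This mirrors Observation 5 below: a surety "moves along" a valid identity chain without being violated, while an unconfirmed claim leaves the surety graph untouched.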
The following observation stems from the fact that whether a surety between two identities is violated depends on their rightful owners, and that a valid identity chain has a single owner.
Observation 5 ().
Let be a sequence of claims and let and be two valid identity chains in . If a surety pledge between two global identities is valid, then any surety pledge between two global identities in these chains is valid.
The import of this observation is that a mutual surety can be “moved along” valid identity chains as they grow, without being violated, as it should be.
Below we argue that invalid identity update claims are quite easy to catch, thus the risk of stealing identities can be managed. In effect, we show the value of surety pledges in defending an identity against invalid update claims.
Let be a sequence of claims, let be two identity chains in , and assume there is a valid surety pledge between the two current global identities . Now assume that the identity update claim is made, namely, some agent has claimed to replace by . Then, it will be hard for to secure a surety from and, if it attempts to do so, then will know that is not valid and thus (if is honest) a mutual surety between and will not be established. Consider the following case analysis:

Assume notices . Then it would inform that it did not claim , and thus will know that is not valid.

Alternatively, assume that notices . It would approach to update the mutual surety between them accordingly; would deny owning , and thus will know that is invalid.

Alternatively, would approach to update the mutual surety has with to be with instead; will see (or suspect, if did not reveal himself) that is not , will double-check with , and thus know that the claim is invalid.
5. Outlook
We provided a formal foundation for granting genuine global identities. We discussed the crucial ingredients and concepts and described several results for safe community growth.
While this paper is quite formal, we aimed for the constructions to be readily amenable to implementation, and hinted at some of the needed implementation ingredients.
As promising future work, we intend to study the following:

directed surety graphs, as opposed to the undirected graphs considered here;

surety graphs containing surety edges of various types combined; and

the birth and death of agents, to accommodate a more dynamic, realworld setting.
More broadly, realizing the proposed solution entails developing additional components, notably a governance mechanism based on sybil-resilient social choice (SRSC, ), a mechanism for encouraging honest behavior and discouraging corrupt behavior, and a cryptocurrency to fuel such a mechanism and the system in general.
Acknowledgements. We thank the generous support of the Braginsky Center for the Interface between Science and the Humanities and Ouri Poupko for helpful discussions.
References
 [1] R. Andersen, C. Borgs, J. Chayes, U. Feige, A. Flaxman, A. Kalai, V. Mirrokni, and M. Tennenholtz. Trustbased recommendation systems: an axiomatic approach. In Proceedings of WWW ’08, pages 199–208, 2008.
 [2] Sanjeev Arora and Boaz Barak. Computational complexity: a modern approach. Cambridge University Press, 2009.
 [3] R. Baird, K. Migiro, D. Nutt, A. Kwatra, S. Wilson, J. Melby, A. Pendleton, M. Rodgers, and J. Davison. Human tide: the real migration crisis. 2007.
 [4] World Bank. Identification for development (ID4D) global dataset, 2018.
 [5] M. Borge, E. KokorisKogias, P. Jovanovic, L. Gasser, N. Gailly, and B. Ford. Proofofpersonhood: Redemocratizing permissionless cryptocurrencies. In Proceedings of EuroS&PW ’17, pages 23–26, 2017.
 [6] Jeff Cheeger. A lower bound for the smallest eigenvalue of the Laplacian. In Proceedings of the Princeton Conference in Honor of Professor S. Bochner, 1969.
 [7] D. Reed, M. Sporny, D. Longley, C. Allen, R. Grant, and M. Sabadello. Decentralized identifiers (DIDs) v0.12 – data model and syntaxes for decentralized identifiers (DIDs). Draft Community Group Report, 29, 2018.
 [8] R. Diestel. Graph theory. Graduate texts in mathematics, page 7, 2012.
 [9] The Global Identity Foundation. Global identity challenges, pitfalls and solutions, 2014.
 [10] Charles Geisler and Ben Currens. Impediments to inland resettlement under conditions of accelerated sea level rise. Land Use Policy, 66:322–330, 2017.
 [11] Shlomo Hoory, Nathan Linial, and Avi Wigderson. Expander graphs and their applications. Bulletin of the American Mathematical Society, 43(4):439–561, 2006.
 [12] JJ. C. Meyer and W. Van Der Hoek. Epistemic logic for AI and computer science, volume 41. Cambridge University Press, 2004.
 [13] A. Mühle, A. Grüner, T. Gayvoronskaya, and C. Meinel. A survey on essential components of a selfsovereign identity. Computer Science Review, 30:80–86, 2018.
 [14] Government of India. Unique Identification Authority of India, 2018. Available at https://uidai.gov.in.
 [15] Ouri Poupko, Gal Shahaf, Ehud Shapiro, and Nimrod Talmon. Sybilresilient conductancebased community growth. In Proceedings of CSR ’19, 2019. To appear. A preliminary version appears in https://arxiv.org/abs/1901.00752.
 [16] R. L. Rivest, A. Shamir, and L. Adleman. A method for obtaining digital signatures and publickey cryptosystems. Communications of the ACM, 21(2):120–126, 1978.
 [17] J. R. Searle, S. Willis, and D. Vanderveken. Foundations of illocutionary logic. CUP Archive, 1985.
 [18] G. Shahaf, E. Shapiro, and N. Talmon. Realityaware sybilresilient voting. arXiv preprint arXiv:1807.11105, 2018.
 [19] E. Shapiro. Point: foundations of edemocracy. Communications of the ACM, 61(8):31–34, 2018.
 [20] D. A. Spielman. Spectral graph theory and its applications. In Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS ’07), pages 29–38, 2007.
 [21] A. Tobin and D. Reed. The inevitable rise of selfsovereign identity. The Sovrin Foundation, 29, 2016.
 [22] P. Zimmermann. PGP user’s guide, volume II: Special topics. 1994. Available at ftp://ftp.pegasus.esprit.ec.org/pub/arne/pgpdoc2.ps.gz.