 # Constructing graphs with limited resources

We discuss the amount of physical resources required to construct a given graph, where vertices are added sequentially. We naturally identify information -- divided into instructions and memory -- and randomness as resources. Not surprisingly, we show that, in this framework, threshold graphs are the simplest possible graphs: the construction of a threshold graph requires a single bit of instructions for each vertex and no use of memory. Longer instructions without memory do not bring any advantage. With one bit of instructions and one bit of memory for each vertex, we can construct a family of perfect graphs that strictly includes threshold graphs. We consider the case in which memory lasts for a single time step, and show that, as well as the standard threshold graphs, linear forests are also producible. We show further that the number of random bits (with no memory or instructions) needed to construct any graph is asymptotically the same as that required for the Erdős-Rényi random graph. We also briefly consider constructing trees in this scheme. The problem of defining a hierarchy of graphs in the proposed framework is fully open.


## 1 Introduction

Alice (A) and Bob (B) work together to construct a graph. At each time step $t$, A sends instructions to B. Then, B constructs a graph $G_t$ according to the instructions, by adding vertices and edges to the graph $G_{t-1}$. We will try to work in the simplest possible setting. It will be convenient to assume that $G = G_n$ is on $n$ vertices, $V(G) = \{1, 2, \ldots, n\}$, and that vertex $t$ is added at time $t$. Thus, the neighbours of vertex $t$ at time $t$ are vertices of $\{1, \ldots, t-1\}$. In addition to instructions, i.e., information, B may have access to a memory and a source of randomness (e.g., B can flip a coin). Of course, memory is also information, but it seems useful to distinguish between information received at time $t$ and information received at a time $s < t$. These three quantities will be called resources. Notice that we do not consider potential loss of information. In other words, when we say “A sends instructions to B”, we do not mean that there is an information-theoretic (noisy) channel between A and B.

Instructions and memory consist of bit strings – more generally, we could use the symbols of an alphabet. We assume that memory is not required to remember the effect of each instruction. For instance, if a given instruction is associated with a certain action, then it is associated with that action at every time step, and there is no need to store the association. Obviously, but importantly, no action can be performed without the relevant resources – e.g., adding an edge between a specific pair of vertices, making a random choice, etc. Since $\binom{n}{2}$ bits are needed to specify $G$ exactly, it is intuitive that B requires $\binom{n}{2}$ coin flips to construct $G$ with randomness only, whenever instructions are unavailable – see Section 5 for a thorough description of this point. The process introduced in [JS13], which was originally studied in the context of graph limits, aims at defining a notion of likelihood for a graph constructed exactly in the way described above.

Here, we take a different perspective and look at the amount of instructions, memory, and randomness needed by A and B for constructing graphs. We mostly focus on the role of B. The proposed framework clearly has many variations. We skim over the basic ones. The central question is: what graphs can B construct with the use of limited resources? B has full freedom in interpreting instructions, but no computational power (e.g., B cannot count). A setting in which graphs are seen as constructed/defined by computationally bounded distributed agents is proposed by Arora et al. [ASW09].

We will need some standard notation. Given graphs $G$ and $H$ on disjoint vertex sets, the disjoint union $G \uplus H$ is the graph such that $V(G \uplus H) = V(G) \cup V(H)$ and $\{u, v\} \in E(G \uplus H)$ if and only if $\{u, v\} \in E(G)$ or $\{u, v\} \in E(H)$. We denote by $P_n$, $C_n$, $K_n$, $E_n$, and $K_{m,n}$ the $n$-path, the $n$-cycle, the complete graph on $n$ vertices, the empty graph on $n$ vertices, and the complete bipartite graph on $m + n$ vertices, respectively. (See [D12] for the elementary graph theory that we use.)

Let us denote by $A(G)$ the adjacency matrix of $G$. The $(i,j)$-th entry of $A(G)$ is $1$ if $\{i, j\} \in E(G)$ and $0$ otherwise. The following fact is obvious, but worth a mention for the sake of completeness:

###### Proposition 1.

Every graph can be constructed by adding vertices sequentially.

###### Proof.

Given a graph $G$ on $n$ vertices, (when $G$ is undirected) the entries in the triangle above the main diagonal are the significant part of $A(G)$. Label the vertices of $G$ by $1, 2, \ldots, n$, where vertex $1$ corresponds to the first line of $A(G)$, vertex $2$ to the second line, etc. – a line is a row or a column. Let us assume that at time $t$ we add vertex $t$. The proof follows by observing that in $G_t$ the neighbours of vertex $t$ correspond to lines $i$, with $i < t$, of $A(G)$, and that these lines have already been completed when adding each vertex $j < t$. ∎
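The sequential reading of the adjacency matrix used in the proof can be sketched as follows; a minimal Python sketch (function name and 0-based vertex labels are our assumptions):

```python
# Sketch of Proposition 1: any graph can be built by adding vertices
# sequentially, reading one new line of the adjacency matrix at a time.
def build_sequentially(adj):
    """adj: symmetric 0/1 matrix (list of lists). Returns the edge list of G."""
    edges = []
    for t in range(len(adj)):            # add vertex t at time t
        for i in range(t):               # its neighbours lie in {0, ..., t-1}
            if adj[t][i] == 1:
                edges.append((i, t))
    return edges

# A triangle plus an isolated vertex:
A = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 0],
     [0, 0, 0, 0]]
print(build_sequentially(A))  # [(0, 1), (0, 2), (1, 2)]
```

Each vertex only ever connects backwards, exactly as in the growth process described above.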

We note that threshold graphs occur frequently in this model. It is worth providing a proper description of what such a graph is. A threshold graph is a graph which can be constructed from the one-vertex graph $K_1$ by a sequence of two operations: add an isolated vertex or add a dominating vertex. Recall that a vertex $v$ of a graph $G$ is isolated if it has degree zero and dominating if it is adjacent to all other vertices of $G$. The family of threshold graphs is denoted by $\mathcal{T}$. We make use of the forbidden subgraphs characterization of $\mathcal{T}$ when classifying a graph as being threshold. This may also be taken as a definition [G80]: a graph $G$ is a threshold graph if $G$ does not contain $P_4$, $C_4$, and $2K_2$ as induced subgraphs. Threshold graphs are unigraphs (that is, completely specified by their degree sequence up to isomorphism) and easy to recognise. Many hard computational problems are efficiently solvable when threshold graphs are taken as instances.
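The constructive definition above (strip isolated or dominating vertices until nothing remains) gives a simple recognition procedure; a minimal Python sketch, with function names of our choosing:

```python
def is_threshold(n, edges):
    """True iff the graph is a threshold graph: repeatedly remove an
    isolated or dominating vertex until the graph is empty."""
    verts = set(range(n))
    E = {frozenset(e) for e in edges}
    while verts:
        deg = {v: 0 for v in verts}
        for e in E:
            for v in e:
                deg[v] += 1
        # find an isolated (degree 0) or dominating (degree |V|-1) vertex
        pick = next((v for v in verts
                     if deg[v] == 0 or deg[v] == len(verts) - 1), None)
        if pick is None:
            return False                 # stuck: not a threshold graph
        verts.remove(pick)
        E = {e for e in E if pick not in e}
    return True

print(is_threshold(4, [(0, 1), (0, 2), (0, 3), (1, 2)]))  # True
print(is_threshold(4, [(0, 1), (1, 2), (2, 3)]))          # False: P4
```

The second call fails on $P_4$, one of the three forbidden induced subgraphs.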

In the following sections, we consider the graphs that can be constructed by adding vertices sequentially when B has access to different combinations of resources. First, we study the simplest case, where B receives one-bit instructions from A. With no other information, we show that B is limited to constructing threshold graphs. In fact, these graphs prove to be constructible with any combination of resources. In the next case, B is also given access to one bit of memory, that is, the ability to label each vertex with either $0$ or $1$ as it is placed, and thus to differentiate the vertex set into two groups. A is now able to give instructions which are directed at only one group of vertices, and this extends the family of constructible graphs beyond threshold graphs, to include non-threshold graphs such as complete bipartite and complete split graphs. Memory, as it turns out, has further uses when the way it is used is modified. Specifically, we explore two variations of the memory available to B: fading and modifiable. In the first variation, fading memory lasts only one time step, which means that B can only remember the labels of the last two placed vertices. This allows the formation of linear forests, among other non-threshold graphs. In the second case – modifiable memory – memory can be used to modify the edges between previously placed vertices. Despite offering a new construction mechanism for B, it is shown that this does not extend the family of constructible graphs beyond that of ordinary memory. We then consider randomness as the only resource available to B, and show that the amounts of randomness required to construct an arbitrary graph and the Erdős–Rényi random graph are equivalent in the asymptotic limit. In the penultimate section, we modify the procedure for constructing graphs and construct trees using the resources considered previously. The final section concludes with open questions.

To start, we consider one bit of instruction and no memory, as described above. The main result of this section is the following statement:

###### Proposition 2.

Let A send to B a one-bit instruction at each time step $t$. Furthermore, assume that B has no memory and no randomness. Then, $G_t$ is a threshold graph.

###### Proof.

Let us consider the simplest possible case, which is, arguably, one bit of instruction per vertex. B cannot distinguish vertices of $G_{t-1}$ by reading their labels, since B has no memory. B cannot distinguish vertices of $G_{t-1}$ by picking them at random, since B has no randomness. Hence B, without further information, is restricted to placing either a dominating or an isolated vertex for each instruction. We list in a table all unique interpretations of a one-bit instruction from A that are compatible with these conditions:

The notation is straightforward:

• one kind of interpretation means “construct $\{i, t\}$ for every $i < t$ when receiving instruction $0$ or $1$ at time $t$”, that is, $t$ is a dominating vertex;

• the other kind means “construct no edge $\{i, t\}$ when receiving instruction $0$ or $1$ at time $t$”, that is, $t$ is an isolated vertex.

The cases in which both instructions are interpreted as “dominating”, or both as “isolated”, give $K_t$ and $E_t$, respectively. These are both threshold graphs. It follows directly from the definition of a threshold graph that the mixed case gives a threshold graph, since one instruction introduces a dominating vertex and the other introduces an isolated vertex. It is clear that instructions of arbitrary length will not change the above table, because the number of possible interpretations of the instructions does not increase. In summary, the families of graphs constructed by the given instructions are:

We specify with $l$ and $m$ the number of vertices associated with instruction $0$ and $1$, respectively. ∎
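The mixed interpretation can be simulated directly; a minimal Python sketch, assuming bit $1$ means “dominating” and bit $0$ means “isolated” (the assignment of bits to actions is our convention):

```python
def threshold_from_bits(bits):
    """Interpret bit 1 as 'add a dominating vertex' and bit 0 as
    'add an isolated vertex'; return the edge list of the final graph."""
    edges = []
    for t, b in enumerate(bits):
        if b == 1:                      # vertex t dominates all earlier vertices
            edges += [(i, t) for i in range(t)]
    return edges

print(threshold_from_bits([0, 0, 1]))  # star K_{1,2}: [(0, 2), (1, 2)]
print(threshold_from_bits([1, 1, 1]))  # the complete graph K_3
```

Every bit string produces a threshold graph, and every threshold graph arises from some bit string, matching the proposition.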

## 2 One bit of instruction and one bit of memory

Let us now consider families of graphs that B can construct with the use of an extra resource: memory. The construction method is as before. One-bit instructions are sent from A to B. For each instruction, B adds a vertex, and then carries out the instruction specified for that vertex. Unlike in the previous case, where all vertices were considered the same, the addition of one bit of memory allows B to label each vertex with the instruction $0$ or $1$, as it is placed. Hence B can differentiate between two different groups of vertices. A is now able to send instructions which are directed at one particular group of vertices, and this increases the number of graphs that can be constructed. For consistency, we assert that no other edge can be added to $G_{t-1}$ at time $t$, meaning that, at time $t$, B is not allowed to add an edge of the form $\{i, j\}$ with both $i$ and $j$ in $G_{t-1}$ (the effects of removing this restriction will be studied in Section 3). In other words: for all $t$, $G_{t-1}$ is the induced subgraph of $G_t$ on the vertices $\{1, \ldots, t-1\}$.

For the following analysis, we must define the standard join operation. Given graphs $G$ and $H$, their join, $G + H$, is the graph constructed by taking the disjoint union of $G$ and $H$, then connecting every vertex of $G$ with every vertex of $H$. For example, $E_m + E_n = K_{m,n}$ and $K_m + K_n = K_{m+n}$.
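The disjoint union and join operations can be sketched over edge lists as follows (a minimal Python sketch; the helper names are ours):

```python
def disjoint_union(n1, e1, n2, e2):
    """Relabel the second graph's vertices by +n1 and take the union."""
    return n1 + n2, e1 + [(u + n1, v + n1) for (u, v) in e2]

def join(n1, e1, n2, e2):
    """G1 + G2: disjoint union plus every edge between the two parts."""
    n, e = disjoint_union(n1, e1, n2, e2)
    e += [(u, v) for u in range(n1) for v in range(n1, n)]
    return n, e

# E_2 + E_3 is the complete bipartite graph K_{2,3}: 6 cross edges
n, e = join(2, [], 3, [])
print(n, len(e))  # 5 6
```

Joining two cliques gives a larger clique, mirroring the identity $K_m + K_n = K_{m+n}$.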

Define the labelling function $\ell$, which returns the label of a given vertex. Unless stated otherwise, we will assume the bit string of instructions sent by A, $x$, has $l$ vertices labelled ‘0’ and $m$ vertices labelled ‘1’, that is, $l = |\{i : x_i = 0\}|$ and $m = |\{i : x_i = 1\}|$.

We also define the graphs $E(x)$, $K(x)$, and $\widetilde{K}(x)$. First, $x$ is a bit string of length $l + m$. $E(x)$ is the complete bipartite graph $K_{l,m}$, where the bipartition classes are the vertices labelled ‘0’ and ‘1’, minus edges of the form $\{i, j\}$ when $i < j$, $\ell(i) = 1$ and $\ell(j) = 0$. Similarly, $K(x)$ is the complete split graph $K_l + E_m$ minus edges of the form $\{i, j\}$ when $i < j$, $\ell(i) = 1$ and $\ell(j) = 0$. The graph $\widetilde{K}(x)$ is the graph $K(x)$ plus edges of the form $\{i, j\}$ when $i < j$ and $\ell(i) = \ell(j) = 1$. Our analysis results in the following proposition:

###### Proposition 3.

Let A send to B one bit of instruction at each time step $t$. Let us assume that B has no randomness. Moreover, let us assume that B can assign one bit of memory to each vertex. Then, $G_t$ is either a threshold graph or one of the following graphs (not necessarily threshold):

 $E_l \uplus E_m,\quad E(x),\quad K_l \uplus E_m,\quad K_{l,m},\quad K(x),\quad K_l + E_m,\quad K_l \uplus K_m,\quad \widetilde{K}(x),\quad K_{l+m}.$ (1)
###### Proof.

B can distinguish vertices of $G_{t-1}$ by reading their labels, since B has one bit of memory for each vertex. However, the amount of memory is only sufficient to divide the vertices into two classes: $0$ or $1$. B cannot distinguish vertices of $G_{t-1}$ by picking them at random, since B has no randomness. As in the proof of Proposition 2, we list in a table all unique interpretations of a one-bit instruction from A that are compatible with these conditions:

Here, the notation is as follows:

• one kind of interpretation means “construct $\{i, t\}$ for every $i < t$ with $\ell(i) = 0$ when receiving instruction $0$ or $1$ at time $t$”;

• the other kind means “construct $\{i, t\}$ for every $i < t$ with $\ell(i) = 1$ when receiving instruction $0$ or $1$ at time $t$”.

We proceed with a case by case analysis:

:

and .

:

and .

:

we obtain a threshold graph, since this is the case of Proposition 2.

:

and if .

:

and

:

. From the instruction set, we have the edges if and only if and , . This is the definition of .

:

. Note that for .

:

. The instructions enforce that we have edges if and only if either: for , or , for . This is readily seen to be .

:

. The instructions give us all edges of the form when either: for or , . These edges realise the complete split graph .

:

. From the instruction set, we have the edges if and only if either: , or , , . This graph is .

The table below summarises the families. ∎

## 3 Using memory to modify Gt−1

In earlier sections, B had the restriction that no other edge could be added to $G_{t-1}$ at time $t$ – meaning that, at time $t$, B was not allowed to add an edge of the form $\{i, j\}$ with both $i$ and $j$ in $G_{t-1}$. We can say that graphs produced in this way have the property that $G_{t-1}$ is always an induced subgraph of $G_t$. In this section, we relax this restriction, meaning that B, using memory, is now able to construct edges within $G_{t-1}$. We will refer to graphs constructed in this manner as being memory modifiable.

To further illustrate this, let us consider the memory modifiable graph constructed from the instructions $00010$. As B receives the first four bits from A, an empty graph is constructed. This corresponds to $G_4 = E_4$. As B has memory, the labels $0$ or $1$ of each of the four vertices can be distinguished. Hence, when B receives the last instruction bit, which corresponds to joining vertex $5$ to the vertices labelled $1$, B can instead choose to construct the edges $\{1,4\}, \{2,4\}, \{3,4\}, \{4,5\}$, as all vertices labelled $0$ are indistinguishable without the above restriction. The resultant graph is a complete bipartite graph, $K_{1,4}$. We notice that if vertex $5$ were removed, then the remaining induced subgraph would be $K_{1,3}$. This is clearly not $G_4$. We can now give a definition of a memory modifiable graph: a memory modifiable graph, $M(x)$, is a graph with the property that its induced subgraph constructed by the removal of vertex $t$ is not isomorphic to $G_{t-1}$. If B had chosen not to use memory to add edges in $G_4$, then we construct the equivalent graph listed in Proposition 3, which for $x = 00010$ is the closely related $E(00010)$ (see Proposition 3 and Figure 1). We can conjecture that for each instruction there exists a memory modifiable graph. Given that the construction method is different, the natural question to ask is the following: is the family of memory modifiable graphs the same as the graphs listed in Proposition 3?

Figure 1: The graphs M(00010) and E(00010), as defined in the main text.
###### Corollary 1.

All possible memory modifiable graphs, $M(x)$, are of the form:

 $K_l + E_m,\quad K_{l,m},\quad K_{l+m}.$ (2)

which are all graphs proven to be constructible in Proposition 3.

###### Proof.

Let us assume that B still has access to one bit of memory and one bit of instruction per vertex. Let us also remove the restriction that edges cannot be constructed in $G_{t-1}$. Then, of the ten unique instruction interpretations identified, only four can produce memory modifiable graphs. For those four, the new graph types are listed. We find that none of these graphs are unique, as they can all be constructed by the method used in Proposition 3. Below, we use the definition given above: a memory modifiable graph is a graph such that its induced subgraph formed by the removal of vertex $t$ is not isomorphic to $G_{t-1}$. We use this definition to determine whether an instruction can construct memory modifiable graphs. We also note that if a memory modifiable graph has not been formed for a particular instruction set at time $t$, then the graph must be of the form listed in Proposition 3.

(: Trivially, this instruction set cannot construct a memory modifiable graph as there are no edges.

(: This instruction always forms a complete graph, , so cannot form memory modifiable graphs.

(: Consider $G_{t-1}$ on $t-1$ vertices. Let us add vertex $t$. If vertex $t$ is 0, then we produce one graph; if vertex $t$ is 1, then we produce the other. Now, if vertex $t$ were removed, in either case, we are left with $G_{t-1}$. Clearly, $G_{t-1}$ is the same as the induced subgraph formed when vertex $t$ is removed. In fact, we can conclude the same about any set of instructions where the two groups of $0$- and $1$-vertices do not interact, because any graph formed from only one type of vertex, as shown in Proposition 2, can only be a complete or an empty graph. Neither of these is memory modifiable, as shown above.

(: Consider $G_{t-1}$ on $t-1$ vertices. We note that all possible edges must exist for this graph type. Let us add vertex $t$. If vertex $t$ is 0, then we add a dominating vertex. If vertex $t$ is 1, then we connect all existing vertices labelled 1 to those labelled 0. However, we see that since all previous edges have already been constructed, only vertex $t$ forms new connections. In both cases, if vertex $t$ were removed, then the remaining induced subgraph would be $G_{t-1}$. Hence, this instruction can never produce memory modifiable graphs.

(: This set of instructions is non-interacting. Hence, we can see that this cannot form a memory modifiable graph.

(: Consider $G_{t-1}$ on $t-1$ vertices. Let us add vertex $t$. In both cases, vertex $t$ connects to all vertices of the opposite type. However, this does not modify the edges of any other pair of vertices, as these have already been connected with previous instructions. If vertex $t$ were removed, in either case, we are left with the original $G_{t-1}$. The induced subgraph is the same as $G_{t-1}$, so this cannot form memory modifiable graphs.

(: $M(x) = K_l + E_m$, where $l$ and $m$ are the number of $0$'s and $1$'s respectively. This is in fact a complete split graph, consisting of an independent set to which every vertex of the clique is completely connected. This is the same graph as produced in Proposition 3.

(: $M(x) = K_{l,m}$, where $l$ and $m$ are the number of $0$'s and $1$'s respectively. This is the same graph as produced in Proposition 3.

(: $M(x) = K_{l+m}$, where $l$ and $m$ are the number of $0$'s and $1$'s respectively. This is a complete graph and is also a threshold graph. This is the same graph as produced in Proposition 3.

(: $M(x) = K_l + E_m$. This is a complete split graph, which is also a threshold graph. This is the same graph as produced in Proposition 3.

Hence, any graph whose subgraph is altered with the addition of vertex $t$ is not unique, and does not add further to the number of graphs constructible by B. ∎

Recall that a graph is $H$-free if it does not contain a graph $H$ as an induced subgraph. Let us consider . The sequence of instructions gives the graph on five vertices and edges , and . The edges , and form . Hence, we can construct in , meaning that is not -free. We can notice, by direct inspection of the graphs in Eq. (3), that every time we attempt to construct , with , we are forced to include a chord. A chord is an edge between two vertices of a cycle that is not itself an edge of the cycle. Thus, is -free for . The same situation holds for , with . The case of is easy. Consider and try to add a vertex – and an extra edge – for obtaining . In taking , we can only add or . If we add , we then create a -cycle . The vertex is isolated. There are binary strings of length . The rule cannot give . The rules and are clearly unsuitable for . The same happens for , , since threshold graphs are known to be -free. Further, gives only disjoint cliques. With , we form a clique whenever we add and so we cannot go beyond , which is given by the sequence . We always form a new triangle or a pendant vertex attached to a clique when adding with the rule . For , cliques are unavoidable. Finally, is -free. This discussion leads to a corollary of Proposition 3:

###### Corollary 2.

Let A send to B one bit of instructions at each time step . Let us assume that B has no randomness. Moreover, let us assume that B remembers all instructions of A, i.e., one bit of memory for each vertex. Then, is not necessarily -free or -free, but it is -free and -free, for each .

## 4 One bit of instructions and one bit of memory for each vertex (fading memory)

In this section, we consider the graphs produced when B has ‘fading’ memory, that is, a memory which lasts for an integer number $L$ of time steps. Previously, we considered the cases $L = 1$ (no memory) and $L = \infty$ (perfect memory). Now we consider the graphs formed when $L$ is a small integer. With such a short-lived memory, it is impossible to construct a memory modifiable graph. For the following analysis, we define a linear forest – the disjoint union of path graphs $P_{n_1}, \ldots, P_{n_k}$, each of size $n_i \geq 1$ – to be a graph of the form $P_{n_1} \uplus \cdots \uplus P_{n_k}$. Note that an isolated vertex is the path $P_1$, so a linear forest can contain isolated vertices.

We also define the graphs and , with respect to the instruction bit string , as in Section 2. The graph is minus edges of the form for when and . The graph is minus edges of the form for when and . Furthermore, we use the notation to represent exclusive ‘or’.

###### Proposition 4.

Let A send to B one bit of instruction at each time step $t$. Furthermore, let B have memory which lasts $L = 2$ time steps. Then, the graph produced will either be a threshold graph, a linear forest, or one of the following graphs:

 $E'(x),\quad K'(x),\quad K_t,\quad E_t.$ (3)
###### Proof.

B can distinguish only the labels of the last two placed vertices. B cannot distinguish vertices of $G_{t-1}$ by picking them at random, since B has no randomness. As before, $l$ and $m$ are the number of $0$'s and $1$'s respectively in the instruction string $x$. The interpretations of a one-bit instruction from A are the same as in Proposition 3. Given these constraints, we proceed with a case-by-case analysis.

:

= .

:

=

:

we obtain a threshold graph again, as this is the case of Proposition 2.

:

= for . The string is the ordered sequence counting consecutive zeros that appear in a bit string , for instance .

:

= , for and . The strings and are the ordered sequences counting consecutive zeros and ones respectively that appear in a bit string , e.g. , .

:

= , where is the number of occurrences of the substring in the instruction string .

:

= , for . The string is the ordered sequence of the lengths of substrings of alternating bits appearing in a bit string , for instance , counting bits , and .

:

= , for . The string is the ordered sequence of the lengths of substrings of the form appearing in a bit string , for instance , counting bits , and .

:

= . We see this as the edge exists in if and only if: , , or , , , which precisely agrees with the definition of .

:

= . We see this as the edge exists in if and only if: , , or , , which precisely agrees with the definition of .
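To illustrate how a memory lasting two time steps produces linear forests, here is a minimal Python sketch; the specific rule (join vertex $t$ to vertex $t-1$ exactly when their labels differ) is our assumption, chosen so that alternating runs of bits become paths:

```python
def fading_memory_graph(bits):
    """Sketch of one L = 2 rule: join vertex t to vertex t-1 exactly
    when the two most recent labels differ (all B can remember).
    Alternating runs of bits then produce disjoint paths."""
    return [(t - 1, t) for t in range(1, len(bits))
            if bits[t] != bits[t - 1]]

print(fading_memory_graph([0, 1, 0, 0, 1]))  # [(0, 1), (1, 2), (3, 4)]
```

The output is the linear forest $P_3 \uplus P_2$: the alternating run $010$ gives a 3-path and the run $01$ at the end gives a 2-path.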

## 5 Randomness only

Randomness is an indispensable resource in communication and computational complexity, and in general is a fundamental tool in many areas of computer science, statistical mechanics, etc. How can we use randomness to construct $G$? Start with the empty graph on $n$ vertices and insert each edge with probability $1/2$. The outcome of this random process is the (uniform) random graph and it is denoted by $G(n, 1/2)$ (see [JŁR00]). The probability that the outcome is a given graph $G$ is nonzero, but exponentially small; in fact, it is $2^{-\binom{n}{2}}$ if we do not take into account the size of the isomorphism class. It follows, however, that without instructions we can still construct $G$. The randomness used amounts to $\binom{n}{2}$ coin flips, which corresponds to an equal number of random bits. The time required for the process consists of a single time step, since we can flip all the coins at once. So, bits here quantify both randomness and information. A quick observation: when the length of the instructions is zero, we can construct $G$ (with nonzero probability) by the use of $\binom{n}{2}$ bits assigned to the random process; conversely, without randomness, we need $\binom{n}{2}$ bits of information to construct $G$ (with probability one). An important fact is that the random process is instantaneous. Randomness is generated by unbiased coins that can be re-used as many times as we need. On the other hand, if we include time, each random bit can be given by flipping the same coin – to be discussed further.
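The one-shot construction of $G(n, 1/2)$ with one fair coin per pair can be sketched as follows (a minimal Python sketch; the names are ours):

```python
import random

def uniform_random_graph(n, rng=random.Random(0)):
    """Flip one fair coin per pair of vertices: C(n, 2) random bits
    produce one sample of the uniform random graph G(n, 1/2)."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < 0.5]

n = 6
coins = n * (n - 1) // 2          # C(6, 2) = 15 coin flips
E = uniform_random_graph(n)
print(coins, len(E) <= coins)     # every edge costs exactly one flip
```

Any fixed graph on $n$ vertices appears with probability $2^{-\binom{n}{2}}$, matching the count of random bits used.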

If we use randomness, time could be subdivided into three operations: adding vertices, flipping coins, and adding edges. Of course, $G$ can be constructed by flipping $\binom{n}{2}$ coins at the same time, or by flipping a single coin $\binom{n}{2}$ times. In the second case, if we allow a single coin as a physical resource to generate randomness, the time for constructing the graph is $\binom{n}{2}$. As might be expected, it does not make much sense to talk about random random graphs, where the pairs of vertices are also chosen at random. In this case, a dyad has probability $1/\binom{n}{2}$ of being picked. If we wait long enough, then we just obtain the random graph.

At time $t = 1$, we have a graph $G_1$, at time $t = 2$, $G_2$, and finally $G_n$, at time $n$. Notice that not every time step needs to correspond to a graph on $t$ vertices. However, we may assume without loss of generality that a graph $G_t$ has $t$ vertices, whenever $t \leq n$ and $G$ is on $n$ vertices. In this notation, $G = G_n$ at the end of the growth process. For the moment, let us consider a basic case: we start with $G_1 = E_1$, i.e., the empty graph with a single vertex, and add a new vertex at each time step. Moreover, suppose that our only available resource is randomness. As a consequence of this fact, at each time step we add a single vertex and choose its neighbours at random. We end up with the following process introduced in [JS13] and originally studied in the context of graph limits (see also [L12]). For each $t$, let $\mu_t$ be a probability distribution on $\{0, 1, \ldots, t-1\}$. By denoting as $D_t$ a random variable drawn according to the distribution $\mu_t$, we obtain $G_t$ by adding a new vertex to $G_{t-1}$ and connecting it to a uniformly random subset of size $D_t$ in $G_{t-1}$. When $D_t$ is binomially distributed with parameters $t - 1$ and $1/2$, we get the Erdős–Rényi random graph. By modifying $\mu_t$, we can end up with various other graph ensembles.
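The vertex-by-vertex process can be sketched as follows; drawing each degree from a Binomial$(t, 1/2)$ distribution reproduces the Erdős–Rényi case (a sketch with our function names):

```python
import random

def sequential_random_graph(n, rng=random.Random(1)):
    """Add vertex t, draw its degree d ~ Bin(t, 1/2), then choose d
    earlier vertices uniformly at random as its neighbours."""
    edges = []
    for t in range(1, n):
        d = sum(rng.random() < 0.5 for _ in range(t))   # Bin(t, 1/2)
        nbrs = rng.sample(range(t), d)                  # uniform subset of size d
        edges += [(i, t) for i in sorted(nbrs)]
    return edges

E = sequential_random_graph(20)
print(all(i < j for i, j in E))   # True: neighbours always precede t
```

Replacing the binomial draw with another distribution on $\{0, \ldots, t-1\}$ yields the other ensembles mentioned above.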

The above process needs randomness and no information at all. How much randomness?

###### Proposition 5.

Let $G$ be constructed only with the use of randomness and no information. Then, the amounts of randomness needed to construct $G$ and $G(n, 1/2)$ are asymptotically equivalent.

###### Proof.

We quantify randomness by the number of random bits needed to perform each choice. The table below lists these bits for the graphs , and . Notation:  is the -th bit used for choosing neighbours of vertex in and is the -th bit used for choosing their number, both at time . For , and , we need bits, respectively:

We have denoted by $a(n)$ the number of random bits for $G_n$. The first values of this integer sequence are fixed directly, and for larger $n$ we have (proof below)

 $a(n) = a(n-1) + b_o(n) + \lfloor \log_2(n-1) \rfloor + 1,$ (4)

for $n$ odd, and

 $a(n) = a(n-1) + b_e(n) + \lfloor \log_2(n-1) \rfloor + 1,$ (5)

for $n$ even, where

The integers $b_o(n)$ and $b_e(n)$ count the bits in the binary expansions of $\binom{n-1}{(n-1)/2} - 1$ and $\binom{n-1}{n/2} - 1$, respectively. For Eqs. (4) and (5), let us consider the growth process. At time $t = 1$, $G_1 = E_1$. At time $t = 2$, we add a vertex. There are two possible cases: we flip a coin and get either $\deg(2) = 0$ or $\deg(2) = 1$. (Recall that the degree $\deg(v)$ is the number of neighbours of a vertex $v$.) Hence, $a(2) = 1$. At time $t = 3$, we add a vertex. There are three possible cases: $\deg(3) \in \{0, 1, 2\}$. Vertex $3$ can choose among $2$ vertices, and in general vertex $i$ can choose among $i - 1$ vertices. What is the degree with the highest randomness cost? Let us label as $1, \ldots, i-1$ the vertices in $G_{i-1}$ potentially adjacent to vertex $i$. The answer is $i/2$ for $i$ even and $(i-1)/2$ for $i$ odd. These are the numbers giving the largest binomial coefficient. Since the binary system starts enumerating from $0$, we take the number of bits in $\binom{i-1}{i/2} - 1$ and $\binom{i-1}{(i-1)/2} - 1$. Summing up everything, a formula for $a(n)$ is

 $a(n) = \sum_{i=3;\, i \text{ even}}^{n-1} \left\lfloor \log_2\!\left(\binom{i-1}{i/2} - 1\right) \right\rfloor + \sum_{i=3;\, i \text{ odd}}^{n-1} \left\lfloor \log_2\!\left(\binom{i-1}{(i-1)/2} - 1\right) \right\rfloor + \sum_{i=2}^{n} \lfloor \log_2(i-1) \rfloor + 2n - 3.$

Let us look at the asymptotic behaviour of $a(n)$. First, $\log_2 \binom{i-1}{\lfloor (i-1)/2 \rfloor} \leq i - 1$ and $\sum_{i=2}^{n} \lfloor \log_2(i-1) \rfloor = \Theta(n \log n)$. Also, there is a constant $c > 0$ such that $\log_2 \binom{i-1}{\lfloor (i-1)/2 \rfloor} \geq c\, i$ when $i$ is sufficiently large. By combining these facts, one can see that the asymptotic efficiency class of $a(n)$ is $\Theta(n^2)$. Notably, the same happens for $\binom{n}{2}$, which is the number of random bits needed for the uniform random graph, $G(n, 1/2)$. ∎

Let us consider again the process above, but where the probability distribution is taken to be uniform – see [BMS14]. We iteratively construct a graph, starting from a single vertex. The $i$-th step of the iteration is divided into three substeps: (1) We select a number $d \in \{0, 1, \ldots, i-1\}$ with equal probability. Assume that we have selected $d$. (2) We select $d$ vertices of the current graph with equal probability. Assume that we have selected the vertices $v_1, \ldots, v_d$. (3) We add a new vertex $i$ and the edges $\{v_1, i\}, \ldots, \{v_d, i\}$. For a graph $G$ on $t$ vertices, the likelihood of $G$, denoted by $L(G)$, is defined as the probability that the graph given by the above iteration after $t$ steps is isomorphic to $G$. For example, the likelihood can be computed directly for small stars, where a star on $n$ vertices is $K_{1,n-1}$. An important point is a link between the likelihood and the size of the automorphism group of $G$. An automorphism of a graph $G$ is a permutation $\pi$ of $V(G)$ such that $\{u, v\} \in E(G)$ if and only if $\{\pi(u), \pi(v)\} \in E(G)$. The set of all automorphisms of $G$, with the operation of composition of permutations “$\circ$”, is a subgroup of the permutation group, denoted by Aut$(G)$. It is in fact possible to show that

 $\dfrac{1}{|\mathrm{Aut}(G)| \prod_{i=1}^{t} \binom{i-1}{\lfloor (i-1)/2 \rfloor}} \leq L(G) \leq \dfrac{1}{|\mathrm{Aut}(G)|}.$

It is plausible to conjecture that the minimum likelihood is attained by the complete bipartite graph on $n$ vertices, $K_{n/2, n/2}$, when $n$ is even, and $K_{\lceil n/2 \rceil, \lfloor n/2 \rfloor}$, when $n$ is odd. Numerical evidence is exhibited in [W]. These complete bipartite graphs have a relatively large automorphism group and, by Mantel's theorem, are the triangle-free graphs with the largest number of edges. The analogous conjecture for the maximum seems harder to state. The computational complexity of the likelihood is an open problem. The original motivation for introducing the likelihood was to measure how likely it is that a given graph is generated at random. The idea fits the context of quasi-randomness, i.e., the study of how much a given graph resembles a random one.
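The likelihood of a small graph can be estimated by Monte Carlo under the uniform process; a sketch for $K_3$ (our reading of the process: under it, the exact value is $\tfrac12 \cdot \tfrac13 = \tfrac16$, since every edge must be taken at each step):

```python
import random

def likelihood_K3(trials=100000, rng=random.Random(3)):
    """Monte Carlo estimate of L(K_3): at step i pick d uniformly in
    {0, ..., i-1}; K_3 requires d = i - 1 at every step, and then the
    chosen subset of neighbours is forced."""
    hits = 0
    for _ in range(trials):
        ok = True
        for i in range(2, 4):             # steps adding vertices 2 and 3
            d = rng.randrange(i)          # uniform on {0, ..., i-1}
            if d != i - 1:                # K_3 needs every possible edge
                ok = False
                break
        hits += ok
    return hits / trials

print(likelihood_K3())  # close to 1/6
```

Note that $|\mathrm{Aut}(K_3)| = 6$, so this value meets the upper bound $L(G) \leq 1/|\mathrm{Aut}(G)|$ stated above.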

## 6 Trees built from limited resources

We can also consider building trees using the framework developed thus far. A tree is a graph without cycles. We grow trees by adding vertices one-by-one. Given a tree on $n$ vertices, we can always identify a vertex $r$, called the root, added at time $t = 1$. The $k$-th generation of the tree consists of the vertices at distance $k$ from the root. The leaves are pendant vertices, i.e., vertices of degree $1$. The vertex $n$ is always a leaf. Also the root, in our definition, can be a leaf. When we grow a tree by adding vertices one-by-one, we also add edges one-by-one. In fact, the number of edges of a tree on $n$ vertices is exactly $n - 1$.

### 6.1 Trees built with random bits

The process in Section 5 can be used to grow trees if at each time step it is restricted to adding a single edge. Equivalently, the degree of vertex $t$ in $G_t$ is $1$. This can be done by bypassing the random choice of the degree. If we keep the random choice of the adjacent vertex, we have a model of a random tree. At each time step we add a new vertex and a new edge incident with that vertex. The neighbour of the vertex is chosen uniformly at random in $G_{t-1}$. This process is also called the uniform attachment model and is usually denoted by UA [SM95]. The graph UA can also be obtained by taking one of all the possible spanning trees of $K_n$ at random; the well-known Prüfer bijection guarantees that $K_n$ contains all trees on $n$ vertices. The process gives a sequence of random recursive trees, where a tree on the vertices $\{1, 2, \ldots, n\}$ is recursive if the vertex labels along the unique path from $1$ to $v$ increase, for every vertex $v$.
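The uniform attachment model admits a one-line sketch (a minimal Python sketch; the function name is ours):

```python
import random

def uniform_attachment_tree(n, rng=random.Random(2)):
    """UA model: vertex t attaches to a uniformly random earlier vertex."""
    return [(rng.randrange(t), t) for t in range(1, n)]

T = uniform_attachment_tree(10)
print(len(T))                     # 9 edges: a tree on 10 vertices
print(all(p < c for p, c in T))   # True: labels increase away from the root
```

Since every vertex attaches to a smaller label, the output is always a random recursive tree in the sense just defined.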

For a rooted tree $T$, let $\mathrm{Leaves}(T)$ denote the set of its leaf vertices. We also denote by $P(v)$ the (unique) path from the root vertex to the leaf $v$. For graphs $G$ and $H$ we denote by $G \cup H$ the graph union of $G$ and $H$. This is the graph with vertex set $V(G) \cup V(H)$ and edge set $E(G) \cup E(H)$. Also, let $b(k) = \lfloor \log_2 k \rfloor + 1$, the number of bits needed to represent the integer $k$ as a binary string.

###### Proposition 6.

Let $T$ be any tree on $n$ vertices and let $R$ be the random tree, as constructed previously. Then $L(T) > 0$, where $L$ is the likelihood function defined in Section 5; that is, the random process produces $T$ with nonzero probability.

###### Proof.

We can describe the tree as the graph union of the paths to each of its leaves, that is, . Each path takes the form , so any individual given path can be generated according to our random process, as there is always nonzero probability for the edge for .

We now need to prove that our random process supports constructing the union of all paths in . Choose any two leaves in , . The induced paths from the root of to and respectively are and . From the previous paragraph, there is nonzero probability that and are individually subgraphs of . For these paths to have zero probability of simultaneously being in , there must be a vertex for some such that there are edges for , and . This is because every vertex has only one neighbour smaller than itself . Suppose the condition holds. Then, the vertices induce a cycle. However, is a tree, so has no induced cycle and we have a contradiction. This means is a subgraph of with nonzero probability.

Since the argument holds for any pair of leaves, we have that there is nonzero probability that $T$ is a subgraph of $R$. Indeed, since both $T$ and $R$ are trees on $n$ vertices, they both have $n - 1$ edges, and so the subgraph relation implies $T = R$, and the result follows. ∎

### 6.2 Trees built with instructions and memory

We can also consider the model earlier in the text, where Alice (A) sends instructions to Bob (B), and B has memory. We can bound the size of the instructions and memory needed to construct an arbitrary tree .

###### Proposition 7.

Let $T$ be a tree on $n$ vertices. Then $T$ can be constructed with the use of $O(n \log n)$ bits of instructions and $O(n \log n)$ bits of memory.

###### Proof.

Consider constructing the tree $T_t$ given that we have the tree $T_{t-1}$. Since $T_{t-1}$ has $t - 2$ edges and $T_t$ has $t - 1$ edges, we only add one edge, between vertex $t$ and some other vertex. The instructions have to specify which vertex will be $t$'s neighbour. This requires $b(t-1)$ bits. We also require a label in memory for each vertex, each of which requires $\lfloor \log_2 n \rfloor + 1$ bits. Summing up, for the tree $T = T_n$, we require $n(\lfloor \log_2 n \rfloor + 1)$ bits of memory for the vertex labels and $\sum_{t=1}^{n-1} b(t)$ bits for the instructions. Finally,

 $\displaystyle \sum_{t=1}^{n-1} b(t) = \sum_{t=1}^{n-1} \left( \lfloor \log_2 t \rfloor + 1 \right) \leq \sum_{t=1}^{n-1} \log_2 t + n - 1 = \log_2((n-1)!) + n - 1 \leq (n-1)\left( \log_2(n-1) - \log_2 e + 1 \right) + O(\log_2(n-1)) = O(n \log n).$
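The bound can be checked numerically, assuming $b(t) = \lfloor \log_2 t \rfloor + 1$ as in the proof (a small Python sketch):

```python
import math

def instruction_bits(n):
    """Total instruction bits: sum of floor(log2 t) + 1 over t = 1..n-1,
    i.e. the bits needed to name each new vertex's neighbour."""
    return sum(math.floor(math.log2(t)) + 1 for t in range(1, n))

for n in (10, 100, 1000):
    total = instruction_bits(n)
    print(n, total, total <= n * math.log2(n))  # stays within n * log2(n)
```

The printed totals grow like $n \log n$, in line with the estimate above.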

## 7 Conclusions

In this work we have considered a scenario where two parties, Alice (A) and Bob (B), construct a graph together, given limited communication, memory, and randomness. We have enumerated the different classes of graphs that A and B can construct under different constraints on these resources.

A number of open questions remain. For instance, in Section 2 we saw that giving Bob one bit of memory per vertex lifted the class of threshold graphs to a larger class. It is well known that computational problems such as graph isomorphism and maximum clique are easy when the instances are threshold graphs. Perhaps such problems remain easy for this extended class of graphs. Indeed, it would be interesting to construct a hierarchy of graphs requiring an increasing number of bits of memory, and to determine at which point these problems become difficult. Maybe there is some correspondence with some well-known class of graphs.

The case of fading memory with larger $L$ needs to be investigated in a similar fashion, as does the case of having access to differing numbers of random bits.

We may also ask the converse question: given a graph $G$, how many bits do A and B need to build it? Recognising threshold graphs takes linear time [CH73, HIS78]. Is there a polynomial-time procedure for determining the number of bits of instructions and memory needed to construct $G$, beyond the