# Active Learning on Trees and Graphs

We investigate the problem of active learning on a given tree whose nodes are assigned binary labels in an adversarial way. Inspired by recent results by Guillory and Bilmes, we characterize (up to constant factors) the optimal placement of queries so as to minimize the mistakes made on the non-queried nodes. Our query selection algorithm is extremely efficient, and the optimal number of mistakes on the non-queried nodes is achieved by a simple and efficient mincut classifier. Through a simple modification of the query selection algorithm we also show optimality (up to constant factors) with respect to the trade-off between number of queries and number of mistakes on non-queried nodes. By using spanning trees, our algorithms can be efficiently applied to general graphs, although the problem of finding optimal and efficient active learning algorithms for general graphs remains open. Towards this end, we provide a lower bound on the number of mistakes made on arbitrary graphs by any active learning algorithm using a number of queries which is up to a constant fraction of the graph size.


## 1 Introduction

The abundance of networked data in various application domains (web, social networks, bioinformatics, etc.) motivates the development of scalable and accurate graph-based prediction algorithms. An important topic in this area is the graph binary classification problem: Given a graph with unknown binary labels on its nodes, the learner receives the labels on a subset of the nodes (the training set) and must predict the labels on the remaining vertices. This is typically done by relying on some notion of label regularity depending on the graph topology, such as that nearby nodes are likely to be labeled similarly. Standard approaches to this problem predict with the assignment of labels minimizing the induced cutsize (e.g., [4, 5]), or by binarizing the assignment that minimizes certain real-valued extensions of the cutsize function (e.g., [14, 2, 3] and references therein).

In the active learning version of this problem the learner is allowed to choose the subset of training nodes. Similarly to standard feature-based learning, one expects active methods to provide a significant boost of predictive ability compared to an uninformed (e.g., random) draw of the training set. The following simple example provides some intuition of why this could happen when the labels are chosen by an adversary, which is the setting considered in this paper. Consider a “binary star system” of two star-shaped graphs whose centers are connected by a bridge, where one star is a constant fraction bigger than the other. The adversary draws two random binary labels and assigns the first label to all nodes of the first star graph, and the second label to all nodes of the second star graph. Assume that the training set size is two. If we choose the centers of the two stars and predict with a mincut strategy (a mincut strategy considers all labelings consistent with the labels observed so far, and chooses among them one that minimizes the resulting cutsize over the whole graph), we are guaranteed to make zero mistakes on all unseen vertices. On the other hand, if we query two nodes at random, then with constant probability both of them will belong to the bigger star, and all the unseen labels of the smaller star will be mistaken. This simple example shows that the gap between the performance of passive and active learning on graphs can be made arbitrarily big.
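The binary star example can be simulated directly. The construction below, including the star sizes and the shortcut of letting mincut copy each center's label to its star, is our own illustrative sketch, not code from the paper.

```python
import random

# Two stars joined by a bridge between their centers.
def build_binary_stars(big=30, small=10):
    # Nodes 0..big-1 form the big star (center 0); big..big+small-1 the small one.
    edges = [(0, i) for i in range(1, big)]
    c2 = big
    edges += [(c2, c2 + i) for i in range(1, small)]
    edges.append((0, c2))  # bridge between the two centers
    return big + small, edges

def labels_for(big, small):
    # The adversary gives each star a single random label.
    a, b = random.choice([0, 1]), random.choice([0, 1])
    return [a] * big + [b] * small

n, edges = build_binary_stars()
y = labels_for(30, 10)
# Active strategy: query the two centers; a mincut labeling then assigns each
# star its center's label, so every non-queried node is predicted correctly.
pred = [y[0]] * 30 + [y[30]] * 10
mistakes = sum(p != t for p, t in zip(pred, y))
print(mistakes)  # 0
```

By contrast, two random queries land in the bigger star with constant probability, after which every label of the smaller star can be forced wrong.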

In general, one would like to devise a strategy for placing a certain budget of queries on the vertices of a given graph. This should be done so as to minimize the number of mistakes made on the non-queried nodes by some reasonable classifier like mincut. This question has been investigated from a theoretical viewpoint by Guillory and Bilmes [6], and by Afshani et al. [1]. Our work is related to an elegant result from [6] which bounds the number of mistakes made by the mincut classifier on the worst-case assignment of labels in terms of Φ(y)/Ψ(L). Here Φ(y) is the cutsize induced by the unknown labeling y, and Ψ(L) is a function of the query (or training) set L which depends on the structural properties of the (unlabeled) graph. For instance, in the above example of the binary system, the value of Ψ(L) when the query set L includes just the two centers is 1. This implies that for the binary system graph, Guillory and Bilmes’ bound on the mincut strategy is Φ(y) mistakes in the worst case (note that in the above example Φ(y) ≤ 1). Since Ψ(L) can be efficiently computed on any given graph and query set L, the learner’s task might be reduced to finding a query set L that maximizes Ψ(L) given a certain query budget (the size of L). Unfortunately, no feasible general algorithm for solving this maximization problem is known, and so one must resort to heuristic methods; see [6].

In this work we investigate the active learning problem on graphs in the important special case of trees. We exhibit a simple iterative algorithm which, combined with a mincut classifier, is optimal (up to constant factors) on any given labeled tree. This holds even if the algorithm is given no information on the actual cutsize Φ(y). Our method is extremely efficient; we quantify its time and space requirements in Section 6. As a byproduct of our analysis, we show that Ψ(L) can be efficiently maximized over trees to within constant factors. Hence the bound of [6] can be achieved efficiently.

Another interesting question is what kind of trade-off between queries and mistakes can be achieved if the learner is not constrained by a given query budget. We show that a simple modification of our selection algorithm is able to trade-off queries and mistakes in an optimal way up to constant factors.

Finally, we prove a general lower bound for predicting the labels of any given graph (not necessarily a tree) when the query set is up to a constant fraction of the number of vertices. Our lower bound establishes that the number of mistakes must then be at least a constant fraction of the cutsize weighted by the effective resistances. This lower bound seemingly contradicts the results of Afshani et al. [1], who construct the query set adaptively. We discuss this apparent contradiction, together with a simple counterexample, in Section 5.

## 2 Preliminaries and basic notation

A labeled tree (T, y) is a tree T = (V, E) whose nodes V = {1, …, n} are assigned binary labels y = (y₁, …, y_n) ∈ {−1, +1}ⁿ. We measure the label regularity of y by the cutsize Φ_T(y) induced by y on T, i.e., Φ_T(y) = |{(i, j) ∈ E : y_i ≠ y_j}|. We consider the following active learning protocol: given a tree T with unknown labeling y, the learner obtains all labels in a query set L ⊆ V, and is then required to predict the labels of the remaining nodes V ∖ L. Active learning algorithms work in two phases: a selection phase, where a query set of given size is constructed, and a prediction phase, where the algorithm receives the labels of the query set and predicts the labels of the remaining nodes. Note that the only labels ever observed by the algorithm are those in the query set. In particular, no labels are revealed during the prediction phase.

We measure the ability of the algorithm by the number of prediction mistakes made on V ∖ L, and it is reasonable to expect this number to depend on both the unknown cutsize Φ_T(y) and the number |L| of requested labels. A slightly different prediction measure is considered in Section 4.3.

Given a tree T = (V, E) and a query set L ⊆ V, a node i ∈ V ∖ L is a fork node generated by L if and only if there exist three distinct nodes of L that are connected to i through edge-disjoint paths. Let F(L) be the set of all fork nodes generated by L. Then L⁺ is the query set obtained by adding to L all the generated fork nodes, i.e., L⁺ ≜ L ∪ F(L). We say that L is 0-forked iff L ≡ L⁺. Note that L⁺ is 0-forked. That is, (L⁺)⁺ ≡ L⁺ for all L ⊆ V.
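As a concrete (and deliberately naive) illustration of these definitions, the sketch below computes the 0-forked closure L⁺ of a query set on a tree. The function names and the brute-force branch test are our own; on a tree, a node is a fork node exactly when at least three of the subtrees hanging off it contain a queried node.

```python
# Computing the 0-forked closure L+ of a query set L on a tree (our sketch).
def fork_closure(n, edges, L):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def branches_with_L(v, Lcur):
        # Count the subtrees hanging off v that contain a node of Lcur.
        count = 0
        for w in adj[v]:
            stack, seen, hit = [w], {v, w}, w in Lcur
            while stack and not hit:
                u = stack.pop()
                for x in adj[u]:
                    if x not in seen:
                        seen.add(x)
                        stack.append(x)
                        if x in Lcur:
                            hit = True
            count += hit
        return count

    Lp = set(L)
    changed = True
    while changed:  # iterate until no new fork node appears
        changed = False
        for v in range(n):
            if v not in Lp and branches_with_L(v, Lp) >= 3:
                Lp.add(v)
                changed = True
    return Lp

# A spider: center 0 with three legs; querying the three tips
# forces the center into L+.
edges = [(0, 1), (0, 2), (0, 3)]
print(sorted(fork_closure(4, edges, {1, 2, 3})))  # [0, 1, 2, 3]
```

On a path, by contrast, no query set ever generates a fork node, so every L is already 0-forked.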

Given a node subset S ⊆ V, we use T ∖ S to denote the forest obtained by removing from T all nodes in S and all edges incident to them. Moreover, given a second tree T′, we denote by T ∖ T′ the forest T ∖ V′, where V′ is the set of nodes of T′. Given a query set L, a hinge-tree is any connected component of T ∖ L⁺. We call a connection node of a hinge-tree any node of L⁺ adjacent to a node of the hinge-tree. We distinguish between 1-hinge and 2-hinge trees: a 1-hinge-tree has one connection node only, whereas a 2-hinge-tree has two (a hinge-tree cannot have more than two connection nodes because L⁺ is 0-forked, see Figure 1).

## 3 The active learning algorithm

We now describe the two phases of our active learning algorithm. For the sake of exposition, we call sel the selection phase and pred the prediction phase. sel returns a 0-forked query set L⁺_sel of the desired size. pred takes as input the query set L⁺_sel and the set of labels y_i for all i ∈ L⁺_sel. Then pred returns a prediction for the labels of all remaining nodes V ∖ L⁺_sel.

In order to see the way sel operates, we formally introduce the function Ψ∗. This is the reciprocal of the function Ψ introduced in [6] and mentioned in Section 1.

###### Definition 1.

Given a tree T = (V, E) and a set of nodes L ⊆ V,

 Ψ∗(L) ≜ max_{∅ ≠ V′ ⊆ V∖L} |V′| / |{(i, j) ∈ E : i ∈ V′, j ∈ V∖V′}| .

In words, Ψ∗(L) measures how large a set V′ of nodes not in L can be relative to the number of edges connecting V′ to the rest of the tree. From the adversary’s viewpoint, Ψ∗(L) can be described as the largest return in mistakes per unit of cutsize invested. We now move on to the description of the algorithms sel and pred.
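Definition 1 can be checked by exhaustive search on toy instances. The sketch below (our own code, exponential in the number of nodes and intended only for intuition) enumerates every candidate set V′:

```python
from itertools import combinations

# Brute-force Ψ*(L) on a small tree: the largest ratio |V'| / #(edges
# leaving V') over nonempty V' ⊆ V \ L.
def psi_star(n, edges, L):
    rest = [v for v in range(n) if v not in L]
    best = 0.0
    for k in range(1, len(rest) + 1):
        for sub in combinations(rest, k):
            s = set(sub)
            # edges with exactly one endpoint in the candidate set V'
            cut = sum((u in s) != (v in s) for u, v in edges)
            if cut > 0:
                best = max(best, len(s) / cut)
    return best

# Path 0-1-2-3-4 with the middle node queried: each side is a two-node set
# separated from the rest by a single edge, so Ψ*({2}) = 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(psi_star(5, edges, {2}))  # 2.0
```

An adversary can then cut off a maximizing set V′ with few φ-edges and mislabel all of it, which is exactly the "mistakes per unit of cutsize" reading above.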

The selection algorithm sel greedily computes a query set that minimizes Ψ∗ to within constant factors. To this end, sel exploits Lemma 9 (a) (see Section 4.2) stating that, for any fixed query set L, a subset V′ maximizing the ratio in Definition 1 is always included in a single connected component of T ∖ L. Thus sel places its queries so as to end up with a query set L_sel such that the largest component of T ∖ L_sel is as small as possible.

sel operates as follows. Let L_t be the set including the first t nodes chosen by sel, T_t be the largest connected component of T ∖ L_t, and σ(F) be the size (number of nodes) of the largest component of a forest F. At each step t+1, sel simply picks the node i_{t+1} of T_t that minimizes σ(T_t ∖ {i}) over i, and sets L_{t+1} = L_t ∪ {i_{t+1}}. During this iterative construction, sel also maintains a set containing all fork nodes generated in each step by adding nodes to the sets L_t (in Section 6 we will see that during each step at most a single new fork node may be generated). After the desired number of queries is reached (also counting the queries that would be caused by the stored fork nodes), sel has terminated the construction of the query set L_sel. The final query set L⁺_sel, obtained by adding all stored fork nodes to L_sel, is then returned.
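The greedy step of sel can be sketched as follows. This is our own naive reconstruction: the fork-node bookkeeping is omitted, and each candidate is evaluated by a full component scan, whereas the paper's implementation is far more efficient.

```python
# Sketch of sel's greedy rule: at each round, pick the node whose removal
# minimizes the size of the largest remaining component.
def components(n, adj, removed):
    seen, comps = set(removed), []
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], []
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        comps.append(comp)
    return comps

def sel_greedy(n, edges, budget):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    L = set()
    for _ in range(budget):
        # naive rescan of every candidate node (quadratic, for clarity only)
        best = min(
            (v for v in range(n) if v not in L),
            key=lambda v: max((len(c) for c in components(n, adj, L | {v})), default=0),
        )
        L.add(best)
    return L

# On a 7-node path the first query lands on the middle node.
edges = [(i, i + 1) for i in range(6)]
print(sel_greedy(7, edges, 1))  # {3}
```

Note how the rule recursively bisects the tree: each new query roughly halves the largest surviving component, which is what the analysis of Section 4 exploits.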

The prediction algorithm pred receives as input the labeled nodes of the 0-forked query set L⁺_sel and computes a mincut assignment. Since each component of T ∖ L⁺_sel is either a 1-hinge-tree or a 2-hinge-tree, pred is simple to describe and is also very efficient. The algorithm predicts all the nodes of a hinge-tree T′ using the same label y(T′). This label is chosen according to the following two cases:

1. If T′ is a 1-hinge-tree, then y(T′) is set to the label of its unique connection node;

2. If T′ is a 2-hinge-tree and the labels of its two connection nodes are equal, then y(T′) is set to the common label of its connection nodes; otherwise y(T′) is set to the label of the closer connection node (ties are broken arbitrarily).
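A closely related rule, shown below as our own simplified variant rather than the paper's exact pred, labels every non-queried node with the label of its nearest queried node via multi-source BFS. When the two connection nodes of a 2-hinge-tree disagree, this places the single cut edge in the middle, which is also a mincut assignment.

```python
from collections import deque

# Nearest-queried-node prediction (our simplification of pred):
# propagate each queried label outward by multi-source BFS.
def predict(n, edges, L, labels):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    pred, dist = {}, {}
    q = deque()
    for u in L:
        pred[u], dist[u] = labels[u], 0
        q.append(u)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                pred[v] = pred[u]  # inherits the closest queried label
                q.append(v)
    return [pred[v] for v in range(n)]

# Path 0-1-2-3-4, endpoints queried with disagreeing labels: the middle
# 2-hinge-tree splits between its two connection nodes (tie at node 2
# resolved by BFS order).
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(predict(5, edges, {0, 4}, {0: 0, 4: 1}))  # [0, 0, 0, 1, 1]
```

Either variant puts at most one cut edge per hinge-tree, which is all the mistake analysis of Section 4 needs.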

In Section 6 we bound the overall time and memory space required by sel for selecting the query nodes. Also, we will see that the total running time taken by pred for predicting all nodes in V ∖ L⁺_sel is linear in the number of nodes.

## 4 Analysis

For a given tree T, we denote by m_A(L, y) the number of prediction mistakes that algorithm A makes on the labeled tree (T, y) when given the query set L. Introduce the function

 m_A(L, K) = max_{y : Φ_T(y) ≤ K} m_A(L, y)

denoting the number of prediction mistakes made by A with query set L on all labeled trees with cutsize bounded by K. We will also find it useful to deal with the “lower bound” function lb(L, K). This is the maximum expected number of mistakes that any prediction algorithm can be forced to make on a labeled tree (T, y) when the query set is L and the cutsize Φ_T(y) is not larger than K.

We show that the number of mistakes made by pred on any labeled tree, when using the query set L⁺_sel returned by sel, satisfies

 m_pred(L⁺_sel, K) ≤ 10 lb(L, K)

for all query sets L of size bounded by a constant fraction of |L⁺_sel| (Theorem 7 gives the precise statement). Though neither sel nor pred knows the actual cutsize of the labeled tree (T, y), the combined use of these procedures is competitive against any algorithm that knows the cutsize budget K beforehand.

While this result implies the optimality (up to constant factors) of our algorithm, it does not relate the mistake bound to the cutsize, which is a clearly interpretable measure of the label regularity. In order to address this issue, we show that our algorithm also satisfies the bound

 m_pred(L⁺_sel, y) ≤ 4 Ψ∗(L) Φ_T(y)

for all query sets L of size bounded by a constant fraction of |L⁺_sel|. The proofs of these results require a number of preliminary lemmas.

###### Lemma 1.

For any n-node tree T it holds that min_{i ∈ V} σ(T ∖ {i}) ≤ n/2.

###### Proof.

Let i = argmin_{j ∈ V} σ(T ∖ {j}). For the sake of contradiction, assume there exists a component T′ of T ∖ {i} such that |T′| > n/2. Let R be the sum of the sizes of all other components, so that R = n − 1 − |T′| < n/2 − 1. Now let j be the node adjacent to i which belongs to T′, and let T′′ be the largest component of T ∖ {j}. There are only two cases to consider: either T′′ ⊆ T′ or T′′ contains i. In the first case, |T′′| ≤ |T′| − 1. In the second case, |T′′| ≤ R + 1 < n/2, which implies |T′′| < |T′|. In both cases, σ(T ∖ {j}) < σ(T ∖ {i}), which provides the desired contradiction. ∎
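This is the classical centroid property of trees, and it is easy to check empirically. The sketch below (our own test harness, using random-attachment trees) verifies it by brute force:

```python
import random

# Brute-force check of Lemma 1: every n-node tree has a node i with
# σ(T ∖ {i}) ≤ n/2, where σ is the largest component size.
def largest_comp_after_removal(n, adj, i):
    seen, best = {i}, 0
    for s in range(n):
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

def has_centroid(n, edges):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return any(largest_comp_after_removal(n, adj, i) <= n / 2 for i in range(n))

rng = random.Random(1)
# random attachment: node i links to a uniformly chosen earlier node
trees = [[(i, rng.randrange(i)) for i in range(1, 15)] for _ in range(100)]
print(all(has_centroid(15, t) for t in trees))  # True
```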

###### Lemma 2.

For all subsets L of the nodes of a tree T we have |L⁺| < 2|L|.

###### Proof.

Pick an arbitrary node of L and perform a depth-first visit of all nodes in T. This visit induces an ordering T₁, …, T_k of the connected components of T ∖ L based on the order of the nodes visited first in each component. Now let T′₁, …, T′_k be such that each T′_i is the component T_i extended to include all nodes of L adjacent to nodes in T_i. Then the ordering implies that, for i > 1, T′_i shares exactly one node (which must be a leaf) with the previously visited trees. Since in any tree the number of nodes of degree larger than two must be strictly smaller than the number of leaves, we have |Fork(T′_i)| < |Leaf(T′_i)| where, with slight abuse of notation, we denote by Fork(T′_i) the set of all fork nodes in subtree T′_i, and by Leaf(T′_i) the set of leaves of T′_i. Note that every leaf of every T′_i belongs to L. This implies that, for i > 1, each fork node in T′_i can be injectively associated with one of the leaves of T′_i that are not shared with any of the previously visited trees. Since |L⁺ ∖ L| is at most the sum of |Fork(T′_i)| over all indices i, this implies |L⁺ ∖ L| < |L|, i.e., |L⁺| < 2|L|. ∎

###### Lemma 3.

Let be the set of the first nodes chosen by sel. Given any tree , the largest subtree of contains no more than nodes.

###### Proof.

Recall that denotes the -th node selected by sel during the incremental construction of the query set , and that is the largest component of . The first steps of the recursive splitting procedure performed by sel can be associated with a splitting tree defined in the following way. The internal nodes of are , for . The children of are the connected components of , i.e., the subtrees of created by the selection of . Hence, each leaf of is bijectively associated with a tree in .

Let be the tree obtained from by deleting all leaves. Each node of is one of the subtrees split by sel during the construction of . As is split by , it is a leaf in . We now add a second child to each internal node of having a single child. This second child of is obtained by merging all the subtrees belonging to leaves of that are also children of . Let be the resulting tree.

We now compare the cardinality of to that of the subtrees associated with the leaves of . Let be the set of all leaves of and be the set of all leaves added to to obtain . First of all, note that is not larger than the number of nodes in any leaf of . This is because the selection rule of sel ensures that cannot be larger than any subtree associated with a leaf in , since it contains no node selected before time . In what follows, we write to denote the size of the forest or subtree associated with a node of . We now prove the following claim:

Claim. For all , , and for all , .

Proof of Claim. The first part just follows from the observation that any was split by sel before time . In order to prove the second part, pick a leaf . Let be its unique sibling in and let be the parent of and , also in . Lemma 1 applied to the subtree implies . Moreover, since , we obtain , the last inequality using the first part of the claim. This implies , and the claim is proven.

Let now be the number of nodes in subtrees and forests associated with the leaves of . With each internal node of we can associate a node of which does not belong to any leaf in . Moreover, the number of internal nodes in is bigger than the number of internal nodes of to which a child has been added. Since these subtrees and forests are all distinct, we obtain . Hence, using the above claim we can write , which implies . Since each internal node of has at least two children, we have that . Hence, we can conclude that . ∎

### 4.1 Lower bounds

We now state and prove a lower bound on the number of mistakes that any prediction algorithm (even knowing the cutsize budget K) makes on any given tree, when the query set is 0-forked. The bound depends on the following quantity: given a tree T, a node subset L ⊆ V, and an integer K, the component function Υ(L, K) is the sum of the sizes of the K largest components of T ∖ L, or n − |L| if T ∖ L has fewer than K components.
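The component function is straightforward to compute; the sketch below (our own helper, with function names of our choosing) enumerates the components of T ∖ L and sums the K largest sizes:

```python
# Υ(L, k): the total size of the k largest components of T ∖ L.
def upsilon(n, edges, L, k):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, sizes = set(L), []
    for s in range(n):
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        sizes.append(size)
    # if there are fewer than k components this sums them all, i.e. n - |L|
    return sum(sorted(sizes, reverse=True)[:k])

# Star with center 0 and four leaves, center queried: four singletons remain.
edges = [(0, i) for i in range(1, 5)]
print(upsilon(5, edges, {0}, 2))  # 2
```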

###### Theorem 4.

For all trees T, for all 0-forked subsets L⁺ ⊆ V, and for all cutsize budgets K, we have lb(L⁺, K) ≥ Υ(L⁺, K)/2.

###### Proof.

We describe an adversarial strategy causing any algorithm to make at least Υ(L⁺, K)/2 mistakes in expectation, even when the cutsize budget K is known beforehand. Since L⁺ is 0-forked, each component of T ∖ L⁺ is a hinge-tree. Let H be the set of the K largest hinge-trees of T ∖ L⁺, and E(T′) be the set of all edges of T incident to at least one node of a hinge-tree T′. The adversary creates at most one φ-edge (a φ-edge is an edge (i, j) such that y_i ≠ y_j) in each edge set E(T′) for all 1-hinge-trees T′ ∈ H, exactly one φ-edge in each edge set E(T′) for all 2-hinge-trees T′ ∈ H, and no φ-edges in the edge set of any remaining hinge-tree. This is done as follows. By performing a depth-first visit of T, the adversary can always assign disagreeing labels to the two connection nodes of each 2-hinge-tree in H, and agreeing labels to the two connection nodes of each 2-hinge-tree not in H. Then, for each hinge-tree T′ ∈ H, the adversary assigns a unique random label to all nodes of T′, forcing |T′|/2 mistakes in expectation. The labels of the remaining hinge-trees not in H are chosen in agreement with their connection nodes. ∎

###### Remark 1.

Note that Theorem 4 holds for all query sets, not only those that are 0-forked, since any adversarial strategy against the query set L⁺ forces at least the same number of mistakes against the subset L ⊆ L⁺. Note also that it is not difficult to modify the adversarial strategy described in the proof of Theorem 4 in order to deal with algorithms that are allowed to choose the query nodes adaptively, depending on the labels of the previously selected nodes. The adversary simply assigns the same label to each node in the query set and then forces, with the same method described in the proof, an expected number of mistakes equal to half the total size of the ⌊K/2⌋ largest hinge-trees. Thus there are at most two φ-edges in each edge set E(T′) for all hinge-trees T′, yielding at most K φ-edges in total. The resulting (slightly weaker) bound is lb(L, K) ≥ Υ(L⁺, ⌊K/2⌋)/2. Theorem 7 and Corollary 8 can also be easily rewritten in order to extend the results in this direction.

### 4.2 Upper bounds

We now bound the total number of mistakes that pred makes on any labeled tree when the queries are decided by sel. We use Lemmas 1 and 2, together with the two lemmas below, to prove that m_pred(L⁺_sel, K) ≤ 10 lb(L, K) for all cutsize budgets K and for all node subsets L such that |L| ≤ |L⁺_sel|/8.

###### Lemma 5.

For all labeled trees (T, y) and for all 0-forked query sets L⁺, the number of mistakes made by pred satisfies m_pred(L⁺, y) ≤ Υ(L⁺, Φ_T(y)).

###### Proof.

As in the proof of Theorem 4, we first observe that each component of T ∖ L⁺ is a hinge-tree. Let E(T′) be the set of all edges of T incident to nodes of a hinge-tree T′, and let H be the set of hinge-trees T′ such that at least one edge of E(T′) is a φ-edge. Since E(T′) ∩ E(T″) = ∅ for all T′ ≠ T″, we have |H| ≤ Φ_T(y). Moreover, since for any T′ ∉ H there are no φ-edges in E(T′), the nodes of T′ must be labeled as its connection nodes. This, together with the prediction rule of pred, implies that pred makes no mistakes on any of the hinge-trees T′ ∉ H. Hence, the number of mistakes made by pred is bounded by the sum of the sizes of all hinge-trees in H, which (by definition of Υ) is bounded by Υ(L⁺, Φ_T(y)). ∎
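The bound behind this lemma can be stress-tested numerically. The self-contained harness below (entirely our own code) uses the nearest-queried-node predictor as a stand-in for pred and checks, on random trees with random labels and random query sets, that the mistakes never exceed Υ(L, Φ_T(y)):

```python
import random
from collections import deque

def random_tree(n, rng):
    # random attachment: node i links to a uniformly chosen earlier node
    return [(i, rng.randrange(i)) for i in range(1, n)]

def check_once(n, rng):
    edges = random_tree(n, rng)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    y = [rng.choice([0, 1]) for _ in range(n)]
    L = set(rng.sample(range(n), 3))
    # nearest-label prediction via multi-source BFS from the queried nodes
    pred = {u: y[u] for u in L}
    q = deque(L)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in pred:
                pred[v] = pred[u]
                q.append(v)
    mistakes = sum(pred[v] != y[v] for v in range(n) if v not in L)
    # Υ(L, Φ): sum of the Φ largest component sizes of T ∖ L
    phi = sum(y[u] != y[v] for u, v in edges)
    seen, sizes = set(L), []
    for s in range(n):
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        sizes.append(size)
    return mistakes <= sum(sorted(sizes, reverse=True)[:phi])

rng = random.Random(0)
print(all(check_once(20, rng) for _ in range(200)))  # True
```

The check succeeds because a component whose incident edges carry no φ-edge agrees with all of its adjacent queried nodes, so it is predicted perfectly; only components touched by φ-edges, at most Φ_T(y) of them, can contribute mistakes.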

The next lemma, whose proof is somewhat involved, provides the relevant properties of the component function Υ. Figure 3 helps visualize the main ingredients of the proof.

###### Lemma 6.

Given a tree , for all node subsets such that and for all integers , we have:  (a) ;  (b) .

###### Proof.

We prove part (a) by constructing, via sel, three bijective mappings , where is a suitable partition of , is a subset of such that any is all contained in a single connected component of , and the union of the domains of the three mappings covers the whole set . The mappings , and are shown to satisfy, for all forests (in this proof, we denote by the number of nodes in a set of nodes; with a slight abuse of notation, for all forests , we also denote by the sum of the number of nodes in all trees of ; finally, whenever contains a single tree, we refer to it as if it were a tree, rather than a singleton forest containing only one tree),

 |F| ≤ |μ₁(F)|,  |F| ≤ 2|μ₂(F)|,  |F| ≤ 2|μ₃(F)| .

Since each is contained in a connected component of , this will enable us to conclude that, for each tree , the forest of all trees mapped (via any of these mappings) to any node subset of has at most five times the number of nodes of . This proves the statement in (a).

The construction of these mappings requires some auxiliary definitions. We call -component each connected component of containing at least one node of . Let be the -th node selected by sel during the incremental construction of the query set . We distinguish between four kinds of nodes chosen by sel—see Figure 3 for an example.

Node is:

1. A collision node if it belongs to ;

2. a -node if, at time , the tree does not contain any node of ;

3. a -node if, at time , the tree contains nodes all belonging to the same connected component of ;

4. a -node if and, at time , the tree contains nodes , which do not belong to the same connected component of .

We now turn to building the three mappings.

simply maps each tree that is not a -component to the node set of itself. This immediately implies for all forests (which are actually single trees) in the domain of . Mappings and deal with the -components of . Let be the set of all such -components, and denote by , , and the set of all -nodes, -nodes, and -nodes, respectively. Observe that . Combined with the assumption , this implies that plus the total number of collision nodes must be larger than ; as a consequence, . Each node chosen by sel splits the tree into one component containing at least one node of and one or more components all contained in a single tree of . Now mapping can be constructed incrementally in the following way. For each -node selected by sel at time , sequentially maps any -component generated to the set of nodes in , the latter being just a subset of a component of . A future time step might feature the selection of a new -node within , but mapping would cover a different subset of such component of . Now, applying Lemma 1 to tree , we can see that . Since the selection rule of sel guarantees that the number of nodes in is larger than the number of nodes of any -component, we have , for any -component considered in the construction of .

Mapping maps all the remaining -components that are not mapped through . Let be an equivalence relation over defined as follows: iff is connected to by a path containing only -nodes and nodes in . Let be the sequence of nodes of any given equivalence class , sorted according to sel’s chronological selection. Lemma 3 applied to tree shows that . Moreover, the selection rule of sel guarantees that the number of nodes of cannot be smaller than the number of nodes of any -component. Hence, for each equivalence class containing nodes of type , we map through a set of arbitrarily chosen -components to . Since the size of each -component is , we can write , which implies for all in the domain of . Finally, observe that the number of -components that are not mapped through cannot be larger than , thus the union of mappings and do actually map all -components. This, in turn, implies that the union of the domains of the three mappings covers the whole set , thereby concluding the proof of part (a).

The proof of (b) is built on the definition of collision nodes, -nodes, -nodes and -nodes given in part (a). Let be the set of the first nodes chosen by sel. Here, we make a further distinction within the collision and -nodes. We say that during the selection of node , the nodes in are captured by . This notion of capture extends to collision nodes by saying that a collision node just captures itself. We say that is an initial -node (resp., initial collision node) if is a -node (resp., collision node) such that the whole set of nodes in captured by contains no nodes captured so far. See Figure 3 for reference. The simple observation leading to the proof of part (b) is the following. If is a -node, then cannot be larger than the component of that contains , which in turn cannot be larger than . This would already imply . Let now be an initial -node and be the unique component of containing one or more nodes of . Applying Lemma 1 to tree we can see that cannot be larger than , which in turn cannot be larger than . If at time the procedure sel selects then . Hence, the maximum integer such that is bounded by the number of -nodes plus the number of initial -nodes plus the number of initial collision nodes. We now bound this sum as follows. The number of -nodes is clearly bounded by . Also, any initial -node or initial collision node selected by sel captures at least one new node in , thereby implying that the total number of initial -nodes or initial collision nodes must be . After rounds, we are sure that the size of the largest tree of is not larger than the size of the largest component of , i.e.,  . ∎

We now put the above lemmas together to prove our main result concerning the number of mistakes made by pred on the query set chosen by sel.

###### Theorem 7.

For all trees T and all cutsize budgets K, the number of mistakes made by pred on the query set L⁺_sel satisfies

 m_pred(L⁺_sel, K) ≤ min_{L ⊆ V : |L| ≤ |L⁺_sel|/8} 10 lb(L, K) .
###### Proof.

Pick any L ⊆ V such that |L| ≤ |L⁺_sel|/8. Then

 m_pred(L⁺_sel, K) ≤ Υ(L⁺_sel, K)   (Lemma 5)
                   ≤ Υ(L_sel, K)    (A)
                   ≤ 5 Υ(L⁺, K)     (Lemma 6 (a))
                   ≤ 10 lb(L⁺, K)   (Theorem 4)
                   ≤ 10 lb(L, K)    (B) .

Inequality (A) holds because L_sel ⊆ L⁺_sel, and thus T ∖ L⁺_sel has connected components of smaller size than T ∖ L_sel. In order to apply Lemma 6 (a), we need L⁺ to be small enough relative to L_sel. This condition is seen to hold after combining Lemma 2 with our assumptions: |L⁺| < 2|L| ≤ |L⁺_sel|/4 ≤ |L_sel|/2. Finally, inequality (B) holds because any adversarial strategy against the larger query set L⁺ can also be used against the smaller query set L. ∎

Note also that Theorem 4 and Lemma 5 imply the following statement about the optimality of pred over 0-forked query sets.

###### Corollary 8.

For all trees T, for all cutsize budgets K, and for all 0-forked query sets L⁺, the number of mistakes made by pred satisfies m_pred(L⁺, K) ≤ 2 lb(L⁺, K).

In the rest of this section we derive a more interpretable bound on the mistakes of pred based on the function Ψ introduced in [6]. To this end, we prove that L⁺_sel minimizes Ψ∗ up to constant factors, and thus is an optimal query set according to the analysis of [6].

For any subset V′ ⊆ V, let Γ(V′, V ∖ V′) be the number of edges between nodes of V′ and nodes of V ∖ V′. Using this notation, we can write

 Ψ∗(L) = max_{∅ ≠ V′ ⊆ V∖L} |V′| / Γ(V′, V∖V′) .
###### Lemma 9.

For any tree T and any subset L ⊆ V the following holds.

• A maximizer of the ratio in Definition 1 exists which is included in the node set of a single component of T ∖ L;

• .

###### Proof.

Let V′ be any maximizer of the ratio in Definition 1. For the sake of contradiction, assume that no maximizer is included in a single component, so that the nodes of V′ belong to k > 1 distinct components of T ∖ L. Let V′_i be the subset of the nodes of V′ included in the node set of the i-th such component, for i = 1, …, k. Then |V′| = Σ_i |V′_i| and Γ(V′, V ∖ V′) = Σ_i Γ(V′_i, V ∖ V′_i). Now let j = argmax_i |V′_i| / Γ(V′_i, V ∖ V′_i). Since max_i a_i/b_i ≥ (Σ_i a_i)/(Σ_i b_i) for all positive a_i and b_i, we immediately obtain |V′_j| / Γ(V′_j, V ∖ V′_j) ≥ |V′| / Γ(V′, V ∖ V′), contradicting our assumption since V′_j is included in a single component. This proves (a). Part (b) is an immediate consequence of (a). ∎

###### Lemma 10.

For any tree T and any 0-forked subset L⁺ ⊆ V we have Ψ∗(L⁺) ≥ Υ(L⁺, 1)/2.

###### Proof.

Let T_max be the largest component of T ∖ L⁺ and V_max be its node set. Since L⁺ is a 0-forked query set, T_max must be either a 1-hinge-tree or a 2-hinge-tree. Since the only edges that connect a hinge-tree to external nodes are the edges leading to its connection nodes, we find that Γ(V_max, V ∖ V_max) ≤ 2. We can now write

 Ψ∗(L⁺) = max_{∅ ≠ V′ ⊆ V∖L⁺} |V′| / Γ(V′, V∖V′) ≥ |V_max| / Γ(V_max, V∖V_max) ≥ |V_max| / 2 = Υ(L⁺, 1) / 2

thereby concluding the proof. ∎

###### Lemma 11.

For any tree and any subset we have .

###### Proof.

Let be any set maximizing . Since