1 Introduction
In many social, biological, and technological networks, nodes have underlying attributes or variables that are correlated with the network’s topology. Blogs tend to link to other blogs with similar political views [1]. In vertebrate food webs, predators tend to eat prey whose mass is smaller, but not too much smaller, than their own [11]. Networks of word adjacencies are correlated with those words’ parts of speech [30]. In the Internet, different types of service providers form different kinds of links based on their capacities and business relationships [3, 13]—and so on.
There has been a great deal of work on efficient algorithms for community detection in networks (see [12, 32] for reviews). However, most of this work defines a “community” as a group of nodes with high density of connections within the group and a low density of connections to the rest of the network. While this type of assortative community structure is common in social networks, we are interested in a more general definition of functional community—a group of nodes that connect to the rest of the network in similar ways. A set of predators might form a functional group in a food web, not because they eat each other, but because they eat similar prey. In English, nouns often follow adjectives, but seldom follow other nouns. Even some social networks have disassortative structure where pairs of nodes are more likely to be connected if they are from different classes. For example, some human societies are divided into moieties, and only allow marriages between different moieties [21].
We consider a setting where the topology of the network is known, but the class labels of the nodes are not. This could be the case, for instance, if we have a network of blogs and hyperlinks between them (like citations, trackbacks, blogrolls, etc.) and we are trying to classify the blogs according to their political leanings. Another possible application is in online social networks, where friendships are known and we are trying to infer hidden demographic variables. This problem is sometimes referred to as collective classification
[35]. However, in that work the focus is on classification of individual nodes. In contrast, our focus is on the discovery of functional communities in the network, and our underlying generative model is designed around the assumption that these communities exist. We make no initial assumptions about the structure of the network—for instance, whether its groups are assortative, disassortative, or some mixture of the two. We assume that we can learn the label of any given node, but at a cost, say in terms of work in the field or laboratory. Our goal is to identify a small subset of nodes such that, once we explore them and learn their labels, we can accurately predict the labels of all the others.
We present a general approach to this problem. Our algorithm uses information-theoretic measures to decide which node to explore next—that is, which one will give us the most information about the rest of the network. We start with a probabilistic generative model of the network, called a stochastic block model [20, 38]
, in which groups connect to each other according to a matrix of probabilities. This model allows an arbitrary mixture of assortative and disassortative structure, as well as directed links from one group to another, and has been used to model networks in many fields (e.g.
[4, 19, 33]). We stress, however, that our approach could be applied equally well to many other probabilistic models, such as those where nodes belong to a mixture of classes [2], a hierarchy of classes and subclasses [10], locations in a latent geographical or social space [17], or niches in a food web [39]. It could also be applied to degree-corrected block models such as those in [23, 26, 31], which treat the nodes’ degrees as parameters rather than data to be predicted.
At each stage of the learning process, some of the nodes’ labels are already known and we need to decide which node to explore next. We do this by estimating, for each node, the mutual information between its label and the joint distribution of all the others’ labels, conditioned on the labels of the nodes that are known so far. We obtain this estimate by Gibbs sampling, giving each classification of nodes a probability integrated over the parameters of the block model. We then explore the node for which this mutual information is largest.
A key fact about the mutual information, which we argue is essential to our algorithm’s performance, is that it is not just a measure of uncertainty: it is a combination of uncertainty about a node’s label and the extent to which it is correlated with the labels of other nodes. Thus the algorithm explores nodes which maximize the expected amount of information it will gain about the entire network. It skips nodes whose labels seem obvious to it, or which are uncertain but have little effect on other nodes. In an assortative network, for instance, it starts by exploring nodes which are central to their communities, and then explores nodes along the boundaries between them, without being told in advance to pursue this strategy.
We also present an alternate approach which maximizes a quantity we call the average agreement. For each node $v$, this is the average number of nodes at which two independent samples of the Gibbs distribution agree, conditioned on the event that they agree at $v$. Like mutual information, average agreement is high for nodes that are highly correlated with the rest of the network. A similar idea (but not applied to networks) is present in [34].
We test our algorithm on three real-world networks: the social network of a karate club, a network of common adjacent words in a Charles Dickens novel, and a marine food web of species in the Antarctic. Each of these networks is curated in the sense that we possess the correct node labels, such as the faction of the social network each individual belongs to, the part of speech of each word, or the part of the habitat each species lives in. We judge our algorithm according to how accurately it predicts the labels of the unexplored nodes, as a function of the number of nodes it has explored so far. We also compare our algorithm with several simple heuristics, such as exploring nodes based on their degree or betweenness centrality, and find that it significantly outperforms them.
2 Related work
The idea of designing experiments by maximizing the mutual information between the variable we learn next and the joint distribution of the other variables, or equivalently the expected amount of information we gain about the joint distribution, has a long history in statistics, artificial intelligence, and machine learning, e.g. MacKay [25] and Guo and Greiner [16]. Indeed, it goes back to the work of Lindley [24] in the 1950s. However, to our knowledge this is the first time it has been coupled with a generative model to discover hidden variables in networks. In recent work, Zhu, Lafferty, and Ghahramani [41] study active learning of node labels using Gaussian fields and harmonic functions defined using the graph Laplacian. However, this technique only applies to networks where neighboring nodes are likely to be in the same class—that is, networks with assortative community structure. In contrast, our techniques are capable of learning about much more general types of network structure, including disassortative and directed relationships between functional communities.
Another approach to active learning of node labels is found in the work of Bilgic and Getoor [6] and Bilgic, Mihalkova, and Getoor [7], who use collective vector-based classifiers. By properly defining the collective relationships between nodes, both assortative and disassortative communities can be learned in this framework. However, our technique differs from theirs by using mutual information as the active learning criterion, which takes into account not just uncertainty, but correlations as well.
Additional works by Goldberg, Zhu, and Wright [14] and Tong and Jin [36]
also perform semi-supervised learning on graphs, and handle the disassortative case. But they work in a setting where they know, for each link, whether its endpoints should have the same or different labels, such as when one writer quotes another with pejorative words. In contrast, we work in a setting where we have no such information: only the topology is available to us, and there are no signs on the edges telling us whether we should propagate similar or dissimilar labels.
3 Model and methods
We represent our network as a directed graph $G$ with $n$ nodes. We assume that there are $k$ classes of nodes, so that each node $v$ has a class label $t_v \in \{1, \ldots, k\}$. We are given the graph $G$, and our goal is to learn the labels $t_v$. To do this, we assume that $G$ is generated by a probabilistic model, in which its topology is correlated with these labels.
The simplest such model, although by no means the only one to which our methods could be applied, is a stochastic block model [20, 38]. It assumes that for each pair of nodes $i, j$, there is an edge from $i$ to $j$ with a probability $p_{t_i t_j}$ that depends only on their labels, and that these events are independent. Given a classification $t$, i.e., a function assigning a label to each node, the probability of generating a given graph $G$ in this model is

$$P(G \mid t) = \prod_{r,s} p_{rs}^{\,e_{rs}} \, (1 - p_{rs})^{\,n_r n_s - e_{rs}} \qquad (1)$$

Here $n_r$ is the number of nodes of class $r$, and $e_{rs}$ is the number of edges from nodes of class $r$ to nodes of class $s$. If we wish to focus on undirected graphs, we can modify this expression by restricting the product to pairs of classes with $r \le s$. We can also forbid self-loops, if we wish, by replacing $n_r n_s$ in the $r = s$ term with $n_r(n_r - 1)$ or $\binom{n_r}{2}$ in the directed or undirected case respectively.
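To make the likelihood (1) concrete, the sketch below computes $\log P(G \mid t)$ for a directed graph (self-loops allowed) from the counts $n_r$ and $e_{rs}$. This is an illustrative implementation in our notation, with fixed edge probabilities $p_{rs}$, not code from any particular library.

```python
import math
from collections import Counter

def block_log_likelihood(adj, labels, p):
    """Log-probability of a directed graph under the block model (1).

    adj[i][j] = 1 if there is an edge i -> j (self-loops allowed here).
    labels[i] = class of node i; p[r][s] = probability of an edge from
    class r to class s.
    """
    n = len(adj)
    # n_r: number of nodes of each class
    n_r = Counter(labels)
    # e_rs: number of edges from class r to class s
    e = Counter()
    for i in range(n):
        for j in range(n):
            if adj[i][j]:
                e[(labels[i], labels[j])] += 1
    logL = 0.0
    for r in n_r:
        for s in n_r:
            pairs = n_r[r] * n_r[s]   # ordered pairs (i, j), including i = j
            e_rs = e[(r, s)]
            logL += e_rs * math.log(p[r][s])
            logL += (pairs - e_rs) * math.log(1.0 - p[r][s])
    return logL
```

Note that after the $O(n^2)$ counting pass, the likelihood itself only involves the $k^2$ class-pair counts, which is what makes resampling a single node's label cheap.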
This kind of stochastic block model is well-known in the machine learning, statistics, and network communities [5, 37, 15, 18, 19, 33] and has also been used in ecology to identify groups of species in food webs [4]. Unlike e.g. [37, 18, 19], we do not assume that $p_{rs}$ takes one value when $r = s$ and a smaller value when $r \ne s$. In other words, we do not assume an assortative community structure, where nodes are more likely to be connected to other nodes of the same class. Nor do we require in general that $p_{rs} = p_{sr}$, since the directed nature of the edges may be important—for instance, in a food web or word adjacency network.
If all classifications are equally likely a priori, then Bayes’ rule implies that the Gibbs distribution on the classifications, i.e., the probability of $t$ given $G$, is proportional to the probability of $G$ given $t$:

$$P(t \mid G) \propto P(G \mid t) \qquad (2)$$
In order to define $P(G \mid t)$, we need to integrate $P(G \mid t, \{p_{rs}\})$ over some prior probability distribution on the parameters $p_{rs}$. If we assume that the $p_{rs}$ are independent, then this integral factors over the product (1). In particular, if each $p_{rs}$ follows a beta prior $\mathrm{Beta}(\alpha, \beta)$, we have the Bayesian estimate of edge probabilities

$$\hat{p}_{rs} = \frac{e_{rs} + \alpha}{n_r n_s + \alpha + \beta} \qquad (3)$$

For reasonable choices of the hyperparameters $\alpha$ and $\beta$, the prior dominates only in small-data cases, such as very small networks or sparsely populated classes. For such cases, the beta prior allows the user to input some domain knowledge about, say, the (dis)assortativity of the target network’s community structure. In the limit of large data, the prior washes out and the data-driven community structure dominates. If the user wishes to remain agnostic, however, he or she can specify a uniform prior ($\alpha = \beta = 1$) and allow the learning algorithm to estimate the degree of assortativity, disassortativity, directedness, and so on entirely from the data. We take this approach in this paper, in which case
$$P(G \mid t) = \prod_{r,s} \frac{e_{rs}! \, (n_r n_s - e_{rs})!}{(n_r n_s + 1)!} \qquad (4)$$
An even simpler approach is to assume that the $p_{rs}$ take their maximum-likelihood values
$$p_{rs} = \frac{e_{rs}}{n_r n_s} \qquad (5)$$
and set $P(G \mid t) = P(G \mid t, \{p_{rs}\})$. This approach was used, for instance, for a hierarchical block model in [10]. When $k$ is fixed and the $n_r$ are large, this will give results similar to (4), since the integral over each $p_{rs}$ is tightly peaked around $e_{rs}/(n_r n_s)$. However, for any particular finite graph it makes more sense, at least to a Bayesian, to integrate over the $p_{rs}$, since they obey a posterior distribution rather than taking a fixed value. Moreover, averaging over the parameters as in (4) discourages overfitting, since the average likelihood goes down when we increase $k$ and hence the volume of the parameter space. This gives us a principled way to determine $k$ automatically, although in this paper we set $k$ by hand. Other methods to determine $k$ include minimum description length (MDL) techniques [33] and the Akaike information criterion [4].
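The integrated likelihood (4) is most safely computed in log space via `lgamma`, since the factorials overflow quickly. The sketch below is illustrative; it assumes `n_r` maps each class to its size and `e` maps class pairs to the edge counts $e_{rs}$.

```python
import math

def integrated_log_likelihood(n_r, e):
    """Log of the integrated likelihood (4),
        P(G|t) = prod_{r,s} e_rs! (n_r n_s - e_rs)! / (n_r n_s + 1)!,
    computed with lgamma (log-Gamma, where Gamma(m+1) = m!) to avoid
    overflow.  Assumes the uniform prior (alpha = beta = 1), as in the
    text; n_r maps class -> size, e maps (r, s) -> edge count."""
    logL = 0.0
    for r in n_r:
        for s in n_r:
            N = n_r[r] * n_r[s]           # number of possible edges r -> s
            e_rs = e.get((r, s), 0)
            logL += (math.lgamma(e_rs + 1)
                     + math.lgamma(N - e_rs + 1)
                     - math.lgamma(N + 2))
    return logL
```

Because this averaged likelihood shrinks as the number of classes $k$ grows, comparing it across different $k$ penalizes over-complex models, which is the overfitting control described above.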
4 Active Learning
In the active learning setting, the algorithm can learn the class label of any given node, but at a cost—say, by devoting resources in the laboratory or the field. Since these resources are limited, it has to decide which node to explore. Its goal is to explore a small set of nodes and use their labels to guess the labels of the remaining nodes.
One natural approach is to explore the node $v$ with the largest mutual information (MI) between its label $t_v$ and the labels $t_{-v}$ of the other nodes according to the Gibbs distribution (2). We can write this as the difference between the entropy of $t_{-v}$ and its conditional entropy given $t_v$,
$$I_v = H(t_{-v}) - H(t_{-v} \mid t_v) \qquad (6)$$
Here $H(t_{-v} \mid t_v)$ is the entropy, averaged over $t_v$ according to the marginal of $t_v$ in the Gibbs distribution, of the joint distribution of $t_{-v}$ conditioned on $t_v$. In other words, $I_v$ is the expected amount of information we will gain about $t_{-v}$, or equivalently the expected decrease in the entropy, that will result from learning $t_v$.
Since the mutual information is symmetric, we also have
$$I_v = H(t_v) - H(t_v \mid t_{-v}) \qquad (7)$$
where $H(t_v)$ is the entropy of the marginal distribution of $t_v$, and $H(t_v \mid t_{-v})$ is the entropy, on average, of the distribution of $t_v$ conditioned on the labels of the other nodes. Thus $I_v$ is large if (i) we are uncertain about $t_v$, so that $H(t_v)$ is large, and (ii) $t_v$ is strongly correlated with the labels of the other nodes, so that $H(t_v \mid t_{-v})$ is small.
We estimate these entropies by sampling classifications $t$ from the Gibbs distribution. Specifically, we use a single-site heat-bath Markov chain. At each step, it chooses a node $v$ uniformly from among the unexplored nodes, and chooses its label $t_v$ according to the conditional distribution proportional to $P(G \mid t)$, assuming that the labels of all other nodes stay fixed. In addition to exploring the space, this allows us to collect a sample of the conditional distribution of the chosen node and its entropy. Since $H(t_v \mid t_{-v})$ is the average of the conditional entropy, and since $H(t_v)$ is the entropy of the average conditional distribution, we can write

$$I_v = H\big(\langle P_v \rangle\big) - \big\langle H(P_v) \big\rangle \qquad (8)$$

where $P_v(r)$ is the probability that $t_v = r$ conditioned on the labels of the other nodes, and $\langle \cdot \rangle$ denotes the average, according to the Gibbs distribution, over the labels of the other nodes.
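In practice, each heat-bath update of a node yields one sample of its conditional distribution $P_v$, and (8) can then be estimated directly from a list of such samples: the entropy of the average distribution minus the average entropy. A minimal sketch (the function names are ours):

```python
import math

def entropy(dist):
    """Shannon entropy (in nats) of a probability vector."""
    return -sum(q * math.log(q) for q in dist if q > 0)

def mutual_information(cond_dists):
    """Estimate I_v via eq. (8): the entropy of the average conditional
    distribution of t_v, minus the average entropy of those conditionals.
    cond_dists is a list of probability vectors P_v(.) collected at
    successive heat-bath updates of node v."""
    k = len(cond_dists[0])
    m = len(cond_dists)
    avg = [sum(d[r] for d in cond_dists) / m for r in range(k)]
    return entropy(avg) - sum(entropy(d) for d in cond_dists) / m
```

For example, if the conditionals are always certain but split evenly between two labels, the estimate is $\log 2$; if every conditional is uniform, it is zero, matching the intuition that $I_v$ rewards correlation, not mere uncertainty.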
We offer no theoretical guarantees about the mixing time of this Markov chain, and it is easy to see that there are families of graphs and values of $k$ for which it takes exponential time. However, for the real-world networks we have tried so far, it appears to converge to equilibrium in a reasonable amount of time. We test for equilibrium by measuring whether the marginals change noticeably when the number of updates is increased by a constant factor. We improve our estimates by averaging over many runs, each one starting from an independently random initial state.
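For concreteness, a single heat-bath move might be sketched as follows. Purely to keep the example short, it holds the edge probabilities fixed rather than integrating them out, and recomputes the full block-model likelihood for each candidate label; an efficient implementation would instead update the counts $n_r$ and $e_{rs}$ incrementally.

```python
import math
import random

def heat_bath_update(adj, labels, v, k, p, rng):
    """One single-site heat-bath move: resample the label of node v from
    the conditional distribution proportional to P(G | t), with all
    other labels held fixed.  adj is a dense 0/1 adjacency matrix
    (self-loops allowed), p[r][s] the fixed edge probabilities."""
    def log_like(lab):
        logL = 0.0
        n = len(adj)
        for i in range(n):
            for j in range(n):
                q = p[lab[i]][lab[j]]
                logL += math.log(q) if adj[i][j] else math.log(1.0 - q)
        return logL
    # Likelihood weight of each candidate label for v
    weights = []
    for r in range(k):
        trial = labels[:]
        trial[v] = r
        weights.append(math.exp(log_like(trial)))
    # Sample a label proportionally to its weight
    x = rng.random() * sum(weights)
    for r in range(k):
        x -= weights[r]
        if x < 0:
            labels[v] = r
            return labels
    labels[v] = k - 1
    return labels
```

On a strongly assortative two-node example, the move all but surely assigns the two endpoints of an edge the same label, as one would expect.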
We say that the algorithm is in stage $j$ if it has already explored $j$ nodes. In that stage, it estimates $I_v$ for each unexplored node $v$, using the Markov chain to sample from the Gibbs distribution conditioned on the labels of the nodes explored so far. It then explores the node with the largest MI. We provide it with the correct label $t_v$ from the curated network, and it moves on to the next stage.
The mutual information is not the only quantity we might use to identify which node to explore. Another is the average agreement, which we define as follows. Given two classifications $t, t'$, define their agreement $A(t, t')$ as the number of nodes on whose labels they agree,
$$A(t, t') = \big| \{ v : t_v = t'_v \} \big| \qquad (9)$$
Since our goal is to label as many nodes correctly as possible, we wish we could maximize the agreement between a classification $t$, drawn from the Gibbs distribution, and the correct classification $t^*$. However, the algorithm doesn’t know $t^*$, so it assumes that it is drawn from the Gibbs distribution as well. Exploring a node $v$ projects onto the part of the joint distribution of pairs $(t, t')$ where $t_v = t'_v$. So, we define $\mathrm{AA}(v)$ as the expected agreement between two classifications drawn independently from the Gibbs distribution, conditioned on the event that they agree at $v$:
$$\mathrm{AA}(v) = \frac{\big\langle A(t, t') \, \delta_{t_v, t'_v} \big\rangle}{\big\langle \delta_{t_v, t'_v} \big\rangle} \qquad (10)$$
We estimate the numerator and denominator of $\mathrm{AA}(v)$ using the same heat-bath Gibbs sampler as for $I_v$, except that we sample independent pairs of classifications by starting the Markov chain at two independently random initial states.
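Given paired samples from the two independent runs, the ratio in (10) reduces to a conditional average, as in this sketch (the data layout is our own choice):

```python
def average_agreement(pairs, v):
    """Estimate AA(v), eq. (10): the expected number of nodes on which
    two independent Gibbs samples agree, conditioned on their agreeing
    at node v.  `pairs` is a list of (t, t2) label-vector pairs drawn
    from two independent runs of the sampler."""
    num = 0.0   # sum of agreements A(t, t2) over pairs agreeing at v
    den = 0     # number of pairs agreeing at v
    for t, t2 in pairs:
        if t[v] == t2[v]:
            den += 1
            num += sum(1 for a, b in zip(t, t2) if a == b)
    return num / den if den else 0.0
```

As with the MI estimator, this is a Monte Carlo estimate whose quality depends on how well the two chains have equilibrated.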
5 Results and discussion
We tested our algorithms on three different networks from three different fields. The first is Zachary’s Karate Club [40]. As shown in Fig. 1, this is a social network consisting of the 34 members of a karate club, where undirected edges represent friendships. The club split into two factions, indicated by diamonds and circles respectively. One of them centered around the instructor (node 1) and the other around the club president (node 34), and each faction eventually formed its own club. Shaded nodes are more peripheral, and have weaker ties to their communities. This network is highly assortative, with a high density of edges within each faction and a low density of edges between them.
We judge the performance of each algorithm by asking, at each stage and for each node, with what probability the Gibbs distribution assigns it the correct label. In each stage we sampled the Gibbs distribution from many independently chosen initial conditions, running the heat-bath Markov chain for each one and computing averages over the latter part of each run. Increasing the number of Markov chain steps per stage produced only marginal improvements in performance. Fig. 2 shows what fraction of the unexplored nodes are assigned the correct label with probability at least a given threshold, for various thresholds, as a function of the stage.
After exploring just four or five nodes, both of our algorithms succeed in correctly predicting the labels of most of the remaining nodes—i.e., to which faction they belong—with high accuracy. The AA algorithm performs slightly better than MI, achieving near-perfect accuracy after exploring nine nodes. Of course, the Karate Club network is quite small, and there are many community-finding algorithms that classify the two factions with perfect or near-perfect accuracy [32, 12].
Perhaps more interesting is the order in which our algorithms choose to explore the nodes. In Fig. 3, we sort the nodes in order of the median stage at which they are explored. Error bars show confidence intervals over the independent runs of each algorithm. Some nodes show a large variance in the stage in which they are explored, while others are consistently explored at the beginning or end of the process. Both algorithms start by exploring nodes 1 and 34, which are central to their respective communities. Note that these nodes are chosen, as we argued above, not just because their labels are uncertain, but because they are highly correlated with the labels of other nodes. After learning that nodes 1 and 34 are in different classes, the algorithms “know” that the network consists of two assortative communities. They then explore nodes such as 3, 9, and 10, which lie at the boundary between these communities. Once the boundary is clear, they can easily predict the labels of the remaining nodes. The last nodes to be explored are those which lie so deep inside their communities that their labels are not in doubt.
The second network consists of the 60 most commonly occurring nouns and the 60 most commonly occurring adjectives in Charles Dickens’ novel David Copperfield. A directed edge connects any pair of words that appear adjacently in the text, pointing from the preceding word to the following one. Excluding eight words which are disconnected from the rest leaves a network with 112 nodes [29]. Unlike Zachary’s Karate Club, this network is both directed and highly disassortative. Of the 1494 edges, 1123 of them point from adjectives to nouns. This lets us classify most nodes early on, simply by labeling a node as an adjective or noun if its outdegree or indegree is large.
Accordingly, our algorithms focus their attention on words about which they are uncertain, like “early,” “low,” and “nothing,” whose out-degrees and in-degrees in the text are roughly equal, and words like “perfect” that precede words of both classes (see Fig. 4, where green and yellow nodes represent nouns and adjectives respectively; rectangular nodes are explored first, and elliptical ones last). Once these nodes are resolved, both algorithms achieve high accuracy after exploring 20 nodes, and close to perfect accuracy after exploring 65 nodes (see Fig. 5).
In each stage we sampled the Gibbs distribution from many independently chosen initial conditions, running the heat-bath Markov chain for each one and computing averages over the latter part of each run; increasing the number of Markov chain steps per stage produced only marginal improvements in performance. As in Fig. 2, the vertical axis shows the fraction of unexplored nodes which are labeled correctly by the conditional Gibbs distribution with probability at least a given threshold. The performance of the two algorithms is similar in the later stages, but unlike the Karate Club, here MI performs noticeably better than AA in the early stages.
The third network is a food web of species in the Weddell Sea in the Antarctic [11, 9, 22], with edges pointing to each predator from its prey. This data set is very rich, but we focus on two particular variables—the feeding type and the habitat in which the species lives. The feeding type takes six values, namely primary producer, omnivorous, herbivorous/detrivorous, carnivorous, detrivorous, and carnivorous/necrovorous. The habitat variable takes five values, namely pelagic, benthic, benthopelagic, demersal, and land-based.
We show results of our algorithms for both variables in Fig. 6. The results are averaged over 100 runs of each algorithm. In each stage we sampled the Gibbs distribution from many independently chosen initial conditions, running the heat-bath Markov chain for each one and computing averages over the latter part of each run. For the feeding type, after exploring half the nodes, both algorithms correctly label the large majority of the remaining nodes. For the habitat variable, both algorithms are less accurate, although AA performs somewhat better than MI. Note that the accuracy only includes the unexplored nodes, not the nodes we have already explored. Thus it can decrease if we explore easily classified nodes early on, so that hard-to-classify nodes form a larger fraction of the remaining ones.
Fig. 6 shows that both algorithms get to a state where they are confident, but wrong, about many of the unexplored nodes. For the feeding type variable, for instance, early in the process the AA algorithm labels most of the remaining nodes correctly with high probability, but labels the rest correctly with low probability. In other words, it has a high degree of confidence about all the nodes, but is wrong about many of them. Its accuracy improves as it explores more nodes, but it doesn’t achieve high accuracy on all the unexplored nodes until only a small fraction of them remain.
Why is this? We argue that the fault lies not with our learning algorithms and the order in which they explore the nodes, but with the stochastic block model and its limited ability to model the data. For example, for the habitat variable, these algorithms perform well on pelagic, demersal, and land-based species. But the benthic habitat, which is the largest and most diverse, includes species with many feeding types and trophic levels.
These additional variables have a large effect on the topology, but they are not taken into account by the block model. As a result, more than half the benthic species are mislabeled by the block model in the following sense: even if we condition on the correct habitats of all the other species, the species’ most likely habitat according to the model is pelagic, benthopelagic, demersal, or land-based. Specifically, 219 of the 488 species are mislabeled by the most likely block model, many of them with high confidence.
Of course, we can also regard our algorithms’ mistakes as evidence that these habitat classifications are not cut and dried. Indeed, ecologists recognize that there are “connector species” that connect one habitat to another, and belong to some extent to both.
To test our hypothesis that it is the block model’s inability to model the data that causes some nodes to be misclassified, we artificially modified the data set to make it consistent with the block model. Starting with the nodes’ original class labels, we updated the habitat of each species to its most likely value according to the block model, given the habitats of all the other species. After iterating this process six times, we reached a fixed point where each species’ habitat is consistent with the block model’s predictions. On this synthetic data set both of our learning algorithms perform nearly perfectly, predicting the habitat of every species with near-perfect accuracy after exploring just a small fraction of them.
More generally, it is important to remember that the topology of the network is only imperfectly correlated with the nodes’ types. Zachary [40] relates that one member of the Karate Club joined the instructor’s faction even though the network’s topology suggests that he was more strongly connected to the president. The reason is that he was only three weeks away from a test for his black belt when the split occurred. He had already invested four years learning the instructor’s style of karate, and if he had joined the president’s club he would have had to start over with a white belt. In any real-world network, there is information of this kind that is not reflected in the topology and which is hidden from our algorithm. If a node is of a given class for idiosyncratic reasons like these, we cannot expect any algorithm based solely on topology and the other nodes’ class labels—no matter how sophisticated a probabilistic model we use—to correctly classify it.
6 Comparison with Simple Heuristics
We compared our active learning algorithms with several simple heuristics. These include exploring the node with the highest degree in the subgraph of unexplored nodes, exploring the node with the highest betweenness centrality (the fraction of shortest paths that go through it, see [8, 27, 28]) in the subgraph of unexplored nodes, and exploring a node chosen uniformly at random from the unexplored ones. We judge the performance of these heuristics using the same Gibbs sampling process as for MI and AA.
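For reference, the highest-degree heuristic amounts to the following simple selection rule on the subgraph of unexplored nodes (a sketch; the betweenness variant would replace the degree count with a shortest-path computation):

```python
def pick_highest_degree(adj, explored):
    """Baseline heuristic: return the unexplored node with the highest
    (undirected) degree in the subgraph induced by the unexplored nodes.
    adj is a dense 0/1 adjacency matrix; explored is a set of node ids."""
    n = len(adj)
    best, best_deg = None, -1
    for v in range(n):
        if v in explored:
            continue
        deg = sum(1 for u in range(n)
                  if u not in explored and u != v and (adj[v][u] or adj[u][v]))
        if deg > best_deg:
            best, best_deg = v, deg
    return best
```

Restricting to the unexplored subgraph, as here, means a hub whose neighbors have all been explored loses its priority, which matches the intended behavior of these baselines.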
In Fig. 7, we show the results of these heuristics at a fixed accuracy threshold on all three networks, including both the habitat and feeding type variables in the food web. On Zachary’s Karate Club (left) our algorithms outperform these heuristics consistently. In the David Copperfield network (right), the highest-degree and highest-betweenness heuristics enjoy an early lead, but quickly hit a ceiling and are surpassed by MI and AA.
For the Weddell Sea food web (bottom), the highest-degree and highest-betweenness heuristics perform poorly throughout the learning process. One reason for this is that many nodes with high degree or high betweenness are easy to classify from the labels of their neighbors. By exploring these nodes first, these heuristics leave themselves mainly with hard-to-classify nodes. The random-node heuristic performs surprisingly well early on, but all three heuristics are worse than MI or AA once they have explored half the nodes.
7 Conclusion
Active learning, using mutual information or average agreement coupled with a generative model, offers a new approach to analyzing networks where the topology is known, but knowledge of class labels is incomplete and costly to obtain. We have shown for three networks, one social, one lexical, and one biological, that our algorithms do a good job of predicting the labels of unexplored nodes after exploring a relatively small fraction of the network, correctly recognizing both assortative and disassortative functional communities. Certainly not all networks are well-described by the simple block model we use here, but our approach can be generalized to probabilistic network models which take information on the nodes’ locations or degrees into account.
Acknowledgments
We are grateful to Joel Bader, Aaron Clauset, Jennifer Dunne, Nathan Eagle, Brian Karrer, Jon Kleinberg, Mark Newman, Cosma Shalizi, and Jerry Zhu for helpful conversations, and to Ute Jacob for the Weddell Sea food web data. J.-B. R. is also grateful to the Santa Fe Institute for their hospitality. This work was supported by the McDonnell Foundation.
References
 [1] L. Adamic and N. Glance. The political blogosphere and the 2004 US election: Divided they blog. In Proc. 3rd Intl. Workshop on Link Discovery, 2005.
 [2] E. M. Airoldi, D. M. Blei, S. E. Fienberg, and E. P. Xing. Mixed membership stochastic blockmodels. J. Machine Learning Research, 9:1981–2014, 2008.
 [3] D. Alderson, L. Li, W. Willinger, and J. C. Doyle. Understanding internet topology: principles, models, and validation. IEEE/ACM Trans. Networks, 13(6):1205–1218, 2005.
 [4] S. Allesina and M. Pascual. Food web models: a plea for groups. Ecology Letters, 12:652–662, 2009.
 [5] P. J. Bickel and A. Chen. A nonparametric view of network models and Newman–Girvan and other modularities. Proc. Natl. Acad. Sci., 106:21068–21073, 2009.
 [6] M. Bilgic and L. Getoor. Link-based active learning. In NIPS Workshop on Analyzing Networks and Learning with Graphs, 2009.
 [7] M. Bilgic, L. Mihalkova, and L. Getoor. Active learning for networked data. In Proc. Intl. Conf. on Machine Learning, 2010.
 [8] U. Brandes. A faster algorithm for betweenness centrality. Journal of Mathematical Sociology, 25(2):163–177, 2001.
 [9] U. Brose, L. Cushing, E. L. Berlow, T. Jonsson, C. Banasek-Richter, L. F. Bersier, J. L. Blanchard, T. Brey, S. R. Carpenter, M. F. Blandenier, et al. Body sizes of consumers and their resources. Ecology, 86(9):2545–2545, 2005.
 [10] A. Clauset, C. Moore, and M. E. J. Newman. Hierarchical structure and the prediction of missing links in networks. Nature, 453(7191):98–101, 2008.
 [11] U. Brose et al. Consumer-resource body-size relationships in natural food webs. Ecology, 87(10):2411–2417, 2006.
 [12] S. Fortunato. Community detection in graphs. Physics Reports, 2009.
 [13] L. Gao and J. Rexford. Stable internet routing without global coordination. IEEE/ACM Trans. Networks, 9(6):681–692, 2001.
 [14] A. B. Goldberg, X. Zhu, and S. Wright. Dissimilarity in graph-based semi-supervised classification. J. Machine Learning Research W&P, 2:155–162, 2007.
 [15] R. Guimera and M. Sales-Pardo. Missing and spurious interactions and the reconstruction of complex networks. Proc. Natl. Acad. Sci., 106:22073–22078, 2009.
 [16] Y. Guo and R. Greiner. Optimistic active learning using mutual information. In Proc. Intl. Joint Conf. on Artificial Intelligence, 2007.
 [17] M. S. Handcock, A. E. Raftery, and J. M. Tantrum. Modelbased clustering for social networks. J. Royal Statist. Soc. A, 170(2):1–22, 2007.
 [18] M. B. Hastings. Community detection as an inference problem. Physical Review E, 74(3):035102, 2006.
 [19] J. M. Hofman and C. H. Wiggins. Bayesian approach to network modularity. Physical Review Letters, 100(25):258701, 2008.
 [20] P. W. Holland, K. B. Laskey, and S. Leinhardt. Stochastic blockmodels: first steps. Social networks, 5:109–137, 1983.
 [21] M. Houseman and D. R. White. Taking Sides: Marriage Networks and Dravidian Kinship in Lowland South America, pages 214–243. Transformations of Kinship. Smithsonian Institution Press, 1998.
 [22] U. Jacob. Trophic Dynamics of Antarctic Shelf Ecosystems, Food Webs and Energy Flow Budgets. PhD thesis, University of Bremen, 2005.
 [23] B. Karrer and M. E. J. Newman. Stochastic blockmodels and community structure in networks, 2010. Preprint, arXiv:1008.3926v1.
 [24] D. V. Lindley. On a measure of the information provided by an experiment. Ann. Math. Statist., 27(4):986–1005, 1956.
 [25] D. J. MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4):590–604, 1992.
 [26] M. Mørup and L. K. Hansen. Learning latent structure in complex networks. In NIPS Workshop on Analyzing Networks and Learning with Graphs, 2009.
 [27] M. Newman. Scientific collaboration networks. II. Shortest paths, weighted networks, and centrality. Physical Review E, 64(1), 2001.
 [28] M. E. J. Newman. A measure of betweenness centrality based on random walks. Social Networks, 27(1):39–54, 2005.
 [29] M. E. J. Newman. Finding community structure in networks using the eigenvectors of matrices. Physical Review E, 74(3):036104, 2006.
 [30] M. E. J. Newman and E. A. Leicht. Mixture models and exploratory analysis in networks. Proc. Natl. Acad. Sci., 104:9564–9569, 2006.
 [31] A. S. Patterson, Y. Park, and J. S. Bader. Degree-corrected block models. Manuscript.
 [32] M. A. Porter, J. P. Onnela, and P. J. Mucha. Communities in networks. Notices of the American Mathematical Society, 56(9):1082–1097, 2009.
 [33] M. Rosvall and C. T. Bergstrom. An informationtheoretic framework for resolving community structure in complex networks. Proc. Natl. Acad. Sci., 104(18):7327, 2007.
 [34] N. Roy and A. McCallum. Toward optimal active learning through sampling estimation of error reduction. In Proc. 18th Intl. Conf. on Machine Learning, pages 441–448, 2001.
 [35] P. Sen, G. M. Namata, M. Bilgic, L. Getoor, B. Gallagher, and T. EliassiRad. Collective classification in network data. AI Magazine, 29(3):93–106, 2008.
 [36] W. Tong and R. Jin. Semi-supervised learning by mixed label propagation. In Proc. 22nd Intl. Conf. on Artificial Intelligence, volume 1, pages 651–656, 2007.
 [37] J. Čopič, M. O. Jackson, and A. Kirman. Identifying community structures from network data. B.E. Press Journal of Theoretical Economics, 9(1):Article 30, 2009.
 [38] Y. J. Wang and G. Y. Wong. Stochastic blockmodels for directed graphs. J. American Statistical Assn., 82(397):8–19, 1987.
 [39] R. J. Williams, A. Anandanadesan, and D. Purves. The probabilistic niche model reveals the niche structure and role of body size in a complex food web. PLoS One, 5(8):e1209, 2010.
 [40] W. W. Zachary. An information flow model for conflict and fission in small groups. J. Anthropological Research, 33(4):452–473, 1977.
 [41] X. Zhu, J. Lafferty, and Z. Ghahramani. Combining active learning and semi-supervised learning using Gaussian fields and harmonic functions. In Proc. ICML-2003 Workshop on the Continuum from Labeled to Unlabeled Data, 2003.