Transductive Classification Methods for Mixed Graphs

06/26/2012
by   Sundararajan Sellamanickam, et al.

In this paper we provide a principled approach to solving a transductive classification problem involving a similar graph (edges tend to connect nodes with the same label) and a dissimilar graph (edges tend to connect nodes with opposing labels). Most existing methods, e.g., Information Regularization (IR) and the Weighted vote Relational Neighbor classifier (WvRN), assume that the given graph is only a similar graph. We extend the IR and WvRN methods to deal with mixed graphs. We evaluate the proposed extensions on several benchmark datasets as well as two real-world datasets and demonstrate the usefulness of our ideas.


1 Introduction

Consider the problem of transductive classification in a relational graph consisting of labeled and unlabeled nodes. Most methods for this problem assume that connected nodes have the same labels. In many applications this assumption is violated to varying degrees depending on the underlying relational graph; that is, many edges can be formed by pairs of nodes having different class labels (this is referred to as label dissimilarity). When this happens the performance of these methods can deteriorate significantly. If such ‘dissimilar’ edges can be identified via domain knowledge or other means, they can be eliminated to improve performance. Even better, it makes sense to collect the identified dissimilar edges in a dissimilar graph and use it differently, but together with the similar graph (the set of edges connecting nodes having the same label), to improve classification. This paper is rooted in this idea. Let us refer to the combination of similar and dissimilar graphs simply as a mixed graph. Recently Goldberg et al. [3] extended the graph-based semi-supervised learning method of Sindhwani et al. [7] to deal with mixed graphs. In this method classification is fundamentally based on content features of nodes, with the mixed graph strongly guiding the classification. If $f_i$ and $f_j$ denote the classifier outputs associated with nodes $i$ and $j$ that form a dissimilar edge, Goldberg et al.'s method [3] includes a squared loss term $(f_i + f_j)^2$ in the training objective function, thus putting pressure on $f_i$ and $f_j$ to have opposing signs. In many applications, content features are either weak or unavailable. Such problems have to be addressed in a purely graph-transductive setting. In another related work, Tong and Jin [10] proposed a graph-based approach using semi-definite programming (SDP) to exploit both similar and dissimilar graphs. The problem solved in their work is a non-convex programming problem whose solution can lead to local optima. In contrast, the methods proposed in this paper are simpler and more efficient. Further, our extension of the information regularization method for mixed graphs (IR-MG) leads to a convex programming problem and the proposed algorithm converges to the global solution.

The main aim of this paper is to extend and explore existing transductive methods to deal with mixed graphs (even when only non-content-based relational graphs are available). We take up only binary classification in this paper. There are many worthy methods in this group; examples are: Information Regularization (IR) [1], the Weighted vote Relational Neighbor classifier (WvRN) [5], Local and Global Consistency (LGC) [11] and Gaussian Fields and Harmonic Functions (GFHF) [12]. To keep the paper short we take only the first two methods for extension. Both of these methods are based on probabilistic ideas; thus, instead of the squared loss used by Goldberg et al. [3], we devise a divergence-based convex loss function to deal with dissimilar edges. Empirical results show that the extensions are very effective, even though the ideas are simple and straightforward.

Depending on the way they are formed, the similar and dissimilar graphs in a given problem may differ in pureness. So it is useful to have a hyperparameter ($\gamma$) that mixes the effects of the two graphs (e.g., via relative weighting between the losses corresponding to the two graphs). We make use of such a parameter; our experiments on the various datasets point to the importance of this parameter. Though Goldberg et al. [3] do not use such a parameter, it appears that it would be useful for that method too. The quality of the graphs, as it relates to the classification solution, can also be approximately measured using a quantity called the node assortativity coefficient (NAC) [4]. The NAC is easy to compute and gives a good indication of the usefulness of the graphs for classification. It can also be used to quickly select a decent value for $\gamma$.

To demonstrate the effectiveness of our extended methods we conduct detailed experiments, like Goldberg et al. [3], on standard academic benchmark datasets in which mixed graphs are constructed systematically but artificially. We also show the usefulness of our methods on real-world datasets involving web pages from shopping domains. In these problems mixed graphs arise naturally. For example, two web pages that either have strong structural similarity or have co-citation links from a common third page may have the same label, while web pages that have extremely poor structural correlation may have opposing labels.

The paper is organized as follows. In section 2 we give the extensions of IR and WvRN for mixed graphs. In section 3 we define NAC and discuss its usefulness; hyperparameter tuning is also discussed there. Experimental results are given in section 4 and we conclude with section 5.

The following notation will be used in this paper. Let $G = (V, E, W)$ be an undirected graph with $V$ representing the set of nodes, and $E$ and $W$ representing the set of edges and associated weights respectively. Assume that $W = \{w_{ij}\}$, where $w_{ij}$ represents the edge weight between the nodes $i$ and $j$. In a graph we typically have both similar and dissimilar edges. Similar edges connect nodes belonging to the same class and dissimilar edges connect nodes belonging to different classes. Since an edge can be either similar or dissimilar, we can separate the graph into similar and dissimilar graphs (denoted as $G_s$ and $G_d$ respectively). The nodes, edges and weights corresponding to these graphs are defined accordingly as $G_s = (V, E_s, W_s)$ and $G_d = (V, E_d, W_d)$. Let $\tilde{p}_i$ and $p_i$ denote two probability distributions over the set of possible labels, associated with the node $i$. Usually $\tilde{p}_i$ represents any known or prior distribution for node $i$, and $p_i$ represents the probability distribution estimate obtained from a given method. In this paper we are interested only in the binary classification problem, so $\tilde{p}_i$ and $p_i$ are 2-dimensional vectors. Also, let $\tilde{p} = \{\tilde{p}_i : i \in V\}$ and $p = \{p_i : i \in V\}$. Let $\mathcal{L}$ and $\mathcal{U}$ denote the sets of labeled and unlabeled nodes respectively.

2 Proposed Methods

In this section we show how two existing methods, namely, Information Regularization (IR) and the Weighted vote Relational Neighbor classifier (WvRN), can be extended to handle the mixed graph scenario.

2.1 Information Regularization in a mixed graph setting

In the conventional setting only similar edges are assumed. That is, we have $G = G_s$, and the edge weights in some sense indicate our belief or confidence in the assumption that the connected nodes belong to the same class. Within that assumption, we consider solving the transductive classification problem by optimizing the objective function:

$$\min_{p} \; \sum_{i \in \mathcal{L}} D(\tilde{p}_i, p_i) \;+\; \lambda \sum_{(i,j) \in E} w_{ij}\, D(p_i, p_j) \qquad (1)$$

where $D(\cdot,\cdot)$ denotes any divergence measure of the dissimilarity between two distributions. Several divergence measures have been used in the literature; they include the Kullback-Leibler (KL) divergence, the Jensen-Shannon (JS) divergence, the Jensen-Renyi (JR) divergence, etc. Here, we consider the Jensen-Shannon divergence, which is a symmetric and smoothed version of the KL divergence. When $D$ is taken as the JS divergence, the regularization term is nothing but the information regularization proposed by Corduneanu and Jaakkola [1] in a graph setting. The first term in (1) is a data-fitting term and measures how well the estimate $p_i$ matches the input distribution $\tilde{p}_i$, $i \in \mathcal{L}$. The second term is a regularization term and it regularizes the solution with respect to the underlying relational graph. The regularization constant $\lambda$ trades off between the data-fitting and regularization terms. When two nodes are strongly connected their distributions are expected to be similar, and the regularization term enforces this behavior. Clearly, if the individual terms are convex then the solution is unique.
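For reference, the JS divergence between two distributions $p$ and $q$ is the average KL divergence of each distribution to their midpoint (a standard definition, stated here for completeness):

$$\mathrm{JS}(p, q) = \tfrac{1}{2}\,\mathrm{KL}(p \,\|\, m) + \tfrac{1}{2}\,\mathrm{KL}(q \,\|\, m), \qquad m = \tfrac{1}{2}(p + q), \qquad \mathrm{KL}(p \,\|\, q) = \sum_{c} p(c) \log \frac{p(c)}{q(c)}.$$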

Equation (1) assumes that all the edges are similarity edges (i.e., $E = E_s$). Therefore, depending on the extent to which this assumption is violated, the performance suffers. To address this problem we propose the following modified objective function:

$$\min_{p} \; \sum_{i \in \mathcal{L}} D(\tilde{p}_i, p_i) \;+\; \lambda_s \sum_{(i,j) \in E_s} w^s_{ij}\, D(p_i, p_j) \;+\; \lambda_d \sum_{(i,j) \in E_d} w^d_{ij}\, D(p_i, T p_j) \qquad (2)$$

where $T = \mathbf{1}\mathbf{1}^\top - I$ is a transformation matrix and $\lambda_d$ is another regularization constant. Let $q_j = T p_j$. Clearly $q_j$ is still a distribution, and the transformation facilitates divergence measurement of the distributions $p_i$ against the distributions $q_j$ for the edges in $E_d$. Here, $\mathbf{1}$ represents a vector of all ones; in the binary case $T$ simply swaps the two class probabilities, i.e., $T p_j = (p_j(2), p_j(1))^\top$. Therefore the dissimilar edges will also help in reinforcing the class distributions in a positive way.

Corduneanu and Jaakkola [1] proved that the solution to (1) with information regularization is unique. Using the constraint that each $p_i$ remains a probability distribution, they suggested a distributed propagation algorithm that finds the solution in an iterative fashion. In a similar way one can show that (2) is also convex and that its solution can be found in an iterative fashion. The proof is based on the standard log-sum inequality and properties of the KL divergence [2]. Therefore (2) is a natural extension of the information regularization approach to the mixed graph setting; we will refer to this method as information regularization for mixed graphs (IR-MG). The algorithm is given in Algorithm 1.

We note that when we fix $p_i = \tilde{p}_i$ for all $i \in \mathcal{L}$ and optimize only over $\{p_i : i \in \mathcal{U}\}$, the solution depends only on the second term in (1) and on the second and third terms in (2). Such a setting is useful when the labels are clean and the graph is not extremely dense in some regions [9]. Both of these requirements can often be met in many practical applications. When they cannot be met, the methods proposed in [9] are useful for solving (1). Such methods can be appropriately extended to find the solution for our problem of mixed graphs.

In a normalized graph setting, one way to normalize is to perform node-level normalization using each node's degree separately in each graph. That is, set $\hat{w}^s_{ij} = w^s_{ij}/d^s_i$ and $\hat{w}^d_{ij} = w^d_{ij}/d^d_i$, where $d^s_i = \sum_j w^s_{ij}$ and $d^d_i = \sum_j w^d_{ij}$. Then, we can set $\lambda_s = \lambda\gamma$ and $\lambda_d = \lambda(1-\gamma)$, where $\lambda > 0$ and $\gamma \in [0,1]$. In such a case, $\lambda$ is the overall regularization constant and $\gamma$ weighs the similar and dissimilar contributions. When we fix $p_i = \tilde{p}_i, i \in \mathcal{L}$, and set $\lambda = 1$, we have only one parameter, $\gamma$. In practice, since the graphs are impure to varying degrees (i.e., it may not be possible to construct pure similar and dissimilar graphs), the parameter $\gamma$ plays an important role in achieving improved performance.

   Input: $G_s$, $G_d$, $\{\tilde{p}_i : i \in \mathcal{L}\}$ and $\gamma$
   For all nodes $i \in \mathcal{U}$, initialize $p_i$ to the class prior (obtained from the known labeled nodes); fix $p_i = \tilde{p}_i$ for all $i \in \mathcal{L}$.
   repeat
      for each edge $(i,j) \in E_s$ do: update the edge-level distributions for the similar term of (2)
      for each edge $(i,j) \in E_d$ do: update the edge-level distributions for the dissimilar term of (2), using $T p_j$ in place of $p_j$
      for each node $i \in \mathcal{U}$ do: update $p_i$ by propagating from its incident edge-level distributions
   until the decrease in the objective (2) falls below a tolerance
Algorithm 1: IR-MG Algorithm
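Since the propagation updates follow [1], a compact way to pin down what IR-MG optimizes is to evaluate objective (2) directly. The following sketch does this with the JS divergence and the binary swap matrix $T$; the dictionary-based graph representation and the helper names are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

def js(p, q):
    """Jensen-Shannon divergence: a symmetric, smoothed version of KL."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

T = np.array([[0.0, 1.0], [1.0, 0.0]])  # binary swap matrix 11^T - I

def ir_mg_objective(p, p_tilde, labeled, E_s, E_d, lam=1.0, gamma=0.5):
    """Objective (2): data fit on labeled nodes plus similar/dissimilar regularizers.

    p, p_tilde : dicts mapping node id -> length-2 distribution (estimate / prior)
    E_s, E_d   : lists of weighted edges (i, j, w) in the similar / dissimilar graph
    """
    fit = sum(js(p_tilde[i], p[i]) for i in labeled)
    sim = sum(w * js(p[i], p[j]) for i, j, w in E_s)
    dis = sum(w * js(p[i], T @ p[j]) for i, j, w in E_d)
    return fit + lam * (gamma * sim + (1.0 - gamma) * dis)
```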

2.2 WvRN Classification in a mixed graph setting

The original probabilistic Weighted vote Relational Neighbor classifier (with relaxation labeling) [5] was formulated to solve the collective classification problem (for similar graphs only), where the class distributions of a subset of nodes are known and fixed. The class distributions of the remaining (unlabeled) nodes are then obtained by an iterative algorithm. It has two components, namely, the weighted vote relational neighbor classifier component and the relaxation labeling (RL) component. The relaxation labeling component performs collective inferencing and keeps track of the current probability estimates for all unlabeled nodes at each time instant $t$. These frozen estimates are used by the relational classifier. The relational classifier computes the probability distribution of each unlabeled node $i$ as the weighted sum of the probability distributions of its neighbors $j \in N_i$ with weights $w_{ij}$; that is,

$$p_i = \frac{1}{Z_i} \sum_{j \in N_i} w_{ij}\, p_j \qquad (3)$$

where $N_i$ denotes the set of neighbors of node $i$ and $Z_i = \sum_{j \in N_i} w_{ij}$ is a normalizing constant. Since relaxation labeling may not converge, simulated annealing is sometimes performed to ensure convergence [5].

In a mixed graph setting, we can modify (3) as:

$$p_i = \frac{1}{Z_i}\Big[\gamma \sum_{j \in N^s_i} w^s_{ij}\, p_j \;+\; (1-\gamma) \sum_{j \in N^d_i} w^d_{ij}\, T p_j\Big] \qquad (4)$$

where $Z_i = \gamma \sum_{j \in N^s_i} w^s_{ij} + (1-\gamma) \sum_{j \in N^d_i} w^d_{ij}$, and $N^s_i$ and $N^d_i$ denote the neighbors of node $i$ in the similar and dissimilar graphs respectively. As in the IR-MG method, the parameter $\gamma$ weighs the similar and dissimilar graphs. With the modification given in (4), we refer to this method as WvRN-MG. The algorithm is given in Algorithm 2.

   Input: $G_s$, $G_d$, $\{\tilde{p}_i : i \in \mathcal{L}\}$ and $\gamma$
   For all nodes $i \in \mathcal{U}$, initialize $p_i$ to the class prior (obtained from the known labeled nodes); fix $p_i = \tilde{p}_i$ for all $i \in \mathcal{L}$.
   repeat
      for each node $i \in \mathcal{U}$ do: compute $p_i$ using (4), with the neighbor estimates frozen at their current values
   until the estimates $\{p_i : i \in \mathcal{U}\}$ converge
Algorithm 2: WvRN-MG Algorithm
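A minimal sketch of the WvRN-MG iteration follows, assuming the reconstructed update (4) above; it freezes the current estimates within each sweep (the relaxation-labeling step) but omits the simulated-annealing schedule of [5]. The data structures (`nbrs_s`, `nbrs_d`) are illustrative, not from the paper.

```python
import numpy as np

T = np.array([[0.0, 1.0], [1.0, 0.0]])  # swaps the two class probabilities

def wvrn_mg(p, labeled, nbrs_s, nbrs_d, gamma=0.5, n_iter=100, tol=1e-6):
    """WvRN-MG sweeps per update (4): each unlabeled node's distribution is the
    gamma-weighted average of its similar neighbors' distributions and the
    label-flipped distributions of its dissimilar neighbors.

    p      : dict node -> length-2 numpy array (labeled nodes stay fixed)
    nbrs_s : dict node -> list of (neighbor, weight) pairs in the similar graph
    nbrs_d : dict node -> list of (neighbor, weight) pairs in the dissimilar graph
    """
    unlabeled = [i for i in p if i not in labeled]
    for _ in range(n_iter):
        frozen = {i: v.copy() for i, v in p.items()}  # relaxation labeling: freeze estimates
        delta = 0.0
        for i in unlabeled:
            num, Z = np.zeros(2), 0.0
            for j, w in nbrs_s.get(i, []):
                num += gamma * w * frozen[j]
                Z += gamma * w
            for j, w in nbrs_d.get(i, []):
                num += (1.0 - gamma) * w * (T @ frozen[j])
                Z += (1.0 - gamma) * w
            if Z > 0.0:
                new = num / Z
                delta = max(delta, float(np.abs(new - p[i]).max()))
                p[i] = new
        if delta < tol:  # stop when the estimates stabilize
            break
    return p
```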

3 Graph Characteristics and Setting

Characteristics of graphs play a major role in achieving good classification performance. One of the key characteristics of a relational graph is the correlation of the class variable across related entities. A graph is said to have homophily when the related entities in the graph have the same label; this was studied by early social network researchers. All methods that make use of this assumption are essentially homophily-based methods [5]. There is also a link-centric notion of homophily known as assortativity, studied in [6]. The assortativity coefficient [6] measures the homophily characteristics based on the correlation between the classes linked by the edges in the graph. Macskassy and Provost [5] developed a variant of this coefficient. It is based on the graph's node assortativity matrix $e$, where $e_{kl}$ represents, for all nodes of class $k$, the average weighted fraction of their weighted edges that connect them to nodes of class $l$, such that $\sum_{k,l} e_{kl} = 1$. Then the node assortativity coefficient (NAC) is defined as

$$\mathrm{NAC} = \frac{\sum_k e_{kk} - \sum_k a_k b_k}{1 - \sum_k a_k b_k}$$

where $a_k$ and $b_k$ denote the sum of the $k$-th row and the $k$-th column of $e$ respectively. This coefficient takes values in $[-1, 1]$, with the extremes indicating strong connectivity between dissimilar and similar classes respectively. Macskassy and Provost [5] studied the usefulness of this coefficient for edge selection. Macskassy [4] used it to weigh different edge types when there are multiple graphs: each edge was scaled by its graph's NAC value, and if this value was negative the scaling factor for that edge type (graph) was set to zero. Since the original WvRN is a homophily-based method, Macskassy and Provost [5] set the weight to zero for graphs having negative NAC values. We illustrate below how this coefficient can be used to set $\gamma$ in the mixed graph scenario.
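As a concrete illustration, the sketch below computes an assortativity coefficient from the weighted edges among labeled nodes using the formula above. Note that it builds the edge-level mixing matrix of Newman [6]; the node-level variant of [5] instead averages per-node edge fractions first, so treat the normalization here as an assumption.

```python
import numpy as np

def nac(edges, labels, n_classes=2):
    """Assortativity coefficient from the weighted edges among labeled nodes.

    edges  : list of (i, j, w) undirected weighted edges
    labels : dict node -> class index in {0, ..., n_classes - 1}
    """
    e = np.zeros((n_classes, n_classes))
    for i, j, w in edges:
        if i in labels and j in labels:
            e[labels[i], labels[j]] += w
            e[labels[j], labels[i]] += w  # undirected edge: count both directions
    total = e.sum()
    if total == 0.0:
        return 0.0
    e /= total                       # normalize so that sum_kl e_kl = 1
    a, b = e.sum(axis=1), e.sum(axis=0)
    ab = float(a @ b)
    return float((np.trace(e) - ab) / (1.0 - ab))
```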

In our proposed methods, the mixture parameter $\gamma$ plays an important role since it decides the degree to which each graph controls the performance. In practice this parameter can be set in two ways. One way is to set $\gamma$ using the NAC values of the similar and dissimilar graphs. Let $\mathrm{NAC}_s$ and $\mathrm{NAC}_d$ denote estimates of the NAC values of the similar and dissimilar graphs respectively. Note that, if the dissimilar graph is pure (for example, as in section 4.1 below) then $\mathrm{NAC}_d = -1$. Therefore, we can set $\gamma$ from these estimates, giving relatively more weight to the graph whose coefficient has the larger magnitude of the expected sign. If the estimates do not have the expected signs ($\mathrm{NAC}_s > 0$, $\mathrm{NAC}_d < 0$), it is not a good idea to use the above estimate of $\gamma$. For best performance it is a good idea to set $\gamma$ using cross-validation (CV). However, unlike NAC-based estimation, the CV technique is expensive since we need to run the training algorithm several times. Finally, note that, since both techniques are based on labeled nodes, a good estimate of $\gamma$ can be obtained only when the number of labeled nodes is not too small. In section 4.3 we illustrate the usefulness of these techniques on several benchmark datasets.
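A hypothetical cross-validation loop for $\gamma$ might look as follows; the grid, the fold scheme and the `run_method` interface (train with some labeled nodes, return AUC on the held-out ones) are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def select_gamma_cv(run_method, labeled, grid=None, n_folds=5, seed=0):
    """Pick gamma by cross-validation over the labeled nodes.

    run_method(train_nodes, heldout_nodes, gamma) is assumed to train with the
    given labeled nodes and return the AUC measured on the held-out nodes.
    """
    grid = np.linspace(0.0, 1.0, 11) if grid is None else grid
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(np.asarray(labeled)), n_folds)
    best_gamma, best_auc = None, -np.inf
    for gamma in grid:
        # average held-out AUC across folds for this candidate gamma
        aucs = [run_method(np.setdiff1d(labeled, f), f, gamma) for f in folds]
        if np.mean(aucs) > best_auc:
            best_gamma, best_auc = float(gamma), float(np.mean(aucs))
    return best_gamma
```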

Dataset              n      |E|      m      pos(%)   NAC    L
G50C                 550    40940    550    50       0.47   10-50
WINDOWS-MAC          1946   124806   7511   50.62    0.49   50-400
WebKB-PAGELINK       1051   269044   4840   21.88    0.57   10-50
WebKB-LINK           1051   72446    1840   21.88    0.55   10-50
IMDBALL              1441   48371    -      57.32    0.36   50-400
CORAALL1             4240   71802    -      6.25     0.69   50-800
CORAALL2             4240   71802    -      8.28     0.59   50-800
CORAALL3             4240   71802    -      12.33    0.60   50-800
CORAALL4             4240   71802    -      32.17    0.67   50-800
UG-Product (G_s)     1166   54462    -      39.71    0.99   40-160
UG-Product (G_d)     1166   47327    -      39.71    0.76   40-160
UG-Listing (G_s)     1166   54462    -      54.55    0.96   40-160
UG-Listing (G_d)     1166   47327    -      54.55    0.52   40-160
CU-Product (G_s)     1433   26201    -      46.55    0.44   40-160
CU-Product (G_d)     1433   71650    -      46.55    0.23   40-160
CU-Listing (G_s)     1433   26201    -      35.03    0.96   40-160
CU-Listing (G_d)     1433   71650    -      35.03    0.46   40-160
Table 1: Properties of datasets: n and |E| denote the number of nodes and edges in the graph respectively; m, pos(%), NAC and L denote the number of (content) features, the percentage of positive examples, the node assortativity coefficient value and the range of the number of labeled nodes respectively.

4 Experiments

In this section we present results obtained from various experiments conducted on several academic benchmark datasets as well as on real-world datasets formed from web pages of shopping web sites. First we study the performance of the proposed methods, namely IR-MG and WvRN-MG, on mixed graphs constructed from the already available relational graphs of benchmark datasets; these results demonstrate the gains that accrue as a result of moving from a noisy similar graph towards a quite pure similar-dissimilar graph combination. Next, we evaluate performance on similar and dissimilar graphs that arise naturally from web pages of shopping sites. Finally, we compare the relative performance of our methods as well as evaluate them against the method of Goldberg et al. [3].

4.1 Experiments on partitions of a given graph into similar and dissimilar graphs

Usually a given relational graph ($G$) with partially labeled nodes is impure and consists of both similar and dissimilar edges. For our experiments we extract similar and dissimilar graphs (denoted as $G_s$ and $G_d$) from $G$ using the following model. Similar to the work of Goldberg et al. [3], we use an oracle which takes a pair of nodes and tells whether the edge formed by them is similar or dissimilar. We construct $G_d$ by querying the oracle and randomly picking a percentage ($\delta$) of the dissimilar edges connecting only unlabeled nodes in $G$. Note that the learner only knows that these edges are dissimilar; it does not know the actual labels of the nodes. Thus, the dissimilar graph is a pure graph consisting of only unlabeled nodes. The similar graph is then obtained as $G_s = G - G_d$. Note that, unlike $G_d$, $G_s$ may not be pure. This is because we vary the percentage $\delta$ of edges picked from $G$ to construct $G_d$; also, even if we pick all the dissimilar edges connecting unlabeled nodes, there can still be some dissimilar edges connecting labeled and unlabeled nodes left in $G_s$. This model is different from the model used by Goldberg et al. [3]. In that work, the original graph is taken as $G_s$, and $G_d$ is constructed by taking random pairs of nodes having opposing labels using the oracle. Our model is appropriate when we are given a graph and there is some way of filtering out dissimilar edges from it. On the other hand, the model used by Goldberg et al. [3] is appropriate when we are given a similar graph and, additionally, one can construct a dissimilar graph using domain knowledge. In both models the dissimilar graph is pure; one can also think of experimenting with alternate models which introduce some noise into the dissimilar graph.
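A sketch of this oracle-based partition is given below, with access to the true labels playing the role of the oracle; the function name and the edge-list representation are illustrative.

```python
import random

def split_mixed_graph(edges, true_labels, unlabeled, delta, seed=0):
    """Partition a given graph G into G_s and G_d using an oracle (sketch).

    edges       : list of (i, j, w) edges of the original graph G
    true_labels : dict node -> class label, playing the role of the oracle
    delta       : fraction of dissimilar edges between unlabeled nodes moved to G_d
    """
    rng = random.Random(seed)
    # dissimilar edges connecting only unlabeled nodes, per the oracle
    dissim = [(i, j, w) for i, j, w in edges
              if i in unlabeled and j in unlabeled and true_labels[i] != true_labels[j]]
    E_d = rng.sample(dissim, int(delta * len(dissim)))
    picked = {(i, j) for i, j, _ in E_d}
    # everything not moved into G_d stays in G_s (which may remain impure)
    E_s = [(i, j, w) for i, j, w in edges if (i, j) not in picked]
    return E_s, E_d
```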

A summary description of the various benchmark datasets used in the experiments is given in Table 1. All the datasets correspond to binary classification problems. The datasets G50C, WINDOWSMAC, WebKB-PAGELINK and WebKB-LINK used in [7] are taken from http://people.cs.uchicago.edu/~vikass/research.html. G50C is an artificial dataset generated from two unit-covariance normal distributions with equal probabilities; the means are adjusted so that the true Bayes error is 5% [7]. The WINDOWSMAC dataset is a subset of the 20-newsgroups dataset with documents belonging to the two categories windows and mac. The WebKB dataset arises from hypertext-based categorization of web documents with two classes, course and non-course. The WebKB-LINK dataset uses features derived from the anchortext associated with links on other web pages that point to a given web page. The WebKB-PAGELINK dataset uses both PAGE and LINK features, where PAGE features are derived from the content of a page. In each of these four datasets, following [7, 3], we construct the relational graph with $k$-nearest neighbors using Gaussian weights. Specifically, the weight between kNN points $x_i$ and $x_j$ is $w_{ij} = \exp(-\|x_i - x_j\|^2 / 2\sigma^2)$, while the other weights are zero; $k$ and $\sigma$ are set separately for the G50C, WINDOWSMAC and WebKB datasets, following [7]. We also consider the datasets CORAALL and IMDBALL that do not have any input feature representation; their relational graph matrices are constructed purely from the underlying relations. The CORAALL dataset is derived from the CORA dataset, which comprises computer science research papers; the relational graph is constructed using both co-citation and common-author relationships between papers. This dataset has seven classes, with each class representing a topic such as Neural Networks or Genetic Algorithms. We converted this seven-class problem into seven one-versus-all binary classification problems and the corresponding datasets are referred to as CORAALL1, CORAALL2 and so on, with the number indicating the positive class. The IMDBALL dataset is based on networked data from the Internet Movie Database (IMDb) (http://www.imdb.com); here classification is about predicting movie success as determined by box-office receipts (high-revenue versus low-revenue), and the relational graph is constructed between movies by linking them when they share a production company. The weight of an edge in the resulting graph is the number of production companies two movies have in common [5]. The CORAALL and IMDBALL datasets are available with the toolkit described in [5].
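A sketch of this graph construction is shown below; the dense pairwise-distance computation and the symmetrization choice are our own simplifications, and the default values of k and sigma are placeholders (the paper sets them per dataset following [7]).

```python
import numpy as np

def knn_gaussian_graph(X, k=10, sigma=1.0):
    """kNN graph with Gaussian edge weights, as described above.

    Returns a symmetric weight matrix with w_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
    for j among the k nearest neighbors of i, and zero elsewhere.
    """
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d2[i])[1:k + 1]  # skip index i itself (distance 0)
        W[i, nn] = np.exp(-d2[i, nn] / (2.0 * sigma ** 2))
    return np.maximum(W, W.T)  # symmetrize: keep an edge if either endpoint selected it
```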

Next we give more details on the experiments. We provide plots only for a few datasets and comment on the other datasets when needed. For each dataset, we varied the number of labeled nodes ($L$), the mixture parameter $\gamma$ and the percentage of dissimilar edges ($\delta$) in $G$ used for forming the dissimilar graph. In all our experiments we considered 25 realizations, where each realization corresponds to one random stratified labeling of the nodes.

We now present various observations from the experimental study conducted on all the academic benchmark datasets given in Table 1. Compared to using the original graph, significant performance improvements were observed with the use of the mixed graph in a vast majority of cases of varying $L$, $\gamma$ and $\delta$, on all the datasets. Performance results on two representative datasets, viz. IMDBALL and CORAALL1, are given in Figure 1. It is clearly seen that the best performance is achieved for some intermediate values of $\gamma$; see for instance the results of CORAALL1, IR-MG, $L$ = 80 and 200. This demonstrates that although the similar graph is noisy, it is still useful in the mixed graph setting for getting improved performance. In the case of the IMDBALL dataset, the best performance is achieved at low values of $\gamma$ and smaller $\delta$ values; this is because the similar graph is more noisy (the original graph has a node assortativity coefficient of only 0.36). However, for large $\delta$ values, the similar graph becomes purer (but still noisy) and the best performance is again achieved for some intermediate values of $\gamma$.

We also conducted paired-t statistical significance tests to compare the IR-MG and WvRN-MG methods on each dataset. On the original graph, the WvRN-MG method was slightly better on the WebKB-PAGELINK, CORAALL1, CORAALL2, CORAALL3 and CORAALL4 datasets, and the significance reduces as the number of labeled nodes is increased. Next we consider the mixed graph case. In the case of the CORAALL1 dataset, we observed that the IR-MG method started performing better in an intermediate range of values of $\gamma$ as the graph becomes purer. At higher values of $\gamma$ (corresponding to the original graph when $\delta = 0$, and to subsequently purer similar graphs as $\delta$ increases), no statistical significance was found. Similar observations were made in the case of the IMDBALL dataset. Overall, we found that the IR-MG method performs better on purer graphs.

In practice we need automatic ways, using domain knowledge or otherwise, of identifying similar and dissimilar edges. This is an important research topic, but it is beyond the scope of this paper. In several applications similar and dissimilar graphs occur naturally, and both graphs are typically noisy. We demonstrate the usefulness of the proposed methods on one such application next.

Figure 1: AUC score performance of the IR-MG and WvRN-MG methods on the IMDBALL and CORAALL1 datasets under two different label size conditions. The numbers in the legend (applicable to all plots) indicate the percentage $\delta$ of dissimilar edges (with respect to the total number of dissimilar edges connecting unlabeled nodes) in $G_d$. The dotted black line indicates the performance with the original graph $G$.
Figure 2: AUC score performance of the IR-MG and WvRN-MG methods on two shopping domain datasets under three different label sizes (40, 80 and 160, indicated as 1, 2 and 3) for the dissimilar graph (dark blue), the similar graph (blue) and the mixed graph (three cases: with the NAC (green), CV (orange) and best (maroon) $\gamma$ values), in that order.
Dataset (L)          Method            AUC (mean ± std) for increasing δ
G50C (50)            WvRN-MG (NAC)     0.9844 ± 0.0043   0.9886 ± 0.0040   0.9983 ± 0.0014
                     WvRN-MG (CV)      0.9916 ± 0.0042   0.9930 ± 0.0061   0.9970 ± 0.0043
                     IR-MG (NAC)       0.9851 ± 0.0042   0.9892 ± 0.0039   0.9986 ± 0.0012
                     IR-MG (CV)        0.9914 ± 0.0055   0.9938 ± 0.0059   0.9967 ± 0.0048
                     Goldberg et al.   0.9886 ± 0.0016   0.9946 ± 0.0011   0.9980 ± 0.0007
WINDOWSMAC (100)     WvRN-MG (NAC)     0.9632 ± 0.0056   0.9714 ± 0.0050   0.9927 ± 0.0026
                     WvRN-MG (CV)      0.9811 ± 0.0091   0.9887 ± 0.0084   0.9938 ± 0.0061
                     IR-MG (NAC)       0.9639 ± 0.0056   0.9722 ± 0.0050   0.9933 ± 0.0024
                     IR-MG (CV)        0.9815 ± 0.0090   0.9883 ± 0.0082   0.9940 ± 0.0015
                     Goldberg et al.   0.9714 ± 0.0029   0.9863 ± 0.0012   0.9950 ± 0.0003
WebKB-LINK (40)      WvRN-MG (NAC)     0.9465 ± 0.0120   0.9524 ± 0.0120   0.9696 ± 0.0074
                     WvRN-MG (CV)      0.9626 ± 0.0073   0.9723 ± 0.0059   0.9800 ± 0.0041
                     IR-MG (NAC)       0.9432 ± 0.0113   0.9499 ± 0.0118   0.9693 ± 0.0074
                     IR-MG (CV)        0.9614 ± 0.0077   0.9718 ± 0.0062   0.9801 ± 0.0042
                     Goldberg et al.   0.9451 ± 0.0260   0.9545 ± 0.0230   0.9607 ± 0.0201
Table 2: AUC performance comparison of the Goldberg et al., IR-MG and WvRN-MG methods on various datasets, for three increasing values of δ. The number of labeled examples (L) used for each dataset is indicated in parentheses. The number of realizations in each case was 25. For IR-MG and WvRN-MG, the technique used to set γ (NAC or CV) is indicated in parentheses.

4.2 Evaluation on natural graphs from shopping sites

We also evaluated the proposed methods on natural graphs constructed using the structural signatures (shingles) of web pages from the shopping sites http://www.uncommongoods.com (referred to as UG) and http://www.compusa.com (referred to as CU). The similar and dissimilar graphs were constructed as follows. A similar edge between two pages was formed when their structural signatures had a match score of at least 6 (the scores lie in the range [0,8]), and a dissimilar edge was added when the match score was 0. We used a binary representation (i.e., an edge with unit weight or no edge) for the graphs since the signatures are not accurate; in practice both the dissimilar and similar graphs therefore have noise. We considered two binary classification problems. In the first problem, the goal was to differentiate product detail pages from the rest. In the second problem, the intent was to distinguish product listing pages from the others. The properties of the datasets are given in Table 1.
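A small sketch of this thresholding step is shown below; the score dictionary and the thresholds simply mirror the description above, while the function name is our own.

```python
def shingle_graphs(match_scores, hi=6, lo=0):
    """Build binary similar/dissimilar edge lists from structural match scores.

    match_scores : dict mapping a page pair (i, j) -> match score in [0, 8]
    Scores >= 6 yield a similar edge and scores == 0 a dissimilar edge; all
    edges get unit weight since the signatures are noisy.
    """
    E_s = [(i, j, 1.0) for (i, j), s in match_scores.items() if s >= hi]
    E_d = [(i, j, 1.0) for (i, j), s in match_scores.items() if s == lo]
    return E_s, E_d
```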

Since the similar and dissimilar graphs are fixed, we varied only the number of labeled nodes ($L$). We evaluated the AUC performance of the IR-MG and WvRN-MG methods on the similar graph ($G_s$) and the dissimilar graph ($G_d$) separately. Further, we evaluated the performance on the mixed graph for the values of $\gamma$ set by the NAC and CV based estimation techniques. To study the quality of these two estimation techniques, we also found the best AUC score given by the optimal $\gamma$ (searched over the same grid of values in $[0,1]$ used in the cross-validation). The average performance over 25 partitions for each of these settings is presented in Figure 2. It is clearly seen that the performance with the dissimilar graph is inferior to the performance with the similar graph, particularly when $L$ is small. This correlates well with the NAC values given in Table 1. Although the dissimilar graph is quite impure, it is still useful. This is clearly seen in Figure 2, where the performance with the mixed graph is better than the performance with the similar and dissimilar graphs used alone; see for instance the results for CU-Listing, WvRN-MG, $L$ = 40. This improvement is quite significant when $L$ is small. Further, the performance with the cross-validation choice of $\gamma$ is very close to the best performance and is only slightly inferior for small $L$. The NAC-based estimate of $\gamma$ becomes useful for sufficiently large values of $L$. The performance difference between the IR-MG and WvRN-MG methods was statistically significant at the 0.05 level only on the CU-Product and CU-Listing datasets for small $L$. We have not reported results for the UG-Product dataset since the AUC scores were almost the same (around 0.99) for all the graphs and methods.

4.3 Comparison with Goldberg et al.’s method

Since Goldberg et al.'s method [3] depends on content features, we restrict our comparison to the four datasets G50C, WINDOWSMAC, WebKB-PAGELINK and WebKB-LINK. Goldberg et al. give two methods: one is based on regularized least squares (Lap-RLSC) and the other on SVMs (Lap-SVM) [7]. Both methods perform similarly. We use Lap-RLSC for comparing against IR-MG and WvRN-MG. For IR-MG and WvRN-MG we tuned $\gamma$ using both cross-validation (CV) and NAC values; CV tuning is obviously better and it is the one that should be used. The results for the methods are given in Table 2 for various values of $\delta$. Clearly all three methods give competitive performance. The differences are statistically significant only for lower values of $\delta$. As in [3], for Goldberg et al.'s method we did not tune the hyperparameters for each choice of $\delta$. In the next section we show how such tuning can be done and demonstrate its usefulness. In terms of computational speed, Goldberg et al.'s method is comparable with IR-MG; WvRN-MG has an advantage over the other two methods because it is much faster (about 10 times) while also providing competitive performance.

4.4 Setting the parameter $\gamma$ in Goldberg et al.'s method

The above experiments clearly indicate the importance of $\gamma$ in the mixed graph setting for getting improved performance. It would be useful to introduce such a parameter in Goldberg et al.'s method [3] also. One way of doing this is as follows. In their method there is a graph regularization term $f^\top M f$ which smoothens the decision function. Here, $f$ corresponds to a vector of function values at the nodes of the graph and the matrix $M$ is a mixed-graph analog of the graph Laplacian $L$. The combinatorial graph Laplacian matrix is defined as $L = D - W$, where $D$ is the diagonal degree matrix with $d_{ii} = \sum_j w_{ij}$, and its normalized version is given as $\hat{L} = D^{-1/2} L D^{-1/2}$. $M$ is defined using an edge-type matrix $S$, whose $(i,j)$-th element is $s_{ij} = 1$ if there is a similarity edge between $i$ and $j$ and $s_{ij} = -1$ if there is a dissimilarity edge, combined with $W$ through the Hadamard (elementwise) product $\circ$, i.e., $M = D - W \circ S$. To introduce $\gamma$ we can modify $M$ to be a convex combination of the matrices $M_s$ and $M_d$ corresponding to the similar and dissimilar graphs; that is, we set $M = \gamma M_s + (1-\gamma) M_d$. Using a convex combination of Laplacians has been studied [8] in the context of multi-view learning. Here, $M_s$ is nothing but the graph Laplacian obtained using $W_s$, and $M_d$ is the analogous matrix obtained from $W_d$. To verify the usefulness of this, we conducted a simple experiment on the LINK dataset by setting $\gamma = 0.7$, the graph regularization weight to 1.0, and $L = 20$. While the original method gave an average AUC score of 0.93, the modified method gave 0.96. As earlier, $\gamma$ can be tuned using cross-validation along with the other hyperparameters.
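A sketch of this convex combination follows. The similar-graph term uses the standard combinatorial Laplacian; for the dissimilar graph we use the signless Laplacian $D_d + W_d$, whose quadratic form penalizes $(f_i + f_j)^2$ over dissimilar edges, which is a natural reading of the construction above, though the exact per-graph matrices are our assumption.

```python
import numpy as np

def mixed_laplacian(W_s, W_d, gamma=0.5):
    """Convex combination of per-graph regularizers (sketch).

    W_s, W_d : symmetric weight matrices of the similar / dissimilar graphs
    """
    M_s = np.diag(W_s.sum(axis=1)) - W_s  # f' M_s f = 0.5 * sum w_ij (f_i - f_j)^2
    M_d = np.diag(W_d.sum(axis=1)) + W_d  # f' M_d f = 0.5 * sum w_ij (f_i + f_j)^2
    return gamma * M_s + (1.0 - gamma) * M_d
```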

5 Conclusion

In this paper we provided a principled approach to extending probabilistic-score-based transductive classification methods to mixed graphs. The proposed methods are simple and efficient. We highlighted the importance of the mixing hyperparameter $\gamma$ and showed how it can be set, particularly when the number of labeled nodes is not too small. Experiments on several benchmark and real-world datasets show the usefulness of the proposed methods.

6 Acknowledgments

The authors are thankful to the anonymous reviewers for their helpful comments.

References

  • [1] A. Corduneanu and T. Jaakkola. Distributed information regularization on graphs. In NIPS, pages 297–304, 2005.
  • [2] T. Cover and J. Thomas. Elements of Information Theory. Wiley, 1991.
  • [3] A. B. Goldberg, X. Zhu, and S. Wright. Dissimilarity in graph-based semi-supervised classification. In AISTATS, 2007.
  • [4] S. A. Macskassy. Improving learning in networked data by combining explicit and mined links. In AAAI, 2007.
  • [5] S. A. Macskassy and F. Provost. Classification in networked data: A toolkit and a univariate case study. JMLR, 8:935–983, 2007.
  • [6] M. E. J. Newman. Mixing patterns in networks. Physical Review E, 2003.
  • [7] V. Sindhwani, P. Niyogi, and M. Belkin. Beyond the point cloud: from transductive to semi-supervised learning. In ICML, 2005.
  • [8] V. Sindhwani, P. Niyogi, and M. Belkin. A co-regularization approach to semi-supervised learning with multiple views. In ICML Workshop on learning with multiple views, 2005.
  • [9] A. Subramanya and J. Bilmes. Entropic graph regularization in non-parametric semi-supervised classification. In NIPS, 2009.
  • [10] W. Tong and R. Jin. Semi-supervised learning by mixed label propagation. In AAAI, 2007.
  • [11] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, pages 321–328, 2004.
  • [12] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, pages 912–919, 2003.