Sparse Fused Lasso (NYC Taxi dataset)
We introduce a family of adaptive estimators on graphs, based on penalizing the ℓ_1 norm of discrete graph differences. This generalizes the idea of trend filtering [Kim et al. (2009), Tibshirani (2014)], used for univariate nonparametric regression, to graphs. Analogous to the univariate case, graph trend filtering exhibits a level of local adaptivity unmatched by the usual ℓ_2-based graph smoothers. It is also defined by a convex minimization problem that is readily solved (e.g., by fast ADMM or Newton algorithms). We demonstrate the merits of graph trend filtering through examples and theory.
Nonparametric regression has a rich history in statistics, carrying well over 50 years of associated literature. The goal of this paper is to port a successful idea in univariate nonparametric regression, trend filtering [36, 20, 40, 46], to the setting of estimation on graphs. The proposed estimator, graph trend filtering, shares three key properties of trend filtering in the univariate setting.
Local adaptivity: graph trend filtering can adapt to inhomogeneity in the level of smoothness of an observed signal across nodes. This stands in contrast to the usual ℓ_2-based methods, e.g., Laplacian regularization [34], which enforce smoothness globally with a much heavier hand, and tend to yield estimates that are either smooth or else wiggly throughout.
Computational efficiency: graph trend filtering is defined by a regularized least squares problem, in which the penalty term is nonsmooth, but convex and structured enough to permit efficient large-scale computation.
Analysis regularization: the graph trend filtering problem directly penalizes (possibly higher order) differences in the fitted signal across nodes. Therefore graph trend filtering falls into what is called the analysis framework for defining estimators. Alternatively, in the synthesis framework, we would first construct a suitable basis over the graph, and then regress the observed signal over this basis; e.g., Shuman et al. [33] survey a number of such approaches using wavelets; likewise, kernel methods regularize in terms of the eigenfunctions of the graph Laplacian [21]. An advantage of analysis regularization is that it easily yields complex extensions of the basic estimator by mixing penalties.

As a motivating example, consider a denoising problem on 402 census tracts of Allegheny County, PA, arranged into a graph with 402 vertices and 2382 edges obtained by connecting spatially adjacent tracts. To illustrate the adaptive property of graph trend filtering, we generated an artificial signal with inhomogeneous smoothness across the nodes, and two sharp peaks near the center of the graph, as can be seen in the top left panel of Figure 1. (The signal was formed using a mixture of five Gaussians, in the underlying spatial coordinates.) We drew noisy observations around this signal, shown in the top right panel of the figure, and we fit graph trend filtering, graph Laplacian smoothing, and wavelet smoothing to these observations. Graph trend filtering is to be defined in Section 2 (here we used k = 2, quadratic order); the latter two, recall, are defined by the optimization problems
    β̂ = argmin_{β ∈ R^n} (1/2)||y − β||_2^2 + λ β^T L β    (Laplacian smoothing),
    θ̂ = argmin_{θ ∈ R^n} (1/2)||y − Wθ||_2^2 + λ||θ||_1,  β̂ = Wθ̂    (wavelet smoothing),

where y = (y_1, …, y_n) ∈ R^n is the vector of observations measured over the n nodes in the graph, L ∈ R^{n×n} is the graph Laplacian matrix, and W ∈ R^{n×n} is a wavelet basis built over the graph. The wavelet smoothing problem displayed above is really an oversimplified representation of the class of wavelet methods, since it only encapsulates estimators that employ an orthogonal wavelet basis (and soft-threshold the wavelet coefficients). For the present experiment, we constructed W according to the spanning tree wavelet design of Sharpnack et al. [31]; we found this construction performed best among the graph wavelet designs we considered for the data at hand. For completeness, the results from alternative wavelet designs are given in the Appendix.

[Figure 1 panels: true signal; noisy observations; graph trend filtering, 68 df; Laplacian smoothing, 68 df; Laplacian smoothing, 132 df; wavelet smoothing, 160 df.]
Graph trend filtering, Laplacian smoothing, and wavelet smoothing each have their own regularization parameters, and these parameters are not generally on the same scale. Therefore, in our comparisons we use effective degrees of freedom (df) as a common measure for the complexities of the fitted models. The top right panel of Figure 1 shows the graph trend filtering estimate with 68 df. We see that it adaptively fits the sharp peaks in the center of the graph, and smooths out the surrounding regions appropriately. The graph Laplacian estimate with 68 df (bottom left) substantially oversmooths the high peaks in the center, while at 132 df (bottom middle) it begins to detect the high peaks in the center, but undersmooths neighboring regions. Wavelet smoothing performs quite poorly across all df values; it appears to be most affected by the level of noise in the observations.

As a more quantitative assessment, Figure 2 shows the mean squared errors between the estimates and the true underlying signal. The differences in performance here are analogous to the univariate case, when comparing trend filtering to smoothing splines [40]. At smaller df values, Laplacian smoothing, due to its global considerations, fails to adapt to local differences across nodes. Trend filtering performs much better at low df values, and yet it matches Laplacian smoothing when both are sufficiently complex, i.e., in the overfitting regime. This demonstrates that the local flexibility of trend filtering estimates is a key attribute.
Here is an outline for the rest of this article. Section 2 defines graph trend filtering and gives underlying motivation and intuition. Section 3 covers basic properties and extensions of the graph trend filtering estimator. Section 4 examines computational approaches, and Section 5 looks at a number of both real and simulated data examples. Section 6 presents asymptotic error bounds for graph trend filtering. Section 7 concludes with a discussion.

As for notation, we write M_A to extract the rows of a matrix M that correspond to a subset A, and M_{−A} to extract the complementary rows. We use a similar convention for vectors: x_A and x_{−A} denote the components of a vector x that correspond to the set A and its complement, respectively. We write row(M) and null(M) for the row and null spaces of M, respectively, and M^+ for the pseudoinverse of M, with M^+ = (M^T M)^+ M^T when M is rectangular.
In this section, we motivate and formally define graph trend filtering.
We begin by reviewing trend filtering in the univariate setting, where discrete difference operators play a central role. Suppose that we observe y = (y_1, …, y_n) ∈ R^n across input locations x_1, …, x_n; for simplicity, suppose that the inputs are evenly spaced, say, x_1 = 1, …, x_n = n. Given an integer k ≥ 0, the kth order trend filtering estimate β̂ is defined as
    β̂ = argmin_{β ∈ R^n} (1/2)||y − β||_2^2 + λ||D^{(k+1)} β||_1,   (1)
where λ ≥ 0 is a tuning parameter, and D^{(k+1)} is the discrete difference operator of order k + 1. When k = 0, problem (1) employs the first difference operator,
    D^{(1)} = [ −1  1  0 ⋯  0  0
                 0 −1  1 ⋯  0  0
                 ⋮
                 0  0  0 ⋯ −1  1 ] ∈ R^{(n−1)×n}.   (2)
Therefore ||D^{(1)}β||_1 = Σ_{i=1}^{n−1} |β_{i+1} − β_i|, and the 0th order trend filtering estimate in (1) reduces to the 1-dimensional fused lasso estimator [39], also called 1-dimensional total variation denoising [28]. For k ≥ 1, the operator D^{(k+1)} is defined recursively by

    D^{(k+1)} = D^{(1)} D^{(k)} ∈ R^{(n−k−1)×n},   (3)

with D^{(1)} above denoting the (n−k−1)×(n−k) version of the first difference operator in (2). In words, D^{(k+1)}β is given by taking first differences of kth differences. The interpretation is hence that problem (1) penalizes the changes in the kth discrete differences of the fitted trend. The estimated components β̂_1, …, β̂_n exhibit the form of a kth order piecewise polynomial function, evaluated over the input locations x_1, …, x_n. This can be formally verified [40, 46] by examining a continuous-time analog of (1).
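The recursion (3) is easy to realize numerically. Below is a minimal NumPy sketch of ours (an illustration, not the authors' implementation), building the operators with dense matrices; in practice they would be kept sparse.

```python
import numpy as np

def first_diff(n):
    """First difference operator D^(1) of shape (n-1, n): row i has -1 at
    position i and +1 at position i+1."""
    return np.diff(np.eye(n), axis=0)

def diff_operator(n, k):
    """kth order discrete difference operator D^(k) on n evenly spaced inputs,
    built by the recursion D^(k+1) = D^(1) D^(k), with each D^(1) sized to
    match its operand."""
    D = first_diff(n)
    for _ in range(k - 1):
        D = first_diff(D.shape[0]) @ D
    return D

# Rows of D^(2) are the familiar second-difference stencils [1, -2, 1].
D2 = diff_operator(5, 2)   # shape (3, 5); first row is [1, -2, 1, 0, 0]
```

Penalizing ||D^{(k+1)}β||_1 with these operators is exactly problem (1).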
Let G = (V, E) be a graph, with vertices V = {1, …, n} and undirected edges E = {e_1, …, e_m}, and suppose that we observe y = (y_1, …, y_n) ∈ R^n over the nodes. Following the univariate definition in (1), we define the kth order graph trend filtering (GTF) estimate β̂ by

    β̂ = argmin_{β ∈ R^n} (1/2)||y − β||_2^2 + λ||Δ^{(k+1)} β||_1.   (4)

In broad terms, this problem (like univariate trend filtering) is a type of generalized lasso problem [41], in which the penalty matrix Δ^{(k+1)} is a suitably defined graph difference operator, of order k + 1. In fact, the novelty in our proposal lies entirely within the definition of this operator.
When k = 0, we define the first order graph difference operator Δ^{(1)} in such a way that it yields the graph equivalent of a penalty on local differences:

    ||Δ^{(1)} β||_1 = Σ_{(i,j) ∈ E} |β_i − β_j|,

so that the penalty term in (4) sums the absolute differences across connected nodes in G. To achieve this, we let Δ^{(1)} ∈ R^{m×n} be the oriented incidence matrix of the graph G, containing one row for each edge in the graph; specifically, if e_l = (i, j), then Δ^{(1)} has lth row

    Δ^{(1)}_l = (0, …, −1, …, 1, …, 0),   (5)

with −1 in position i, 1 in position j, and 0 elsewhere, where the orientations of signs are arbitrary. Like trend filtering in the 1d setting, the 0th order graph trend filtering estimate coincides with the fused lasso (total variation regularized) estimate over G [17, 41, 30].
For k ≥ 1, we use a recursion to define the higher order graph difference operators, in a manner similar to the univariate case. The recursion alternates in multiplying by the first difference operator and its transpose (taking into account that this matrix is not square):

    Δ^{(k+1)} = (Δ^{(1)})^T Δ^{(k)} = L^{(k+1)/2}   for odd k,
    Δ^{(k+1)} = Δ^{(1)} Δ^{(k)} = D L^{k/2}         for even k.   (6)

Above, we abbreviated by D the oriented incidence matrix Δ^{(1)} of G, and exploited the fact that L = D^T D is one representation for the graph Laplacian matrix. Note that Δ^{(k+1)} ∈ R^{n×n} for odd k, and Δ^{(k+1)} ∈ R^{m×n} for even k.

An important point is that our defined graph difference operators (5), (6) reduce to the univariate ones (2), (3) in the case of a chain graph (in which m = n − 1 and e_i = (i, i+1) for i = 1, …, n − 1), modulo boundary terms. That is, when k is even, if one removes the first k/2 rows and last k/2 rows of Δ^{(k+1)} for the chain graph, then one recovers D^{(k+1)}; when k is odd, if one removes the first and last (k+1)/2 rows of Δ^{(k+1)} for the chain graph, then one recovers D^{(k+1)}. Further intuition for our graph difference operators is given next.
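As a concrete illustration of (5) and (6), the following NumPy sketch builds the oriented incidence matrix and applies the alternating recursion; the function and variable names are ours, not from any released implementation, and dense matrices are used only for readability.

```python
import numpy as np

def incidence(n, edges):
    """Oriented incidence matrix Delta^(1) of a graph with n nodes: one row per
    edge (i, j), with -1 in column i and +1 in column j (orientation arbitrary)."""
    D = np.zeros((len(edges), n))
    for l, (i, j) in enumerate(edges):
        D[l, i], D[l, j] = -1.0, 1.0
    return D

def graph_diff(n, edges, k):
    """Graph difference operator Delta^(k+1) of GTF order k, via the recursion
    (6): multiply by D^T on odd steps and by D on even steps."""
    D = incidence(n, edges)
    Delta = D                       # Delta^(1), the k = 0 case
    for j in range(1, k + 1):
        Delta = (D.T @ Delta) if j % 2 == 1 else (D @ Delta)
    return Delta

# Sanity check on a chain graph: Delta^(2) equals the Laplacian L = D^T D.
chain = [(i, i + 1) for i in range(4)]
assert np.allclose(graph_diff(5, chain, 1),
                   incidence(5, chain).T @ incidence(5, chain))
```

As noted above, the result is n × n for odd k and m × n for even k.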
We give some insight for our definition of graph difference operators (5), (6), based on the idea of piecewise polynomials over graphs. In the univariate case, as described in Section 2.1, sparsity of D^{(k+1)}β implies a specific kth order piecewise polynomial structure for the components of β [40, 46]. Since the components of β correspond to (real-valued) input locations x_1, …, x_n, the interpretation of a piecewise polynomial here is unambiguous. But for a graph, one might ask: does sparsity of Δ^{(k+1)}β mean that the components of β are piecewise polynomial? And what does the latter even mean, as the components of β are defined over the nodes? To address these questions, we intuitively define a piecewise polynomial over a graph, and show that it implies sparsity under our constructed graph difference operators.

Piecewise constant (k = 0): we say that a signal β is piecewise constant over a graph G if many of the differences β_i − β_j are zero across edges (i, j) in G. Note that this is exactly the property associated with sparsity of Δ^{(1)}β, since Δ^{(1)} = D, the oriented incidence matrix of G.

Piecewise linear (k = 1): we say that a signal β has a piecewise linear structure over G if β satisfies

    β_i − (1/n_i) Σ_{(i,j) ∈ E} β_j = 0

for many nodes i ∈ V, where n_i is the number of nodes adjacent to i. In words, we are requiring that the signal components can be linearly interpolated from their neighboring values at many nodes in the graph. This is quite a natural notion of (piecewise) linearity: requiring that β_i be equal to the average of its neighboring values would enforce linearity at i under an appropriate embedding of the points in Euclidean space. Again, this is precisely the same as requiring Δ^{(2)}β to be sparse, since Δ^{(2)} = L, the graph Laplacian.

Piecewise polynomial (k ≥ 2): we say that β has a piecewise quadratic structure over G if the first differences of the second differences Lβ are mostly zero, over edges (i, j) ∈ E. Likewise, β has a piecewise cubic structure over G if the second differences of the second differences Lβ are mostly zero, over nodes i ∈ V. This argument extends, alternating between leading first and second differences for even and odd k. Sparsity of Δ^{(k+1)}β in either case exactly corresponds to many of these differences being zero, by construction.
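The k = 1 definition can be checked numerically on a chain graph: a linear signal has zero Laplacian at every interior node, since each interior value equals the average of its two neighbors. A small sketch of ours, not from the paper:

```python
import numpy as np

# Chain graph Laplacian on n = 6 nodes.
n = 6
L = np.diag([1.0] + [2.0] * (n - 2) + [1.0])
for i in range(n - 1):
    L[i, i + 1] = L[i + 1, i] = -1.0

beta = 2.0 * np.arange(n) + 1.0   # a linear signal over the chain
print(L @ beta)                   # zero except at the two boundary nodes
```

The boundary entries are nonzero only because the endpoints have a single neighbor, mirroring the boundary terms discussed above for the chain graph.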
In Figure 3, we illustrate the graph trend filtering estimator on a 2d grid graph, with 400 nodes and 740 edges. For each of the cases k = 0, 1, 2, we generated synthetic measurements over the grid, and computed a GTF estimate of the corresponding order. We chose the 2d grid setting so that the piecewise polynomial nature of GTF estimates could be visualized. Below each plot, the utilized graph trend filtering penalty is displayed in more explicit detail.
[Figure 3: GTF estimates with k = 0, k = 1, and k = 2; the utilized penalty is displayed below each plot.]
It is instructive to compare the graph trend filtering estimator, as defined in (4), (5), (6), to Laplacian smoothing [34]. Standard Laplacian smoothing uses the same least squares loss as in (4), but replaces the penalty term with λ β^T L β. A natural generalization would be to allow for a power of the Laplacian matrix L, and define the kth order graph Laplacian smoothing estimate according to

    β̂ = argmin_{β ∈ R^n} (1/2)||y − β||_2^2 + λ β^T L^{k+1} β.   (7)

The above penalty term can be written as λ||L^{(k+1)/2}β||_2^2 for odd k, and λ||D L^{k/2}β||_2^2 for even k; i.e., this penalty is exactly λ||Δ^{(k+1)}β||_2^2 for the graph difference operator Δ^{(k+1)} defined previously.
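Because its penalty is quadratic, graph Laplacian smoothing has a closed-form solution, in contrast to GTF. A minimal NumPy sketch of ours (in practice the sparse system would be solved iteratively):

```python
import numpy as np

def laplacian_smooth(y, L, lam, k=0):
    """kth order graph Laplacian smoothing: setting the gradient of the
    objective (1/2)||y - beta||^2 + lam * beta^T L^(k+1) beta to zero gives
    the linear system (I + 2*lam*L^(k+1)) beta = y."""
    n = len(y)
    A = np.eye(n) + 2 * lam * np.linalg.matrix_power(L, k + 1)
    return np.linalg.solve(A, y)
```

On a connected graph, taking lam large shrinks the estimate toward the null space of L, i.e., toward the constant signal at the mean of y: every graph difference is shrunk, but none is set exactly to zero.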
As we can see, the critical difference between graph Laplacian smoothing (7) and graph trend filtering (4) lies in the choice of penalty norm: ℓ_2 in the former, and ℓ_1 in the latter. The effect of the ℓ_1 penalty is that the GTF program can set many (higher order) graph differences to zero exactly, and leave others at large nonzero values; i.e., the GTF estimate can simultaneously be smooth in some parts of the graph and wiggly in others. On the other hand, due to the (squared) ℓ_2 penalty, the graph Laplacian smoother cannot set any graph differences to zero exactly, and roughly speaking, must choose between making all graph differences small or large. The relevant analogy here is the comparison between the lasso and ridge regression, or univariate trend filtering and smoothing splines [40], and the suggestion is that GTF can adapt to the proper local degree of smoothness, while Laplacian smoothing cannot. This is strongly supported by the examples given throughout this paper.

Some authors from the signal processing community, e.g., Bredies et al. [5], Setzer et al. [29], have studied total generalized variation (TGV), a higher order variant of total variation regularization. Moreover, several discrete versions of these operators have been proposed, and they are often similar to our construction. However, the focus of these works is mostly on how well a discrete functional approximates its continuous counterpart. This is quite different from our concern, as a signal on a graph (say a social network) may not have any meaningful continuous-space embedding at all. In addition, we are not aware of any study on the statistical properties of these regularizers. In fact, our theoretical analysis in Section 6 may be extended to these methods too.
We first study the structure of graph trend filtering estimates, then discuss interpretations and extensions.
We describe the basic structure of graph trend filtering estimates and present an unbiased estimate for their degrees of freedom. Let the tuning parameter λ ≥ 0 be arbitrary but fixed. By virtue of the ℓ_1 penalty in (4), the solution β̂ satisfies Δ^{(k+1)}_{−A} β̂ = 0 for some active set A (typically A is smaller when λ is larger). Trivially, we can reexpress this as β̂ ∈ null(Δ^{(k+1)}_{−A}). Therefore, the basic structure of GTF estimates is revealed by analyzing the null space of the suboperator Δ^{(k+1)}_{−A}.

Lemma 1. Assume without a loss of generality that G is connected (otherwise the results apply to each connected component of G). Let D and L be the oriented incidence matrix and Laplacian matrix of G. For even k, let A ⊆ {1, …, m}, and let G_{−A} denote the subgraph induced by removing the edges indexed by A (i.e., removing edges e_l, l ∈ A). Let C_1, …, C_q be the connected components of G_{−A}. Then

    null(Δ^{(k+1)}_{−A}) = span{1} + (L^+)^{k/2} span{1_{C_1}, …, 1_{C_q}},

where 1 = (1, …, 1) ∈ R^n, and 1_{C_1}, …, 1_{C_q} ∈ R^n are the indicator vectors over connected components. For odd k, let A ⊆ {1, …, n}. Then

    null(Δ^{(k+1)}_{−A}) = span{1} + (L^+)^{(k+1)/2} span{e_i : i ∈ A},

where e_i ∈ R^n denotes the ith standard basis vector.
The proof of Lemma 1 appears in the Appendix. The lemma is useful for a few reasons. First, as motivated above, it describes the coarse structure of GTF solutions. When k = 0, we can see that β̂ will indeed be piecewise constant over groups of nodes of G_{−A}. For even k ≥ 2, this structure is smoothed by multiplying such piecewise constant levels by (L^+)^{k/2}. Meanwhile, for odd k, the structure of the GTF estimate is based on assigning nonzero values to a subset of nodes, and then smoothing through multiplication by (L^+)^{(k+1)/2}. Both of these smoothing operations, which depend on L^+, have interesting interpretations in terms of the electrical network perspective for graphs. This is developed in the next subsection.

A second use of Lemma 1 is that it leads to a simple expression for the degrees of freedom, i.e., the effective number of parameters, of the GTF estimate β̂. From results on generalized lasso problems [41, 42], we have df(β̂) = E[nullity(Δ^{(k+1)}_{−A})], with A denoting the support of Δ^{(k+1)}β̂, and nullity(M) denoting the dimension of the null space of a matrix M. Applying Lemma 1 then gives the following.
Lemma 2. Assume that G is connected. Let β̂ denote the GTF estimate at a fixed but arbitrary value of λ. Under the normal error model y ∼ N(β_0, σ² I), the GTF estimate has degrees of freedom given by

    df(β̂) = E[|A|] + 1 for odd k, and df(β̂) = E[number of connected components of G_{−A}] for even k.

Here A denotes the active set of Δ^{(k+1)}β̂.

As a result of Lemma 2, we can form a simple unbiased estimate for df(β̂); for odd k, this is |A| + 1, and for even k, this is the number of connected components of G_{−A}, where A is the support of Δ^{(k+1)}β̂. When reporting degrees of freedom for graph trend filtering (as in the example in the introduction), we use these unbiased estimates.
Lemma 1 reveals a mathematical structure for GTF estimates β̂, which satisfy β̂ ∈ null(Δ^{(k+1)}_{−A}) for some set A. It is interesting to interpret the results using the electrical network perspective for graphs [45]. In this perspective, we imagine replacing each edge in the graph with a resistor of value 1. If c ∈ R^n describes how much current is going in at each node in the graph, then v = L^+ c describes the induced voltage at each node. Provided that 1^T c = 0, which means that the total accumulation of current in the network is 0, we can solve for the current values from the voltage values: c = L v.
The odd case in Lemma 1 asserts that GTF estimates take the form

    β̂ = c·1 + (L^+)^{(k+1)/2} v, for some constant c and some sparse vector v.

For k = 1, this says that GTF estimates are formed by assigning a sparse number of nodes in the graph a nonzero voltage, then solving for the induced current (and shifting this entire current vector by a constant amount). For k = 3, we assign a sparse number of nodes a nonzero voltage, solve for the induced current, and then repeat this: we relabel the induced current as input voltages to the nodes, and compute the new induced current. This process is again iterated for k = 5, 7, ….

The even case in Lemma 1 asserts that GTF estimates take the form

    β̂ = c·1 + (L^+)^{k/2} (a_1 1_{C_1} + ⋯ + a_q 1_{C_q}), for constants c, a_1, …, a_q.

For k = 2, this result says that GTF estimates are given by choosing a partition of the nodes, and assigning a constant input voltage to each element of the partition. We then solve for the induced current (and potentially shift this by an overall constant amount). The process is iterated for k = 4, 6, … by relabeling the induced current as input voltage.

The comparison between the structure of estimates for k = 2 and k = 3 is informative: in a sense, the above tells us that 2nd order GTF estimates will be smoother than 3rd order estimates, as a sparse input voltage vector need not induce a current that is piecewise constant over nodes in the graph. For example, an input voltage vector that has only a few nodes with very large nonzero values will induce a current that is peaked around these nodes, but not piecewise constant.
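The current/voltage relations above are easy to verify numerically. In this small sketch of ours, we assign a voltage to a single node of a chain graph and compute the induced current c = Lv; because L1 = 0, the total accumulation of current is zero.

```python
import numpy as np

# Chain graph Laplacian on 6 nodes, each edge a unit resistor.
n = 6
L = np.diag([1.0] + [2.0] * (n - 2) + [1.0])
for i in range(n - 1):
    L[i, i + 1] = L[i + 1, i] = -1.0

v = np.zeros(n)
v[2] = 5.0                     # nonzero voltage at a single node
c = L @ v                      # induced current, peaked around node 2
assert abs(c.sum()) < 1e-12    # total accumulation of current is zero
```

Note that the induced current is concentrated around the chosen node and is not piecewise constant, illustrating the remark above.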
Several extensions of the proposed graph trend filtering model are possible. Trend filtering over a weighted graph, for example, could be performed by using a properly weighted version of the edge incidence matrix in (5), and carrying forward the same recursion in (6) for the higher order difference operators. As another example, the Gaussian regression loss in (4) could be changed to another suitable likelihood-derived loss in order to accommodate a different data type for y, say, logistic regression loss for binary data, or Poisson regression loss for count data.

In Section 5.2, we explore a modest extension of GTF, where we add a strongly convex prior term to the criterion in (4) to assist in performing graph-based imputation from partially observed data over the nodes. In Section 5.3, we investigate a modification of the proposed regularization scheme, where we add a pure ℓ_1 penalty on β in (4), hence forming a sparse variant of GTF. Other potentially interesting penalty extensions include: mixing graph difference penalties of various orders, and tying together several denoising tasks with a group penalty. Extensions such as these are easily built, recall, as a result of the analysis framework used by the GTF program, wherein the estimate is defined through direct regularization via an analyzing operator, the ℓ_1-based graph difference penalty ||Δ^{(k+1)}β||_1.

Graph trend filtering is defined by a convex optimization problem (4). In principle this means that, at least for small or moderately sized problems, its solutions can be reliably computed using a variety of standard algorithms. In order to handle larger scale problems, we describe two specialized algorithms that improve on generic procedures by taking advantage of the structure of Δ^{(k+1)}.
We reparametrize (4) by introducing auxiliary variables, so that we can apply ADMM. For even k, we use a special transformation that is critical for fast computation (following Ramdas and Tibshirani [27] in univariate trend filtering); for odd k, this is not possible. The reparametrizations for even and odd k are

    minimize_{β, z} (1/2)||y − β||_2^2 + λ||D z||_1  subject to  z = L^{k/2} β,
    minimize_{β, z} (1/2)||y − β||_2^2 + λ||z||_1   subject to  z = L^{(k+1)/2} β,

respectively. Recall that D is the oriented incidence matrix and L is the graph Laplacian. The augmented Lagrangian is

    (1/2)||y − β||_2^2 + λ||S z||_1 + (ρ/2)||L^q β − z + u||_2^2 − (ρ/2)||u||_2^2,

where q = k/2 and S = D when k is even, and q = (k+1)/2 and S = I when k is odd. ADMM then proceeds by iteratively minimizing the augmented Lagrangian over β, minimizing over z, and performing a dual update over u. The β and z updates are of the form, for some b,

    β ← (I + ρ L^{2q})^{−1} b, with b = y + ρ L^q (z − u),   (8)

    z ← argmin_z λ||S z||_1 + (ρ/2)||b − z||_2^2, with b = L^q β + u.   (9)
The linear system in (8) is well-conditioned, sparse, and can be solved efficiently using the preconditioned conjugate gradient method; this involves only multiplication with Laplacian matrices. For small enough choices of ρ (the augmented Lagrangian parameter), the system in (8) is diagonally dominant, so special Laplacian/SDD solvers can be applied, which run in almost linear time [35, 22, 19].

For odd k, the update in (9) is simply given by soft-thresholding, and for even k, it is given by graph TV denoising, i.e., given by solving a graph fused lasso problem. Note that this subproblem has the exact structure of the graph trend filtering problem (4) with k = 0. A direct approach for graph TV denoising is available based on parametric max-flow [8], and this algorithm is empirically much faster than its worst-case complexity [4]. In the special case that the underlying graph is a grid, a promising alternative method employs proximal stacking techniques [2].
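For odd k, the iterations above are short enough to sketch in full. The following NumPy code is our illustrative (dense, unoptimized) rendering of updates (8) and (9) with S = I, where the z update is elementwise soft-thresholding; it is not the authors' implementation.

```python
import numpy as np

def gtf_admm_odd(y, L, k, lam, rho=1.0, iters=300):
    """ADMM sketch for GTF with odd k: reparametrize z = L^((k+1)/2) beta and
    penalize lam * ||z||_1. Each iteration solves the linear system (8) for
    beta, soft-thresholds for z as in (9), then takes a dual step on u."""
    n = len(y)
    S = np.linalg.matrix_power(L, (k + 1) // 2)   # Delta^(k+1) for odd k
    A = np.eye(n) + rho * S.T @ S                 # system matrix of (8)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        beta = np.linalg.solve(A, y + rho * S.T @ (z - u))        # (8)
        b = S @ beta + u
        z = np.sign(b) * np.maximum(np.abs(b) - lam / rho, 0.0)   # (9)
        u = u + S @ beta - z                                      # dual update
    return beta
```

In practice the β update would use preconditioned conjugate gradients (or an SDD solver) on the sparse system, rather than a dense factorization.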
As an alternative to ADMM, a projected Newton-type method [3, 1] can be used to solve (4) via its dual problem:

    û = argmin_{u : ||u||_∞ ≤ λ} ||y − (Δ^{(k+1)})^T u||_2^2.

The solution of (4) is then given by β̂ = y − (Δ^{(k+1)})^T û. (For univariate trend filtering, Kim et al. [20] adopt a similar strategy, but instead use an interior point method.) The projected Newton method performs updates using a reduced Hessian, so abbreviating Δ = Δ^{(k+1)}, each iteration boils down to

    u_I ← u_I + s (Δ_I Δ_I^T)^{−1} Δ_I (y − Δ^T u),   (10)
for some step size s and set of indices I. The linear system in (10) is always sparse, but conditioning becomes an issue as k grows (note that the same problem does not occur in (8) because of the addition of the identity matrix). We have found empirically that a preconditioned conjugate gradient method works quite well for (10) for k = 1, but struggles for larger k.

In our experience, the following algorithms work well for the various orders of graph trend filtering. We remark that orders k = 0, 1, 2 are of most practical interest (and solutions of higher polynomial order are less likely to be sought in practice).¹

¹ Loosely speaking, each order provides solutions that exhibit a different class of structure: k = 0 gives piecewise constant solutions, k = 1 gives piecewise linear, and k = 2 gives piecewise smooth. All higher orders continue to give piecewise smooth fits, with less and less transparent differences (the practical difference between piecewise quadratic versus piecewise linear fits is greater than piecewise cubic versus piecewise quadratic, etc.). Since the conditioning of the graph trend filtering operator worsens as k increases, which makes computation more difficult, it makes most practical sense to simply choose k = 2 whenever a piecewise smooth fit is desired.
Order | Algorithm
---|---
k = 0 | Parametric max-flow
k = 1 | Projected Newton method
k even, k ≥ 2 | ADMM with parametric max-flow
k odd, k ≥ 3 | ADMM with soft-thresholding
Figure 4 compares the performance of the described algorithms on a moderately large simulated example, using a 2d grid graph. We see that when k = 1, the projected Newton method converges faster than ADMM (superlinear versus at best linear convergence). When k = 2, the story is reversed, as the projected Newton iterations quickly become stagnant, and ADMM enjoys better convergence.

[Figure 4: convergence comparisons for GTF with k = 1 and k = 2.]
In this section, we present a variety of examples of running graph trend filtering on real graphs.
In the Introduction, we examined the denoising power of graph trend filtering in a spatial setting. Here we examine the behavior of graph trend filtering on a nonplanar graph: the Facebook graph from the Stanford Network Analysis Project (http://snap.stanford.edu). This is composed of 4039 nodes representing Facebook users, and 88,234 edges representing friendships, collected from real survey participants; the graph has one connected component, but the observed degree sequence is very mixed, ranging from 1 to 1045 (refer to McAuley and Leskovec [26] for more details).
We generated synthetic measurements over the Facebook nodes (users) based on three different ground truth models, so that we can precisely evaluate and compare the estimation accuracy of GTF, Laplacian smoothing, and wavelet smoothing. For the latter, we again used the spanning tree wavelet design of Sharpnack et al. [31], because it performed among the best of the wavelet designs in all data settings considered here. Results from other wavelet designs are presented in the Appendix. The three ground truth models represent very different scenarios for the underlying signal x, each one favorable to different estimation methods. These are:

Dense Poisson equation: we solved the Poisson equation L x = b for x, where b is arbitrary and dense (its entries were i.i.d. normal draws).

Sparse Poisson equation: we solved the Poisson equation L x = b for x, where b is sparse and has 30 nonzero entries (again i.i.d. normal draws).

Inhomogeneous random walk: we ran a set of decaying random walks at different starter nodes in the graph, and recorded in x the total number of visits at each node. Specifically, we chose 10 nodes as starter nodes, and assigned each starter node a decay probability uniformly at random between 0 and 1 (this is the probability that the walk terminates at each step instead of travelling to a neighboring node). At each starter node, we also sent out a varying number of random walks, chosen uniformly between 0 and 1000.

In each case, the synthetic measurements were formed by adding noise to x. We note that model 1 is designed to be favorable for Laplacian smoothing; model 2 is designed to be favorable for GTF; and the inhomogeneity in model 3 is designed to be challenging for Laplacian smoothing, and favorable for the more adaptive GTF and wavelet methods.
[Figure 5 panels: dense Poisson equation, sparse Poisson equation, inhomogeneous random walk. The x-axis shows the noise variance, so the noise level is increasing from left to right; the y-axis shows the denoised negative SnR, in terms of the mean squared error (MSE), so the achieved MSE is increasing from bottom to top.]

Figure 5 shows the performance of the three estimation methods, over a wide range of noise levels in the synthetic measurements; performance here is measured by the best achieved mean squared error, allowing each method to be tuned optimally at each noise level. The summary: GTF estimates are (expectedly) superior when the Laplacian-based sparsity pattern is in effect (model 2), but are nonetheless highly competitive in the other two settings as well, namely the dense case, in which Laplacian smoothing thrives, and the inhomogeneous random walk case, in which wavelets thrive.
Graph trend filtering can be used for graph-based transductive learning, as motivated by the work of Talukdar and Crammer [37], Talukdar and Pereira [38], who rely on Laplacian regularization. Consider a semi-supervised learning setting, where we are given only a small number of seed labels over nodes of a graph, and the goal is to impute the labels on the remaining nodes. Write O ⊆ V for the set of observed nodes, and assume that each observed label falls into {1, …, K}. Then we can define the modified absorption problem under graph trend filtering regularization (MAD-GTF) by

    Ŷ = argmin_{Y ∈ R^{n×K}} Σ_{j=1}^K [ Σ_{i ∈ O} (B_{ij} − Y_{ij})^2 + λ||Δ^{(k+1)} Y_j||_1 + ε||R_j − Y_j||_2^2 ].   (11)
The matrix B ∈ R^{n×K} is an indicator matrix: each observed row i ∈ O is described by B_{ij} = 1 if class j is observed and B_{ij} = 0 otherwise. The matrix Y ∈ R^{n×K} contains fitted probabilities, with Y_{ij} giving the probability that node i is of class j. We write Y_j for its jth column, and hence the middle term in the above criterion encourages each set of class probabilities to behave smoothly over the graph. The last term in the above criterion ties the fitted probabilities to some given prior weights R ∈ R^{n×K}. In principle ε could act as a second tuning parameter, but for simplicity we take ε to be small and fixed, with any ε > 0 guaranteeing that the criterion in (11) is strictly convex, and thus has a unique solution Ŷ. The entries of Ŷ need not be probabilities in any strict sense, but we can still interpret them as relative probabilities, and imputation can be performed by assigning each unobserved node i a class label j such that Ŷ_{ij} is largest.
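The imputation rule at the end of this description is just a row-wise argmax over the fitted matrix. A tiny sketch with hypothetical numbers:

```python
import numpy as np

# Fitted (relative) probabilities Y_hat for n = 3 nodes and K = 3 classes;
# each unobserved node is assigned the class with the largest entry in its row.
Y_hat = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.3, 0.6],
                  [0.2, 0.5, 0.3]])
labels = Y_hat.argmax(axis=1)   # -> array([0, 2, 1])
```

Since only the ordering within each row matters, the rule is unaffected by the fact that the fitted entries need not sum to one.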
iris | adult | wine | car | breast | abalone | wine-qual. | poker | heart | ads | yeast | |
# of classes | 3 | 2 | 3 | 4 | 2 | 29 | 6 | 10 | 2 | 2 | 10 |
# of samples | 150 | 32,561 | 178 | 1,728 | 569 | 4,177 | 1,599 | 3,000 | 303 | 3,279 | 1,484 |
Laplacian | 0.085 | 0.270 | 0.060 | 0.316 | 0.064 | 0.872 | 0.712 | 0.814 | 0.208 | 0.306 | 0.566 |
GTF, k = 0 | 0.102 | 0.293 | 0.055 | 0.294 | 0.500 | 0.888 | 0.709 | 0.801 | 0.472 | 0.212 | 0.726 |
p-value | 0.254 | 0.648 | 0.406 | 0.091 | 0.000 | 0.090 | 0.953 | 0.732 | 0.000 | 0.006 | 0.000 |
GTF, k = 1 | 0.087 | 0.275 | 0.055 | 0.293 | 0.063 | 0.874 | 0.713 | 0.813 | 0.175 | 0.218 | 0.563 |
p-value | 0.443 | 0.413 | 0.025 | 0.012 | 0.498 | 0.699 | 0.920 | 0.801 | 0.134 | 0.054 | 0.636 |
GTF, k = 2 | 0.084 | 0.259 | 0.052 | 0.309 | 0.059 | 0.865 | 0.738 | 0.774 | 0.175 | 0.244 | 0.552 |
p-value | 0.798 | 0.482 | 0.024 | 0.523 | 0.073 | 0.144 | 0.479 | 0.138 | 0.301 | 0.212 | 0.100 |
Misclassification rates of MAD-Laplacian and MAD-GTF for imputation in the UCI data sets. We also compute p-values over the 10 repetitions for each data set (10 draws of nodes to serve as seed labels) via paired t-tests. Cases where MAD-GTF achieves a significantly better misclassification rate, at the 0.1 level, are highlighted in green; cases where MAD-GTF achieves a significantly worse misclassification rate, at the 0.1 level, are highlighted in red.

Our specification of MAD-GTF only deviates from the MAD proposal of Talukdar and Crammer [37] in that these authors used the Laplacian regularization term, in place of the ℓ_1-based graph difference regularizer in (11). If the underlying class probabilities are thought to have heterogeneous smoothness over the graph, then replacing the Laplacian regularizer with the GTF-designed one might lead to better performance. As a broad comparison of the two methods, we ran them on the 11 most popular classification data sets from the UCI Machine Learning repository (http://archive.ics.uci.edu/ml/).² For each data set, we constructed a nearest-neighbor graph based on the Euclidean distance between provided features, and randomly selected 5 seeds per class to serve as the observed class labels. Then we set ε to a small fixed value, used constant prior weights for all i and j, and chose the tuning parameter λ over a wide grid of values to represent the best achievable performance by each method, on each experiment. Figure 6 and Table 1 summarize the misclassification rates from imputation using MAD-Laplacian and MAD-GTF, averaged over 10 repetitions of the randomly selected seed labels. We see that MAD-GTF with k = 0 (basically a graph partition akin to an MRF-based graph cut, using an Ising model) does not seem to work as well as the smoother alternatives. Importantly, MAD-GTF with k = 1 and k = 2 both perform at least as well as, and sometimes better than, MAD-Laplacian on each one of the UCI data sets.

² We used all data sets here, except the "forest-fires" data set, which is a regression problem. Also, we zero-filled the missing data in the "internet-ads" data set and used a random one third of the data in the "poker" data set.
Recall that these data sets were selected entirely based on their popularity, and not at all on the belief that they might represent favorable scenarios for GTF (i.e., not on the prospect that they might exhibit some heterogeneity in the distribution of class labels over their respective graphs). Therefore, the fact that MAD-GTF nonetheless performs competitively in such a broad range of experiments is convincing evidence for the utility of the GTF regularizer.
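The graph construction used in these experiments can be sketched in a few lines of numpy; this is our own minimal version (the function name `knn_edges` and the toy data are ours, not from the paper), building a symmetrized k-nearest-neighbor edge set from a feature matrix via Euclidean distances:

```python
import numpy as np

def knn_edges(X, k=3):
    """Edge set of a symmetrized k-nearest-neighbor graph over the rows
    of X, using Euclidean distance, as in the UCI experiments above."""
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(d2, np.inf)          # exclude self-loops
    edges = set()
    for i in range(n):
        for j in np.argsort(d2[i])[:k]:   # k nearest neighbors of node i
            edges.add((min(i, j), max(i, j)))
    return sorted(edges)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
E = knn_edges(X, k=3)
print(len(E))   # between 30 (all neighbor pairs mutual) and 60
```

Each of the 20 nodes contributes 3 directed neighbor relations, so after symmetrizing and deduplicating, the undirected edge count lies between 30 and 60.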
We now illustrate a sparse variant of our proposed regularizers, given by adding a pure ℓ₁ penalty on the coefficients to the criterion in (4), as in

β̂ = argmin_β (1/2)‖y − β‖₂² + λ₁‖Δ^{(k+1)}β‖₁ + λ₂‖β‖₁.   (12)

We call this sparse graph trend filtering, now with two tuning parameters λ₁, λ₂ ≥ 0. Under proper tuning, the sparse GTF estimate will be zero at many nodes in the graph, and will otherwise deviate smoothly from zero. This can be useful in situations where the observed signal represents a difference between two smooth processes that are mostly similar, but exhibit (perhaps significant) differences over a few regions of the graph. Here we apply it to the problem of detecting events based on abnormalities in the number of taxi trips at different locations of New York City. This data set was kindly provided by the authors of Doraiswamy et al. [13], who obtained the data from the NYC Taxi & Limousine Commission. (These authors also considered event detection, but their topological definition of an "event" is very different from the one considered here, and hence our results are not directly comparable.) Specifically, we consider the graph to be the road network of Manhattan, which contains 3874 nodes (junctions) and 7070 edges (sections of roads that connect two junctions). For measurements over the nodes, we used the number of taxi pickups and dropoffs over a particular time period of interest: 12:00–2:00 pm on June 26, 2011, corresponding to the Gay Pride parade. As pickups and dropoffs do not generically occur at road junctions, we used interpolation to form counts over the graph nodes. A baseline seasonal average was calculated by considering data from the same time block, 12:00–2:00 pm on the same day of the week, across the nearest eight weeks. The measurements were then taken to be the difference between the counts observed during the Gay Pride parade and the seasonal averages.
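The criterion (12) can be solved by a standard ADMM splitting. The following is a minimal numpy sketch of our own (not the authors' implementation, and only for the k = 0 case on a dense operator): we stack A = [D; I], attach the two ℓ₁ penalties with weights λ₁ and λ₂ to the split variable z = Aβ, so the β-update is a fixed linear solve and the z-update is soft-thresholding.

```python
import numpy as np

def soft(a, t):
    """Elementwise soft-thresholding (prox of the l1 norm)."""
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def sparse_gtf(y, D, lam1, lam2, rho=1.0, n_iter=500):
    """ADMM sketch for sparse graph trend filtering with k = 0:
        minimize_b 0.5*||y - b||^2 + lam1*||D b||_1 + lam2*||b||_1.
    Stack A = [D; I]; the penalties act on the split variable z = A b."""
    r, n = D.shape
    A = np.vstack([D, np.eye(n)])
    w = np.concatenate([np.full(r, lam1), np.full(n, lam2)])
    H = np.eye(n) + rho * (A.T @ A)       # constant across iterations
    z = np.zeros(r + n)
    u = np.zeros(r + n)
    for _ in range(n_iter):
        b = np.linalg.solve(H, y + rho * A.T @ (z - u))
        Ab = A @ b
        z = soft(Ab + u, w / rho)         # prox step on both penalties
        u = u + Ab - z
    return b

# Demo on a chain graph: a signal that is zero except for one bump.
n = 60
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # first differences
y = np.zeros(n)
y[20:35] = 3.0
y += 0.4 * np.random.default_rng(1).normal(size=n)
bhat = sparse_gtf(y, D, lam1=2.0, lam2=0.5)
```

For the Manhattan road network one would use a sparse matrix and a cached factorization of H; the dense solve here is only for illustration.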
Note that the nonzero node estimates from sparse GTF, after proper tuning, mark events of interest, because they convey substantial differences between the observed and expected taxi counts. According to news reports, the Gay Pride parade was a giant march beginning at noon at 36th St. and Fifth Ave. and proceeding all the way to Christopher St. in Greenwich Village, with traffic blocked over the entire route for two hours (meaning no pickups and dropoffs could occur there). We therefore hand-labeled this route as a crude "ground truth" for the event of interest, illustrated in the left panel of Figure 7.
Figure 7 panels: true parade route; unfiltered signal; sparse trend filtering; sparse Laplacian smoothing.
In the bottom two panels of Figure 7, we compare sparse GTF with k = 0 (i.e., the sparse graph fused lasso) and a sparse variant of Laplacian smoothing, obtained by replacing the first regularization term in (12) with a quadratic Laplacian penalty. For a qualitative visual comparison, the smoothing parameter of each method was chosen so that both have 200 degrees of freedom (without any sparsity imposed); the sparsity parameter was then set at the same level for both methods. Similar to what we have seen already, GTF better localizes its estimates around strong inhomogeneous spikes in the measurements, and better captures the event of interest. The sparse Laplacian result is far from localized around the ground truth event, and displays many nonzero node estimates throughout distant regions of the graph. If we were to decrease its flexibility (increase the smoothing parameter in its problem formulation), then the sparse Laplacian output would display more smoothness over the graph, but the node estimates around the ground truth region would also be grossly shrunken.
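The sparse Laplacian smoothing baseline is easy to sketch as well. Below is a minimal proximal-gradient (ISTA) version of our own, under our reading of the criterion: the squared-error and Laplacian terms are smooth, so we gradient-step on them and soft-threshold for the ℓ₁ part. The function name and parameter values are ours, for illustration only.

```python
import numpy as np

def sparse_laplacian_smooth(y, L, lam, lam2, n_iter=500):
    """ISTA for: minimize_b 0.5*||y - b||^2 + lam*(b' L b) + lam2*||b||_1.
    Step size is set from the Lipschitz constant of the smooth part."""
    step = 1.0 / (1.0 + 2.0 * lam * np.linalg.eigvalsh(L)[-1])
    b = y.copy()
    for _ in range(n_iter):
        grad = (b - y) + 2.0 * lam * (L @ b)   # gradient of smooth part
        v = b - step * grad
        b = np.sign(v) * np.maximum(np.abs(v) - step * lam2, 0.0)
    return b

n = 40
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1                        # chain-graph Laplacian
y = np.random.default_rng(2).normal(size=n)
bhat = sparse_laplacian_smooth(y, L, lam=1.0, lam2=0.3)
```

The soft-threshold step produces exact zeros, mirroring the node sparsity seen in the bottom-right panel of Figure 7.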
In this section, we assume that y = β₀ + ε, with ε ~ N(0, σ²I), and study asymptotic error rates for graph trend filtering. (The assumption of a normal error model could be relaxed, but is used for simplicity.) Our analysis actually focuses more broadly on the generalized lasso problem

β̂ = argmin_β (1/2)‖y − β‖₂² + λ‖Dβ‖₁,   (13)

where D ∈ ℝ^{r×n} is an arbitrary linear operator, and r denotes its number of rows. Throughout, we specialize the derived results to the graph difference operator D = Δ^{(k+1)}, to obtain concrete statements about GTF over particular graphs. All proofs are deferred to the Appendix.
Using arguments similar to the basic inequality for the lasso [6], we have the following preliminary bound.

Let M denote the maximum ℓ₂ norm of the columns of the pseudoinverse D†. Then for a tuning parameter value λ = Θ(M√(log r)), the generalized lasso estimate β̂ in (13) has average squared error

(1/n)‖β̂ − β₀‖₂² = O_P( nullity(D)/n + (M√(log r)/n)·‖Dβ₀‖₁ ).

Recall that nullity(D) denotes the dimension of the null space of D. For the GTF operator D = Δ^{(k+1)} of any order k, nullity(Δ^{(k+1)}) is the number of connected components in the underlying graph.

When both nullity(D) and ‖Dβ₀‖₁ are bounded, Theorem 3 says that the estimate β̂ converges in average squared error at the rate M√(log r)/n, in probability. This theorem is quite general, as it applies to any linear operator D, and one might therefore think that it cannot yield fast rates. Still, as we show next, it does imply consistency for graph trend filtering in certain cases.
Consider the trend filtering estimator β̂ of order k, and the choice of the tuning parameter λ as in Theorem 3. Then:

1. for univariate trend filtering (i.e., essentially GTF on a chain graph), a direct calculation gives M = O(n^{k+1/2}), so that (1/n)‖β̂ − β₀‖₂² = O_P( n^{k−1/2}√(log n)·‖Δ^{(k+1)}β₀‖₁ );
2. for GTF on an Erdős–Rényi random graph, with edge probability p and expected degree d = np growing at least logarithmically in n, (1/n)‖β̂ − β₀‖₂² = O_P( (√(log n)/(n d^{(k+1)/2}))·‖Δ^{(k+1)}β₀‖₁ );
3. for GTF on a Ramanujan d-regular graph, the same bound as in case 2 holds.
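The quantity M (the largest column norm of the pseudoinverse of D) can be checked numerically. The snippet below, our own sketch for the k = 0 chain operator, shows M growing on the order of √n, consistent with the direct-calculation route used for the univariate case:

```python
import numpy as np

# M = largest column l2-norm of pinv(D) for the first-difference chain
# operator (the k = 0 case). The min-norm solution of D x = e_j is a
# centered step function, whose norm peaks at about sqrt(n)/2.
for n in (50, 100, 200):
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # first differences
    M = np.linalg.norm(np.linalg.pinv(D), axis=0).max()
    print(n, M, M / np.sqrt(n))                     # ratio is ~ 0.5
```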
Cases 2 and 3 of Corollary 4 stem from the simple inequality M ≤ 1/ξ_min(D), where ξ_min(D) is the smallest nonzero singular value of D (equivalently, 1/ξ_min(D) is the largest singular value of D†). When D = Δ^{(k+1)}, the GTF operator of order k, we have ξ_min(Δ^{(k+1)}) ≥ λ₂(L)^{(k+1)/2}, where λ₂(L) is the smallest nonzero eigenvalue of the Laplacian L (also known as the Fiedler eigenvalue [15]). In general, λ₂(L) can be very small, leading to loose error bounds, but for the particular graphs in question, it is well-controlled. When ‖Δ^{(k+1)}β₀‖₁ is bounded, cases 2 and 3 of the corollary show that the average squared error of GTF converges at the rate √(log n)/(n d^{(k+1)/2}). As k increases, this rate is stronger, but so is the assumption that ‖Δ^{(k+1)}β₀‖₁ is bounded.

Case 1 in Corollary 4 covers univariate trend filtering (which, recall, is basically the same as GTF over a chain graph; the only differences between the two are boundary terms in the construction of the difference operators). The result in case 1 is based on a direct calculation of M, using specific facts that are known about the univariate difference operators. It is natural in the univariate setting to assume that n^k‖Δ^{(k+1)}β₀‖₁ is bounded (this is the scaling that would link β₀ to the evaluations of a piecewise polynomial function over [0, 1] whose kth derivative has bounded total variation). Under such an assumption, the above corollary yields a convergence rate of √(log n/n) for univariate trend filtering, which is not tight. A more refined analysis shows the univariate trend filtering estimator to converge at the minimax optimal rate n^{−(2k+2)/(2k+3)}, proved in Tibshirani [40] by using a connection between univariate trend filtering and locally adaptive regression splines, and relying on sharp entropy-based rates for locally adaptive regression splines from Mammen and van de Geer [24]. We note that in a pure graph-centric setting, the latter strategy is not generally applicable, as the notion of a spline function does not obviously extend to the nodes of an arbitrary graph structure.
In the next subsections, we develop more advanced strategies for deriving fast GTF error rates, based on incoherence and entropy. These can provide substantial improvements over the basic error bound established in this subsection, but are only applicable to certain graph models. Fortunately, these include common graphs of interest, such as regular grids. To verify the sharpness of these alternative strategies, we will show that they can be used to recover optimal rates of convergence for trend filtering in the 1d setting.
A key step in the proof of Theorem 3 argues, roughly speaking, that for any x in the row space of D,

ε^⊤x ≤ ‖(D†)^⊤ε‖_∞ · ‖Dx‖₁ ≤ σM√(2 log r) · ‖Dx‖₁,   (14)

the second inequality holding with high probability, where D† denotes the pseudoinverse of D. The second bound holds by a standard result on maxima of Gaussians (recall that M is the largest ℓ₂ norm of the columns of D†). The first bound uses Hölder's inequality; note that it applies to any such x, i.e., it does not use any information about the distribution of ε, or the properties of D. The next lemma reveals a potential advantage that can be gained from replacing the bound (14), stemming from Hölder's inequality, with a "linearized" bound.
Denote by S the row space of D, and assume that, with high probability,

ε^⊤x ≤ A‖x‖₂ + B‖Dx‖₁ for all x ∈ S,   (15)

where A, B ≥ 0. With λ = Θ(B), the generalized lasso estimate β̂ satisfies

(1/n)‖β̂ − β₀‖₂² = O_P( (nullity(D) + A²)/n + (B/n)·‖Dβ₀‖₁ ).

The inequality in (15) is referred to as a "linearized" bound because its right-hand side is a linear function of ‖Dx‖₁. Indeed, with A = 0 and B = σM√(2 log r), it encompasses the bound in (14) as a special case, and the result of Lemma 5 reduces to that of Theorem 3. But the result in Lemma 5 can be much stronger, if the terms can be adjusted so that B is smaller than σM√(2 log r), while A remains small. Such an arrangement is possible for certain operators D; e.g., it is possible under an incoherence-type assumption on D.
Let q = rank(D), and let ξ₁ ≤ ⋯ ≤ ξ_q denote the nonzero singular values of D, in increasing order. Also let u₁, …, u_q be the corresponding left singular vectors. Assume that these vectors are incoherent:

‖u_i‖_∞ ≤ μ/√n, i = 1, …, q,

for some constant μ ≥ 1. For i₀ ∈ {1, …, q}, let λ be chosen on the order of (μ√(log r)/√n)·(Σ_{i>i₀} ξ_i^{−2})^{1/2}. Then the generalized lasso estimate β̂ has average squared error

(1/n)‖β̂ − β₀‖₂² = O_P( (nullity(D) + i₀)/n + (λ/n)·‖Dβ₀‖₁ ).

Theorem 6 is proved by leveraging the linearized bound (15), which holds under the incoherence condition on the singular vectors of D. Compared to the basic result in Theorem 3, the result in Theorem 6 is clearly stronger, as it allows us to replace M, which can grow like the reciprocal of the minimum nonzero singular value of D, with something akin to the average reciprocal of larger singular values. But it does, of course, also make stronger assumptions (incoherence). It is interesting to note that the truncated singular value functional appearing in the theorem was also determined to play a leading role in error bounds for a graph Fourier based scan statistic in the hypothesis testing framework [32].
Applying the above theorem to the GTF estimator requires knowledge of the singular vectors of Δ^{(k+1)}, the (k+1)st order graph difference operator. The validity of an incoherence assumption on these singular vectors depends on the graph in question. When k is odd, these singular vectors are eigenvectors of the Laplacian L; when k is even, they are left singular vectors of the edge incidence matrix. Loosely speaking, these vectors will be incoherent when neighborhoods of different vertices look roughly the same. Most social networks will have this property for the bulk of their vertices (i.e., with the exception of a small number of high degree vertices). Grid graphs also have this property. First, we consider trend filtering over a 1d grid, i.e., a chain (which, recall, is essentially equivalent to univariate trend filtering).

Consider the GTF estimator β̂ of order k over a chain graph, i.e., a 1d grid graph. Letting C_n = n^k‖Δ^{(k+1)}β₀‖₁, and choosing λ as in Theorem 6, the estimator (here, essentially, the univariate trend filtering estimator) converges at the rate n^{−(2k+2)/(2k+3)}, up to log factors, whenever C_n is bounded.

We note that the above corollary essentially recovers the optimal rate of convergence for the univariate trend filtering estimator, for all orders k. (To be precise, it studies GTF on a chain graph instead, but this is basically the same problem.) When C_n is assumed bounded, a natural assumption in the univariate setting, the corollary shows the estimator to converge at the rate n^{−(2k+2)/(2k+3)}; ignoring the log factor, this matches the minimax optimal rate as established in Tibshirani [40], Wang et al. [46]. Importantly, the proof of Corollary 7, unlike that used in previous works, is free from any dependence on univariate spline functions; it is completely graph-theoretic, and only uses the incoherence properties of the 1d grid graph. The strength of this approach is its wider applicability, which we demonstrate by moving up to 2d grids.
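The incoherence of the chain graph is easy to verify numerically: the eigenvectors of the path Laplacian are discrete cosines, so every entry is at most √(2/n) in magnitude, i.e., the incoherence constant is about √2 uniformly in n. This check is our own sketch:

```python
import numpy as np

# Incoherence of the path-graph Laplacian eigenvectors: each orthonormal
# eigenvector is a discrete cosine with sup-norm at most sqrt(2/n),
# so sqrt(n) * max-entry stays near sqrt(2) for all n.
n = 100
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1
_, V = np.linalg.eigh(L)        # columns are orthonormal eigenvectors
mu = np.sqrt(n) * np.abs(V).max()
print(mu)                        # ~ sqrt(2) ~ 1.414
```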
Consider the GTF estimator β̂ of order k, over a 2d grid graph of size √n × √n. Letting C_n = n^{k/2}‖Δ^{(k+1)}β₀‖₁, and choosing λ as in Theorem 6, the estimator satisfies an error bound of the same form as in Corollary 7, but with a faster rate in n.

The 2d result in Corollary 8 is written in a form that mimics the 1d result in Corollary 7, as we claim that the analog of boundedness of n^k‖Δ^{(k+1)}β₀‖₁ in 1d is boundedness of n^{k/2}‖Δ^{(k+1)}β₀‖₁ in 2d (this is because 1/√n is the distance between adjacent 2d grid points, when the grid is viewed as a 2d lattice over [0, 1]²). Thus, under the appropriate boundedness condition, the 2d rate shows improvement over the 1d rate, which makes sense, since regularization here is being enforced in a richer manner. It is worthwhile highlighting the result for k = 0 in particular: this says that, when the sum of absolute discrete differences is bounded over a 2d grid, the 2d fused lasso (i.e., 2d total variation denoising) achieves an error rate close to 1/n, up to log factors. This is faster than the n^{−2/3} rate for the 1d fused lasso, when the sum of absolute differences is bounded. Rates for higher dimensional grid graphs follow from analogous arguments, but we omit the details.
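For concreteness, the first-order graph difference operator of a 2d grid (one row per edge, with +1/−1 on the edge's endpoints) can be built as follows; this is our own minimal sketch, with the function name `grid_incidence` ours:

```python
import numpy as np

def grid_incidence(h, w):
    """Edge incidence (first-order graph difference) matrix D for an
    h-by-w 2d grid: one row per edge, +1/-1 on the edge's endpoints.
    Note D.T @ D recovers the graph Laplacian."""
    def idx(i, j):
        return i * w + j
    rows = []
    for i in range(h):
        for j in range(w):
            if j + 1 < w:    # horizontal edge to the right neighbor
                r = np.zeros(h * w)
                r[idx(i, j)], r[idx(i, j + 1)] = 1, -1
                rows.append(r)
            if i + 1 < h:    # vertical edge to the bottom neighbor
                r = np.zeros(h * w)
                r[idx(i, j)], r[idx(i + 1, j)] = 1, -1
                rows.append(r)
    return np.array(rows)

D = grid_incidence(4, 5)
print(D.shape)   # (31, 20): 4*4 horizontal + 3*5 vertical edges
```

Higher-order GTF operators over the grid then follow by alternating products of D and its transpose, per the recursive definition of Δ^{(k+1)}.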
A different, "fractional" bound on the Gaussian contrast ε^⊤x, over x in the row space S of D, provides an alternate route to deriving sharper rates. This style of bound is inspired by the seminal work of van de Geer [43].

Assume that, for a constant K > 0 and an exponent 0 < w < 2,

ε^⊤x ≤ K·‖Dx‖₁^{w/2}·‖x‖₂^{1−w/2} for all x ∈ S,   (16)

where, recall, S denotes the row space of D. Then, with an appropriate choice of λ depending on K and w, the generalized lasso estimate β̂ satisfies a correspondingly sharper error bound.
The main motivation for bounds of the form (16) is that they follow from entropy bounds on the set S. Recall that for a set S, the covering number N(δ, S, ‖·‖) is the fewest number of balls of radius δ that cover S, with respect to the norm ‖·‖; the log covering or entropy number is log N(δ, S, ‖·‖). In the next result, we make the connection between entropy and fractional bounds precise; this follows closely from Lemma 3.5 of van de Geer [43].

Suppose that there exists a constant E > 0 such that, for n large enough,

log N(δ, S, ‖·‖₂) ≤ E·(1/δ)^w   (17)

for all δ > 0, where E can depend on n.