The objectives of these transforms are to sparsely represent different classes of graph signals and/or to efficiently reveal relevant structural properties of high-dimensional data on graphs. As we move forward, it is important both to test these transforms on myriad applications and to develop additional theory to help answer the question of which transforms are best suited to which types of data.
Uncertainty principles are an important tool in designing and evaluating linear transforms for processing “classical” signals such as audio signals, time series, and images residing on Euclidean domains. It is desirable that the dictionary atoms be jointly localized in time and frequency, and uncertainty principles characterize the resolution tradeoff between these two domains. Moreover, while “the uncertainty principle is [often] used to show that certain things are impossible,” Donoho and Stark present “examples where the generalized uncertainty principle shows something unexpected is possible; specifically, the recovery of a signal or image despite significant amounts of missing information.” In particular, uncertainty principles can provide guarantees that if a signal has a sparse decomposition in a dictionary of incoherent atoms, this is indeed a unique representation that can be recovered via optimization [26, 27]. This idea underlies the recent wave of sparse signal processing techniques, with applications such as denoising, source separation, inpainting, and compressive sensing. While there is still limited theory showing that different mathematical classes of graph signals are sparsely represented by the recently proposed transforms (see one preliminary work along these lines), there is far more empirical work showing the potential of these transforms to sparsely represent graph signals in various applications.
Many of the multiscale transforms designed for graph signals attempt to leverage intuition from signal processing techniques designed for signals on Euclidean data domains by generalizing fundamental operators and transforms to the graph setting (e.g., by checking that they correspond on a ring graph). While some intuition, such as the notion of filtering with a Fourier basis of functions that oscillate at different rates, carries over to the graph setting, the irregular structure of the graph domain often restricts our ability to generalize ideas. One prime example is the lack of a shift-invariant notion of translation of a graph signal. As shown in [32, 33] and discussed in [23, Section 3.2], the concentration of the Fourier basis functions is another example where the intuition does not carry over directly. Complex exponentials, the basis functions for the classical Fourier transform, have global support across the real line. On the other hand, the eigenvectors of the combinatorial or normalized graph Laplacians, which are most commonly used as the basis functions for a graph Fourier transform, are sometimes localized to small regions of the graph. Because the incoherence between the Fourier basis functions and the canonical basis of Kronecker deltas underlies many uncertainty principles, we demonstrate this issue with a short example.
Motivating Example (Part I: Laplacian eigenvector localization).
Let us consider the two manifolds (surfaces) embedded in $\mathbb{R}^3$ and shown in the first row of Figure 1. The first one is a flat square. The second is identical, except at the center, where it contains a spike. We sample both of these manifolds uniformly across the $x$-$y$ plane and create a graph by connecting nearest neighbors with weights depending on the distance between the sampled points. The energy of each Laplacian eigenvector of the graph arising from the first manifold is not concentrated on any particular vertex; i.e., $\|u_\ell\|_\infty$ is small for every eigenvector $u_\ell$. However, the graph arising from the second manifold does have a few eigenvectors, such as eigenvector 3 shown in the middle row of Figure 1, whose energy is highly concentrated on the region of the spike; i.e., $\|u_3\|_\infty$ is large. Yet, the Laplacian eigenvectors of this second graph whose energy resides primarily on the flatter regions of the manifold, such as eigenvector 17 shown in the bottom row of Figure 1, are not too concentrated on any single vertex. Rather, they more closely resemble some of the Laplacian eigenvectors of the graph arising from the first manifold.
Below we discuss three different families of uncertainty principles, and their extensions to the graph setting, both in prior work and in this contribution.
The first family of uncertainty principles measures the spreading around some reference point, usually the mean position of the energy contained in the signal. The well-known Heisenberg uncertainty principle [34, 35] belongs to this family. It views the squared modulus of the signal in both the time and Fourier domains as energy probability density functions, and takes the variances of those energy distributions as measures of the spreading in each domain. The uncertainty principle states that the product of the variances in the time and Fourier domains cannot be arbitrarily small. The generalization of this uncertainty principle to the graph setting is complex, since there does not exist a simple formula for the mean value or the variance of graph signals, in either the vertex or the graph spectral domain. For unweighted graphs, Agaskar and Lu [36, 37, 38] also view the squared modulus of the signal in the vertex domain as an energy probability density function and use the geodesic graph distance (shortest number of hops) to define the spread of a graph signal around a given center vertex. For the spread of a signal in the graph spectral domain, Agaskar and Lu use the normalized variation $\frac{f^\top L f}{\|f\|_2^2}$, which captures the smoothness of a signal. They then specify uncertainty curves that characterize the tradeoff between the smoothness of a graph signal and its localization in the vertex domain. This idea is generalized to weighted graphs in subsequent work. As pointed out there, the tradeoff between smoothness and localization in the vertex domain is intuitive, as a signal that is smooth with respect to the graph topology cannot feature values that decay too quickly from the peak value. However, as shown in Figure 1 (and subsequent examples in Table 1), graph signals can indeed be simultaneously highly localized or concentrated in both the vertex domain and the graph spectral domain. This discrepancy arises because the normalized variation used as the spectral spread is one method to measure the spread of the spectral representation around the eigenvalue 0, rather than around some mean of that signal in the graph spectral domain.
In fact, using this notion of spectral spread, the graph signal with the highest spectral spread on a graph is the graph Laplacian eigenvector associated with the highest eigenvalue. The graph spectral representation of that signal is a Kronecker delta whose energy is completely localized at a single eigenvalue; one might argue that its spread should in fact be zero. So, in summary, while there does exist a tradeoff between the smoothness of a graph signal and its localization around any given center vertex in the vertex domain, the classical idea that a signal cannot be simultaneously localized in the time and frequency domains does not always carry over to the graph setting. While certainly an interesting avenue for continued investigation, we do not discuss uncertainty principles based on spreads in the vertex and graph spectral domains any further in this paper.
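The observation above can be checked numerically. The sketch below is hypothetical code (not from the paper), assuming spectral spread is measured by the normalized variation $f^\top L f / \|f\|_2^2$: the maximizer of this spread is the Laplacian eigenvector of the largest eigenvalue, whose graph Fourier transform is nonetheless a Kronecker delta.

```python
import numpy as np

# Hypothetical sketch: the normalized variation f^T L f / ||f||_2^2 on a
# small path graph. Its maximizer over all signals is the eigenvector of
# the largest Laplacian eigenvalue, yet that signal's graph Fourier
# transform is a Kronecker delta (fully localized spectrally).
N = 12
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(1)) - W
lam, U = np.linalg.eigh(L)

def normalized_variation(f):
    return float(f @ L @ f) / float(f @ f)

u_max = U[:, -1]                      # eigenvector of the largest eigenvalue
rng = np.random.default_rng(2)
spreads = [normalized_variation(f) for f in rng.standard_normal((100, N))]

print(np.isclose(normalized_variation(u_max), lam[-1]))  # True (Rayleigh quotient)
print(max(spreads) <= lam[-1] + 1e-9)                    # True: no signal exceeds it
# Its spectral representation is a delta: all energy at one eigenvalue.
print(np.isclose(np.linalg.norm(U.T @ u_max, np.inf), 1.0))  # True
```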
The second family of uncertainty principles involves the absolute sparsity or concentration of a signal. The key quantities are typically either support measures counting the number of non-zero elements, or concentration measures, such as $\ell^p$-norms. An important distinction is that these sparsity and concentration measures are not localization measures. They can give the same values for different signals, independent of whether the predominant signal components are clustered in a small region of the vertex domain or spread across different regions of the graph. A recent work from the graph signal processing literature that falls into this family is that of Tsitsvero et al., who propose an uncertainty principle characterizing how jointly concentrated graph signals can be in the vertex and spectral domains. Generalizing prolate spheroidal wave functions, their notion of concentration is based on the percentage of energy of a graph signal that is concentrated on a given set of vertices in the vertex domain and a given set of frequencies in the graph spectral domain.
Since we can interpret signals defined on graphs as finite dimensional vectors with well-defined $\ell^p$-norms, we can also directly apply the results of existing uncertainty principles for finite dimensional signals. As one example, the Elad-Bruckstein uncertainty principle states that if $\alpha$ and $\beta$ are the coefficients of a vector in two different orthonormal bases, then

$$\frac{1}{2}\left(\|\alpha\|_0 + \|\beta\|_0\right) \geq \sqrt{\|\alpha\|_0\,\|\beta\|_0} \geq \frac{1}{\mu}, \qquad (1)$$
where $\mu$ is the maximum magnitude of the inner product between any vector in the first basis and any vector in the second basis. In Section 3.1, we apply (1) to graph signals by taking one basis to be the canonical basis of Kronecker delta functions in the graph vertex domain and the other to be a Fourier basis of graph Laplacian eigenvectors. We also apply other such finite dimensional uncertainty principles to the graph setting. In Section 3.2, we adapt the Hausdorff-Young inequality [43, Section IX.4], a classical result for infinite dimensional signals, to the graph setting. These results typically depend on the mutual coherence between the graph Laplacian eigenvectors and the canonical basis of deltas. For the special case of shift-invariant graphs with circulant graph Laplacians [44, Section 5.1], such as ring graphs, these bases are incoherent, and we can attain meaningful uncertainty bounds. However, for less homogeneous graphs (e.g., a graph with a vertex of much higher or lower degree than the other vertices), the two bases can be more coherent, leading to weaker bounds. Moreover, as we discuss in Section 2, the bounds are global, so even if the majority of a graph is, for example, very homogeneous, inhomogeneity in one small area can prevent the result from informing the behavior of graph signals across the rest of the graph.
The third family of uncertainty principles characterizes a single joint representation of time and frequency. The short-time Fourier transform (STFT) is an example of a time-frequency representation that projects a function $f$ onto a set of translated and modulated copies of a window $g$. Usually, $g$ is a function localized in the time-frequency plane, for example a Gaussian, vanishing away from some known reference point in the joint time and frequency domain. Hence, this transformation reveals local properties in time and frequency of $f$ by separating the time-frequency domain into regions where the translated and modulated copies of $g$ are localized. This representation obeys an uncertainty principle: the STFT coefficients cannot be arbitrarily concentrated. This can be shown by estimating the different $\ell^p$-norms of this representation (note that the concentration measures of the second family of uncertainty principles are used here). For example, Lieb proves a concentration bound on the ambiguity function (i.e., the STFT coefficients of the STFT atoms). Lieb's approach is more general than the Heisenberg uncertainty principle, because it handles the case where the signal is concentrated around multiple different points (see, e.g., the signal in Figure 2).
In Section 5, we generalize Lieb's uncertainty principle to the graph setting to provide upper bounds on the concentration of the transform coefficients of any graph signal under (i) any frame of dictionary atoms, and (ii) a special class of dictionaries called localized spectral graph filter frames, whose atoms are of the form $T_i g_k$, where $T_i$ is a localization operator that centers on vertex $i$ a pattern described in the graph spectral domain by the kernel $\hat{g}_k$.
While the second family of uncertainty principles above yields global uncertainty principles, we can generalize the third family to the graph setting in a way that yields local uncertainty principles. In the classical Euclidean setting, the underlying domain is homogeneous, and thus uncertainty principles apply to all signals equally, regardless of where on the real line they are concentrated. However, in the graph setting, the underlying domain is irregular, and a change in the graph structure in a single small region of the graph can drastically affect the uncertainty bounds. For instance, the second family of uncertainty principles all depend on the coherence between the graph Laplacian eigenvectors and the canonical basis of Kronecker deltas, which is a global quantity in the sense that it incorporates local behavior from all regions of the graph. To see how this can limit the usefulness of such global uncertainty principles, we return to the motivating example from above.
Motivating Example (Part II: Global versus local uncertainty principles).
In Section 3.1, we show that a direct application of a known result to the graph setting yields the following uncertainty relationship, which falls into the second family described above, for any signal $f \in \mathbb{C}^N$:

$$\frac{\|f\|_2}{\|f\|_1} \cdot \frac{\|\hat{f}\|_2}{\|\hat{f}\|_1} \leq \mu_{\mathcal{G}}. \qquad (2)$$
Each fraction on the left-hand side of (2) is a measure of concentration that lies in the interval $\left[\frac{1}{\sqrt{N}}, 1\right]$ ($N$ is the number of vertices), and the coherence $\mu_{\mathcal{G}}$ between the graph Laplacian eigenvectors and the Kronecker deltas on the right-hand side lies in the same interval. On the graph arising from manifold 1, the coherence is close to $\frac{1}{\sqrt{N}}$, and (2) yields a meaningful uncertainty principle. However, on the graph arising from manifold 2, the coherence is close to 1 due to the localized eigenvector 3 in Figure 1; (2) is then trivially true for any signal in $\mathbb{C}^N$ from the properties of vector norms, and thus the uncertainty principle is not particularly useful. Nevertheless, far away from the spike, signals should behave similarly on manifold 2 to how they behave on manifold 1. Part of the issue here is that the uncertainty relationship holds for any graph signal $f$, even those concentrated on the spike, which we know can be jointly localized in both the vertex and graph spectral domains. An alternative approach is to develop a local uncertainty principle that characterizes the uncertainty in different regions of the graph on a separate basis. Then, if the energy of a given signal is concentrated on a more homogeneous part of the graph, the concentration bounds will be tighter.
In Section 6, we generalize the approach of Lieb to build a local uncertainty principle that bounds the concentration of the analysis coefficients of each atom of a localized graph spectral filter frame in terms of quantities that depend on the local structure of the graph around the center vertex of the given atom. Thus, atoms localized to different regions of the graph feature different concentration bounds. Such local uncertainty principles also have constructive applications, and we conclude with an example of non-uniform sampling for graph inpainting, where the varying uncertainty levels across the graph suggest a strategy of sampling more densely in areas of higher uncertainty. For example, if we were to take measurements of a smooth signal on manifold 2 in Figure 1, this method would lead to a higher probability of sampling signal values near the spike, and a lower probability of sampling signal values in the more homogeneous flat parts of the manifold, where reconstruction of the missing signal values is inherently easier.
2 Notation and graph signal concentration
In this section, we introduce some notation and illustrate further how certain intuition from signal processing on Euclidean spaces does not carry over to the graph setting.
Throughout the paper, we consider signals residing on an undirected, connected, and weighted graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}, W\}$, where $\mathcal{V}$ is a finite set of vertices ($|\mathcal{V}| = N$), $\mathcal{E}$ is a finite set of edges, and $W$ is the weight or adjacency matrix. The entry $W_{ij}$ of $W$ represents the weight of an edge connecting vertices $i$ and $j$. A graph signal is a function $f : \mathcal{V} \rightarrow \mathbb{C}$ assigning one value to each vertex. Such a signal can be written as a vector of size $N$, with the $n$th component representing the signal value at the $n$th vertex. The generalization of Fourier analysis to the graph setting requires a graph Fourier basis. The most commonly used graph Fourier bases are the eigenvectors of the combinatorial (or non-normalized) graph Laplacian, defined as $L = D - W$, where $D$ is the diagonal degree matrix with diagonal entries $D_{ii} = \sum_j W_{ij}$, or the eigenvectors of the normalized graph Laplacian $\tilde{L} = D^{-1/2} L D^{-1/2}$. However, the eigenbases (or Jordan eigenbases) of other matrices, such as the adjacency matrix, have also been used as graph Fourier bases [46, 2]. All of our results in this paper hold for any choice of the graph Fourier basis. For concreteness, we use the combinatorial Laplacian, which has a complete set of orthonormal eigenvectors $\{u_\ell\}_{\ell = 0, 1, \ldots, N-1}$ associated with the real eigenvalues $0 = \lambda_0 < \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_{N-1}$. The graph Fourier transform $\hat{f}$ of a function $f$ defined on a graph is the projection of the signal onto the orthonormal graph Fourier basis, which we take to be the eigenvectors of the graph Laplacian:

$$\hat{f}(\lambda_\ell) := \langle f, u_\ell \rangle = \sum_{n=1}^{N} f(n)\, u_\ell^*(n).$$
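As a concrete sketch of these definitions (a hypothetical example, not code from the paper), the combinatorial Laplacian and the graph Fourier transform can be computed with a few lines of linear algebra:

```python
import numpy as np

# Hypothetical example: combinatorial Laplacian L = D - W of a 4-vertex
# ring graph, its orthonormal eigendecomposition, and the graph Fourier
# transform (GFT) as the projection onto the eigenvectors.
W = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
D = np.diag(W.sum(axis=1))
L = D - W

lam, U = np.linalg.eigh(L)   # eigenvalues ascending, eigenvectors in columns

def gft(f):
    """Graph Fourier transform: hat_f(lambda_l) = <f, u_l>."""
    return U.conj().T @ f

f = np.array([1.0, 2.0, 3.0, 4.0])
f_hat = gft(f)

# The GFT is orthonormal, so it preserves energy (Parseval) and is invertible.
print(np.allclose(np.linalg.norm(f), np.linalg.norm(f_hat)))  # True
print(np.allclose(U @ f_hat, f))                              # True
```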
2.2 Concentration measures
In order to discuss uncertainty principles, we must first introduce some concentration/sparsity measures. Throughout the paper, we use the terms sparsity and concentration somewhat interchangeably, but we reserve the term spread to describe the spread of a function around some mean or center point, as discussed in the first family of uncertainty principles in Section 1. The first concentration measure is the support measure of $f$, denoted $\|f\|_0$, which counts the number of non-zero elements of $f$. The second concentration measure is the Shannon entropy, which is used often in information theory and physics:

$$H(f) := -\sum_{x} \frac{|f(x)|^2}{\|f\|_2^2} \ln \frac{|f(x)|^2}{\|f\|_2^2},$$

where the variable $x$ has values in $\mathcal{V}$ for functions on graphs and in $\{\lambda_0, \lambda_1, \ldots, \lambda_{N-1}\}$ in the graph Fourier representation. Another class of concentration measures is the $\ell^p$-norms, with $p \neq 2$. For $p > 2$, the sparsity of $f$ may be measured using the following quantity:

$$s_p(f) := \frac{\|f\|_p}{\|f\|_2}.$$

For any vector $f$ and any $p > 2$, $s_p(f) \in \left[N^{\frac{1}{p} - \frac{1}{2}}, 1\right]$. If $s_p(f)$ is high (close to 1), then $f$ is sparse, and if $s_p(f)$ is low, then $f$ is not concentrated. Figure 2 uses some basic signals to illustrate this notion of concentration for different values of $p$. In addition to sparsity, one can also relate $\ell^p$-norms to the Shannon entropy via Rényi entropies (see, e.g., [48, 49] for more details).
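As a rough illustration (hypothetical code, assuming the concentration measure $s_p(f) = \|f\|_p / \|f\|_2$ for $p > 2$), the two extremes are a Kronecker delta and a constant signal:

```python
import numpy as np

# Sketch of an l^p concentration measure, assuming s_p(f) = ||f||_p / ||f||_2
# for p > 2: s_p equals 1 for a Kronecker delta (maximally sparse) and
# N^(1/p - 1/2) for a constant signal (maximally spread).
def s_p(f, p):
    return np.linalg.norm(f, p) / np.linalg.norm(f, 2)

N, p = 100, 4
delta = np.zeros(N)
delta[0] = 1.0                   # maximally concentrated signal
flat = np.ones(N) / np.sqrt(N)   # maximally spread signal

print(s_p(delta, p))                                # 1.0
print(np.isclose(s_p(flat, p), N ** (1/p - 1/2)))   # True
```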
2.3 Concentration of the graph Laplacian eigenvectors
The spectrum of the graph Laplacian replaces the frequencies as coordinates in the Fourier domain. For the special case of shift-invariant graphs with circulant graph Laplacians [44, Section 5.1], the Fourier eigenvectors can still be viewed as pure oscillations. However, for more general graphs (i.e., all but the most highly structured), the oscillatory behavior of the Fourier eigenvectors must be interpreted more broadly. For example, [1, Fig. 3] displays the number of zero crossings of each eigenvector; that is, for each eigenvector, the number of pairs of connected vertices where the signs of the values of the eigenvector at the connected vertices are opposite. It is generally the case that the graph Laplacian eigenvectors associated with larger eigenvalues contain more zero crossings, yielding a notion of frequency to the graph Laplacian eigenvalues. However, despite this broader notion of frequency, the graph Laplacian eigenvectors are not always globally-supported, pure oscillations like the complex exponentials. In particular, they can feature sharp peaks, meaning that some of the Fourier basis elements can be much more similar to an element of the canonical basis of Kronecker deltas on the vertices of the graph. As we will see, uncertainty principles for signals on graphs are highly affected by this phenomenon.
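The zero-crossing count described above can be sketched as follows (hypothetical code; the path graph is chosen because its Laplacian eigenvectors are sampled cosines with exactly $\ell$ sign changes):

```python
import numpy as np

# Illustrative sketch (not the paper's code): count zero crossings of each
# Laplacian eigenvector of a path graph, i.e., the number of edges (i, j)
# with u(i) * u(j) < 0. Eigenvectors of larger eigenvalues cross zero more.
N = 8
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(1)) - W
lam, U = np.linalg.eigh(L)

def zero_crossings(u, W):
    edges = [(i, j) for i in range(N) for j in range(i + 1, N) if W[i, j] > 0]
    return sum(1 for i, j in edges if u[i] * u[j] < 0)

counts = [zero_crossings(U[:, l], W) for l in range(N)]
print(counts)  # [0, 1, 2, 3, 4, 5, 6, 7]: crossings grow with the eigenvalue
```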
One way to compare a graph Fourier basis to the canonical basis is to compute the coherence between these two representations.
Definition 1 (Graph Fourier coherence $\mu_{\mathcal{G}}$).
Let $\mathcal{G}$ be a graph of $N$ vertices. Let $\{\delta_i\}_{i=1,\ldots,N}$ denote the canonical basis of Kronecker deltas and let $\{u_\ell\}_{\ell=0,\ldots,N-1}$ be the orthonormal basis of eigenvectors of the graph Laplacian of $\mathcal{G}$. The graph Fourier coherence is defined as:

$$\mu_{\mathcal{G}} := \max_{i,\ell} \left| \langle \delta_i, u_\ell \rangle \right| = \max_{i,\ell} |u_\ell(i)|.$$
This quantity measures the similarity between the two sets of vectors. If the sets possess a common vector, then $\mu_{\mathcal{G}} = 1$ (the maximum possible value for $\mu_{\mathcal{G}}$). If the two sets are maximally incoherent, such as the canonical and Fourier bases in the standard discrete setting, then $\mu_{\mathcal{G}} = 1/\sqrt{N}$ (the minimum possible value).
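A small numerical sketch (hypothetical, not from the paper) contrasts the two extremes. For a ring graph, the DFT is a valid Laplacian eigenbasis (the Laplacian is circulant), so the coherence attains its minimum $1/\sqrt{N}$; for a star graph, the eigenvector of the largest eigenvalue peaks at the center vertex and pushes the coherence toward 1:

```python
import numpy as np

N = 16

# Ring graph: circulant Laplacian, diagonalized by the DFT basis, whose
# entries all have magnitude 1/sqrt(N) -- minimal coherence.
ring = np.zeros((N, N))
for i in range(N):
    ring[i, (i + 1) % N] = ring[(i + 1) % N, i] = 1.0
L_ring = np.diag(ring.sum(1)) - ring
U = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
eigvals = 2 - 2 * np.cos(2 * np.pi * np.arange(N) / N)
assert np.allclose(L_ring @ U, U * eigvals)    # U really is an eigenbasis
mu_ring = np.abs(U).max()

# Star graph: one center connected to N-1 leaves; the top eigenvector is
# strongly peaked at the center, so the coherence is close to 1.
star = np.zeros((N, N))
star[0, 1:] = star[1:, 0] = 1.0
L_star = np.diag(star.sum(1)) - star
_, V = np.linalg.eigh(L_star)
mu_star = np.abs(V).max()

print(np.isclose(mu_ring, 1 / np.sqrt(N)))   # True
print(mu_star > 0.9)                         # True
```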
Because the graph Laplacian matrix encodes the weights of the edges of the graph, the coherence $\mu_{\mathcal{G}}$ clearly depends on the structure of the underlying graph. It remains an open question exactly how structural properties of weighted graphs, such as regularity, clustering, modularity, and other spectral properties, can be linked to the concentration of the graph Laplacian eigenvectors. For certain classes of random graphs [50, 51, 52] or large regular graphs, the eigenvectors have been shown to be non-localized, globally oscillating functions (i.e., $\mu_{\mathcal{G}}$ is low). Yet, empirical studies show that graph Laplacian eigenvectors can be highly concentrated (i.e., $\mu_{\mathcal{G}}$ can be close to 1), particularly when the degree of a vertex is much higher or lower than the degrees of the other vertices in the graph. The following example illustrates how $\mu_{\mathcal{G}}$ can be influenced by the graph structure.
In this example, we discuss two classes of graphs whose graph Fourier coherence can be close to 1. The first, called comet graphs, are studied in [33, 54]. They are composed of a star with $k$ vertices connected to a center vertex, and a single branch of length greater than one extending from one neighbor of the center vertex (see Figure 3, top). If we fix the length of the longer branch (it has length 10 in Figure 3) and increase $k$, the number of neighbors of the center vertex, the graph Laplacian eigenvector associated with the largest eigenvalue approaches a Kronecker delta centered at the center vertex of the star. As a consequence, the coherence $\mu_{\mathcal{G}}$ between the graph Fourier and canonical bases approaches 1 as $k$ increases.
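The comet construction can be sketched as follows (hypothetical code; the exact edge layout of Figure 3 may differ):

```python
import numpy as np

# Sketch of a comet graph: a star with k neighbors attached to a center
# vertex, plus a path of 10 extra vertices extending from one neighbor.
# As k grows, the top Laplacian eigenvector approaches a Kronecker delta
# at the center vertex, so the coherence mu_G approaches 1.
def comet_coherence(k, tail=10):
    N = 1 + k + tail                      # center + star neighbors + path
    W = np.zeros((N, N))
    for j in range(1, k + 1):             # star edges to the center (vertex 0)
        W[0, j] = W[j, 0] = 1.0
    chain = [1] + list(range(k + 1, N))   # path extends from neighbor 1
    for a, b in zip(chain[:-1], chain[1:]):
        W[a, b] = W[b, a] = 1.0
    L = np.diag(W.sum(1)) - W
    _, U = np.linalg.eigh(L)
    return np.abs(U).max()

mus = [comet_coherence(k) for k in (3, 10, 30, 100)]
print([round(m, 3) for m in mus])  # coherence approaches 1 as k grows
```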
The second class are the modified path graphs, which we use several times in this contribution. We start with a standard path graph of 10 nodes, equally spaced (all edge weights are equal to one), and we move the first node out to the left; i.e., we reduce the weight between the first two nodes (see Figure 3, bottom). The weight is related to the distance by $W_{12} = 1/d(1,2)$, with $d(1,2)$ being the distance between nodes 1 and 2. When the weight between nodes 1 and 2 decreases, the eigenvector associated with the largest eigenvalue of the Laplacian becomes more concentrated, which increases the coherence $\mu_{\mathcal{G}}$. These two examples of simple families of graphs illustrate that the topology of the graph can impact the graph Fourier coherence and, in turn, uncertainty principles that depend on the coherence.
In Figure 4, we display the eigenvector associated with the largest graph Laplacian eigenvalue for a modified path graph of 100 nodes, for several values of the weight $W_{12}$. Observe that the shape of the eigenvector has a sharp local change at node 1.
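A sketch of the modified path graph (hypothetical code, assuming the weight-distance relation $W_{12} = 1/d(1,2)$ used above) confirms that the coherence grows as the first node moves away:

```python
import numpy as np

# Sketch of the modified path graph: a 10-node path with unit weights,
# except the first edge, whose weight is 1/d for a distance d between
# nodes 1 and 2 (assumed relation). As d grows, mu_G increases toward 1.
def modified_path_coherence(d, N=10):
    W = np.zeros((N, N))
    for i in range(N - 1):
        W[i, i + 1] = W[i + 1, i] = 1.0
    W[0, 1] = W[1, 0] = 1.0 / d
    L = np.diag(W.sum(1)) - W
    _, U = np.linalg.eigh(L)
    return np.abs(U).max()

mus = [modified_path_coherence(d) for d in (1, 10, 100, 1000)]
print([round(m, 3) for m in mus])  # coherence grows as the distance grows
```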
Example 1 demonstrates an important point to keep in mind. A small local change in the graph structure can greatly affect the behavior of one eigenvector, and, in turn, a global quantity such as . However, intuitively, a small local change in the graph should not drastically change the processing of signal values far away, for example in a denoising or inpainting task. For this reason, in Section 6, we introduce a notion of local uncertainty that depicts how the graph is behaving locally.
Note that it is not only special or pathological classes of graphs that yield highly localized graph Laplacian eigenvectors. Rather, graphs arising in applications such as sensor or transportation networks, or graphs constructed from sampled manifolds (such as the graph sampled from manifold 2 in Figure 1), can also have graph Fourier coherences close to 1 (see, e.g., [23, Section 3.2] for further examples).
3 Global uncertainty principles relating the concentration of graph signals in two domains
In this section, we derive basic uncertainty principles using concentration measures and highlight the limitations of those uncertainty principles.
3.1 Direct applications of uncertainty principles for discrete signals
We start by applying three known uncertainty principles for discrete signals to the graph setting.
Theorem 1. Let $f$ be a nonzero signal defined on a connected, weighted, undirected graph $\mathcal{G}$, let $\{u_\ell\}$ be a graph Fourier basis for $\mathcal{G}$, and let $\mu_{\mathcal{G}} = \max_{i,\ell} |u_\ell(i)|$. We have the following four uncertainty principles:

$$\frac{1}{2}\left(\|f\|_0 + \|\hat{f}\|_0\right) \geq \sqrt{\|f\|_0\,\|\hat{f}\|_0} \geq \frac{1}{\mu_{\mathcal{G}}}; \qquad (4)$$

$$\frac{\|f\|_1}{\|f\|_2} \cdot \frac{\|\hat{f}\|_1}{\|\hat{f}\|_2} \geq \frac{1}{\mu_{\mathcal{G}}}; \qquad (5)$$

$$H(f) + H(\hat{f}) \geq -2 \ln \mu_{\mathcal{G}}; \qquad (6)$$

$$\|f\|_{\ell^2(S)} \leq |S|^{\frac{1}{p} - \frac{1}{2}}\, \mu_{\mathcal{G}}^{\frac{2}{p} - 1}\, \|\hat{f}\|_p, \quad \text{for any } S \subseteq \mathcal{V} \text{ and } p \in [1, 2]. \qquad (7)$$
The first uncertainty principle (4) is given by a direct application of the Elad-Bruckstein inequality. It states that the sparsity of a function in one representation limits the sparsity in a second representation. As displayed in (1), this result holds for representations in any two bases. As we have seen, if we focus on the canonical basis of deltas and the graph Fourier basis, the coherence depends on the graph topology. For the ring graph, $\mu_{\mathcal{G}} = 1/\sqrt{N}$, and we recover the result from the standard discrete case (regular sampling, periodic boundary conditions). However, for graphs where $\mu_{\mathcal{G}}$ is closer to 1, the uncertainty principle (4) is much weaker and therefore less informative. For example, $\|f\|_0\, \|\hat{f}\|_0 \geq 1$ is trivially true for nonzero signals. The same caveat applies to (5) and (6), which follow directly from known finite dimensional uncertainty principles by once again specifying the canonical and graph Fourier bases. The last inequality (7) is an adaptation of [34, Eq. (4.1)] to the graph setting, using the Hausdorff-Young inequality of Theorem 2 (see next section). It states that the energy of a function in a subset $S$ of the domain is bounded from above by the size of the selected subset and the sparsity of the function in the Fourier domain. If the subset is small and the function is sparse in the graph Fourier domain, this uncertainty principle limits the amount of energy of $f$ that fits inside the subset $S$ of $\mathcal{V}$. Because $S$ can be chosen to be a local region of the domain (the graph vertex domain in our case), Folland and Sitaram refer to such principles as “local uncertainty inequalities.” However, the coherence $\mu_{\mathcal{G}}$ in the uncertainty bound is not local, in the sense that it depends on the whole graph structure and not just on the topology of the subgraph containing the vertices in $S$.
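A quick numerical sanity check of the Elad-Bruckstein-type bound (4) on a small graph (hypothetical code, using a tolerance-based count of non-zero entries):

```python
import numpy as np

# Sanity check of sqrt(||f||_0 * ||f_hat||_0) >= 1/mu_G on a path graph,
# where the "0-norms" count entries above a numerical tolerance.
rng = np.random.default_rng(0)
N = 12
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(1)) - W
_, U = np.linalg.eigh(L)
mu = np.abs(U).max()

def l0(x, tol=1e-10):
    return int(np.sum(np.abs(x) > tol))

for _ in range(100):
    f = rng.standard_normal(N)
    f[rng.random(N) < 0.5] = 0.0       # random sparsity pattern
    if l0(f) == 0:
        continue
    f_hat = U.T @ f
    assert np.sqrt(l0(f) * l0(f_hat)) >= 1.0 / mu - 1e-9
print("Elad-Bruckstein-type bound held on all random trials")
```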
The following example illustrates the relation between the graph, the concentration of a specific graph signal, and one of the uncertainty principles from Theorem 1. We return to this example in Section 3.3 to discuss further the limitations of these uncertainty principles featuring $\mu_{\mathcal{G}}$.
Figure 5 shows the computation of the quantities involved in (5), with $f = \delta_i$ and the different graphs taken to be the modified path graphs of Example 1, with different distances between the first two vertices. We show the left-hand side of (5) for two different Kronecker deltas, one centered at vertex 1 and one centered at vertex 10. We have seen in Figure 3 that as the distance between the first two vertices increases, the coherence $\mu_{\mathcal{G}}$ increases, and therefore the lower bound on the right-hand side of (5) decreases. For $\delta_1$, the uncertainty quantity on the left-hand side of (5) follows a similar pattern. The intuition behind this is that as the weight between the first two vertices decreases, a few of the eigenvectors start to have local jumps around the first vertex (see Figure 4). As a result, we can sparsely represent $\delta_1$ as a linear combination of those eigenvectors, and the concentration of its graph Fourier transform is reduced. However, since there are not any eigenvectors that are localized around the last vertex in the path graph, we cannot find a sparse linear combination of the graph Laplacian eigenvectors to represent $\delta_{10}$. Therefore, its uncertainty quantity on the left-hand side of (5) does not follow the behavior of the lower bound.
3.2 The Hausdorff-Young inequalities for signals on graphs
The classical Hausdorff-Young inequality [43, Section IX.4] is a fundamental harmonic analysis result behind the intuition that a high degree of concentration of a signal in one domain (time or frequency) implies a low degree of concentration in the other domain. This relation is used in the proofs of the entropy and $\ell^p$-norm uncertainty principles in the continuous setting. In this section, as we continue to explore the role of $\mu_{\mathcal{G}}$ and the differences between the Euclidean and graph settings, we extend the Hausdorff-Young inequality to graph signals.
Theorem 2. Let $\mu_{\mathcal{G}}$ be the coherence between the graph Fourier and canonical bases of a graph $\mathcal{G}$. Let $p, q$ be such that $\frac{1}{p} + \frac{1}{q} = 1$. For any signal $f$ defined on $\mathcal{G}$ and $1 \leq p \leq 2$, we have

$$\|\hat{f}\|_q \leq \mu_{\mathcal{G}}^{\frac{2}{p} - 1}\, \|f\|_p. \qquad (8)$$
Conversely, for $2 \leq p \leq \infty$, we have

$$\|\hat{f}\|_q \geq \mu_{\mathcal{G}}^{\frac{2}{p} - 1}\, \|f\|_p. \qquad (9)$$
The proof is an extension of the classical proof using the Riesz-Thorin interpolation theorem. In the classical (infinite dimensional) setting, the inequality only depends on $p$ and $q$. On a graph, it also depends on $\mu_{\mathcal{G}}$, and hence on the structure of the graph.
Dividing both sides of each inequality in Theorem 2 by $\|f\|_2 = \|\hat{f}\|_2$ leads to bounds on the concentrations (or sparsity levels) of a graph signal and its graph Fourier transform.
Corollary 1. Let $p, q$ be such that $\frac{1}{p} + \frac{1}{q} = 1$. For any signal $f$ defined on the graph $\mathcal{G}$ and $1 \leq p \leq 2$, we have

$$\frac{\|\hat{f}\|_q}{\|\hat{f}\|_2} \leq \mu_{\mathcal{G}}^{\frac{2}{p} - 1}\, \frac{\|f\|_p}{\|f\|_2}, \qquad \text{and, for } 2 \leq p \leq \infty, \qquad \frac{\|\hat{f}\|_q}{\|\hat{f}\|_2} \geq \mu_{\mathcal{G}}^{\frac{2}{p} - 1}\, \frac{\|f\|_p}{\|f\|_2}.$$
Theorem 2 and Corollary 1 assert that the concentration or sparsity level of a graph signal in one domain (vertex or graph spectral) limits the concentration or sparsity level in the other domain. However, once again, if the coherence $\mu_{\mathcal{G}}$ is close to 1, the result is not particularly informative, as the ratio $\|\hat{f}\|_q / \|\hat{f}\|_2$ is trivially upper bounded by 1 for $q \geq 2$. The following numerical experiment illustrates the quantities involved in the Hausdorff-Young inequalities for graph signals. We again see that as the graph Fourier coherence increases, signals may be simultaneously concentrated in both the vertex domain and the graph spectral domain.
Continuing with the modified path graphs of Examples 1 and 2, we illustrate the bounds of the Hausdorff-Young inequalities for graph signals in Figure 6. For this example, we take the signal to be $\delta_1$, a Kronecker delta centered on the first node of the modified path graph. As a consequence, $\|\delta_1\|_p = 1$ for all $p$, which makes it easier to compare the quantities involved in the inequalities. For this example, the bounds of Theorem 2 are fairly close to the actual values of $\|\hat{\delta}_1\|_q$.
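The Hausdorff-Young-type bound can also be checked on random signals (hypothetical code, assuming the form $\|\hat f\|_q \le \mu_{\mathcal{G}}^{2/p - 1} \|f\|_p$ with $1/p + 1/q = 1$ and $1 \le p \le 2$ discussed above):

```python
import numpy as np

# Sketch verifying a graph Hausdorff-Young-type bound on random signals
# over a modified path graph with a weakened first edge.
rng = np.random.default_rng(1)
N = 10
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
W[0, 1] = W[1, 0] = 0.1                  # weakened first edge
L = np.diag(W.sum(1)) - W
_, U = np.linalg.eigh(L)
mu = np.abs(U).max()

for p in (1.0, 1.25, 1.5, 2.0):
    q = np.inf if p == 1.0 else p / (p - 1)
    for _ in range(50):
        f = rng.standard_normal(N)
        f_hat = U.T @ f
        lhs = np.linalg.norm(f_hat, q)
        rhs = mu ** (2 / p - 1) * np.linalg.norm(f, p)
        assert lhs <= rhs + 1e-9
print("Hausdorff-Young-type bound held on all trials")
```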
Sharpness of the graph Hausdorff-Young inequalities.
For $p = 2$, (8) and (9) become equalities. Moreover, for $p = 1$ or $p = \infty$, there is always at least one signal for which inequality (8) or (9), respectively, becomes an equality. Let $i_0$ and $\ell_0$ satisfy $|u_{\ell_0}(i_0)| = \mu_{\mathcal{G}}$. For $p = 1$, let $f = \delta_{i_0}$. Then $\|f\|_1 = 1$ and $\|\hat{f}\|_\infty = \mu_{\mathcal{G}}$, and thus (8) is tight. For $p = \infty$, let $f = u_{\ell_0}$. Then $\|f\|_\infty = \mu_{\mathcal{G}}$ and $\|\hat{f}\|_1 = 1$, and thus (9) is tight. The red curve and its bound in Figure 6 show the tight case for $p = 1$ and $q = \infty$.
3.3 Limitations of global concentration-based uncertainty principles in the graph setting
The motivation for this section was twofold. First, we wanted to derive uncertainty principles for graph signals analogous to some of those that are so fundamental for signal processing on Euclidean domains. However, we also wanted to highlight the limitations of this approach (the second family of uncertainty principles described in Section 1) in the graph setting. The graph Fourier coherence $\mu_{\mathcal{G}}$ is a global parameter that depends on the topology of the entire graph. Hence, it may be greatly influenced by a small localized change in the graph structure. For example, in the modified path graph examples above, a change in a single edge weight leads to an increased coherence and, in turn, significantly weakens the uncertainty principles characterizing the concentrations of the graph signal in the vertex and spectral domains. Such examples call into question the ability of such global uncertainty principles to accurately describe phenomena on inhomogeneous graphs. This is the primary motivation for our investigation into local uncertainty principles in Section 6. However, before getting there, we consider global uncertainty principles from the third family of uncertainty principles described in Section 1, which bound the concentration of the analysis coefficients of a graph signal in a time-frequency transform domain.
4 Graph signal processing operators and dictionaries
As mentioned in Section 1, uncertainty principles can inform dictionary design. In the next section, we present uncertainty principles characterizing the concentration of the analysis coefficients of graph signals in different transform domains. We focus on three different classes of dictionaries for graph signal analysis: (i) frames, (ii) localized spectral graph filter frames, and (iii) graph Gabor filter bank frames, where localized spectral graph filter frames are a subclass of frames, and graph Gabor filter bank frames are a subclass of localized spectral graph filter frames. In this section, we define these different classes of dictionaries, and highlight some of their mathematical properties. Note that our notation uses dictionary atoms that are double indexed by and , but these could be combined into a single index for the most general case.
Definition 2 (Frame).
A dictionary is a frame if there exist constants and called the lower and upper frame bounds such that for all :
If , the frame is said to be a tight frame.
For more properties of frames, see, e.g., [55, 56, 57]. Most of the recently proposed dictionaries for graph signals are either orthogonal bases (e.g., [6, 15, 20]), which are a subset of tight frames, or overcomplete frames (e.g., [13, 23, 22]).
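The frame inequality of Definition 2 can be checked numerically: for a finite dictionary whose atoms are the columns of a matrix D, the optimal frame bounds are the extreme eigenvalues of the frame operator D D^T. The following sketch (with a randomly generated dictionary, purely for illustration) verifies the sandwich inequality for a random signal.

```python
import numpy as np

# For a finite dictionary D (atoms as columns), the optimal frame bounds A and B
# are the smallest and largest eigenvalues of the frame operator S = D D^T, and
# A ||f||^2 <= sum_k |<f, g_k>|^2 <= B ||f||^2 for every signal f.
rng = np.random.default_rng(0)
N, M = 8, 20
D = rng.standard_normal((N, M))              # M random atoms in R^N (illustrative)

eigs = np.linalg.eigvalsh(D @ D.T)           # eigenvalues of the frame operator
A_low, B_up = eigs[0], eigs[-1]              # optimal lower/upper frame bounds

f = rng.standard_normal(N)
energy = np.sum((D.T @ f) ** 2)              # sum_k |<f, g_k>|^2
```

Since a random Gaussian dictionary with more atoms than dimensions has full rank almost surely, A_low is strictly positive and the dictionary is a frame; A_low equal to B_up would make it tight.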
In order to define localized spectral graph filter frames, we need to first recall one way to generalize the translation operator to the graph setting.
We localize (or translate) a kernel to center vertex by applying the localization operator , whose action is defined as
Note that this generalized localization operator is a kernelized operator. It does not translate an arbitrary signal defined in the vertex domain to different regions of the graph, but rather localizes a pattern defined in the graph spectral domain to be centered at different regions of the graph. The smoothness of the kernel to be localized can be used to bound the localization of the translated kernel around a center vertex ; i.e., if a smooth kernel is localized to center vertex , then the magnitude of decays as the distance between and increases [13, Section 5.2], [23, Section 4.4]. Except for special cases such as when is a circulant graph with and the Laplacian eigenvectors are the DFT basis, the generalized localization operator of Definition 3 is not isometric. Rather, we have
Lemma 1 (, Lemma 1).
For any ,
which yields the following upper bound on the operator norm of :
It is interesting to note that although the norm is not preserved when a kernel is localized on an arbitrary graph, it is preserved on average when the kernel is translated separately to every vertex of the graph:
The following example presents more precise insights on the interplay between the localization operator, the graph structure, and the concentration of localized functions.
Figure 7 illustrates the effect of the graph structure on the norms of localized functions. We take the kernel to be a heat kernel of the form , for some constant . We localize the kernel to be centered at each vertex of the graph with the operator , and we compute and plot the -norms . The figure shows that when a center node and its surrounding vertices are relatively weakly connected, the -norm of the localized heat kernel is large, and when the nodes are relatively well connected, the norm is smaller. Therefore, the norm of the localized heat kernel may be seen as a measure of vertex centrality. (In fact, the squared norm of the localized heat kernel at a vertex is, up to constants, the average diffusion distance from that vertex to all other vertices. It is therefore a genuine measure of centrality.) Moreover, in the case of the heat kernel, we can relate the -norm of to its concentration . Localized heat kernels consist entirely of nonnegative components; i.e., for all and . This property comes from (i) the fact that (see ), and (ii) the non-trivial property that the entries of are always nonnegative for the heat kernel . Since for all and , we have
where the second equality follows from [23, Corollary 1]. Thus, recalling that a large value for means that is concentrated, we can combine (10) and (12) to derive an upper bound on the concentration of :
Thus, serves as a measure of concentration, and according to the numerical experiments of Figure 7, localized heat kernels centered on the relatively well-connected regions of a graph tend to be less concentrated than the ones centered on relatively less well-connected areas. Intuitively, the values of the localized heat kernels can be linked to the diffusion of a unit of energy from the center vertex to surrounding vertices over a fixed time. In the well-connected regions of the graph, energy diffuses faster, making the localized heat kernels less concentrated.
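The observations above can be reproduced numerically. The sketch below assumes the standard generalized-translation normalization T_i g = sqrt(N) * sum_l ghat(lam_l) u_l(i) u_l (the paper's Definition 3 may normalize differently); on a path graph whose last edge is weak, the 2-norm of the localized heat kernel is largest at the weakly connected end vertex, and the squared norms satisfy the average-preservation identity sum_i ||T_i g||_2^2 = N * sum_l |ghat(lam_l)|^2.

```python
import numpy as np

# Path graph on 6 vertices with one weak edge (a toy "modified path graph").
N = 6
W = np.diag(np.ones(N - 1), 1)
W[N - 2, N - 1] = 0.01                       # weak edge to the last vertex
W = W + W.T
L = np.diag(W.sum(axis=1)) - W               # combinatorial Laplacian
lam, U = np.linalg.eigh(L)

ghat = np.exp(-2.0 * lam)                    # heat kernel values, tau = 2
T = np.sqrt(N) * U @ np.diag(ghat) @ U.T     # column i is the atom T_i g
norms = np.linalg.norm(T, axis=0)            # ||T_i g||_2 for each center i

weak_vertex = int(np.argmax(norms))          # most concentrated localization
avg_ok = np.isclose(np.sum(norms ** 2), N * np.sum(ghat ** 2))
```

On this toy graph, the maximizing center comes out as the weakly connected end vertex, matching the observation that weak connectivity yields larger (more concentrated) localized kernels, while avg_ok confirms the on-average norm preservation.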
The main class of dictionaries for graph signals that we consider is localized spectral graph filter frames.
Definition 4 (Localized spectral graph filter frame).
Let be a sequence of kernels (or filters), where each is a function defined on the graph Laplacian spectrum of a graph . Define the quantity . Then is a localized spectral graph filter dictionary, and it forms a frame if for all .
In practice, each filter is often defined as a continuous function over the interval and then applied to the discrete set of eigenvalues in . The following lemma characterizes the frame bounds for a localized spectral graph filter frame.
Lemma 2 (, Lemma 1).
Let be a localized spectral graph filter frame of atoms on a graph generated from the sequence of filters . The lower and upper frame bounds for are given by and , respectively. If is constant over , then is a tight frame.
Examples of localized spectral graph filter frames include the spectral graph wavelets of , the Meyer-like tight graph wavelet frames of [59, 16], the spectrum-adapted wavelets and vertex-frequency frames of , and the learned parametric dictionaries of . The dictionary constructions in [13, 22] choose the filters so that their energies are localized in different spectral bands. Different choices of filters lead to different tilings of the vertex-frequency space, and can, for example, lead to wavelet-like frames or vertex-frequency frames (analogous to classical windowed Fourier frames). The frame condition that for all ensures that these filters cover the entire spectrum, so that no band of information is lost during analysis and reconstruction.
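The frame bounds of Lemma 2 can be checked directly. Under the assumed sqrt(N)-normalized localization operator, the frame operator of the dictionary {T_i g_k} has eigenvalues N*G(lam_l), where G(lam) = sum_k |ghat_k(lam)|^2, so the lower and upper frame bounds are N times the minimum and maximum of G over the spectrum. The filters below are an illustrative low-pass/high-pass pair, not a construction from the paper.

```python
import numpy as np

# Ring graph on N nodes (any connected graph works here).
N = 8
A_adj = np.zeros((N, N))
for n in range(N):
    A_adj[n, (n + 1) % N] = A_adj[(n + 1) % N, n] = 1
L = np.diag(A_adj.sum(axis=1)) - A_adj
lam, U = np.linalg.eigh(L)

# Two illustrative filters covering low and high frequencies.
filters = [lambda x: np.exp(-x), lambda x: 1.0 - np.exp(-x)]
G = sum(g(lam) ** 2 for g in filters)        # G(lam_l) = sum_k |ghat_k(lam_l)|^2

# Atoms T_i g_k = sqrt(N) * U diag(ghat_k) U^T e_i (assumed normalization).
atoms = np.hstack([np.sqrt(N) * U @ np.diag(g(lam)) @ U.T for g in filters])

# Frame bounds = extreme eigenvalues of the frame operator atoms @ atoms.T;
# they match N * min(G) and N * max(G).
eigs = np.linalg.eigvalsh(atoms @ atoms.T)
```

Since G is bounded away from zero on the spectrum here, the dictionary is a frame; it would be tight exactly when G is constant over the eigenvalues.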
In this paper, in order to generalize classical windowed Fourier frames, we often use a localized graph spectral filter bank where the kernels are uniform translates, which we refer to as a graph Gabor filter bank.
Definition 5 (Graph Gabor filter bank).
When the kernels used to generate the localized graph spectral filter frame are uniform translates of each other, we refer to the resulting dictionary as a graph Gabor filter bank or a graph Gabor filter frame. If we use the warping technique of  on these uniform translates, we refer to the resulting dictionary as a spectrum-adapted graph Gabor filter frame.
Graph Gabor filter banks are generalizations of the short time Fourier transform. When is smooth, the atoms are localized in the vertex domain. In this contribution, for all graph Gabor filter frames, we use the following mother window: for and elsewhere. A few desirable properties of this choice of window are (a) it is perfectly localized in the spectral domain in , (b) it is smooth enough to be approximated by a low order polynomial, and (c) the frame formed by uniform translates (with an even overlap) is tight.
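The paper's specific mother window did not survive extraction, so the sketch below uses a stand-in: a half-cosine window w(t) = cos(pi*t/2) supported on [-1, 1], whose unit translates satisfy sum_k w(t-k)^2 = 1 (even overlap). Mapping the spectrum onto the translation axis yields a uniform-translate, Gabor-type filter bank that is provably tight, illustrating property (c).

```python
import numpy as np

def w(t):
    """Half-cosine mother window on [-1, 1]; squared unit translates sum to 1."""
    return np.where(np.abs(t) <= 1, np.cos(np.pi * t / 2), 0.0)

# Ring graph on N nodes; K uniform translates of the window over the spectrum.
N, K = 8, 5
A_adj = np.zeros((N, N))
for n in range(N):
    A_adj[n, (n + 1) % N] = A_adj[(n + 1) % N, n] = 1
L = np.diag(A_adj.sum(axis=1)) - A_adj
lam, U = np.linalg.eigh(L)

t = lam / lam.max() * (K - 1)                        # map spectrum onto [0, K-1]
filters = np.stack([w(t - k) for k in range(K)])     # K x N filter values
G = np.sum(filters ** 2, axis=0)                     # identically 1 -> tight

atoms = np.hstack([np.sqrt(N) * U @ np.diag(fk) @ U.T for fk in filters])
frame_eigs = np.linalg.eigvalsh(atoms @ atoms.T)     # all equal to N * G = N
```

Because cos^2 + sin^2 = 1, adjacent translates' squared overlaps sum exactly to one at every eigenvalue, so all frame-operator eigenvalues coincide and the frame is tight.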
Definition 6 (Analysis operator).
The analysis operator of a dictionary to a signal is given by
When is a localized spectral graph filter frame, we denote it with . In all cases, we view as a function from to , and thus we use (or ) to denote a vector norm of the analysis coefficients.
5 Global uncertainty principles bounding the concentration of the analysis coefficients of a graph signal in a transform domain
5.1 Discrete version of Lieb’s uncertainty principle
Lieb’s uncertainty principle in the continuous one-dimensional setting  states that the cross-ambiguity function of a signal cannot be too concentrated in the time-frequency plane. In the following, we transpose these statements to the discrete periodic setting, and then generalize them to frames and signals on graphs. The following discrete version of Lieb’s uncertainty principle is partially presented in [61, Proposition 2].
Define the discrete Fourier transform (DFT) as
and the discrete windowed Fourier transform (or discrete cross-ambiguity function) as (see, e.g., [35, Section 4.2.3])
For two discrete signals of period , we have for
These inequalities are proven in Section 8.2.2 of the Appendix. Note that the minimizers of this uncertainty principle are the so-called “picket fence” signals: trains of regularly spaced Diracs.
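Two facts from this discussion are directly checkable in code: the discrete windowed Fourier transform V_g f(m, k) = sum_n f(n) conj(g(n-m)) e^{-2 pi i k n / N} conserves energy, with ||V||_2^2 = N ||f||_2^2 ||g||_2^2, and a picket-fence signal attains the extremal concentration ||V||_4^4 / ||V||_2^4 = 1/N, which a generic signal falls strictly below.

```python
import numpy as np

def dwft(f, g):
    """Discrete windowed Fourier transform (cross-ambiguity): rows indexed by
    the shift m, columns by frequency k; uses the circular shift g(n - m)."""
    return np.stack([np.fft.fft(f * np.conj(np.roll(g, m))) for m in range(len(f))])

N = 16
fence = np.zeros(N)
fence[::4] = 1.0                              # picket fence: Diracs spaced sqrt(N) apart
V = dwft(fence, fence)

energy = np.sum(np.abs(V) ** 2)               # = N * ||f||_2^2 * ||g||_2^2 = 256
conc = np.sum(np.abs(V) ** 4) / energy ** 2   # = 1/N for this extremizer

rng = np.random.default_rng(1)
h = rng.standard_normal(N)                    # a generic (non-extremal) signal
Vh = dwft(h, h)
conc_h = np.sum(np.abs(Vh) ** 4) / np.sum(np.abs(Vh) ** 2) ** 2
```

The ambiguity function of the picket fence is itself a picket fence in the shift-frequency plane (16 entries of magnitude 4 here), which is what makes it extremal.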
5.2 Generalization of Lieb’s uncertainty principle to frames
5.3 Lieb’s uncertainty principle for localized spectral graph filter frames
Let be a localized spectral graph filter frame of atoms on a graph generated from the sequence of filters . For any signal on and for any , we have
where is the lower frame bound and is the upper frame bound. When is a tight frame with frame bound , (18) reduces to
The bounds depend on the frame bounds and , which are fixed with the design of the filter bank. However, in the tight frame case, we can choose the filters in a manner such that the bound does not depend on the graph structure. For example, if the are defined continuously on the interval and is equal to a constant for all , is not affected by a change in the values of the Laplacian eigenvalues, e.g., from a change in the graph structure. The second quantity, , reveals the influence of the graph. The maximum -norm of the atoms depends on the filter design, but also, as discussed previously in Section 4, on the graph topology. However, the bound is not local as it depends on the maximum over all localizations and filters , which takes into account the entire graph structure.
The second bounds in (18) and (19) also suggest how the filters can be designed so as to improve the uncertainty bound. The quantity depends on the distribution of the eigenvalues , and, as a consequence, on the graph structure. However, the distribution of the eigenvalues can be taken into account when designing the filters in order to reduce or cancel this dependency.
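The role of the maximum atom norm in these bounds can be illustrated with a generic Cauchy-Schwarz argument (a sketch, not the constants of Theorem 5): for any dictionary, ||Af||_inf <= (max_j ||d_j||_2) ||f||_2, while a tight frame with bound A gives ||Af||_2 = sqrt(A) ||f||_2, so the coefficient concentration ||Af||_inf / ||Af||_2 is at most max_j ||d_j||_2 / sqrt(A). The example uses the union of the identity and a graph Laplacian eigenvector basis, a tight frame with A = 2 and unit-norm atoms.

```python
import numpy as np

# Ring graph Laplacian eigenvectors supply the second orthonormal basis.
N = 8
A_adj = np.zeros((N, N))
for n in range(N):
    A_adj[n, (n + 1) % N] = A_adj[(n + 1) % N, n] = 1
L = np.diag(A_adj.sum(axis=1)) - A_adj
_, U = np.linalg.eigh(L)

D = np.hstack([np.eye(N), U])                # union of two ONBs: tight frame, A = 2
rng = np.random.default_rng(3)
f = rng.standard_normal(N)

coeffs = D.T @ f                             # analysis coefficients Af
ratio = np.abs(coeffs).max() / np.linalg.norm(coeffs)
bound = 1.0 / np.sqrt(2.0)                   # max atom norm (= 1) / sqrt(A)
```

Smaller atom norms relative to sqrt(A) force the analysis coefficients to be more spread out, which is the mechanism behind the graph-dependent term in the bound.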
In the following example, we compute the first uncertainty bound in (19) for different types of graphs and filters. It provides some insight on the influence of the graph topology and filter bank design on the uncertainty bound.
We use the techniques of  to construct four tight localized spectral graph filter frames for each of eight different graphs. Figure 8 shows an example of the four sets of filters for a 64-node sensor network. For each graph, two of the sets of filters (b and d in Figure 8) are adapted via warping to the distribution of the graph Laplacian eigenvalues so that each filter contains an appropriate number of eigenvalues (roughly equal in the case of translates and roughly logarithmic in the case of wavelets). The warping avoids filters containing zero or very few eigenvalues at which the filter has a nonzero value. These tight frames are designed such that , and thus Theorem 5 yields
Table 1 displays the values of the first concentration bound for each graph and frame pair. The uncertainty bound is largest when the graph is far from a regular lattice (ring or path). As expected, the worst cases are for highly inhomogeneous graphs like the comet graph or a modified path graph with one isolated vertex. The choice of the filter bank may also decrease or increase the bound, depending on the graph.
[Table 1: values of the first concentration bound for each graph and frame pair (uniform translates, graph Gabor, and two wavelet constructions); most entries were lost in extraction. The surviving entry, 0.68, is for the random sensor network.]
The uncertainty principle in Theorem 5 bounds the concentration of the graph Gabor transform coefficients. In the next example, we examine these coefficients for a series of signals with different vertex and spectral domain localization properties.
Example 6 (Concentration of the graph Gabor coefficients for signals with varying vertex and spectral domain concentrations).
In Figure 9, we analyze a series of signals on a random sensor network of 100 vertices. Each signal is created by localizing a kernel to be centered at vertex 1 (circled in black). To generate the four different signals, we vary the value of the parameter in the heat kernel. We plot the four localized kernels in the graph spectral and vertex domains in the first two columns, respectively. The more we “compress” the kernel in the graph spectral domain (i.e., the more we reduce its spectral spread by increasing ), the less concentrated the localized atom becomes in the vertex domain. The joint vertex-frequency representation of each signal is shown in the third column, which illustrates the trade-off between concentration in the vertex and spectral domains. The concentration of these graph Gabor transform coefficients is the quantity bounded by the uncertainty principle presented in Theorem 5. In the last row of Figure 9, the kernel approaches a Kronecker delta in the graph spectral domain, and the localized signal approaches a constant in the vertex domain. On the contrary, when the kernel is constant (top row), the energy of the graph Gabor coefficients stays concentrated around one vertex but spreads across all frequencies.
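The vertex/spectral trade-off of Example 6 can be sketched with a few lines of code (a path graph stands in for the sensor network, the center vertex is 0, and the localization operator uses the assumed sqrt(N) normalization): as the heat-kernel parameter tau grows, the kernel exp(-tau*lam) compresses toward lam = 0 in the spectral domain while the localized signal spreads out in the vertex domain.

```python
import numpy as np

# Path graph on 20 vertices as a simple stand-in for the sensor network.
N = 20
A_adj = np.diag(np.ones(N - 1), 1)
A_adj = A_adj + A_adj.T
L = np.diag(A_adj.sum(axis=1)) - A_adj
lam, U = np.linalg.eigh(L)

def concentrations(tau, i=0):
    """Vertex- and spectral-domain concentrations (||.||_inf / ||.||_2)
    of the heat kernel exp(-tau*lam) localized at vertex i."""
    ghat = np.exp(-tau * lam)
    f = np.sqrt(N) * U @ (ghat * U[i, :])    # T_i g, assumed normalization
    vertex = np.abs(f).max() / np.linalg.norm(f)
    spectral = ghat.max() / np.linalg.norm(ghat)
    return vertex, spectral

v_small, s_small = concentrations(0.1)       # broad spectral kernel
v_large, s_large = concentrations(50.0)      # compressed spectral kernel
```

Compressing the kernel spectrally (larger tau) delocalizes it on the graph: the vertex concentration drops while the spectral concentration rises, mirroring the rows of Figure 9.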
6 Local uncertainty principles for signals on graphs
In the previous section, we defined a global bound for the concentration of the localized spectral graph filter frame analysis coefficients. In the classical setting, such a global bound is also local in the sense that each part of the domain has the same structure, due to the regularity of the underlying domain. However, this is not the case for the graph setting where the domain is irregular. Example 1 shows that a “bad” structure (a weakly connected node) in a small region of the graph reduces the uncertainty bound even if the rest of the graph is well behaved. Functions localized near the weakly connected node can be highly concentrated in both the vertex and frequency domains, whereas functions localized away from it are barely impacted. Importantly, the worst case determines the global uncertainty bound. As another example, suppose one has two graphs and with two different structures, each of them having a different uncertainty bound. The uncertainty bound for the graph that is the union of these two disconnected graphs is the minimum of the uncertainty bounds of the two disconnected graphs, which is suboptimal for one of the two graphs.
In this section, we ask the following questions. Where does this worst case happen? Can we find a local principle that more accurately characterizes the uncertainty in other parts of the graph? In order to answer these questions, we investigate the concentration of the analysis coefficients of the frame atoms, which are localized signals in the vertex domain. This technique is used in the classical continuous case by Lieb , who defines the (cross-)ambiguity function, the STFT of a short-time Fourier atom. The result is a joint time-frequency uncertainty principle that does not depend on the localization in time or in frequency of the analyzed atom.
Thus, we start by generalizing to the graph setting the definition of ambiguity (or cross-ambiguity) functions from time-frequency analysis of one-dimensional signals.
Definition 7 (Ambiguity function).
The ambiguity function of a localized spectral frame is defined as:
When the kernels are appropriately warped uniform translates, the operator becomes a generalization of the short-time Fourier transform. Additionally, the ambiguity function assesses the degree of coherence (linear dependence) between the atoms and . In the following, we use this ambiguity function to locally probe the structure of the graph and derive local uncertainty principles.
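A minimal sketch of this ambiguity function for a single heat kernel (again under an assumed sqrt(N)-normalized localization operator): the entries <T_i g, T_j g> measure the coherence between atoms centered at i and j, and on a path graph they decay with the distance between the centers. The code also checks the product identity <T_i g, T_j h> = sqrt(N) * (T_i (g*h))(j), which ties the ambiguity function back to a single localized kernel.

```python
import numpy as np

# Path graph on 12 vertices.
N = 12
A_adj = np.diag(np.ones(N - 1), 1)
A_adj = A_adj + A_adj.T
L = np.diag(A_adj.sum(axis=1)) - A_adj
lam, U = np.linalg.eigh(L)

def T(ghat_vals):
    """All localizations at once: column j is T_j g (assumed normalization)."""
    return np.sqrt(N) * U @ np.diag(ghat_vals) @ U.T

g = np.exp(-1.0 * lam)                       # heat kernel, tau = 1
atoms = T(g)
amb = atoms.T @ atoms                        # amb[i, j] = <T_i g, T_j g>

near, far = abs(amb[0, 1]), abs(amb[0, N - 1])   # coherence decays with distance

# Product identity: <T_i g, T_j h> = sqrt(N) * (T_i (g*h))(j), here with h = g.
ident_ok = np.allclose(amb, np.sqrt(N) * T(g * g))
```

The identity holds because the frame-operator product collapses to a single localization of the pointwise product kernel; this is what lets local graph structure show up directly in the ambiguity function.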
6.1 Local uncertainty principle
In order to probe the local uncertainty of a graph, we take a set of localized kernels in the graph spectral domain and center them at different local regions of the graph in the vertex domain. The atoms resulting from this construction are jointly localized in both the vertex and graph spectral domains, where "localized" means that the values of the function are zero or close to zero away from some reference point. By ensuring that the atoms are localized or have support within a small region of the graph, we focus on the properties of the graph in that region. In order to get a local uncertainty principle, we apply the frame operator to these localized atoms, and analyze the concentration of the resulting coefficients. In doing so, we develop an uncertainty principle relating these concentrations to the local graph structure.
To prepare for the theorem, we first state a lemma that hints at how the scalar product of two localized functions depends on the graph structure and properties. In the following, we multiply two kernels and in the graph spectral domain. For notation, we represent the product of these two kernels in the vertex domain as .
For two kernels , and two nodes , the localization operator satisfies
Equation (20) shows more clearly the conditions on the kernels and nodes under which the scalar product is small. Let us take two examples. First, suppose and