Background
Graph nomenclature
Let us define $\mathcal{G} = \{\mathcal{V}, \mathcal{E}, \mathbf{W}\}$ as an undirected weighted graph, where $\mathcal{V}$ is the set of vertices and $\mathcal{E}$ the set of edges representing connections between nodes in $\mathcal{V}$. The $N = |\mathcal{V}|$ vertices of the graph are ordered from $1$ to $N$. The matrix $\mathbf{W} \in \mathbb{R}^{N \times N}$, which is symmetric with nonnegative entries, is called the weighted adjacency matrix of the graph $\mathcal{G}$. The entry $W_{ij}$ represents the weight of the edge between vertices $v_i$ and $v_j$, and a value of $0$ means that the two vertices are not connected. The degree $d_i$ of a node $v_i$ is defined as the sum of the weights of all its edges, $d_i = \sum_{j=1}^{N} W_{ij}$. Finally, a graph signal is defined as a vector $f \in \mathbb{R}^{N}$ of scalar values over the set of vertices, where the $i$th component $f_i$ of the vector is the value of the signal at vertex $v_i$.
Spectral theory
The combinatorial Laplacian operator can be defined from the weighted adjacency matrix as $\mathbf{L} = \mathbf{D} - \mathbf{W}$, with $\mathbf{D}$ being the degree matrix, defined as a diagonal matrix with $D_{ii} = d_i$. One alternative and often used Laplacian definition is the normalized Laplacian $\mathbf{L}_n = \mathbf{D}^{-1/2} \mathbf{L} \mathbf{D}^{-1/2}$. Since the weight matrix $\mathbf{W}$ is symmetric, $\mathbf{L}$ is symmetric positive semidefinite by construction. By application of the spectral theorem, we know that
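As a quick numerical illustration, both Laplacians can be assembled directly from a weight matrix; the small graph below is an arbitrary example, not one from the text:

```python
import numpy as np

# Toy undirected weighted graph on 4 nodes (weights are illustrative).
W = np.array([[0., 1., 0., 2.],
              [1., 0., 3., 0.],
              [0., 3., 0., 1.],
              [2., 0., 1., 0.]])

d = W.sum(axis=1)                       # degrees d_i = sum_j W_ij
D = np.diag(d)                          # degree matrix
L = D - W                               # combinatorial Laplacian L = D - W
L_norm = np.diag(d ** -0.5) @ L @ np.diag(d ** -0.5)  # normalized Laplacian

eigvals = np.linalg.eigvalsh(L)         # real eigenvalues, since L is symmetric
```

The smallest eigenvalue is zero (the constant vector lies in the null space), which confirms positive semidefiniteness on this example.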
$\mathbf{L}$ can be decomposed into an orthonormal basis of eigenvectors noted $\{u_\ell\}_{\ell=1,\dots,N}$. The ordering of the eigenvectors is given by the associated eigenvalues, noted $\{\lambda_\ell\}_{\ell=1,\dots,N}$, sorted in ascending order: $0 = \lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_N$. In matrix form, we can write this decomposition as $\mathbf{L} = \mathbf{U} \boldsymbol{\Lambda} \mathbf{U}^*$, with $\mathbf{U} = [u_1, \dots, u_N]$ the matrix of eigenvectors and $\boldsymbol{\Lambda}$ the diagonal matrix containing the eigenvalues in ascending order. Given a graph signal $f$, its graph Fourier transform is thus defined as $\hat{f} = \mathbf{U}^* f$, and the inverse transform as $f = \mathbf{U} \hat{f}$. It is called a Fourier transform by analogy to the continuous Laplacian, whose spectral components are Fourier modes, and the matrix $\mathbf{U}$ is sometimes referred to as the graph Fourier matrix (see e.g., [chung1997spectral]). By the same analogy, the set $\{\lambda_\ell\}_\ell$ is often seen as the set of graph frequencies [shuman2013vertex].
Graph filtering
In traditional signal processing, filtering can be carried out by a pointwise multiplication in the Fourier domain. Thus, since the graph Fourier transform is defined, it is natural to consider a filtering operation on the graph using a multiplication in the graph Fourier domain. To this end, we define a graph filter as a continuous function $g : [0, \lambda_N] \to \mathbb{R}$ defined directly in the graph Fourier domain. If we consider the filtering of a signal $x$, whose graph Fourier transform is written $\hat{x}$, by a filter $g$, the operation in the spectral domain is a simple multiplication $\hat{x}'(\ell) = g(\lambda_\ell)\, \hat{x}(\ell)$, with $x'$ and $\hat{x}'$ the filtered signal and its graph Fourier transform respectively. Using the graph Fourier matrix to recover the vertex-based signal, we get the explicit matrix formulation for graph filtering: $x' = \mathbf{U} g(\boldsymbol{\Lambda}) \mathbf{U}^* x$, where $g(\boldsymbol{\Lambda}) = \operatorname{diag}(g(\lambda_1), \dots, g(\lambda_N))$. The graph filtering operator $g(\mathbf{L}) = \mathbf{U} g(\boldsymbol{\Lambda}) \mathbf{U}^*$ is often used to reformulate the graph filtering equation as a simple vector-matrix operation $x' = g(\mathbf{L})\, x$. Since the filtering equation defined above involves the full set of eigenvectors $\mathbf{U}$, it implies the diagonalization of the Laplacian, which is costly for large graphs. To circumvent this problem, one can represent the filter as a polynomial approximation, since polynomial filtering only involves the multiplication of the signal by a power of $\mathbf{L}$ of the same order as the polynomial. Filtering using good polynomial approximations can be done using Chebyshev or Lanczos polynomials [hammond2011wavelets, susnjara2015accelerated].
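A minimal NumPy sketch of the graph Fourier transform and of spectral filtering on a toy graph; the weight matrix and the heat-kernel filter are illustrative assumptions, and the last lines show why a polynomial filter needs no diagonalization:

```python
import numpy as np

# Toy undirected weighted graph and its combinatorial Laplacian.
W = np.array([[0., 1., 0., 2.],
              [1., 0., 3., 0.],
              [0., 3., 0., 1.],
              [2., 0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W

lam, U = np.linalg.eigh(L)           # eigenvalues ascending, U orthonormal

x = np.array([1.0, 0.0, 0.0, -1.0])  # a graph signal
x_hat = U.T @ x                      # graph Fourier transform
x_rec = U @ x_hat                    # inverse transform recovers x

g = lambda t: np.exp(-t)             # an example filter (heat kernel)
y = U @ (g(lam) * x_hat)             # filtering: y = U g(Lambda) U^T x

# For a polynomial filter the same result only needs powers of L:
# with g(t) = t^2, filtering reduces to L @ (L @ x).
y_poly = U @ (lam ** 2 * x_hat)
```

The agreement between `y_poly` and `L @ (L @ x)` is exactly the property that Chebyshev or Lanczos approximations exploit at scale.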
Localization operator
The concept of translation, which is well defined in traditional signal processing, cannot be directly applied to graphs, as they can be irregular. However, inspired by the notion of translation, we can define the localization of a kernel $g$ defined on the graph spectrum as a convolution with a Kronecker delta, $T_i g = \sqrt{N} \left( g \ast \delta_i \right)$, where $T$ is called the localization operator, and $T_i$ means localization at vertex $v_i$. Going back to the vertex domain, we get:
$(T_i g)(n) = \sqrt{N} \sum_{\ell=1}^{N} g(\lambda_\ell)\, u_\ell^*(i)\, u_\ell(n).$
The reason for calling $T$ a localization operator comes from the fact that, for smooth kernels $g$, $T_i g$ is localized around the vertex $v_i$. The proof of this result and more information on the localization operator can be found in [shuman2016vertex]. The localizations of filters are quite naturally called atoms, as a filtering operation of a signal $x$ using a filter $g$ can be expressed as $g(\mathbf{L})\, x = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} x(i)\, T_i g$.
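The atom decomposition of a filtering operation can be checked numerically; the graph and the kernel below are illustrative assumptions:

```python
import numpy as np

W = np.array([[0., 1., 0., 2.],
              [1., 0., 3., 0.],
              [0., 3., 0., 1.],
              [2., 0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)
N = L.shape[0]

g = lambda t: np.exp(-t)             # example kernel

def localize(i):
    """Atom T_i g = sqrt(N) g(L) delta_i, i.e. g localized at vertex i."""
    delta = np.zeros(N)
    delta[i] = 1.0
    return np.sqrt(N) * U @ (g(lam) * (U.T @ delta))

atoms = np.column_stack([localize(i) for i in range(N)])

# Filtering as a combination of atoms: g(L) x = (1/sqrt(N)) sum_i x(i) T_i g.
x = np.array([2.0, -1.0, 0.0, 1.0])
filtered = U @ (g(lam) * (U.T @ x))
from_atoms = atoms @ x / np.sqrt(N)
```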
Additional notation
We use $\|\mathbf{A}\|_2$ for the induced norm of the matrix $\mathbf{A}$ and $\|\mathbf{A}\|_F$ for the Frobenius norm. The maximum eigenvalue of a matrix $\mathbf{A}$ is written $\lambda_{\max}(\mathbf{A})$. We reserve the number notation for vectors. For example, we write the Euclidean norm as $\|x\|_2$ and the uniform (sup) norm as $\|x\|_\infty$. We abusively use the notation $\|x\|_0$ to count the number of nonzero elements in a vector. Furthermore, when a univariate function $g$ is applied to a vector $x$, we mean elementwise application, $g(x) = [g(x_1), \dots, g(x_N)]^\top$. As a result, $\|g(\lambda)\|_0$ is the number of eigenvalues $\lambda_\ell$ where $g(\lambda_\ell) \neq 0$. Given a kernel $g$, we define $\mathbf{U}_g$ as the matrix made of the columns of $\mathbf{U}$ where $g(\lambda_\ell) \neq 0$. Similarly, we denote by $\boldsymbol{\Lambda}_g$ the diagonal matrix containing the associated eigenvalues. Note that we have $g(\mathbf{L}) = \mathbf{U}_g \, g(\boldsymbol{\Lambda}_g) \, \mathbf{U}_g^*$.
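The reduced matrices can be formed by masking the spectrum; the band-limiting filter below is an arbitrary choice for illustration:

```python
import numpy as np

W = np.array([[0., 1., 0., 2.],
              [1., 0., 3., 0.],
              [0., 3., 0., 1.],
              [2., 0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

g_lam = np.where(lam < 2.0, 1.0, 0.0)  # g applied elementwise to the spectrum

support = g_lam != 0                   # positions where g(lambda_l) != 0
U_g = U[:, support]                    # corresponding columns of U
Lam_g = np.diag(lam[support])          # associated eigenvalues

# g(L) built from the full and the reduced eigendecompositions agree.
gL_full = U @ np.diag(g_lam) @ U.T
gL_reduced = U_g @ np.diag(g_lam[support]) @ U_g.T
```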
Random sampling on graphs
In this section, we first define graph sampling schemes and then prove related theoretical limits. In particular, it is of interest to understand the number of samples needed in order to diffuse energy to every node by localizing filters on the samples. We will prove that the number of samples needed is directly linked to the rank of the filter.
Adaptive sampling scheme
Let us define a probability distribution on the nodes, represented by a vector $p \in \mathbb{R}^N$ with $p_i \geq 0$. We use two different sampling schemes. Uniform sampling is given by the probability vector
$p_i = \frac{1}{N}, \quad i = 1, \dots, N,$
and adapted sampling is given by
$p_i = \frac{\|T_i g\|_2^2}{\sum_{j=1}^{N} \|T_j g\|_2^2}.$
Remember that we have $\sum_{i=1}^{N} \|T_i g\|_2^2 = N \sum_{\ell} g(\lambda_\ell)^2$, implying that $\sum_i p_i = 1$. Let us associate the diagonal matrix
$\mathbf{P} = \operatorname{diag}(p)$
to $p$. Then, we draw independently (with replacement) $m$ indices $\Omega = \{\omega_1, \dots, \omega_m\}$ from the set $\{1, \dots, N\}$ according to the probability distribution $p$. We have
$\mathbb{P}(\omega_j = i) = p_i, \quad \forall j \in \{1, \dots, m\}, \; \forall i \in \{1, \dots, N\}.$
For any signal $x$ defined on the vertices of the graph, its sampled version $x_\Omega \in \mathbb{R}^m$ satisfies
$x_\Omega(j) = x(\omega_j), \quad j = 1, \dots, m.$
Finally, the downsampling matrix $\mathbf{M} \in \mathbb{R}^{m \times N}$ is defined as
$M_{ji} = \begin{cases} 1 & \text{if } i = \omega_j, \\ 0 & \text{otherwise,} \end{cases}$
for all $j \in \{1, \dots, m\}$ and $i \in \{1, \dots, N\}$. Note that $x_\Omega = \mathbf{M} x$.
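A small sketch of the sampling pipeline, shown with the uniform distribution (the adapted scheme would instead weight each node by the energy of the localized filter there):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N, m = 4, 6

# Uniform sampling distribution p over the N nodes.
p = np.full(N, 1.0 / N)

# Draw m indices independently, with replacement, according to p.
omega = rng.choice(N, size=m, replace=True, p=p)

# Downsampling matrix M: M[j, omega_j] = 1, so that x_Omega = M x.
M = np.zeros((m, N))
M[np.arange(m), omega] = 1.0

x = np.array([10.0, 20.0, 30.0, 40.0])
x_omega = M @ x                      # sampled signal x_Omega(j) = x(omega_j)
```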
Embedding Theorems
The first theorem shows that, given enough samples, the random projection preserves the energy contained in . In this sense, given enough samples, it is an embedding of . Given a graph and a kernel with a given rank , and using the sampling scheme of Section Document, if
we have, with probability , for all :
() 
Note that the above expression is normalized by in order to remove the scaling factor of the kernel . Let us now analyze the most important term of the bound:
() 
It is a measure of the concentration of the kernel on its support. It is maximized, with value , when the kernel is a rectangle. In general, it will be small for concentrated kernels. For example, a rapidly decreasing kernel such as the heat kernel () will lead to a very small ratio. Note that, contrary to almost all bounds available in the literature, this bound does not require the kernel to be low rank, but only concentrated. For comparison, [puy2016random, Corollary 2.3] requires
Optimality of the sampling scheme.
Although we have no formal proof of optimality, the sampling scheme presented in Section Document is a good candidate. Indeed, when reading the proof of Theorem Document, the reader may notice that it minimizes the number of samples . Building on top of Theorem Document, we establish a lower bound on the number of samples required by Algorithm Document to capture enough information from each node with a given confidence level. This ensures that the information diffused from the samples can reach all nodes. Using the sampling scheme described in Section Document, for , a graph and a kernel such that , each node is guaranteed with probability to have
given that the number of samples satisfies
where . The theorem above guarantees that, given enough samples , Algorithm Document captures with some probability (close to ) at least a good percentage of the energy at node . The factor is always greater than and varies depending on the shape of the kernel and of the graph eigenvectors. However, it is exactly equal to if is a rectangular kernel. Indeed, a simple transformation shows that
The first term is smaller than , but is usually close to for a kernel close to a rectangle. The second term is greater than , but close to , given that the kernel is close to a rectangle. Problematically, this bound becomes loose if the kernel has a large rank, because of the term . To cope with this problem, we can use another kernel that is a low-rank approximation of . Given a graph , let (with ) be the rank- approximation of the kernel , i.e.,
Using the sampling scheme described in Section Document with the kernel , for , each node is guaranteed with probability to have
provided that the number of samples satisfies
Using the theorem above, the number of samples required can be greatly reduced. Indeed, when the kernel is well concentrated but not low rank, we trade some approximation error, encoded by (which will be low if is concentrated), for a smaller number of samples, owing to the fact that is low rank. This theorem is of particular interest for the heat kernel, for example.
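As a sketch, a rank-k surrogate of a concentrated kernel can be obtained by keeping only its k largest spectral responses; this thresholding construction is an assumption for illustration, and the text's formal definition of the rank-k approximation may differ:

```python
import numpy as np

lam = np.linspace(0.0, 2.0, 8)     # a mock graph spectrum
g = np.exp(-5.0 * lam)             # heat kernel: concentrated but full rank

k = 3
g_k = np.zeros_like(g)
keep = np.argsort(g)[-k:]          # indices of the k largest responses
g_k[keep] = g[keep]                # rank-k kernel: zero elsewhere

err = np.linalg.norm(g - g_k)      # approximation error traded for low rank
```

On this example the dropped tail carries only a small fraction of the kernel's energy, which is the regime where the trade-off discussed above pays off.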
Metrics based on localized filters
Before moving on to the diffusion of information from the samples, we need to take a closer look at localized filters, and in particular at how they can be used to measure distances or correlations between nodes.
Localized Kernel Distance
Since localized filters are proven to be concentrated in the vertex domain (see [shuman2013vertex, Theorem 1]), it seems natural to use them to obtain geodesic measures or correlations between nodes. To this end, we introduce the Localized Kernel Distance (LKD), which is defined as:
() 
Let us now examine its properties by stating the following theorem: The space with the vertex set of a graph and as defined in is a pseudo-semimetric space, that is, for every :
First, let us derive an alternative form of the LKD definition:
() 
This can be derived as follows:
Now let us verify the properties one by one:

We have, using the alternative form of the LKD:
where the last inequality holds by the Cauchy-Schwarz inequality.

Let us verify that:

Finally, we have
The space with the vertex set of a graph and as defined in , with constant, is a semimetric space, that is, for every :
Properties 1 and 3, as well as the backward implication, are still valid as stated in the previous theorem. Now let us check that . We proceed by contradiction and thus search for any pair , for which , implying:
() 
We can rewrite this equality as:
() 
For , with a constant, the left-hand side is:
() 
The last equality comes from the fact that two rows of an orthonormal matrix are orthogonal, and . Now the right-hand side is:
() 
with the last equality coming from the fact that is an orthonormal basis. Now, since , we have a contradiction, and thus the proof is complete.
Kernelized Diffusion Distance
Another approach to using localized atoms to define distances is to measure the norm of the difference between a filter localized at two different nodes. We call it the Kernelized Diffusion Distance (KDD) and define it as:
$d_{KDD}(i, j) = \left\| T_i g - T_j g \right\|_2,$
where $g$ is a kernel defined in the graph spectral domain. Before going further, and as it will be useful later, let us derive a corollary form of this definition:
$d_{KDD}(i, j)^2 = N \sum_{\ell=1}^{N} g(\lambda_\ell)^2 \left( u_\ell(i) - u_\ell(j) \right)^2.$
This alternative form can be quickly derived as follows:
which implies the result by taking the square root on both sides. Let us now examine the properties of the KDD by stating the following theorem: The space with the vertex set of a graph and as defined in is a pseudometric space, that is, for every :
Let us verify the properties in order:

This property holds trivially due to the positivity of the norm .

We have

We have
which holds using the triangle inequality for vectors.
Now that we have proved that the KDD is a pseudometric, we only need the identity of indiscernibles to prove that it is a metric. However, we can only do so using an additional hypothesis on . This is formulated in the following theorem: The space with the vertex set of a graph and as defined in , with being full rank, is a metric space, that is, for every :
Properties 1-3 are still valid as stated in the previous theorem. Now let us check Property 4.

We first prove:

Now let us check that . We proceed by contradiction and thus want to find a pair , for which . In particular, we need that:
() with . Since is full rank, , and thus the only way for the equality above to hold is if , . In other words, it would imply that the rows and of are identical. Since is a basis, all its rows are orthonormal, which means there exists no pair such that the equality above holds; the contradiction is established, which concludes the proof.
Diffusion distance
As hinted by its name, the distance defined above happens to be a generalized diffusion distance. Indeed, taking its spectral formulation, we have:
$d_{KDD}(i, j) = \sqrt{N} \, D_t(i, j),$
where $D_t$ is the diffusion distance associated to specific kernels depending on $t$ (i.e. the diffusion parameter). If we take two common definitions of the diffusion distance, the original works of [nadler2005diffusion] and [coifman2006diffusion] use powers of the eigenvalues of the diffusion operator, and the Graph Diffusion Distance defined in [hammond2013graph] uses the heat kernel $g(\lambda) = e^{-t\lambda}$.
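A direct numerical check of the KDD with a heat kernel, on an assumed toy graph; the vertex-domain computation agrees with the spectral one:

```python
import numpy as np

W = np.array([[0., 1., 0., 2.],
              [1., 0., 3., 0.],
              [0., 3., 0., 1.],
              [2., 0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)
N = len(lam)

tau = 1.0
g = np.exp(-tau * lam)               # heat kernel on the spectrum

def localized(i):
    # T_i g in the vertex domain: sqrt(N) sum_l g(lam_l) u_l(i) u_l(n).
    return np.sqrt(N) * U @ (g * U[i])

def kdd(i, j):
    # KDD(i, j) = || T_i g - T_j g ||_2
    return np.linalg.norm(localized(i) - localized(j))
```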
Graph transductive learning
In this section, we cast the problem of diffusing the information obtained on a few samples of the data (e.g. using sampling schemes such as those defined in Section Document) in a transductive inference framework. In this setting, we observe a label field or signal only at a subset of vertices , with being the observed signal, also called the label function. The goal of transductive learning is to predict the missing signal/labels using both the observed signal and the remaining data points.
Global graph diffusion
The transductive inference problem on graphs can be solved in a number of ways, for example using Tikhonov regression:
$f^{\star} = \operatorname*{arg\,min}_{f} \; \|\mathbf{M} f - y\|_2^2 + \gamma \, f^{\top} \mathbf{L} f,$
where $\mathbf{M}$ is the sampling operator and $\mathbf{L}$ the graph Laplacian. An alternative to the use of the Dirichlet smoothness constraint is to use the graph Total Variation (TV). The regression would thus become:
$f^{\star} = \operatorname*{arg\,min}_{f} \; \|\mathbf{M} f - y\|_2^2 + \gamma \, \|\nabla_{\mathcal{G}} f\|_1,$
with $\nabla_{\mathcal{G}}$ the graph gradient and $\gamma > 0$ a regularization parameter. For large-scale learning, solving the optimization problems described above can be too expensive, and one typically uses accelerated descent methods.
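For the Tikhonov variant, the minimizer has a closed form obtained by setting the gradient to zero; a minimal sketch, in which the graph, the observed set, the labels, and the regularization weight are all illustrative assumptions:

```python
import numpy as np

W = np.array([[0., 1., 0., 2.],
              [1., 0., 3., 0.],
              [0., 3., 0., 1.],
              [2., 0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W
N = L.shape[0]

obs = np.array([0, 2])               # observed vertices (assumed)
M = np.eye(N)[obs]                   # sampling operator: keeps observed rows
y = np.array([1.0, -1.0])            # observed labels
gamma = 0.1                          # regularization weight

# Minimizer of ||M f - y||_2^2 + gamma * f^T L f solves the normal
# equations (M^T M + gamma * L) f = M^T y.
f_star = np.linalg.solve(M.T @ M + gamma * L, M.T @ y)
```

For a connected graph with at least one observed node the system matrix is invertible, so the solve is well posed.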
RKHS transductive learning on graphs
Motivation
Our first contribution is to replace the smoothness term arising in the problems above by constraining the solution to belong to the finite-dimensional Reproducing Kernel Hilbert Space (RKHS) corresponding to the graph kernel , for some filter . In this case, we instead solve the following problem:
and show that the solution is given by a simple low-pass filtering step applied to the labelled examples.
Transductive learning and graph filters
In this section, we formulate transductive learning as a finite dimensional regression problem. This problem is solved by constructing a reproducing kernel Hilbert space from a graph filter, which controls the smoothness of the solution and provides a fast algorithm to compute it.
An empirical reproducing kernel Hilbert space
Let be a smooth, strictly positive function defining a graph filter as in Section Document. The graph filter defines the following matrix:
where is the localization operator at vertex . Since the filter is strictly positive,
is positive definite and can be written as the Gram matrix of a set of linearly independent vectors. To see this, we use the spectral representation:
Let be the th row of ; we immediately see that . More explicitly, these vectors are written in terms of the graph filter:
These expressions suggest defining the Hilbert space as the closure of all linear combinations of localized graph filters . This space is therefore composed of functions of the form:
() 
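The positive-definite Gram structure can be sketched numerically. Here we assume the kernel matrix is $g(\mathbf{L})$ and take the vectors as the rows of $\mathbf{U} g(\boldsymbol{\Lambda})^{1/2}$, which is one natural reading of the construction; the exact normalization used in the text may differ:

```python
import numpy as np

W = np.array([[0., 1., 0., 2.],
              [1., 0., 3., 0.],
              [0., 3., 0., 1.],
              [2., 0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

g = np.exp(-lam)                     # a smooth, strictly positive filter
K = U @ np.diag(g) @ U.T             # candidate kernel matrix, K = g(L)

# Gram structure: K_ij = <v_i, v_j> with v_i the i-th row of V.
V = U @ np.diag(np.sqrt(g))
```

Strict positivity of the filter makes K positive definite and the rows of V linearly independent.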
Note that any has a well-defined graph Fourier transform:
This allows us to equip with the following scalar product:
and the vectors form an orthonormal basis of :
Let us now see that is a reproducing kernel Hilbert space (RKHS). We show that the scalar product with in is the evaluation functional at vertex . We first compute:
By linearity of the scalar product and the definition of the space , we have:
Finally, for any , , we have the following explicit form of their norm:
Transductive learning
Now that we have established as a valid RKHS, we seek to recover the full signal by solving the following problem: