Topics Course on Deep Learning UC Berkeley
Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.
Convolutional Neural Networks (CNNs) have been extremely successful in machine learning problems where the coordinates of the underlying data representation have a grid structure (in 1, 2 and 3 dimensions), and the data to be studied in those coordinates has translational equivariance/invariance with respect to this grid. Speech, images [14, 20, 22] and video [23, 18] are prominent examples that fall into this category.
On a regular grid, a CNN is able to exploit several structures that play nicely together to greatly reduce the number of parameters in the system:
The translation structure, allowing the use of filters instead of generic linear maps and hence weight sharing.
The metric on the grid, allowing compactly supported filters, whose support is typically much smaller than the size of the input signals.
The multiscale dyadic clustering of the grid, allowing subsampling, implemented through stride convolutions and pooling.
If there are n input coordinates on a grid in d dimensions, a fully connected layer with m outputs requires n·m parameters, which in typical operating regimes amounts to a complexity of O(n^2) parameters. Using arbitrary filters instead of generic fully connected layers reduces the complexity to O(n) parameters per feature map, as does using the metric structure by building a “locally connected” net [8, 17]. Using the two together gives O(k·S) parameters, where k is the number of feature maps and S is the support of the filters, and as a result the learning complexity is independent of n. Finally, using the multiscale dyadic clustering allows each successive layer to use a factor of 2^d fewer (spatial) coordinates per filter.
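The parameter counts above can be made concrete with a small back-of-the-envelope computation in Python (the grid size, number of feature maps, and filter support below are illustrative choices, not values from the paper):

```python
# Illustrative parameter counts for one layer on a 2-D grid.
n = 32 * 32        # number of input coordinates
m = n              # same number of outputs
k = 16             # number of feature maps
S = 5 * 5          # spatial support of each filter

fully_connected = n * m    # O(n^2): one weight per (input, output) pair
convolutional = k * S      # O(k*S): weights shared across all grid positions

print(fully_connected)     # 1048576
print(convolutional)       # 400
```

The gap between the two counts grows quadratically with the grid size, which is why weight sharing is essential at scale.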
In many contexts, however, one may be faced with data defined over coordinates which lack some, or all, of the above geometrical properties. For instance, data defined on 3-D meshes, such as surface tension or temperature, measurements from a network of meteorological stations, or data coming from social networks or collaborative filtering, are all examples of structured inputs on which one cannot apply standard convolutional networks. Another relevant example is the intermediate representation arising from deep neural networks. Although the spatial convolutional structure can be exploited at several layers, typical CNN architectures do not assume any geometry in the “feature” dimension, resulting in 4-D tensors which are only convolutional along their spatial coordinates.
Graphs offer a natural framework to generalize the low-dimensional grid structure, and by extension the notion of convolution. In this work, we will discuss constructions of deep neural networks on graphs other than regular grids. We propose two different constructions. In the first one, we show that one can extend properties (2) and (3) to general graphs, and use them to define “locally” connected and pooling layers, which require O(n) parameters instead of O(n^2). We term this the spatial construction. The other construction, which we call spectral construction, draws on the properties of convolutions in the Fourier domain. In R^d, convolutions are linear operators diagonalised by the Fourier basis exp(iω·t), ω, t ∈ R^d. One may then extend convolutions to general graphs by finding the corresponding “Fourier” basis. This equivalence is given through the graph Laplacian, an operator which provides a harmonic analysis on graphs. The spectral construction needs at most O(n) parameters per feature map, and also enables a construction where the number of parameters is independent of the input dimension n. These constructions allow efficient forward propagation and can be applied to datasets with a very large number of coordinates.
Our main contributions are summarized as follows:
We show that from a weak geometric structure in the input domain it is possible to obtain efficient architectures using O(n) parameters, which we validate on low-dimensional graph datasets.
We introduce a construction using O(1) parameters which we empirically verify, and we discuss its connections with a harmonic analysis problem on graphs.
The most immediate generalisation of CNNs to general graphs is to consider multiscale, hierarchical, local receptive fields, as suggested in . For that purpose, the grid will be replaced by a weighted graph G = (Ω, W), where Ω is a discrete set of size n and W is a symmetric and nonnegative n × n matrix.
The notion of locality can be generalized easily in the context of a graph. Indeed, the weights in a graph determine a notion of locality. For example, a straightforward way to define neighborhoods on W is to set a threshold δ > 0 and take neighborhoods N_δ(j) = { i ∈ Ω : W_ij > δ }.
We can restrict attention to sparse “filters” with receptive fields given by these neighborhoods to get locally connected networks, thus reducing the number of parameters in a filter layer to O(S·n), where S is the average neighborhood size.
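A minimal NumPy sketch of these threshold-based neighborhoods follows; the toy weight matrix W and the threshold value are made up for illustration, but δ and W play the roles described above:

```python
import numpy as np

# Toy symmetric, nonnegative weight matrix over 4 nodes (illustrative values).
W = np.array([
    [0.0, 0.9, 0.1, 0.0],
    [0.9, 0.0, 0.8, 0.1],
    [0.1, 0.8, 0.0, 0.7],
    [0.0, 0.1, 0.7, 0.0],
])
delta = 0.5  # locality threshold

def neighborhood(W, j, delta):
    """Indices i with W[i, j] > delta: the delta-neighborhood of node j."""
    return np.flatnonzero(W[:, j] > delta)

print(neighborhood(W, 1, delta))  # [0 2]
```

A locally connected layer would then allow nonzero filter weights only at these index pairs, giving the O(S·n) parameter count.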
CNNs reduce the size of the grid via pooling and subsampling layers. These layers are possible because of the natural multiscale clustering of the grid: they input all the feature maps over a cluster, and output a single feature for that cluster. On the grid, the dyadic clustering behaves nicely with respect to the metric and the Laplacian (and so with the translation structure). There is a large literature on forming multiscale clusterings on graphs, see for example [16, 25, 6, 13]. Finding multiscale clusterings that are provably guaranteed to behave well w.r.t. the Laplacian on the graph is still an open area of research. In this work we will use a naive agglomerative method.
Figure 1 illustrates a multiresolution clustering of a graph with the corresponding neighborhoods.
The spatial construction starts with a multiscale clustering of the graph, similarly as in . We consider K scales. We set Ω_0 = Ω, and for each k = 1, …, K, we define Ω_k, a partition of Ω_{k-1} into d_k clusters, and a collection N_k of neighborhoods around each element of Ω_{k-1}.
With these in hand, we can now define the k-th layer of the network. We assume without loss of generality that the input signal is a real signal defined on Ω_0, and we denote by f_k the number of “filters” created at each layer k. Each layer of the network will transform an f_{k-1}-dimensional signal indexed by Ω_{k-1} into an f_k-dimensional signal indexed by Ω_k, thus trading off spatial resolution with newly created feature coordinates.
More formally, if x_k = (x_{k,i} ; i = 1, …, f_{k-1}) is the input to layer k, its output x_{k+1} is defined as

x_{k+1,j} = L_k h( Σ_{i=1}^{f_{k-1}} F_{k,i,j} x_{k,i} ) ,  j = 1, …, f_k ,  (2.1)

where F_{k,i,j} is a sparse n_{k-1} × n_{k-1} matrix with nonzero entries in the locations given by N_k, and L_k outputs the result of a pooling operation over each cluster in Ω_k. This construction is illustrated in Figure 2.
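The layer in (2.1) can be sketched in a few lines of NumPy. This is a simplified sketch, not the paper's implementation: it uses dense filter matrices (sparse in practice), tanh as the nonlinearity h, and max-pooling for L_k; all function and variable names are illustrative.

```python
import numpy as np

def spatial_layer(x, filters, clusters):
    """One spatially local filtering step followed by cluster pooling.

    x        : (f_in, n) input signal over the n nodes of the current scale
    filters  : (f_in, f_out, n, n) filter matrices (sparse in practice,
               with support restricted to the neighborhoods N_k)
    clusters : list of index arrays partitioning the n nodes (the sets in Ω_k)
    returns  : (f_out, len(clusters)) pooled output signal
    """
    f_in, n = x.shape
    f_out = filters.shape[1]
    y = np.zeros((f_out, n))
    for j in range(f_out):
        for i in range(f_in):
            y[j] += filters[i, j] @ x[i]   # sum_i F_{k,i,j} x_{k,i}
    y = np.tanh(y)                          # pointwise nonlinearity h
    # L_k: max-pool each feature over every cluster
    return np.stack([y[:, c].max(axis=1) for c in clusters], axis=1)
```

Each output coordinate thus aggregates one cluster of the coarser partition, halving (or more) the spatial resolution while multiplying the feature dimension.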
In the current code, to build Ω_k and N_k we use the following construction: Ω_k is found as an ε-covering of Ω_{k-1} for the similarities at scale k. (An ε-covering of a set Ω using a similarity kernel K is a partition P = {P_1, …, P_c} such that sup_n sup_{x,x' ∈ P_n} K(x, x') ≥ ε.) This is just one amongst many strategies to perform hierarchical agglomerative clustering. For a larger account of the problem, we refer the reader to .
If S_k is the average support of the neighborhoods N_k, we verify from (2.1) that the number of parameters to learn at layer k is O(S_k · n_k · f_k · f_{k-1}).
In practice, we have S_k · n_k ≈ α · n_{k-1}, where α is the oversampling factor, typically α ∈ (1, 4).
The spatial construction might appear naïve, but it has the advantage that it requires relatively weak regularity assumptions on the graph. Graphs having low intrinsic dimension have localized neighborhoods, even if no nice global embedding exists. However, under this construction there is no easy way to induce weight sharing across different locations of the graph. One possible option is to consider a global embedding of the graph into a low dimensional space, which is rare in practice for high-dimensional data.
The global structure of the graph can be exploited with the spectrum of its graph-Laplacian to generalize the convolution operator.
The combinatorial Laplacian L = D − W or the graph Laplacian L = I − D^{−1/2} W D^{−1/2}, where D is the diagonal degree matrix with D_ii = Σ_j W_ij, are generalizations of the Laplacian on the grid; and frequency and smoothness relative to W are interrelated through these operators [2, 25]. For simplicity, here we use the combinatorial Laplacian. If x is an n-dimensional vector, a natural definition of the smoothness functional ||∇x||²_W at a node i is

||∇x||²_W(i) = Σ_j W_ij [x(i) − x(j)]² ,

and hence ||∇x||²_W = Σ_i Σ_j W_ij [x(i) − x(j)]².

With this definition, the smoothest vector is a constant:

v_0 = argmin_{x ∈ R^n, ||x|| = 1} ||∇x||²_W = (1/√n) 1_n .

Each successive

v_i = argmin_{x ∈ R^n, ||x|| = 1, x ⊥ {v_0, …, v_{i−1}}} ||∇x||²_W

is an eigenvector of L, and the eigenvalues λ_i allow the smoothness of a vector x to be read off from the coefficients of x in the basis (v_0, …, v_{n−1}), equivalently as the Fourier coefficients of a signal defined on a grid. Thus, just as in the case of the grid, where the eigenvectors of the Laplacian are the Fourier vectors, diagonal operators on the spectrum of the Laplacian modulate the smoothness of their operands. Moreover, using these diagonal operators reduces the number of parameters of a filter from n² to n.
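These facts are easy to verify numerically. The sketch below (the toy graph is illustrative) builds the combinatorial Laplacian L = D − W of a small connected graph, checks that its smoothest eigenvector is constant with eigenvalue zero, and checks that the smoothness of any signal is read off from its spectral coefficients:

```python
import numpy as np

# Toy connected graph on 4 nodes (illustrative adjacency weights).
W = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W      # combinatorial Laplacian D - W
eigvals, V = np.linalg.eigh(L)      # eigh returns eigenvalues in ascending order

# The smoothest eigenvector (eigenvalue ~ 0) is constant.
print(np.allclose(eigvals[0], 0.0))   # True
print(np.allclose(V[:, 0], V[0, 0]))  # True: all entries equal

# Smoothness x^T L x equals sum_i lambda_i * <x, v_i>^2.
x = np.random.randn(4)
coeffs = V.T @ x
print(np.allclose(x @ L @ x, (eigvals * coeffs**2).sum()))  # True
```

The last identity is exactly what lets diagonal operators on the spectrum modulate smoothness: scaling coefficient i scales the contribution of frequency λ_i.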
These three structures above are all tied together through the Laplacian operator on the d-dimensional grid, Δx = Σ_i ∂²x/∂u_i²:
The translation structure: filters are multipliers on the eigenvalues of the Laplacian Δ.
The metric: functions that are smooth relative to the grid metric have coefficients with quick decay in the basis of eigenvectors of Δ.
The multiscale clustering: the eigenvectors of the subsampled Laplacian are the low frequency eigenvectors of Δ.
As in Section 2.3, let W be a weighted graph with index set denoted by Ω, and let V be the matrix of eigenvectors of the graph Laplacian L, ordered by eigenvalue. Given a weighted graph, we can try to generalize a convolutional net by operating on the spectrum of the weights, given by the eigenvectors of its graph Laplacian.
For simplicity, let us first describe a construction where each layer k transforms an input vector x_k of size |Ω| × f_{k−1} into an output x_{k+1} of dimensions |Ω| × f_k, that is, without spatial subsampling:

x_{k+1,j} = h( V Σ_{i=1}^{f_{k−1}} F_{k,i,j} V^T x_{k,i} ) ,  j = 1, …, f_k ,  (3.2)

where F_{k,i,j} is a diagonal matrix and, as before, h is a real valued nonlinearity.
Often, only the first d eigenvectors of the Laplacian are useful in practice, as these carry the smooth geometry of the graph. The cutoff frequency d depends upon the intrinsic regularity of the graph and also the sample size. In that case, we can replace V in (3.2) by V_d, obtained by keeping the first d columns of V.
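A minimal sketch of the truncated spectral layer of (3.2) follows. This is not the paper's code: the function and argument names are illustrative, ReLU stands in for the nonlinearity h, and the diagonal matrices F_{k,i,j} are stored as length-d vectors of multipliers.

```python
import numpy as np

def spectral_layer(x, V_d, multipliers):
    """Spectral construction of Eq. (3.2) with a frequency cutoff d.

    x           : (f_in, n) input signal over the n graph nodes
    V_d         : (n, d) first d eigenvectors of the graph Laplacian
    multipliers : (f_in, f_out, d) learned diagonals of the F_{k,i,j}
    returns     : (f_out, n) output signal
    """
    f_in, n = x.shape
    f_out = multipliers.shape[1]
    x_hat = x @ V_d                                   # to the spectral domain
    y = np.zeros((f_out, n))
    for j in range(f_out):
        s = (multipliers[:, j] * x_hat).sum(axis=0)   # sum_i F_{k,i,j} x̂_i
        y[j] = V_d @ s                                # back to the node domain
    return np.maximum(y, 0.0)                         # nonlinearity h (ReLU)
```

Note that the nonlinearity is applied on the space side, which is why the multiplications by V_d and its transpose cannot be avoided in this form.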
If the graph has an underlying group invariance this construction can discover it; the best example is the standard CNN, see Section 3.3. However, in many cases the graph does not have a group structure, or the group structure does not commute with the Laplacian, and so we cannot think of each filter as passing a template across Ω and recording the correlation of the template with that location. Ω may not be homogeneous in a way that allows this to make sense, as we shall see in the example from Section 5.1.
Assuming only d eigenvectors of the Laplacian are kept, equation (3.2) shows that each layer requires f_{k−1} · f_k · d = O(n) parameters to train. We shall see in Section 3.4 how the global and local regularity of the graph can be combined to produce layers with O(1) parameters, i.e. such that the number of learnable parameters does not depend upon the size of the input.
This construction can suffer from the fact that most graphs have meaningful eigenvectors only for the very top of the spectrum. Even when the individual high frequency eigenvectors are not meaningful, a cohort of high frequency eigenvectors may contain meaningful information. However this construction may not be able to access this information because it is nearly diagonal at the highest frequencies.
Finally, it is not obvious how to do either the forward or the backward propagation efficiently while applying the nonlinearity on the space side, as we have to make the expensive multiplications by V and V^T; and it is not obvious how to do standard nonlinearities on the spectral side. However, see Section 4.1.
A simple, and in some sense universal, choice of weight matrix in this construction is the covariance of the data. Let X = (x_k)_k be the input data distribution, with x_k ∈ R^n. If each coordinate j ∈ Ω has the same variance, σ_j² = E(|x(j) − E(x(j))|²), then diagonal operators on the Laplacian simply scale the principal components of X. While this may seem trivial, it is well known that the principal components of the set of images of a fixed size experimentally correspond to the Discrete Cosine Transform basis, organized by frequency. This can be explained by noticing that images are translation invariant, and hence the covariance operator Σ(u, v) = E((x(u) − E(x(u)))(x(v) − E(x(v)))) satisfies Σ(u, v) = Σ(u − v), hence it is diagonalized by the Fourier basis. Moreover, it is well known that natural images exhibit a power spectrum E(|x̂(ω)|²) ∼ |ω|^{−2}, since nearby pixels are more correlated than far away pixels. As a result, the principal components of the covariance are essentially ordered from low to high frequencies, which is consistent with the standard group structure of the Fourier basis.
The upshot is that, when applied to natural images, the construction in Section 3.2 using the covariance as the similarity kernel recovers a standard convolutional network, without any prior knowledge. Indeed, the linear operators V F_{k,i,j} V^T from Eq. (3.2) are by the previous argument diagonal in the Fourier basis, hence translation invariant, hence “classic” convolutions. Moreover, Section 4.1 explains how spatial subsampling can also be obtained by dropping the last part of the spectrum of the Laplacian, leading to max-pooling, and ultimately to deep convolutional networks.
In the standard grid, we do not need a parameter for each Fourier function because the filters are compactly supported in space, but in (3.2), each filter requires one parameter for each eigenvector on which it acts. Even if the filters were compactly supported in space in this construction, we still would not get fewer than O(n) parameters per filter, because the spatial response would be different at each location.
One possibility for getting around this is to generalize the duality of the grid. On the Euclidean grid, decay of a function in the spatial domain is translated into smoothness in the Fourier domain, and vice versa. It results that a function x which is spatially localized has a smooth frequency response x̂. In that case, the eigenvectors of the Laplacian can be thought of as being arranged on a grid isomorphic to the original spatial grid.
This suggests that, in order to learn a layer in which features will be not only shared across locations but also well localized in the original domain, one can learn spectral multipliers which are smooth. Smoothness can be prescribed by learning only a subsampled set of frequency multipliers and using an interpolation kernel, such as cubic splines, to obtain the rest. However, the notion of smoothness requires a geometry in the domain of spectral coordinates, which can be obtained by defining a dual graph as shown by (3.1). As previously discussed, on regular grids this geometry is given by the notion of frequency, but this cannot be directly generalized to other graphs.
A particularly simple and naive choice consists in choosing a 1-dimensional arrangement, obtained by ordering the eigenvectors according to their eigenvalues. In this setting, the diagonal of each filter F_{k,i,j} (of size at most n) is parametrized by

diag(F_{k,i,j}) = K w_{k,i,j} ,

where K is a fixed n × q cubic spline kernel and w_{k,i,j} ∈ R^q are the spline coefficients. If one seeks to have filters with constant spatial support (i.e., whose support is independent of the input size n), it follows that one can choose a sampling step α ∼ n in the spectral domain, which results in a constant number q = O(1) of coefficients per filter.
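The smoothing step above can be sketched with SciPy's cubic spline interpolation standing in for the fixed kernel K; the sizes n and q, the anchor positions, and the coefficient vector w are all illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Learn only q coefficients at subsampled spectral positions and
# interpolate them to all n eigenvalues, yielding a smooth multiplier.
n, q = 64, 8
anchors = np.linspace(0, n - 1, q)             # subsampled spectral positions
w = np.random.randn(q)                          # learnable spline coefficients
diag_F = CubicSpline(anchors, w)(np.arange(n))  # smooth diagonal, length n

assert diag_F.shape == (n,)
```

Because the interpolating spline passes exactly through the q anchor values, the layer still controls those frequencies directly while the remaining n − q multipliers are forced to vary smoothly, which is what induces spatial localization.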
Although results from Section 5 seem to indicate that the 1-D arrangement given by the spectrum of the Laplacian is efficient at creating spatially localized filters, a fundamental question is how to define a dual graph capturing the geometry of spectral coordinates. A possible algorithmic strategy is to consider an input distribution X = (x_k)_k consisting of spatially localized signals and to construct a dual graph by measuring the similarity of the signals in the spectral domain, x̂ = V^T x. The similarity could be measured, for instance, with the correlation E((|x̂| − E(|x̂|))(|x̂| − E(|x̂|))^T).
A wavelet basis on a grid, in the language of neural networks, is a linear autoencoder with certain provable regularity properties (in particular, when encoding various classes of smooth functions, sparsity is guaranteed). The forward propagation in a classical wavelet transform strongly resembles the forward propagation in a neural network, except that there is only one filter map at each layer (and it is usually the same filter at each layer), and the output of each layer is kept, rather than just the output of the final layer. Classically, the filter is not learned, but constructed to facilitate the regularity proofs.
In the graph case, the goal is the same, except that the smoothness on the grid is replaced by smoothness on the graph. As in the classical case, most works have tried to construct the wavelets explicitly (that is, without learning), based on the graph, so that the corresponding autoencoder has the correct sparsity properties. In this work, and the recent work , the “filters” are constrained by construction to have some of the regularity properties of wavelets, but are also trained so that they are appropriate for a task separate from (but perhaps related to) the smoothness on the graph. Whereas  still builds a (sparse) linear autoencoder that keeps the basic wavelet transform setup, this work focuses on nonlinear constructions, and in particular tries to build analogues of CNNs.
Another line of work which is relevant to the present work is that of discovering grid topologies from data. In , the authors empirically confirm the statements of Section 3.3, by showing that one can recover the 2-D grid structure via second order statistics. In [3, 12], the authors estimate similarities between features to construct locally connected networks.
We could improve both constructions, and to some extent unify them, with a multiscale clustering of the graph that plays nicely with the Laplacian. As mentioned before, in the case of the grid, the standard dyadic cubes have the property that subsampling the Fourier functions on the grid to a coarser grid is the same as finding the Fourier functions on the coarser grid. This property would eliminate the annoying necessity of mapping the spectral construction to the finest grid at each layer to do the nonlinearity; and would allow us to interpret (via interpolation) the local filters at deeper layers in the spatial construction to be low frequency.
This kind of clustering is the underpinning of the multigrid method for solving discretized PDEs (and linear systems in general) . There have been several papers extending the multigrid method, and in particular the multiscale clusterings associated with the multigrid method, to settings more general than regular grids; see for example [16, 15] for situations as in this paper, and see  for the algebraic multigrid method in general. In this work, for simplicity, we use a naive multiscale clustering on the space side construction that is not guaranteed to respect the original graph's Laplacian, and no explicit spatial clustering in the spectral construction.
The previous constructions are tested on two variations of the MNIST data set. In the first, we subsample the normal 28 × 28 grid to get 400 coordinates. These coordinates still have a 2-D structure, but it is not possible to use standard convolutions. We then make a dataset by placing 4096 points on the 3-D unit sphere and projecting random MNIST images onto this set of points, as described in Section 5.2.
In all the experiments, we use Rectified Linear Units as nonlinearities and max-pooling. We train the models with cross-entropy loss, using a fixed learning rate with momentum.
The corresponding figures show the hierarchical clustering constructed from the graph and some eigenfunctions of the graph Laplacian, respectively. The performance of various graph architectures is reported in Table 1. To serve as a baseline, we compute the standard Nearest Neighbor classifier, which performs slightly worse than on the full MNIST dataset. A two-layer Fully Connected neural network reduces the error further. The geometrical structure of the data can be exploited with the graph CNN architectures. Local Receptive Fields adapted to the graph structure outperform the fully connected network. In particular, two layers of filtering and max-pooling define a network which efficiently aggregates information to the final classifier. The spectral construction, using a fixed frequency cutoff, performs slightly worse on this dataset. However, the frequency smoothing architecture described in Section 3.4, which contains the smallest number of parameters, outperforms the regular spectral construction.
These results can be interpreted as follows. MNIST digits are characterized by localized oriented strokes, which require measurements with good spatial localization. Locally receptive fields are constructed to explicitly satisfy this constraint, whereas in the spectral construction the measurements are not enforced to become spatially localized. Adding the smoothness constraint on the spectrum of the filters improves classification results, since the filters are enforced to have better spatial localization.
This fact is illustrated in Figure 6. We verify that Local Receptive Fields encode different templates across different spatial neighborhoods, since there is no global structure tying them together. On the other hand, spectral constructions have the capacity to generate local measurements that generalize across the graph. When the spectral multipliers are not constrained, the resulting filters tend to be spatially delocalized, as shown in panels (c)-(d). This corresponds to the fundamental limitation of Fourier analysis to encode local phenomena. However, we observe in panels (e)-(f) that a simple smoothing across the spectrum of the graph Laplacian restores some form of spatial localization and creates filters which generalize across different spatial positions, as should be expected for convolution operators.
Table 1 (excerpt): spectral architectures 400-SP1600-10 and 400-SP4800-10.
We test in this section the graph CNN constructions on another low-dimensional graph. In this case, we lift the MNIST digits to the sphere. The dataset is constructed as follows. We first sample 4096 random points from the unit sphere S² ⊂ R³. We then consider an orthogonal basis E = (e_1, e_2, e_3) of R³ and a random covariance operator Σ = (E + W)^T (E + W), where W is a Gaussian iid matrix with variance σ² < 1. For each signal x_i from the original MNIST dataset, we sample a covariance operator Σ_i from the former distribution and consider its PCA basis U_i. This basis defines a point of view and in-plane rotation which we use to project x_i onto S² using bicubic interpolation. Figure 7 shows examples of the resulting projected digits. Since the digits ‘6’ and ‘9’ are equivalent modulo rotations, we remove the ‘9’ from the dataset. Figure 8 shows two eigenvectors of the graph Laplacian.
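The first step of this construction, sampling random points on the unit sphere, can be sketched as follows (a standard trick, not the paper's code: normalizing i.i.d. Gaussian draws gives points uniformly distributed on the sphere):

```python
import numpy as np

# Sample 4096 points uniformly on the unit sphere S^2 in R^3.
rng = np.random.default_rng(0)
p = rng.standard_normal((4096, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)

assert p.shape == (4096, 3)
```

The graph W over these points would then be built from pairwise similarities (e.g. a Gaussian kernel on the 3-D distances), after which the spatial and spectral constructions apply unchanged.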
We first consider “mild” rotations, with small variance σ². The effect of such rotations is however not negligible. Indeed, Table 2 shows that the Nearest Neighbor classifier performs considerably worse than in the previous example. All the neural network architectures we considered significantly improve over this basic classifier. Furthermore, we observe that both convolutional constructions match the fully connected constructions with far fewer parameters (but, in this case, do not improve their performance). Figure 9 displays the filters learnt using different constructions. Again, we verify that the smooth spectral construction consistently improves the performance, and learns spatially localized filters, even using the naive 1-D organization of eigenvectors, which detect similar features across different locations of the graph (panels (e)-(f)).
Finally, we consider the uniform rotation case, where now the PCA basis is a random basis of R³. In that case, the intra-class variability is much more severe, as seen by inspecting the performance of the Nearest Neighbor classifier. All the previously described neural network architectures significantly improve over this classifier, although the performance is notably worse than in the mild rotation scenario. In this case, an efficient representation needs to be fully roto-translation invariant. Since this is a non-commutative group, it is likely that deeper architectures perform better than the models considered here.
Table 2 (excerpt): spectral architecture 4096-SP32K-MP3000-FC300-9, evaluated under the different rotation regimes.
Using graph-based analogues of convolutional architectures can greatly reduce the number of parameters in a neural network without worsening (and often improving) the test error, while simultaneously giving a faster forward propagation. These methods can be scaled to data with a large number of coordinates that have a notion of locality.
There is much to be done here. We suspect that with more careful training and deeper networks we can consistently improve on fully connected networks on “manifold like” graphs like the sampled sphere. Furthermore, we intend to apply these techniques to less artificial problems, for example Netflix-like recommendation problems, where there is a biclustering of the data and coordinates. Finally, the fact that smoothness on the naive ordering of the eigenvectors leads to improved results and localized filters suggests that it may be possible to make “dual” constructions with O(1) parameters per filter in much more generality than the grid.
Multiscale wavelets on trees, graphs and high dimensional data: Theory and applications to semi supervised learning. In Johannes Fürnkranz and Thorsten Joachims, editors, ICML, pages 367–374, 2010.
Wavelets on graphs via deep learning.In NIPS, 2013.
A tutorial on spectral clustering. Technical Report 149, August 2006.