Learning Compressed Transforms with Low Displacement Rank

10/04/2018, by Anna T. Thomas, et al.

The low displacement rank (LDR) framework for structured matrices represents a matrix through two displacement operators and a low-rank residual. Existing use of LDR matrices in deep learning has applied fixed displacement operators encoding forms of shift invariance akin to convolutions. We introduce a rich class of LDR matrices with more general displacement operators, and explicitly learn over both the operators and the low-rank component. This class generalizes several previous constructions while preserving compression and efficient computation. We prove bounds on the VC dimension of multi-layer neural networks with structured weight matrices and show empirically that our compact parameterization can reduce the sample complexity of learning. When replacing weight layers in fully-connected, convolutional, and recurrent neural networks for image classification and language modeling tasks, our new classes exceed the accuracy of existing compression approaches, and on some tasks even outperform general unstructured layers while using more than 20X fewer parameters.


1 Introduction

Recent years have seen a surge of interest in structured representations for deep learning, motivated by achieving compression and acceleration while maintaining generalization properties. A popular approach for learning compact models involves constraining the weight matrices to exhibit some form of dense but compressible structure and learning directly over the parameterization of this structure. Examples of structures explored for the weight matrices of deep learning pipelines include low-rank matrices [15, 41], low-distortion projections [48], (block-)circulant matrices [8, 17], Toeplitz-like matrices [33, 44], and constructions derived from Fourier-related transforms [36]. Though they confer significant storage and computation benefits, these constructions tend to underperform general fully-connected layers in deep learning. This raises the question of whether broader classes of structured matrices can achieve superior downstream performance while retaining compression guarantees.

Our approach leverages the low displacement rank (LDR) framework (Section 2), which encodes structure through two sparse displacement operators and a low-rank residual term [26]. Previous work studying neural networks with LDR weight matrices assumes fixed displacement operators and learns only over the residual [44, 49]. The only case attempted in practice that explicitly employs the LDR framework uses fixed operators encoding shift invariance, producing weight matrices which were found to achieve superior downstream quality to several other compression approaches [44]. Unlike previous work, we consider learning the displacement operators jointly with the low-rank residual. Building upon recent progress on structured dense matrix-vector multiplication [14], we introduce a much more general class of LDR matrices and develop practical algorithms for using these matrices in deep learning architectures. We show that the resulting class of matrices subsumes many previously used structured layers, including constructions that did not explicitly use the LDR framework [36, 17]. When compressing weight matrices in fully-connected, convolutional, and recurrent neural networks, we empirically demonstrate improved accuracy over existing approaches. Furthermore, on several tasks our constructions achieve higher accuracy than general unstructured layers while using an order of magnitude fewer parameters.

To shed light on the empirical success of LDR matrices in machine learning, we draw connections to recent work on learning equivariant representations, and hope to motivate further investigations of this link. Notably, many successful previous methods for compression apply classes of structured matrices related to convolutions [8, 17, 44]; while their explicit aim is to accelerate training and reduce memory costs, this constraint implicitly encodes a shift-invariant structure that is well-suited for image and audio data. We observe that the LDR construction enforces a natural notion of approximate equivariance to transformations governed by the displacement operators, suggesting that, in contrast, our approach of learning the operators allows for modeling and learning more general latent structures in data that may not be precisely known in advance.

Despite their increased expressiveness, our new classes retain the storage and computational benefits of conventional structured representations. Our construction provides guaranteed compression (from quadratic to linear parameters) and matrix-vector multiplication algorithms that are quasi-linear in the number of parameters. We additionally provide the first analysis of the sample complexity of learning neural networks with LDR weight matrices, which extends to low-rank, Toeplitz-like and other previously explored fixed classes of LDR matrices. More generally, our analysis applies to structured matrices whose parameters can interact multiplicatively with high degree. We prove that the class of neural networks constructed from these matrices retains VC dimension almost linear in the number of parameters, which implies that LDR matrices with learned displacement operators are still efficiently recoverable from data. This is consistent with our empirical results, which suggest that constraining weight layers to our broad class of LDR matrices can reduce the sample complexity of learning compared to unstructured weights.

We provide a detailed review of previous work and connections to our approach in Appendix B.

Summary of contributions
  • We introduce a rich class of LDR matrices where the displacement operators are explicitly learned from data, and provide multiplication algorithms implemented in PyTorch (Section 3). Our code is available at https://github.com/HazyResearch/structured-nets.

  • We prove that the VC dimension of multi-layer neural networks with LDR weight matrices, which encompasses a broad class of previously explored approaches including the low-rank and Toeplitz-like classes, is quasi-linear in the number of parameters (Section 4).

  • We empirically demonstrate that our construction improves downstream quality when compressing weight layers in fully-connected, convolutional, and recurrent neural networks compared to previous compression approaches, and on some tasks can even outperform general unstructured layers (Section 5).

2 Background: displacement rank

The generic term structured matrix refers to an $n \times n$ matrix that can be represented in much fewer than $n^2$ parameters, and admits fast operations such as matrix-vector multiplication. The displacement rank approach represents a structured matrix $M \in \mathbb{R}^{n \times n}$ through displacement operators $(A, B)$ defining a linear map $\nabla_{A,B}$ on matrices, and a residual $R$, so that if

$\nabla_{A,B}[M] := AM - MB = R \quad (1)$

then $M$ can be manipulated solely through the compressed representation $(A, B, R)$. We assume that $A$ and $B$ have disjoint eigenvalues, which guarantees that $M$ can be recovered from $(A, B, R)$ (c.f. Theorem 4.3.2, Pan [39]). The rank of $R$ (also denoted $\nabla_{A,B}[M]$) is called the displacement rank of $M$ w.r.t. $(A, B)$. (Throughout this paper we use square matrices for simplicity, but LDR is well-defined for rectangular matrices.)

The displacement approach was originally introduced to describe the Toeplitz-like matrices, which are not perfectly Toeplitz but still have shift-invariant structure [26]. These matrices have LDR with respect to shift/cycle operators. A standard formulation uses $A = Z_1$, $B = Z_{-1}$, where $Z_f$ denotes the unit-$f$-circulant matrix with $1$ on the subdiagonal and $f$ in the top-right corner. The Toeplitz-like matrices have previously been applied in deep learning and kernel approximation, and in several cases have performed significantly better than competing compressed approaches [44, 33, 10]. Figure 1 illustrates the displacement (1) for a Toeplitz matrix, showing how the shift-invariant structure of the matrix leads to a residual of rank at most 2.

Figure 1: Displacement equation for a Toeplitz matrix with respect to the shift operators $Z_1, Z_{-1}$.
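For concreteness, the following minimal NumPy/SciPy sketch (an illustration only, not the implementation released with this paper) builds a random Toeplitz matrix, checks that its residual under the shift operators $Z_1, Z_{-1}$ has rank at most 2, and recovers the matrix from the compressed representation $(A, B, R)$ by solving the Sylvester equation $AM - MB = R$:

```python
import numpy as np
from scipy.linalg import toeplitz, solve_sylvester

def Z(n, f):
    """Unit-f-circulant: 1 on the subdiagonal, f in the top-right corner."""
    op = np.diag(np.ones(n - 1), k=-1)
    op[0, -1] = f
    return op

n = 8
rng = np.random.default_rng(0)
M = toeplitz(rng.standard_normal(n), rng.standard_normal(n))  # a random Toeplitz matrix

A, B = Z(n, 1), Z(n, -1)          # shift operators with disjoint eigenvalues
R = A @ M - M @ B                 # displacement residual, Eq. (1)
print(np.linalg.matrix_rank(R))   # 2: the displacement rank of a Toeplitz matrix

# M is recoverable from (A, B, R): solve the Sylvester equation AM - MB = R.
M_rec = solve_sylvester(A, -B, R)
print(np.allclose(M, M_rec))      # True
```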

A few distinct classes of useful matrices are known to satisfy a displacement property: the classic types are the Toeplitz-, Hankel-, Vandermonde-, and Cauchy-like matrices (Appendix C, Table 5), which are ubiquitous in other disciplines [39]. These classes have fixed operators consisting of diagonal or shift matrices, and LDR properties have traditionally been analyzed in detail only for these special cases. Nonetheless, a few elegant properties hold for generic operators, stating that certain combinations of (and operations on) LDR matrices preserve low displacement rank. We call these closure properties, and introduce an additional block closure property that is related to convolutional filter channels (Section 5.2).

We use the notation $\mathcal{L}_{A,B}^{(r)}$ to refer to the matrices of displacement rank at most $r$ with respect to $(A, B)$.

Proposition 1.

LDR matrices are closed under the following operations:

  (a) Transpose/Inverse: If $M \in \mathcal{L}_{A,B}^{(r)}$, then $M^\top \in \mathcal{L}_{B^\top, A^\top}^{(r)}$ and $M^{-1} \in \mathcal{L}_{B,A}^{(r)}$.

  (b) Sum: If $M \in \mathcal{L}_{A,B}^{(r)}$ and $N \in \mathcal{L}_{A,B}^{(s)}$, then $M + N \in \mathcal{L}_{A,B}^{(r+s)}$.

  (c) Product: If $M \in \mathcal{L}_{A,B}^{(r)}$ and $N \in \mathcal{L}_{B,C}^{(s)}$, then $MN \in \mathcal{L}_{A,C}^{(r+s)}$.

  (d) Block: Let $M_{ij}$ satisfy $\operatorname{rank}(\nabla_{A_i,B_j}[M_{ij}]) \le r$ for $1 \le i, j \le k$. Then the $k \times k$ block matrix $(M_{ij})_{ij}$ has displacement rank at most $k^2 r$ with respect to the block-diagonal operators $\operatorname{diag}(A_1, \dots, A_k)$ and $\operatorname{diag}(B_1, \dots, B_k)$.

Proposition 1 is proved in Appendix C.

3 Learning displacement operators

We consider two classes of new displacement operators. These operators are fixed to be matrices with particular sparsity patterns, where the entries are treated as learnable parameters.

The first operator class consists of subdiagonal (plus corner) matrices: the entries $A_{i+1,i}$, along with the corner entry $A_{1,n}$, are the only possible non-zero entries. As $Z_f$ is a special case matching this sparsity pattern, this class is the most direct generalization of Toeplitz-like matrices with learnable operators.

The second class of operators consists of tridiagonal (plus corner) matrices: with the exception of the outer corners $A_{1,n}$ and $A_{n,1}$, an entry $A_{ij}$ can only be non-zero if $|i - j| \le 1$. Figure 2 shows the displacement operators for the Toeplitz-like class and our more general operators. We henceforth let LDR-SD and LDR-TD denote the classes of matrices with low displacement rank with respect to subdiagonal and tridiagonal operators, respectively. Note that LDR-TD contains LDR-SD.

Figure 2: The $Z_f$ operator (left), and our learnable subdiagonal (center) and tridiagonal (right) operators, corresponding to our proposed LDR-SD and LDR-TD classes.
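As a sketch of how these operator classes can be exposed as learnable parameters (an illustration in PyTorch with hypothetical helper names, not the released structured-nets module), only the allowed non-zero positions are stored and scattered into a dense matrix when the operator is needed:

```python
import torch

def subdiag_corner(subdiag, corner):
    """Subdiagonal-plus-corner operator: non-zeros only at A[i+1, i] and A[0, n-1]."""
    n = subdiag.numel() + 1
    corner_mask = torch.zeros(n, n)
    corner_mask[0, n - 1] = 1.0
    # Z_f is the special case subdiag_corner(torch.ones(n - 1), f).
    return torch.diag(subdiag, diagonal=-1) + corner * corner_mask

def tridiag_corners(sub, main, sup, corner_tr, corner_bl):
    """Tridiagonal-plus-corners operator: non-zeros only where |i - j| <= 1 or at the two corners."""
    n = main.numel()
    tr = torch.zeros(n, n); tr[0, n - 1] = 1.0
    bl = torch.zeros(n, n); bl[n - 1, 0] = 1.0
    return (torch.diag(main) + torch.diag(sub, -1) + torch.diag(sup, 1)
            + corner_tr * tr + corner_bl * bl)

n = 6
A = subdiag_corner(torch.nn.Parameter(torch.randn(n - 1)),
                   torch.nn.Parameter(torch.randn(())))
B = tridiag_corners(torch.nn.Parameter(torch.randn(n - 1)),
                    torch.nn.Parameter(torch.randn(n)),
                    torch.nn.Parameter(torch.randn(n - 1)),
                    torch.nn.Parameter(torch.randn(())),
                    torch.nn.Parameter(torch.randn(())))
print(A.shape, B.shape)  # torch.Size([6, 6]) torch.Size([6, 6])
```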
Expressiveness

The matrices we introduce can model rich structure and subsume many types of linear transformations used in machine learning. We list some of the structured matrices that have LDR with respect to tridiagonal displacement operators:

Proposition 2.

The LDR-TD matrices contain:

  (a) Toeplitz-like matrices, which themselves include many Toeplitz and circulant variants (including standard convolutional filters; see Section 5.2 and Appendix C, Corollary 1) [44, 8, 17].

  (b) Low-rank matrices.

  (c) The other classic displacement structures: Hankel-like, Vandermonde-like, and Cauchy-like matrices.

  (d) Orthogonal polynomial transforms, including the Discrete Fourier and Cosine Transforms.

  (e) Combinations and derivatives of these classes via the closure properties (Proposition 1), including structured classes previously used in machine learning such as ACDC [36] and block circulant layers [17].

These reductions are stated more formally and proved in Appendix C.1. We also include a diagram of the structured matrix classes contained in the proposed LDR-TD class in Figure LABEL:fig:expressivity in Appendix C.1.

Our parameterization

Given the parameters $(A, B, G, H)$, where $G, H \in \mathbb{R}^{n \times r}$ define the rank-$r$ residual $R = GH^\top$, the operation that must ultimately be performed is matrix-vector multiplication by the matrix $M$ they define. Several schemes for explicitly reconstructing $M$ from its displacement parameters are known for specific cases [40, 43], but do not always apply to our general operators. Instead, we use $(A, B, G, H)$ to implicitly construct a slightly different matrix with at most double the displacement rank, which is simpler to work with.

Proposition 3.

Let $K(A, v)$ denote the $n \times n$ Krylov matrix, defined to have $i$-th column $A^{i-1} v$. For any vectors $g_1, \dots, g_r, h_1, \dots, h_r \in \mathbb{R}^n$, the matrix

$M = \sum_{i=1}^{r} K(A, g_i)\, K(B^\top, h_i)^\top \quad (2)$

has displacement rank at most $2r$ with respect to $A, B$.

Thus our representation stores the parameters $(A, B, G, H)$, where $A$ and $B$ are either subdiagonal or tridiagonal operators (containing $n$ or $3n$ parameters each), and $G, H \in \mathbb{R}^{n \times r}$ hold the vectors $g_i, h_i$ as columns. These parameters implicitly define the matrix (2), which is the LDR weight layer we use.
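As a naive reference for this parameterization (ours; quadratic-or-worse cost, and not the fast algorithm described below), the weight matrix of Eq. (2) can be materialized explicitly from $(A, B, G, H)$ and applied to a vector:

```python
import numpy as np

def krylov(A, v):
    """Krylov matrix K(A, v): i-th column is A^(i-1) v."""
    cols, cur = [v], v
    for _ in range(v.shape[0] - 1):
        cur = A @ cur
        cols.append(cur)
    return np.stack(cols, axis=1)

def ldr_matvec_naive(A, B, G, H, x):
    """y = M x for M = sum_i K(A, g_i) K(B^T, h_i)^T  (Eq. (2)), built explicitly."""
    n, r = G.shape
    M = np.zeros((n, n))
    for i in range(r):
        M += krylov(A, G[:, i]) @ krylov(B.T, H[:, i]).T
    return M @ x

n, r = 16, 2
rng = np.random.default_rng(0)

def rand_subdiag_corner():
    op = np.diag(rng.standard_normal(n - 1), k=-1)
    op[0, -1] = rng.standard_normal()
    return op

A, B = rand_subdiag_corner(), rand_subdiag_corner()
G, H = rng.standard_normal((n, r)), rng.standard_normal((n, r))
print(ldr_matvec_naive(A, B, G, H, rng.standard_normal(n)).shape)  # (16,)
```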

Algorithms for LDR-SD

Generic and near-linear time algorithms for matrix-vector multiplication by LDR matrices with even more general operators, including both the LDR-TD and LDR-SD classes, were recently shown to exist [14]. However, complete algorithms were not provided, as they relied on theoretical results such as the transposition principle [6] that only imply the existence of algorithms. Additionally, the recursive polynomial-based algorithms are difficult to implement efficiently. For LDR-SD, we provide explicit and complete near-linear time algorithms for multiplication by (2), and substantially simplify them so that they are usable in practical settings and implementable with standard library operations. We empirically compare the efficiency of our implementation and unstructured matrix-vector multiplication in Figure 5 and Table 6 in Appendix LABEL:sec:additional-results, showing that LDR-SD accelerates inference by 3.34-46.06x. We also show results for the low-rank and Toeplitz-like classes, which have a lower computational cost. For LDR-TD, we explicitly construct the Krylov matrices $K(A, g_i)$ and $K(B^\top, h_i)$ from Proposition 3 and then apply the standard matrix-vector multiplication algorithm. Efficient implementations of near-linear time algorithms for LDR-TD are an interesting area of future work.

Theorem 1.

Define the simultaneous computation of $k$ Fast Fourier Transforms (FFTs), each of size $m$, to be a batched FFT of total size $km$.

Consider any subdiagonal matrix $A \in \mathbb{R}^{n \times n}$ and vectors $g, x \in \mathbb{R}^n$. Then $K(A, g)$ or $K(A, g)^\top$ can be multiplied by any vector $x$ by computing $O(\log n)$ batched FFTs, each of total size $O(n)$. The total number of computations is $O(n \log^2 n)$.

These algorithms are also automatically differentiable, which we use to compute the gradients when learning. More complete descriptions of these algorithms are presented in Appendix C.
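For reference, the two primitives in Theorem 1 can also be computed without materializing any Krylov matrix by repeatedly applying the sparse operator; the sketch below (ours, with hypothetical names) does this in quadratic time, which is the baseline that the batched-FFT algorithm improves to near-linear:

```python
import numpy as np

def apply_subdiag(subdiag, corner, v):
    """y = A v for the subdiagonal-plus-corner operator A (O(n) per application)."""
    y = np.empty_like(v)
    y[0] = corner * v[-1]
    y[1:] = subdiag * v[:-1]
    return y

def krylov_matvec(subdiag, corner, g, x):
    """K(A, g) x = sum_k x_k A^(k-1) g, computed without forming K (O(n^2) total)."""
    out = np.zeros_like(g)
    cur = g.copy()
    for k in range(x.shape[0]):
        out += x[k] * cur
        cur = apply_subdiag(subdiag, corner, cur)
    return out

def krylov_transpose_matvec(subdiag, corner, g, x):
    """K(A, g)^T x: entry k is (A^(k-1) g)^T x = g^T (A^T)^(k-1) x."""
    out = np.empty_like(x)
    cur = x.copy()
    for k in range(x.shape[0]):
        out[k] = g @ cur
        # advance cur <- A^T cur for the subdiagonal-plus-corner operator
        nxt = np.empty_like(cur)
        nxt[-1] = corner * cur[0]
        nxt[:-1] = subdiag * cur[1:]
        cur = nxt
    return out
```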

4 Theoretical properties of structured matrices

Complexity of LDR neural networks

The matrices (2) we use are unusual in that the parameters interact multiplicatively (namely through the powers $A^{i-1}$ appearing in the Krylov matrices) to implicitly define the actual layer. In contrast, fully-connected layers are linear and other structured layers, such as Fastfood and ACDC [30, 48, 36], are constant degree in their parameters. However, we can prove that this does not significantly change the learnability of our classes:

Theorem 2.

Let $\mathcal{F}$ denote the class of neural networks with $L$ LDR layers, $W$ total parameters, and piecewise linear activations. Let $\operatorname{sign} \mathcal{F}$ denote the corresponding classification functions, i.e. $\{x \mapsto \operatorname{sign} f(x) : f \in \mathcal{F}\}$. The VC dimension of this class is $\operatorname{VCdim}(\operatorname{sign} \mathcal{F}) = O(LW \log W)$.

Theorem 2 matches the standard bound for unconstrained weight matrices [4, 24]. This immediately implies a standard PAC-learnability guarantee [46]. Theorem 2 holds for even more general activations and for matrices beyond our classes, including for example the broad classes of [14]. The proof is in Appendix LABEL:sec:vc_dim, and we empirically validate the generalization and sample complexity properties of our class in Section 5.3.

Displacement rank and equivariance

We observe that displacement rank is related to a line of work outside the resource-constrained learning community, specifically on building equivariant (also called covariant in some contexts [5, 34]) feature representations that transform in predictable ways when the input is transformed. An equivariant feature map $\Phi$ satisfies

$\Phi(B(x)) = A(\Phi(x)) \quad (3)$

for transformations $A, B$ (invariance is the special case when $A$ is the identity) [32, 16, 42]. This means that perturbing the input by a transformation $B$ before passing through the map is equivalent to first finding the features and then transforming them by $A$.

Intuitively, LDR matrices are a suitable choice for modeling approximately equivariant linear maps: for a linear map $\Phi(x) = Mx$ and linear transformations $A, B$, exact equivariance (3) reads $AM = MB$, and the LDR condition asks only that the residual $AM - MB$ have low complexity. Furthermore, approximately equivariant maps should retain the compositional properties of equivariance, which LDR satisfies via Proposition 1. For example, Proposition 13 formalizes the notion that the composition of two approximately equivariant maps is still approximately equivariant. Using this intuition, the displacement representation (1) of a matrix decomposes into two parts: the operators $A, B$ define transformations to which the model is approximately equivariant, and the low complexity residual $R$ controls standard model capacity.
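A small numerical illustration of this reading (ours, using the cyclic shift as the transformation on both sides rather than the paper's $(Z_1, Z_{-1})$ convention): a circulant weight matrix is exactly shift-equivariant, a generic Toeplitz matrix is only approximately so, and an unstructured matrix is not, as measured by the rank of the displacement residual:

```python
import numpy as np
from scipy.linalg import circulant, toeplitz

def Z(n, f):
    """Unit-f-circulant: 1 on the subdiagonal, f in the top-right corner."""
    op = np.diag(np.ones(n - 1), k=-1)
    op[0, -1] = f
    return op

n = 8
rng = np.random.default_rng(1)
S = Z(n, 1)  # cyclic shift

C = circulant(rng.standard_normal(n))                        # exactly shift-equivariant: SC = CS
T = toeplitz(rng.standard_normal(n), rng.standard_normal(n)) # approximately shift-equivariant
W = rng.standard_normal((n, n))                              # unstructured

for name, M in [("circulant", C), ("Toeplitz", T), ("unstructured", W)]:
    print(name, np.linalg.matrix_rank(S @ M - M @ S))
# circulant 0, Toeplitz 2, unstructured 8 (generically full rank)
```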

Equivariance has been used in several ways in the context of machine learning. One formulation, used for example to model ego-motions, supposes that (3) holds only approximately, and uses a fixed transformation $B$ along with data for (3) to learn an appropriate $A$ [1, 32]. Another line of work uses the representation theory formalization of equivariant maps [12, 27]. We describe this formulation in more detail and show how LDR satisfies this definition as well in Appendix LABEL:sec:group_rep, Proposition LABEL:prop:equivariance. In contrast to previous settings, which fix one or both of the transformations, our formulation stipulates that $\Phi$ can be uniquely determined from $A$, $B$, and the low-rank residual, and learns the operators themselves as part of an end-to-end model. In Section 5.4 we include a visual example of latent structure that our displacement operators learn, where they recover centering information about objects from a 2D image dataset.

5 Empirical evaluation

Overview

In Section 5.1 we consider a standard setting of compressing a single hidden layer (SHL) neural network and the fully-connected (FC) layer of a CNN for image classification tasks. Following previous work [7, 44], we test on two challenging MNIST variants [29], and include two additional datasets with more realistic objects (CIFAR-10 [28] and NORB [31]). Since SHL models take a single channel as input, we converted CIFAR-10 to grayscale for this task. Our classes and the structured baselines are tested across different parameter budgets in order to show tradeoffs between compression and accuracy. As shown in Table 1, in the SHL model, our methods consistently have higher test accuracy than baselines for compressed training and inference, by 3.14, 2.70, 3.55, and 3.37 accuracy points on MNIST-bg-rot, MNIST-noise, CIFAR-10, and NORB respectively. In the CNN model, as shown in Table LABEL:table:images-extended-cnn in Appendix LABEL:sec:additional-results, we found improvements of 5.56, 0.95, and 1.98 accuracy points over baselines on MNIST-bg-rot, MNIST-noise, and NORB respectively. Additionally, to explore whether learning the displacement operators can facilitate adaptation to other domains, we replace the input-hidden weights in an LSTM for a language modeling task, and show improvements of 0.81-30.47 perplexity points compared to baselines at several parameter budgets.

In addition to experiments on replacing fully-connected layers, in Section 5.2 we also replace the convolutional layer of a simple CNN while preserving performance within 1.05 accuracy points on CIFAR-10. In Section 5.3, we consider the effect of a higher parameter budget. By increasing the rank to just 16, the LDR-SD class meets or exceeds the accuracy of the unstructured FC layer on all datasets we tested, for both SHL and CNN. (In addition to the results reported in Table 1, Figure 3, and Table LABEL:table:images-extended-cnn in Appendix LABEL:sec:additional-results, we also found that at rank 16 the LDR-SD class on the CNN architecture achieved test accuracies of 68.48% and 75.45% on CIFAR-10 and NORB respectively.) Appendix D includes more experimental details and protocols. Our PyTorch code is publicly available at github.com/HazyResearch/structured-nets.

5.1 Compressing fully-connected layers

Image classification

Sindhwani et al. [44] showed that for a fixed parameter budget, the Toeplitz-like class significantly outperforms several other compression approaches, including Random Edge Removal [11], Low Rank Decomposition [15], Dark Knowledge [25], HashedNets [7], and HashedNets with Dark Knowledge. Following previous experimental settings [7, 44], Table 1 compares our proposed classes to several baselines using dense structured matrices to compress the hidden layer of a single hidden layer neural network. In addition to Toeplitz-like, we implement and compare to other classic LDR types, Hankel-like and Vandermonde-like, which were previously indicated as an unexplored possibility [44, 49]. We also show results when compressing the FC layer of a 7-layer CNN based on LeNet in Appendix LABEL:sec:additional-results, Table LABEL:table:images-extended-cnn. In Appendix LABEL:sec:additional-results, we show comparisons to additional baselines at multiple budgets, including network pruning [23] and a baseline used in [7], in which the number of hidden units is adjusted to meet the parameter budget.

Method MNIST-bg-rot MNIST-noise CIFAR-10 NORB
Unstructured 44.08 65.15 46.03 59.83
(parameters: 622506, 622506, 1058826, 1054726)
LDR-TD ($r = 1$) 45.81 78.45 45.33 62.75
(parameters: 14122, 14122, 18442, 14342)
Toeplitz-like [44] 42.67 75.75 41.78 59.38
(parameters: 14122, 14122, 18442, 14342)
Hankel-like 42.23 73.65 41.40 60.09
(parameters: 14122, 14122, 18442, 14342)
Vandermonde-like 37.14 59.80 33.93 48.98
(parameters: 14122, 14122, 18442, 14342)
Low-rank [15] 35.67 52.25 32.28 43.66
(parameters: 14122, 14122, 18442, 14342)
Fastfood [48] 38.13 63.55 39.64 59.02
(parameters: 10202, 10202, 13322, 9222)
Circulant [8] 34.46 65.35 34.28 46.45
(parameters: 8634, 8634, 11274, 7174)
Table 1: Test accuracy when replacing the hidden layer with structured classes. The rank $r$ used for LDR-TD is shown in parentheses; the total number of parameters in the architecture is given below each method. Comparisons to previously unexplored classic LDR types as well as additional structured baselines are included, with the ranks adjusted to match the parameter count of LDR-TD where possible. The Fastfood [48] and Circulant [8] methods do not have rank parameters, and the parameter count for these methods cannot be exactly controlled. Additional results when replacing the FC layer of a CNN are in Appendix LABEL:sec:additional-results. Details for all experiments are in Appendix D.

At rank one (the most compressed setting), our classes with learned operators achieve higher accuracy than the fixed operator classes, and on the MNIST-bg-rot, MNIST-noise, and NORB datasets even improve on FC layers of the same dimensions, by 1.73, 13.30, and 2.92 accuracy points respectively on the SHL task, as shown in Table 1. On the CNN task, our classes improve upon unstructured fully-connected layers by 0.85 and 2.25 accuracy points on the MNIST-bg-rot and MNIST-noise datasets (shown in Table LABEL:table:images-extended-cnn in Appendix LABEL:sec:additional-results). As noted above, at higher ranks our classes meet or improve upon the accuracy of FC layers on all datasets in both the SHL and CNN architectures.

Additionally, in Figure 3 we evaluate the performance of LDR-SD at higher ranks. Note that at rank $r$, LDR-SD uses only $2n$ more parameters than the Toeplitz-like or low-rank classes at the same rank, a difference which becomes negligible at higher ranks. Figure 3 shows that at just rank 16, the LDR-SD class meets or exceeds the performance of the FC layer on all four datasets, by 5.87, 15.05, 0.74, and 6.86 accuracy points on MNIST-bg-rot, MNIST-noise, CIFAR-10, and NORB respectively, while still maintaining at least 20x fewer parameters.

Of particular note is the poor performance of low-rank matrices. As mentioned in Section 2, every fixed-operator class has the same parameterization (a low-rank matrix). We hypothesize that the main contribution to their marked performance difference is the effect of the learned displacement operator modeling latent invariances in the data, and that the improvement in the displacement rank classes (from low-rank to Toeplitz-like to our learned operators) comes from more accurate representations of these invariances. As shown in Figure 3, broadening the operator class (from Toeplitz-like at rank $r+1$ to LDR-SD at rank $r$) is consistently a more effective use of parameters than increasing the displacement rank (from Toeplitz-like at rank $r$ to rank $r+1$). Note that LDR-SD at rank $r$ and Toeplitz-like at rank $r+1$ have the same parameter count.

Figure 3: Test accuracy vs. rank for the unstructured, LDR-SD, Toeplitz-like, and low-rank classes. On each dataset, LDR-SD meets or exceeds the accuracy of the unstructured FC baseline at higher ranks. At rank 16, the compression ratio of an LDR-SD layer compared to the unstructured layer still exceeds 20x on every dataset. Shaded regions represent two standard deviations from the mean, computed over five trials with randomly initialized weights.

For the rest of our experiments outside Section 5.1 we use the algorithms in Appendix C specifically for LDR-SD matrices, and focus on further evaluation of this class on more expensive models.

Language modeling

Here, we replace the input-hidden weights in a single layer long short-term memory network (LSTM) for a language modeling task. We evaluate on the WikiText-2 dataset, consisting of 2M training tokens and a vocabulary size of 33K [35]. We compare to Toeplitz-like and low-rank baselines, both previously investigated for compressing recurrent nets [33]. As shown in Table 2, LDR-SD improves upon the baselines for each budget tested. Though our class does not outperform the unstructured model, we did find that it achieves a significantly lower perplexity than the fixed Toeplitz-like class (by 19.94-42.92 perplexity points), suggesting that learning the displacement operator can help adapt to different domains.

Num. Parameters LDR-SD Toeplitz-like Low-rank
2048 166.97 186.91 205.72
3072 154.51 177.60 179.46
5120 141.91 178.07 172.38
9216 143.60 186.52 144.41
17408 132.43 162.58 135.65
25600 129.46 155.73 133.37
Table 2: Test perplexity when replacing input-hidden matrices of an LSTM with structured classes on WikiText-2. An unconstrained layer, with 65536 parameters, has perplexity 117.74. Parameter budgets correspond to ranks 1,2,4,8,16,24 for LDR-SD. Lower is better.

5.2 Replacing convolutional layers

Convolutional layers of CNNs are a prominent example of equivariant feature maps: convolutions are designed to be shift equivariant, i.e. shifting the input is equivalent to shifting the output. It has been noted that convolutions are a subcase of Toeplitz-like matrices with a particular sparsity pattern (e.g. a small convolutional filter applied to a 2D image yields a Toeplitz-structured weight matrix supported on only a few diagonals) [8, 44]. As channels are simply block matrices (a layer with multiple in-channels and out-channels, each pair of which is connected by a weight matrix of a given class, is a block matrix whose blocks belong to that class), the block closure property implies that multi-channel convolutional filters are simply a Toeplitz-like matrix of higher displacement rank (see Appendix C, Corollary 1). In light of the interpretation of LDR as an approximately equivariant linear map (as discussed in Section 4), we investigate whether replacing convolutional layers with more general representations can recover similar performance, without needing the hand-crafted sparsity pattern.

Briefly, we test the simplest multi-channel CNN model on the CIFAR-10 dataset, consisting of one layer of convolutional channels (3 in and 3 out channels), followed by a FC layer, followed by the softmax layer. The final accuracies are listed in Table 3. The most striking result is for the simple architecture consisting of two layers of a single structured matrix. This comes within 1.05 accuracy points of the highly specialized architecture consisting of convolutional channels + pooling + FC layer, while using fewer layers, hidden units, and parameters. The full details are in Appendix D.

First hidden layer(s) Last hidden layer Hidden units Parameters Test Acc.
3 Convolutional Channels (CC) FC 3072, 512 1573089 54.59
3CC + Max Pool FC 3072, 768, 512 393441 55.14
4CC + Max Pool FC 4096, 1024, 512 524588 60.05
Toeplitz-like channels Toeplitz-like 3072, 512 393216 57.29
LDR-SD channels LDR-SD 3072, 512 417792 59.36
Toeplitz-like matrix Toeplitz-like 3072, 512 393216 55.29
LDR-SD matrix LDR-SD 3072, 512 405504 59.00
Table 3: Replacing a five-layer CNN consisting of convolutional channels, max pooling, and FC layers with two generic LDR matrices results in only a slight decrease in test accuracy, while using fewer layers, hidden units, and parameters.

5.3 Generalization and sample complexity

Theorem 2 states that the theoretical sample complexity of neural networks with structured weight matrices scales almost linearly in the total number of parameters, matching the results for networks with fully-connected layers [4, 24]. As LDR matrices have far fewer parameters, the VC dimension bounds for LDR networks are correspondingly lower than those of general unstructured networks. Though VC dimension bounds give sufficient but not necessary conditions for learnability, one might still expect to be able to learn over compressed networks with fewer samples than over unstructured networks. We empirically investigate this using the same experimental setting as Table 1 and Figure 3, and show in Table LABEL:table:gen-error (Appendix LABEL:sec:additional-results) that the structured classes consistently have lower generalization error (as standardly measured by the difference between training and test error) than the unstructured baseline.

Reducing sample complexity

We investigate whether LDR models with learned displacement operators require fewer samples to achieve the same test error, compared to unstructured weights, in both the single hidden layer and CNN architectures. Tables LABEL:table:sample-complexity-shl and LABEL:table:sample-complexity-cnn in Appendix LABEL:sec:additional-results show our results. In the single hidden layer architecture, when using only 25% of the training data the LDR-TD class exceeds the performance of an unstructured model trained on the full MNIST-noise dataset. On the CNN model, only 50% of the training data is sufficient for the LDR-TD to exceed the performance of an unstructured layer trained on the full dataset.

5.4 Visualizing learned weights

Finally, we examine the actual structures that our models learn. Figure 4(a,b) shows the heat maps of the weight matrices for the Toeplitz-like and LDR-SD classes, trained on MNIST-bg-rot with a single hidden layer model. As is convention, the input image is flattened to a vector in $\mathbb{R}^{784}$. The Toeplitz-like class is unable to determine that the input is actually a $28 \times 28$ image rather than a vector. In contrast, the LDR-SD class is able to pick up regularity in the input, as its weight matrix displays grid-like periodicity of size 28.

Figure 4(c) reveals why the weight matrix displays this pattern. The equivariance interpretation (Section 4) predicts that the learned operators should encode a meaningful transformation of the inputs. The entries of the learned subdiagonal are in fact recovering a latent invariant of the 2D domain: when visualized as an image, the pixel intensities correspond to how the inputs are centered in the dataset (Figure 4(d)). Figure LABEL:fig:visualization-NORB in Appendix LABEL:sec:additional-results shows a similar figure for the NORB dataset, which has smaller objects, and we found that the subdiagonal learns a correspondingly smaller circle.

(a) Toeplitz-like
(b) LDR-SD
(c) Subdiagonal of the learned operator
(d) Input examples
Figure 4: The learned weight matrices (a,b) of models trained on MNIST-bg-rot. Unlike the Toeplitz-like matrix, the LDR-SD matrix displays grid-like periodicity corresponding to the 2D input. Figure (c) shows the values of the subdiagonal of the learned operator, reshaped as an image. The size and location of the circle roughly correspond to the location of objects of interest in the 2D inputs. A similar centering phenomenon was found on the NORB dataset, shown in Figure LABEL:fig:visualization-NORB in Appendix LABEL:sec:additional-results.

6 Conclusion

We substantially generalize the class of low displacement rank matrices explored in machine learning by considering classes of LDR matrices with displacement operators that can be learned from data. We show these matrices can improve performance on downstream tasks compared to compression baselines and, on some tasks, even general unstructured weight layers. We hope this work inspires additional ways of using structure to achieve both more compact and higher quality representations, especially for deep learning models which are commonly acknowledged to be overparameterized.

Acknowledgments

We thank Taco Cohen, Jared Dunnmon, Braden Hancock, Tatsunori Hashimoto, Fred Sala, Virginia Smith, James Thomas, Mary Wootters, Paroma Varma, and Jian Zhang for helpful discussions and feedback.

We gratefully acknowledge the support of DARPA under Nos. FA87501720095 (D3M) and FA86501827865 (SDH), NIH under No. N000141712266 (Mobilize), NSF under Nos. CCF1763315 (Beyond Sparsity) and CCF1563078 (Volume to Velocity), ONR under No. N000141712266 (Unifying Weak Supervision), the Moore Foundation, NXP, Xilinx, LETI-CEA, Intel, Google, NEC, Toshiba, TSMC, ARM, Hitachi, BASF, Accenture, Ericsson, Qualcomm, Analog Devices, the Okawa Foundation, and American Family Insurance, and members of the Stanford DAWN project: Intel, Microsoft, Teradata, Facebook, Google, Ant Financial, NEC, SAP, and VMWare. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements, either expressed or implied, of DARPA, NIH, ONR, or the U.S. Government.

References

  • Agrawal et al. [2015] Pulkit Agrawal, Joao Carreira, and Jitendra Malik. Learning to see by moving. In Proceedings of the IEEE International Conference on Computer Vision, pages 37–45. IEEE, 2015.
  • Anselmi et al. [2016] Fabio Anselmi, Joel Z. Leibo, Lorenzo Rosasco, Jim Mutch, Andrea Tacchetti, and Tomaso Poggio. Unsupervised learning of invariant representations. Theor. Comput. Sci., 633(C):112–121, June 2016. ISSN 0304-3975. doi: 10.1016/j.tcs.2015.06.048. URL https://doi.org/10.1016/j.tcs.2015.06.048.
  • Anthony and Bartlett [2009] Martin Anthony and Peter L Bartlett. Neural network learning: theoretical foundations. Cambridge University Press, 2009.
  • Bartlett et al. [1999] Peter L Bartlett, Vitaly Maiorov, and Ron Meir. Almost linear VC dimension bounds for piecewise polynomial networks. In Advances in Neural Information Processing Systems, pages 190–196, 1999.
  • Bronstein et al. [2017] Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017.
  • Bürgisser et al. [2013] Peter Bürgisser, Michael Clausen, and Mohammad A Shokrollahi. Algebraic complexity theory, volume 315. Springer Science & Business Media, 2013.
  • Chen et al. [2015] Wenlin Chen, James Wilson, Stephen Tyree, Kilian Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. In Francis Bach and David Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2285–2294, Lille, France, 07–09 Jul 2015. PMLR. URL http://proceedings.mlr.press/v37/chenc15.html.
  • Cheng et al. [2015] Yu Cheng, Felix X Yu, Rogerio S Feris, Sanjiv Kumar, Alok Choudhary, and Shi-Fu Chang. An exploration of parameter redundancy in deep networks with circulant projections. In Proceedings of the IEEE International Conference on Computer Vision, pages 2857–2865, 2015.
  • Chihara [2011] T.S. Chihara. An introduction to orthogonal polynomials. Dover Books on Mathematics. Dover Publications, 2011. ISBN 9780486479293. URL https://books.google.com/books?id=IkCJSQAACAAJ.
  • Choromanski and Sindhwani [2016] Krzysztof Choromanski and Vikas Sindhwani. Recycling randomness with structure for sublinear time kernel expansions. In Maria Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 2502–2510, New York, New York, USA, 20–22 Jun 2016. PMLR. URL http://proceedings.mlr.press/v48/choromanski16.html.
  • Ciresan et al. [2011] Dan C Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber. Flexible, high performance convolutional neural networks for image classification. In IJCAI Proceedings-International Joint Conference on Artificial Intelligence, volume 22, page 1237. Barcelona, Spain, 2011.
  • Cohen and Welling [2016] Taco Cohen and Max Welling. Group equivariant convolutional networks. In International Conference on Machine Learning, pages 2990–2999, 2016.
  • Cohen et al. [2018] Taco S. Cohen, Mario Geiger, Jonas Köhler, and Max Welling. Spherical CNNs. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Hkbd5xZRb.
  • De Sa et al. [2018] Christopher De Sa, Albert Gu, Rohan Puttagunta, Christopher Ré, and Atri Rudra. A two-pronged progress in structured dense matrix vector multiplication. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1060–1079. SIAM, 2018.
  • Denil et al. [2013] Misha Denil, Babak Shakibi, Laurent Dinh, Nando De Freitas, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148–2156, 2013.
  • Dieleman et al. [2016] Sander Dieleman, Jeffrey De Fauw, and Koray Kavukcuoglu. Exploiting cyclic symmetry in convolutional neural networks. In Maria Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1889–1898, New York, New York, USA, 20–22 Jun 2016. PMLR. URL http://proceedings.mlr.press/v48/dieleman16.html.
  • Ding et al. [2017] Caiwen Ding, Siyu Liao, Yanzhi Wang, Zhe Li, Ning Liu, Youwei Zhuo, Chao Wang, Xuehai Qian, Yu Bai, Geng Yuan, et al. CirCNN: accelerating and compressing deep neural networks using block-circulant weight matrices. In Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture, pages 395–408. ACM, 2017.
  • Egner and Püschel [2001] Sebastian Egner and Markus Püschel. Automatic generation of fast discrete signal transforms. IEEE Transactions on Signal Processing, 49(9):1992–2002, 2001.
  • Egner and Püschel [2004] Sebastian Egner and Markus Püschel. Symmetry-based matrix factorization. Journal of Symbolic Computation, 37(2):157–186, 2004.
  • Gens and Domingos [2014] Robert Gens and Pedro M Domingos. Deep symmetry networks. In Advances in Neural Information Processing Systems, pages 2537–2545, 2014.
  • Giles and Maxwell [1987] C. Lee Giles and Tom Maxwell. Learning, invariance, and generalization in high-order neural networks. Appl. Opt., 26(23):4972–4978, Dec 1987. doi: 10.1364/AO.26.004972. URL http://ao.osa.org/abstract.cfm?URI=ao-26-23-4972.
  • Griewank and Walther [2008] Andreas Griewank and Andrea Walther. Evaluating derivatives: principles and techniques of algorithmic differentiation. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, second edition, 2008. ISBN 0898716594, 9780898716597.
  • Han et al. [2015] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135–1143, 2015.
  • Harvey et al. [2017] Nick Harvey, Christopher Liaw, and Abbas Mehrabian. Nearly-tight VC-dimension bounds for piecewise linear neural networks. In Satyen Kale and Ohad Shamir, editors, Proceedings of the 2017 Conference on Learning Theory, volume 65 of Proceedings of Machine Learning Research, pages 1064–1068, Amsterdam, Netherlands, 07–10 Jul 2017. PMLR. URL http://proceedings.mlr.press/v65/harvey17a.html.
  • Hinton et al. [2015] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. NIPS Deep Learning Workshop, 2015.
  • Kailath et al. [1979] Thomas Kailath, Sun-Yuan Kung, and Martin Morf. Displacement ranks of matrices and linear equations. Journal of Mathematical Analysis and Applications, 68(2):395–407, 1979.
  • Kondor and Trivedi [2018] Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, pages 2752–2760, 2018. URL http://proceedings.mlr.press/v80/kondor18a.html.
  • Krizhevsky and Hinton [2009] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master’s Thesis, Department of Computer Science, University of Toronto, 2009.
  • Larochelle et al. [2007] Hugo Larochelle, Dumitru Erhan, Aaron Courville, James Bergstra, and Yoshua Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In Proceedings of the 24th International Conference on Machine Learning, ICML ’07, pages 473–480, New York, NY, USA, 2007. ACM. ISBN 978-1-59593-793-3. doi: 10.1145/1273496.1273556. URL http://doi.acm.org/10.1145/1273496.1273556.
  • Le et al. [2013] Quoc Le, Tamas Sarlos, and Alexander Smola. Fastfood - computing Hilbert space expansions in loglinear time. In Sanjoy Dasgupta and David McAllester, editors, Proceedings of the 30th International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pages 244–252, Atlanta, Georgia, USA, 17–19 Jun 2013. PMLR. URL http://proceedings.mlr.press/v28/le13.html.
  • LeCun et al. [2004] Yann LeCun, Fu Jie Huang, and Leon Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of the IEEE International Conference on Computer Vision, volume 2, pages II–104. IEEE, 2004.
  • Lenc and Vedaldi [2015] Karel Lenc and Andrea Vedaldi. Understanding image representations by measuring their equivariance and equivalence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 991–999, 2015.
  • Lu et al. [2016] Zhiyun Lu, Vikas Sindhwani, and Tara N Sainath. Learning compact recurrent neural networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 5960–5964. IEEE, 2016.
  • Marcos et al. [2017] Diego Marcos, Michele Volpi, Nikos Komodakis, and Devis Tuia. Rotation equivariant vector field networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 5058–5067, 2017.
  • Merity et al. [2017] Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Byj72udxe.
  • Moczulski et al. [2016] Marcin Moczulski, Misha Denil, Jeremy Appleyard, and Nando de Freitas. ACDC: a structured efficient linear layer. In International Conference on Learning Representations, 2016.
  • Oymak [2018] Samet Oymak. Learning compact neural networks with regularization. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3966–3975, Stockholmsmässan, Stockholm, Sweden, 10–15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/oymak18a.html.
  • Pal and Savvides [2018] Dipan K Pal and Marios Savvides. Non-parametric transformation networks. arXiv preprint arXiv:1801.04520, 2018.
  • Pan [2012] Victor Y Pan. Structured matrices and polynomials: unified superfast algorithms. Springer Science & Business Media, 2012.
  • Pan and Wang [2003] Victor Y Pan and Xinmao Wang. Inversion of displacement operators. SIAM Journal on Matrix Analysis and Applications, 24(3):660–677, 2003.
  • Sainath et al. [2013] Tara N Sainath, Brian Kingsbury, Vikas Sindhwani, Ebru Arisoy, and Bhuvana Ramabhadran. Low-rank matrix factorization for deep neural network training with high-dimensional output targets. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6655–6659. IEEE, 2013.
  • Schmidt and Roth [2012] Uwe Schmidt and Stefan Roth. Learning rotation-aware features: From invariant priors to equivariant descriptors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2050–2057. IEEE, 2012.
  • Simoncini [2016] Valeria Simoncini. Computational methods for linear matrix equations. SIAM Review, 58(3):377–441, 2016.
  • Sindhwani et al. [2015] Vikas Sindhwani, Tara Sainath, and Sanjiv Kumar. Structured transforms for small-footprint deep learning. In Advances in Neural Information Processing Systems, pages 3088–3096, 2015.
  • Sokolic et al. [2017] Jure Sokolic, Raja Giryes, Guillermo Sapiro, and Miguel Rodrigues. Generalization error of invariant classifiers. In Aarti Singh and Jerry Zhu, editors, Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 1094–1103, Fort Lauderdale, FL, USA, 20–22 Apr 2017. PMLR. URL http://proceedings.mlr.press/v54/sokolic17a.html.
  • Vapnik [1998] Vladimir Vapnik. Statistical learning theory. Wiley, New York, 1998.
  • Warren [1968] Hugh E Warren. Lower bounds for approximation by nonlinear manifolds. Transactions of the American Mathematical Society, 133(1):167–178, 1968.
  • Yang et al. [2015] Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang. Deep fried convnets. In Proceedings of the IEEE International Conference on Computer Vision, pages 1476–1483, 2015.
  • Zhao et al. [2017] Liang Zhao, Siyu Liao, Yanzhi Wang, Zhe Li, Jian Tang, and Bo Yuan. Theoretical properties for neural networks with weight matrices of low displacement rank. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 4082–4090, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/zhao17b.html.

Appendix A Symbols and abbreviations

Symbol Used For
LDR low displacement rank
LDR-SD matrices with low displacement rank with respect to subdiagonal operators
LDR-TD matrices with low displacement rank with respect to tridiagonal operators
$A, B$ displacement operators
$\nabla_{A,B}[M]$ Sylvester displacement, $\nabla_{A,B}[M] = AM - MB$
$r$ (displacement) rank
$G, H$ parameters which define the rank-$r$ residual matrix $R$, where $R = GH^\top$
$Z_f$ unit-$f$-circulant matrix, defined as the matrix with $1$ on the subdiagonal, $f$ in the top-right corner, and zeros elsewhere
$K(A, v)$ Krylov matrix, with $i$-th column $A^{i-1}v$
$\mathcal{L}_{A,B}^{(r)}$ matrices of displacement rank at most $r$ with respect to $(A, B)$
$\Phi$ feature map
CC convolutional channels
FC fully-connected
Table 4: Symbols and abbreviations used in this paper.

Appendix B Related work

Our study of the potential for structured matrices for compressing deep learning pipelines was motivated by exciting work along these lines from Sindhwani et al. [44], the first to suggest the use of low displacement rank (LDR) matrices in deep learning. They specifically explored applications of the Toeplitz-like class, and empirically show that this class is competitive against many other baselines for compressing neural networks on image and speech domains. Toeplitz-like matrices were similarly found to be effective at compressing RNN and LSTM architectures on a voice search task [33]. Another special case of LDR matrices are the circulant (or block-circulant) matrices, which have also been used for compressing CNNs [8]; more recently, these have also been further developed and shown to achieve state-of-the-art results on FPGA and ASIC platforms [17]. Earlier works on compressing deep learning pipelines investigated the use of low-rank matrices [41, 15]—perhaps the most canonical type of dense structured matrix—which are also encompassed by our framework, as shown in Proposition 2. Outside of deep learning, Choromanski and Sindhwani [10] examined a structured matrix class that includes Toeplitz-like, circulant, and Hankel matrices (which are all LDR matrices) in the context of kernel approximation.

On the theoretical side, Zhao et al. [49] study properties of neural networks with LDR weight matrices, proving results including a universal approximation property and error bounds. However, they retain the standard paradigm of fixing the displacement operators and varying the low-rank portion. Another natural theoretical question that arises with these models is whether the resulting hypothesis class is still efficiently learnable, especially when learning the structured class (as opposed to these previous fixed classes). Recently, Oymak [37] proved a Rademacher complexity bound for one layer neural networks with low-rank weight matrices. To the best of our knowledge, Theorem 2 provides the first sample complexity bounds for neural networks with a broad class of structured weight matrices including low-rank, our LDR classes, and other general structured matrices [14].

In Section 4 we suggest that the LDR representation enforces a natural notion of approximate equivariance and satisfies closure properties that one would expect of equivariant representations. The study of equivariant feature maps is of broad interest for constructing more effective representations when known symmetries exist in underlying data. Equivariant linear maps have long been used in algebraic signal processing to derive efficient transform algorithms [18, 19]. The fact that convolutional networks induce equivariant representations, and the importance of this effect on sample complexity and generalization, has been well-analyzed [12, 2, 21, 45]. Building upon the observation that convolutional filters are simply linear maps constructed to be translation equivariant (shifting the input to a convolutional feature map is the same as shifting the output), exciting recent progress has been made on crafting representations invariant to more complex symmetries such as the spherical rotation group [13] and egomotions [1]. Generally, however, underlying assumptions are made about the domain and invariances present in order to construct feature maps for each application. A few works have explored the possibility of learning invariances automatically from data, and design deep architectures that are in principle capable of modeling and learning more general symmetries [20, 38].

Appendix C Properties of displacement rank

Displacement rank has traditionally been used to describe the Toeplitz-like, Hankel-like, Vandermonde-like, and Cauchy-like matrices, which are ubiquitous in disciplines such as engineering, coding theory, and computer algebra. Their associated displacement representations are shown in Table 5.

Structured Matrix / Operators $A$, $B$ / Displacement Rank
Toeplitz / $Z_1$, $Z_{-1}$ / $\le 2$
Hankel / $Z_1$, $Z_0^\top$ / $\le 2$
Vandermonde / $\operatorname{diag}(v)$, $Z_0$ / $\le 1$
Cauchy / $\operatorname{diag}(s)$, $\operatorname{diag}(t)$ / $\le 1$
Table 5: Traditional classes of structured matrices analyzed with displacement rank. In the Vandermonde and Cauchy cases, the displacement operators are parameterized by $v$ and by $(s, t)$ respectively.
Proof of Proposition 1.

The following identities are easily verified:

Transpose: $\nabla_{B^\top, A^\top}[M^\top] = B^\top M^\top - M^\top A^\top = -\left(\nabla_{A,B}[M]\right)^\top$.

Inverse: $\nabla_{B,A}[M^{-1}] = B M^{-1} - M^{-1} A = -M^{-1}\left(\nabla_{A,B}[M]\right)M^{-1}$.

Sum: $\nabla_{A,B}[M + N] = \nabla_{A,B}[M] + \nabla_{A,B}[N]$.

Product: $\nabla_{A,C}[MN] = AMN - MNC = (AM - MB)N + M(BN - NC) = \nabla_{A,B}[M]\,N + M\,\nabla_{B,C}[N]$.

Block: Let $D_A = \operatorname{diag}(A_1, \dots, A_k)$ and $D_B = \operatorname{diag}(B_1, \dots, B_k)$. The remainder

$\nabla_{D_A, D_B}\left[(M_{ij})_{ij}\right]$

is the block matrix

$\left(A_i M_{ij} - M_{ij} B_j\right)_{ij} = \left(\nabla_{A_i, B_j}[M_{ij}]\right)_{ij}.$

This is the sum of $k^2$ matrices of rank at most $r$ and thus has rank at most $k^2 r$. ∎

Corollary 1.

A $k \times k$ block matrix, where each block is a Toeplitz-like matrix of displacement rank at most $r$, is itself Toeplitz-like with displacement rank at most $k^2 r + 4k$.

Proof.

Apply the block closure property (Proposition 1) where each $(A_i, B_j)$ is the pair of block-size shift operators $(Z_1, Z_{-1})$. Let $A$ and $B$ denote the shift operators $Z_1$ and $Z_{-1}$ of the full (block-matrix) size. Note that $A$ and $\operatorname{diag}(Z_1, \dots, Z_1)$ (of the same size as $A$) differ only in at most $2k$ entries, and similarly $B$ and $\operatorname{diag}(Z_{-1}, \dots, Z_{-1})$ differ in at most $2k$ entries. Since an $s$-sparse matrix also has rank at most $s$, the displacement with respect to $(A, B)$ differs from the displacement with respect to the block-diagonal operators by a matrix of rank at most $4k$, and therefore has rank at most $k^2 r + 4k$. ∎

Proof of Proposition 3.

First consider the rank-one case, $M = K(A, g)\,K(B^\top, h)^\top$. It is easy to check that the displacement of the Krylov matrix $K(A, g)$ is non-zero only in the first column, hence has rank at most one. Similarly for $K(B^\top, h)$, and Proposition 11 implies the corresponding bound for its transpose. Then Theorem 13 implies that $M$ has displacement rank at most 2. The rank-$r$ case follows directly from Theorem 12. ∎

c.1 Expressiveness

Expanding on the claim in Section 3, we formally show that these structured matrices are contained in the tridiagonal (plus corners) LDR class. This includes several types previously used in similar works.

[Table 6 panels: (a) Low-rank, (b) Toeplitz-like, (c) LDR-SD, with speedups reported at ranks 1, 2, 4, 8, and 16.]
Table 6: Acceleration of structured classes over unstructured matrix-vector multiply at inference time. Experimental details are in Appendix LABEL:sec:speed.
Figure 5: Acceleration of structured classes over unstructured matrix-vector multiply at inference time. LDR-SD ($r = 1$) achieves a speedup of 3.34-46.06x over unstructured at the dimensions tested. Data for higher ranks are shown in Table 6. The comparison to the low-rank and Toeplitz-like classes illustrates a tradeoff involved in broadening the class of structured matrices we learn over. Though LDR-SD consistently outperforms these classes on downstream quality, its computational cost of multiplication is $O(rn \log^2 n)$, compared to $O(rn)$ and $O(rn \log n)$ for low-rank and Toeplitz-like respectively. Experimental details are in Appendix LABEL:sec:speed.

Appendix D Experimental details

d.1 Image classification

In Table 7, we provide details on the datasets we use for evaluation. For all our experiments, batch sizes were chosen to be 50. NORB was downsampled to 32 x 32, and the left stereo image was used. Training was performed with stochastic gradient descent with momentum, with the number of epochs set to 50 on all datasets. 15% of the training data was used for the validation set on all experiments. We fixed momentum at 0.9 for all methods for all experiments, and performed a grid search over learning rate. Unless otherwise stated, for each method, we tested the learning rates {0.0002, 0.0005, 0.001, 0.002}, with three trials (with random initializations) per learning rate. For each trial, we test on the validation set at each epoch, and report the test accuracy of the model with the highest validation accuracy, over all learning rates, trials, and epochs.

In Figure 3, for each method and each of the four learning rates, we perform five trials with random initializations and report the average and standard deviation of the test accuracy of the learning rate with the highest average validation accuracy.

Dataset Training Examples Test Examples Number of Classes
MNIST-bg-rot [29] 12000 50000 10
MNIST-noise [29] 12000 2000 10
CIFAR-10 [28] 50000 10000 10
NORB [31] 291600 58320 6
Rectangles [29] 1200 50000 2
Table 7: Overview of the image classification datasets used in this work. For all datasets, 15% of the training set was used for the validation set.
Single hidden layer architecture

In these experiments, we used an architecture consisting of a fully-connected hidden layer, followed by a fully-connected softmax layer. In order to be consistent with the architecture used in Sindhwani et al. [44], we do not use a bias term in the hidden layer.

CNN architecture

In these experiments, shown in Table LABEL:table:images-extended-cnn in Appendix LABEL:sec:additional-results, we tested on a LeNet-based architecture. The architecture has 2 convolution/pool layers with 6 and 16 channels respectively, followed by a fully-connected layer, followed by a fully-connected logit/softmax layer. We replaced the second-to-last fully-connected layer, which has one set of dimensions for the MNIST-bg-rot and MNIST-noise datasets and another for the CIFAR-10 and NORB experiments.

Replacing convolutional layers

This experiment corresponds to Table 3.

Here, we investigated whether the convolutional layers of CNNs can be learned automatically. For our experiments, we test on the simplest possible multi-channel CNN model on the CIFAR-10 dataset. The model consists of one layer of convolutional channels (3 RGB in-channels, 3 out-channels, stride 1), followed by a fully-connected layer and a final FC+softmax layer (a total of 4 layers). We replace the convolutions with various structured matrices of the same dimensions, keeping the same channel structure (e.g. a $3 \times 3$ grid of square structured matrices) and number of hidden units. The convolutions are padded to ensure their input and output dimensions are equal.

The LDR classes benefit from being composed with LDR matrices of the same type (due to the composition property, Proposition 13), so we additionally replace the later FC layer with the same structured matrix type.

By Proposition 14, channels of Toeplitz-like matrices form a larger Toeplitz-like matrix of the same size. Using this insight, we consider replacing the channel structure of the convolutional layer with either channels of structured matrices or a single wide structured matrix. (Also, note that this is able to leverage the asymptotic fast nature of our structured classes.)

Because it seems that convolutional layers are strongly dependent on pooling – our structured matrices outperform them in isolation – we compare against a version of the CNN with an additional pooling layer after the convolutional channels. Note that this comparison is the same basic four layer model with a structured matrix vs. a five layer convolutional model with pooling. Since the architectures are quite different and difficult to directly compare, we also experimented with adding more hidden units to the pooling model.

d.2 Language modeling

For a language modeling application, we explored replacing weight matrices in a recurrent neural network with structured matrices. (Code available at https://github.com/pytorch/examples/tree/master/word_language_model.) We evaluate on a single layer LSTM architecture, defined by the update equations

$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$
$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$
$g_t = \tanh(W_g x_t + U_g h_{t-1} + b_g)$
$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$
$c_t = f_t \odot c_{t-1} + i_t \odot g_t$
$h_t = o_t \odot \tanh(c_t).$

In our experiments we replace the input-hidden matrices $W_i, W_f, W_g, W_o$ with structured matrices. We use a hidden layer of size 128, and a word embedding size of 128. We evaluate on the WikiText-2 dataset, which consists of Wikipedia articles (2,088,628 training, 217,646 validation, and 245,569 test tokens). The total vocabulary is of size 33,278. We use the default hyperparameters and train using stochastic gradient descent with an initial learning rate of 20. The learning rate is annealed by a factor of 4 after each epoch if performance does not improve on the validation set. Results are shown in Table 2.
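To make the replacement concrete, the sketch below (ours; it uses a plain low-rank map as the simplest structured stand-in, not the LDR-SD class) writes the LSTM cell manually and routes the input through a structured transform for the four gates:

```python
import torch

class StructuredLSTMCell(torch.nn.Module):
    """LSTM cell whose input-hidden transform is a structured (here: low-rank) map."""
    def __init__(self, d_in, d_hidden, rank):
        super().__init__()
        # Structured input-hidden parameters: W ~ U V^T for the 4 stacked gates.
        self.U = torch.nn.Parameter(torch.randn(4 * d_hidden, rank) / rank ** 0.5)
        self.V = torch.nn.Parameter(torch.randn(d_in, rank) / d_in ** 0.5)
        # Hidden-hidden transform and bias are left unstructured.
        self.W_hh = torch.nn.Parameter(torch.randn(4 * d_hidden, d_hidden) / d_hidden ** 0.5)
        self.bias = torch.nn.Parameter(torch.zeros(4 * d_hidden))

    def forward(self, x, state):
        h, c = state
        gates = (x @ self.V) @ self.U.T + h @ self.W_hh.T + self.bias
        i, f, g, o = gates.chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

cell = StructuredLSTMCell(d_in=128, d_hidden=128, rank=4)
x = torch.randn(32, 128)            # a batch of embedded tokens
h = c = torch.zeros(32, 128)
h, c = cell(x, (h, c))
print(h.shape)                      # torch.Size([32, 128])
```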