Binary embeddings with structured hashed projections

11/16/2015 · by Anna Choromanska et al.

We consider the hashing mechanism for constructing binary embeddings, which involves pseudo-random projections followed by nonlinear (sign function) mappings. The pseudo-random projection is described by a matrix where not all entries are independent random variables but instead a fixed "budget of randomness" is distributed across the matrix. Such matrices can be efficiently stored in sub-quadratic or even linear space, provide a reduction in randomness usage (i.e. the number of required random values), and very often lead to computational speed-ups. We prove several theoretical results showing that projections via various structured matrices followed by nonlinear mappings accurately preserve the angular distance between input high-dimensional vectors. To the best of our knowledge, these results are the first to give theoretical grounds for the use of general structured matrices in the nonlinear setting. In particular, they generalize previous extensions of the Johnson-Lindenstrauss lemma and prove the plausibility of the approach that was so far only heuristically confirmed for some special structured matrices. Consequently, we show that many structured matrices can be used as an efficient information compression mechanism. Our findings build a better understanding of certain deep architectures, which contain randomly weighted and untrained layers and yet achieve high performance on different learning tasks. We empirically verify our theoretical findings and show how learning via structured hashed projections affects the performance of a neural network as well as a nearest neighbor classifier.


1 Introduction

The paradigm of binary embedding for data compression is the central focus of this paper. The paradigm has been studied in earlier works (see: (Weiss et al., 2008), (Gong et al., 2012), (Gong et al., 2013a), (Wang et al., 2012), (Gong et al., 2013b), (Plan & Vershynin, 2014), (Yu et al., 2014), (Yi et al., 2015)), where in particular it was observed that by using linear projections and then applying the sign function as a nonlinear map one does not completely lose the information about the angular distance between vectors; instead, this information can be approximately reconstructed from the Hamming distance between hashes. In this paper we are interested in using pseudo-random projections via structured matrices in the linear projection phase. The pseudo-random projection is described by a matrix where not all the entries are independent random variables but instead a fixed “budget of randomness” is distributed across the matrix. Thus these matrices can be efficiently stored in sub-quadratic or even linear space and provide a reduction in randomness usage. Moreover, using them often leads to computational speed-ups since they admit fast matrix-vector multiplication via the Fast Fourier Transform. We prove an extension of the Johnson-Lindenstrauss lemma (Sivakumar, 2002) for general pseudo-random structured projections followed by nonlinear mappings. We show that the angular distance between high-dimensional data vectors is approximately preserved in the hashed space. This result is also new compared to previous extensions (Hinrichs & Vybíral, 2011; Vybíral, 2011) of the Johnson-Lindenstrauss lemma, which consider special cases of our structured projections (namely: circulant matrices) and do not consider at all the action of the nonlinear mapping. We give a theoretical explanation of the approach that was so far only heuristically confirmed for some special structured matrices (see: (Yi et al., 2015), (Yu et al., 2014)).

Our theoretical findings imply that many types of matrices, such as circulant or Toeplitz Gaussian matrices, can be used as a preprocessing step in neural networks. Structured matrices have been used before in different contexts, also in deep learning, e.g. (Saxe et al., 2011; Sindhwani et al., 2015; Cheng et al., 2015; Mathieu et al., 2014). Our theoretical results however extend to a more general class of matrices.

Our work has a primarily theoretical focus, but we also ask an empirical question: how may the action of a random projection followed by a nonlinear transformation influence learning? We focus on the deep learning setting, where the architecture contains completely random or pseudo-random structured layers that are not trained. Little is known from the theoretical point of view about these fast deep architectures, which achieve significant computational speed-ups and space savings with little or no loss in performance (Saxe et al., 2011; Cheng et al., 2015; Sindhwani et al., 2015; Jarrett et al., 2009; Pinto et al., 2009; Pinto & Cox, 2010; Huang et al., 2006). The high-level intuition justifying the success of these approaches is that the performance of a deep learning system depends not only on learning, but also on the intrinsic properties of the architecture. These findings coincide with the notion of high redundancy in network parametrization (Denil et al., 2013; Denton et al., 2014; Choromanska et al., 2015). In this paper we consider a simple model of the fully-connected feed-forward neural network where the input layer is hashed by a structured pseudo-random projection followed by a point-wise nonlinear mapping. Thus the input is effectively hashed and learning is conducted in the fully-connected subsequent layers that act in the hashed space. We empirically verify how the distortion introduced in the first layer by hashing (where we reduce the dimensionality of the data) affects the performance of the network (in the supervised learning setting). Finally, we show how our structured nonlinear embeddings can be used in the k-nn setting (Altman, 1992).

This article is organized as follows: Section 2 discusses related work, Section 3 explains the hashing mechanism, Section 4 provides theoretical results, Section 5 shows experimental results, and Section 6 concludes. The Supplement contains additional proofs and experimental results.

2 Related work

The idea of using random projections to facilitate learning with high-dimensional data stems from the early work on random projections (Dasgupta, 1999), showing in particular that learning of high-dimensional mixtures of Gaussians can be simplified when first projecting the data into a randomly chosen subspace of low dimension (this is a consequence of the curse of dimensionality and the fact that high-dimensional data often has low intrinsic dimensionality). This idea was subsequently successfully applied to both synthetic and real datasets (Dasgupta, 2000; Bingham & Mannila, 2001), and then adapted to a number of learning approaches such as random projection trees (Dasgupta & Freund, 2008), kernel and feature-selection techniques (Blum, 2006), clustering (Fern & Brodley, 2003), privacy-preserving machine learning (Liu et al., 2006; Choromanska et al., 2013), learning with large databases (Achlioptas, 2003), sparse learning settings (Li et al., 2006), and more recently deep learning (see (Saxe et al., 2011) for a convenient review of such approaches). Using linear projections with completely random Gaussian weights, instead of learned ones, was recently studied from both a theoretical and a practical point of view in (Giryes et al., 2015), but that work did not consider structured matrices, which are a central point of our interest since structured matrices can be stored much more efficiently. Beyond applying methods that use random Gaussian matrix projections (Dasgupta, 1999, 2000; Giryes et al., 2015) and random binary matrix projections (Achlioptas, 2003), it is also possible to construct deterministic projections that preserve angles and distances (Jafarpour et al., 2009). In some sense these methods use structured matrices as well, yet they do not have the same projection efficiency as the circulant matrices and projections explored in this article. Our hybrid approach, where a fixed “budget of randomness” is distributed across the entire matrix in a structured way, enables us to take advantage of both the ability of a completely random projection to preserve information and the compactness that comes from the highly-organized internal structure of the linear mapping.

This work studies the paradigm of constructing a binary embedding for data compression, where hashes are obtained by applying linear projections to the data followed by the nonlinear (sign function) mapping. The point-wise nonlinearity was not considered in many previous works on structured matrices (Haupt et al., 2010; Rauhut et al., 2010; Krahmer et al., 2014; Yap et al., 2011); moreover, these works consider a set of structured matrices that is a strict subset of the class of matrices considered here. Designing binary embeddings for high-dimensional data with low distortion is addressed in many recent works (Weiss et al., 2008; Wang et al., 2012; Gong et al., 2013b, a, 2012; Yu et al., 2014; Yi et al., 2015; Raginsky & Lazebnik, 2009; Salakhutdinov & Hinton, 2009). In the context of our work, one of the recent articles (Yi et al., 2015) is especially important since the authors introduce a pipeline for constructing hashes with the use of structured matrices in the linear step, instead of completely random ones. They prove several theoretical results regarding the quality of the produced hash, and extend some previous theoretical results (Jacques et al., 2011; Plan & Vershynin, 2014). Their pipeline is more complicated than ours, i.e. they first apply the Hadamard transformation and then a sequence of partial Toeplitz Gaussian matrices. Some general results (unbiasedness of the angular distance estimator) were also known for short hashing pipelines involving circulant matrices (Yu et al., 2014). These works do not provide guarantees regarding concentration of the estimator around its mean, which is crucial for all practical applications. Our results for general structured matrices, which include circulant Gaussian matrices and a larger class of Toeplitz Gaussian matrices as special subclasses, provide such concentration guarantees, and thus establish a solid mathematical foundation for using various types of structured matrices in binary embeddings. In contrast to (Yi et al., 2015), we present our theoretical results for simpler hashing models (our hashing mechanism is explained in Section 3 and consists of two very simple steps that we call the preprocessing step and the actual hashing step, where the latter consists of a pseudo-random projection followed by the nonlinear mapping). In (Yu et al., 2015), theoretical guarantees bounding the variance of the angle estimator in the circulant setting were presented. Strong concentration results regarding several structured matrices were given in (Choromanski & Sindhwani, 2016; Choromanski et al., 2016), following our work.

In the context of deep learning, using random network parametrization, where certain layers have random and untrained weights, often accelerates training. Introducing randomness to networks was explored for various architectures, for example feedforward networks (Huang et al., 2006), convolutional networks (Jarrett et al., 2009; Saxe et al., 2011), and recurrent networks (Jaeger & Haas, 2004; White et al., 2004; Boedecker et al., 2009). We also refer the reader to (Ganguli & Sompolinsky, 2012), where the authors describe how neural systems cope with the challenge of processing data in high dimensions and discuss random projections. Hashing in neural networks, which we consider in this paper, is a new direction of research. Very recently (see: (Chen et al., 2015)) it was empirically shown that hashing in neural nets may achieve drastic reductions in model size with no significant loss of quality, by heavily exploiting the phenomenon of redundancy in neural nets. HashedNets introduced in (Chen et al., 2015) do not give any theoretical guarantees regarding the quality of the proposed hashing. Our work aims to cover both grounds: we experimentally show the plausibility of the approach, but also explain theoretically why the hashing we consider compresses important information about the data that suffices for accurate classification. Dimensionality reduction techniques were also used to approximately preserve certain metrics defined on graph objects (Shaw & Jebara, 2009). Structured hashing was also applied in (Szlam et al., 2012), but in a very different context than ours.

3 Hashing mechanism

In this section we explain our hashing mechanism for dimensionality reduction that we next analyze.

3.1 Structured matrices

We first introduce the aforementioned family of structured matrices, which we call -regular matrices. This is the key ingredient of the method.

Definition 3.1.

is a circulant Gaussian matrix if its first row is a sequence of independent Gaussian random variables taken from the distribution and the next rows are obtained from the previous ones by either one-left shifts only or one-right shifts only.

Definition 3.2.

is a Toeplitz Gaussian matrix if each of its descending diagonals from left to right is constant (all entries on the diagonal equal the same Gaussian random variable) and different descending diagonals are independent.

Remark 3.1.

The circulant Gaussian matrix with right shifts is a special type of the Toeplitz Gaussian matrix.

Assume that is the size of the hash and is the dimensionality of the data.

Definition 3.3.

Let be the size of the pool of independent random Gaussian variables , where each . Assume that . We take to be a natural number, i.e. . is

-regular random matrix if it has the following form

where for , , for , for , , , and furthermore the following holds: for any two different rows of the number of random variables , where , such that is in the intersection of some column with both and is at most .

Remark 3.2.

Circulant Gaussian matrices and Toeplitz Gaussian matrices are special cases of -regular matrices. A Toeplitz Gaussian matrix is -regular with the subsets being singletons.

In the experimental section of this paper we consider six different kinds of structured matrices, which are examples of the general structured matrices covered by our theoretical analysis (a code sketch of several of these constructions follows the list). They are:

  • Circulant (see: Definition 3.1),

  • BinCirc - a matrix, where the first row is partitioned into consecutive equal-length blocks of elements and each row is obtained by the right shift of the blocks from the first row,

  • HalfShift - a matrix, where the next row is obtained from the previous one by swapping its halves and then performing a right shift by one,

  • VerHorShift - a matrix that is obtained by the following two-phase procedure: first each row is obtained from the previous one by a right shift of a fixed length, and then in the obtained matrix each column is shifted up by a fixed number of elements,

  • BinPerm - a matrix, where the first row is partitioned into consecutive equal-length blocks of elements and each row is obtained as a random permutation of the blocks from the first row,

  • Toeplitz (see: Definition 3.2).
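
The constructions above are easiest to digest in code. Below is a minimal NumPy sketch of several of them (Circulant, Toeplitz, HalfShift, BinPerm); the names `k` (number of rows, i.e. hash size), `n` (data dimensionality), `block_size`, and the use of standard normal entries are illustrative assumptions of ours rather than the paper's exact notation.

```python
import numpy as np

def circulant_gaussian(k, n, rng):
    """Circulant Gaussian matrix: one random first row, each next row is a one-right shift."""
    first_row = rng.standard_normal(n)
    return np.stack([np.roll(first_row, i) for i in range(k)])

def toeplitz_gaussian(k, n, rng):
    """Toeplitz Gaussian matrix: each descending diagonal is constant and independent."""
    diag_vals = rng.standard_normal(k + n - 1)   # one Gaussian per diagonal
    # entry (i, j) lies on the diagonal indexed by j - i
    return np.array([[diag_vals[j - i + k - 1] for j in range(n)] for i in range(k)])

def halfshift(k, n, rng):
    """HalfShift: the next row swaps the halves of the previous row, then shifts right by one."""
    rows = [rng.standard_normal(n)]
    for _ in range(k - 1):
        prev = rows[-1]
        swapped = np.concatenate([prev[n // 2:], prev[:n // 2]])
        rows.append(np.roll(swapped, 1))
    return np.stack(rows)

def binperm(k, n, block_size, rng):
    """BinPerm: first row split into equal-length blocks; each row is a random permutation of the blocks."""
    assert n % block_size == 0
    blocks = rng.standard_normal(n).reshape(-1, block_size)
    return np.stack([blocks[rng.permutation(len(blocks))].ravel() for _ in range(k)])
```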

Remark 3.3.

Computing hashes for the structured matrices Toeplitz, BinCirc, HalfShift, and VerHorShift can be done faster than with the naive matrix-vector product (e.g. for Toeplitz one can use FFT to reduce the computation to quasi-linear time). Thus our structured approach leads to speed-ups, storage compression (since many structured matrices covered by our theoretical model can be stored in linear space) and a reduction in randomness usage. The goal of this paper is not to analyze in detail fast matrix-vector product algorithms since that requires a separate paper. We do however point out that well-known fast matrix-vector product algorithms are some of the key advantages of our structured approach.
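
As an illustration of the kind of speed-up meant here, the following NumPy snippet (a sketch, not the authors' implementation) multiplies a vector by the circulant matrix from the previous sketch in O(n log n) time using the FFT correlation theorem; a Toeplitz product can be handled similarly by embedding the Toeplitz matrix into a larger circulant one.

```python
import numpy as np

def circulant_matvec_fft(first_row, x):
    """Multiply by the circulant matrix whose i-th row is first_row rolled right by i,
    in O(n log n) time via the FFT correlation theorem."""
    fc = np.fft.fft(first_row)
    fx = np.fft.fft(x)
    return np.real(np.fft.ifft(np.conj(fc) * fx))

# sanity check against the explicit O(n^2) product
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
x = rng.standard_normal(n)
C = np.stack([np.roll(c, i) for i in range(n)])
assert np.allclose(C @ x, circulant_matvec_fft(c, x))
```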

3.2 Hashing methods

Let be a function satisfying and . We will consider two hashing methods, both of which consist of what we refer to as a preprocessing step followed by the actual hashing step, where the latter consists of a pseudo-random projection followed by the nonlinear (sign function) mapping. The first mechanism, which we call extended -regular hashing, first applies a random diagonal matrix to the data point, then the -normalized Hadamard matrix, next another random diagonal matrix, then the -regular projection matrix, and finally the function (applied point-wise). The overall scheme is presented below:

(1)

The diagonal entries of the two random diagonal matrices are chosen independently from the binary set {−1, +1}, each value being chosen with probability 1/2. We also propose a shorter pipeline, which we call short -regular hashing, where we omit the first random diagonal matrix and the Hadamard matrix, i.e. the overall pipeline is of the form:

(2)
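
A minimal NumPy sketch of the two pipelines follows. The symbol names below (D1, D2 for the random ±1 diagonal matrices, H for the normalized Hadamard matrix, M for the structured projection) are placeholders of our own choosing, since the original notation did not survive extraction; the nonlinearity is taken to be the sign function, as in Section 4.

```python
import numpy as np

def rademacher_diag(n, rng):
    """Diagonal of a random +/-1 matrix (each sign with probability 1/2), stored as a vector."""
    return rng.choice([-1.0, 1.0], size=n)

def hadamard(n):
    """L2-normalized Hadamard matrix via the Sylvester construction; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def extended_hash(M, D2, H, D1, x):
    """Extended pipeline: x -> D1 x -> H D1 x -> D2 H D1 x -> M(...) -> sign."""
    return np.sign(M @ (D2 * (H @ (D1 * x))))

def short_hash(M, D, x):
    """Short pipeline: skip D1 and H, i.e. sign(M D x)."""
    return np.sign(M @ (D * x))
```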

The goal is to compute a good approximation of the angular distance between given vectors , given their compact hashed versions: . To achieve this goal we consider the -distances in the -dimensional space of hashes. Let denote the angle between vectors and . We define the normalized approximate angle between and as:

(3)

In the next section we show that the normalized approximate angle between vectors and leads to a very precise estimation of the actual angle if the chosen parameter is not too large. Furthermore, we show an intriguing connection between the theoretical guarantees regarding the quality of the produced hash and the chromatic number of a specific undirected graph encoding the structure of . For many of the structured matrices under consideration this graph is induced by an algebraic group operation defining the structure of (for instance, for the circulant matrix the group is generated by a single shift and the underlying graph is a collection of pairwise disjoint cycles, thus its chromatic number is at most 3).
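
Under the convention that the normalized approximate angle is π times the fraction of hash coordinates on which the two hashes disagree (so that, by Lemma 4.1 below, its expectation is the true angle), the estimator can be computed as in the following sketch, which reuses helpers from the previous code blocks; the chosen sizes are arbitrary.

```python
import numpy as np

def approx_angle(h1, h2):
    """Normalized approximate angle: pi * (Hamming distance between hashes) / (hash length)."""
    return np.pi * np.mean(h1 != h2)

# example: compare the estimate with the true angle for a circulant projection
rng = np.random.default_rng(1)
n, k = 256, 128
x, y = rng.standard_normal(n), rng.standard_normal(n)
theta = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
M = circulant_gaussian(k, n, rng)   # from the sketch in Section 3.1
D = rademacher_diag(n, rng)         # from the pipeline sketch above
theta_hat = approx_angle(short_hash(M, D, x), short_hash(M, D, y))
print(theta, theta_hat)             # the two values should be close
```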

4 Theoretical results

4.1 Unbiasedness of the estimator

We are ready to provide theoretical guarantees regarding the quality of the produced hash. Our guarantees will be given for the sign function, i.e. for defined as: for , for . Using this nonlinearity will be important for preserving approximate information about the angle between vectors, while filtering out the information about their lengths. We first show that is an unbiased estimator of , i.e. .

Lemma 4.1.

Let be a -regular hashing model (either extended or a short one) and . Then is an unbiased estimator of , i.e.

Let us first give a short sketch of the proof. Note that the value of a particular entry in the constructed hash depends only on the sign of the dot product between the corresponding Gaussian vector representing the row of the -regular matrix and the given vector. Fix two vectors and with angular distance . Note that the considered dot products (and thus also their signs) are preserved when, instead of taking the Gaussian vector representing the row, one takes its projection onto the linear space spanned by and . The Hamming distance between the hashes of and is built up by those entries for which one dot product is negative and the other one is positive. One can note that this happens exactly when the projected vector is inside a two-dimensional cone covering angle . The last observation that completes the proof is that the projection of the Gaussian vector is isotropic (since it is also Gaussian), thus the probability that the two dot products have different signs is equal to the angle between the vectors divided by π (also see: (Charikar, 2002)).
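
The key geometric fact in this sketch — that an isotropic direction separates the two vectors with probability equal to their angle divided by π — is easy to check numerically. The snippet below uses fully independent Gaussian rows (the unstructured case) purely to illustrate the statement; the sizes and seeds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 64, 200_000
x, y = rng.standard_normal(n), rng.standard_normal(n)
theta = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

G = rng.standard_normal((trials, n))        # independent Gaussian "rows"
divides = np.sign(G @ x) != np.sign(G @ y)  # a row "divides" the angle iff the signs differ
print(divides.mean(), theta / np.pi)        # empirical frequency vs. theta / pi
```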

Proof.

Note first that the th row, call it , of the matrix is a -dimensional Gaussian vector with mean and where each element has variance for (). Thus, after applying the random diagonal matrix the new vector is still Gaussian and of the same distribution. Let us consider first the short -regular hashing model. Fix some vectors (without loss of generality we may assume that they are not collinear). Let , shortly denoted , be the two-dimensional hyperplane spanned by . Denote by the projection of into and by the line in perpendicular to . Let be the sign function. Note that the contribution to the -sum comes from those for which divides the angle between and (see: Figure 1), i.e. from those for which is inside the union of the two two-dimensional cones bounded by the two lines in perpendicular to and respectively. If the angle is not divided (see: Figure 2) then the two corresponding entries in the hash have the same value and thus do not contribute to the overall distance between the hashes.

Observe that, from what we have just said, we can conclude that , where:

(4)
Figure 1: Two vectors: , spanning two-dimensional hyperplane and with the angular distance between them. We have: . Line is dividing and thus contributes to .
Figure 2: A setting similar to the one presented in Figure 1. Vector represents the -normalized version of and is perpendicular to the two-dimensional plane . The intersection of that plane with the -dimensional plane spanned by is a line that this time is outside . Thus does not contribute to .

Now it suffices to note that vector is a two-dimensional Gaussian vector and thus its direction is uniformly distributed over all directions. Thus each is nonzero with probability exactly equal to the angle divided by π, and the theorem follows. For the extended -regular hashing model the analysis is very similar. The only difference is that the data is preprocessed by applying an additional linear mapping first. Both the Hadamard matrix and the random diagonal matrix are orthogonal though, thus their product is also an orthogonal matrix. Since orthogonal matrices do not change the angular distance, the former analysis can be applied again and yields the proof. ∎

We next focus on the concentration of the random variable around its mean . We prove strong exponential concentration results for the extended -regular hashing method. Interestingly, the application of the Hadamard mechanism is not necessary: it is possible to obtain concentration results, although weaker than in the former case, also for short -regular hashing.

4.2 The -chromatic number

The highly organized structure of the projection matrix gives rise to an underlying undirected graph that encodes dependencies between different entries of . More formally, let us fix two rows of of indices respectively. We define a graph as follows:

  • ,

  • there exists an edge between vertices and iff .

The chromatic number of the graph is the minimal number of colors that can be used to color the vertices of the graph in such a way that no two adjacent vertices have the same color.

Definition 4.1.

Let be a -regular matrix. We define the -chromatic number as:

Figure 3: Left: matrix with two highlighted rows of indices and respectively, where . Right: the corresponding graph, which consists of two cycles. If each cycle is even then this graph is 2-colorable, as indicated in the picture. Thus we have: .

The graph associated with each structured matrix that we have just described enables us to encode dependencies between entries of the structured matrix in a compact form and gives us quantitative ways to efficiently measure these dependencies by analyzing several core parameters of this graph, such as its chromatic number. More dependencies, which usually correspond to a more structured form, mean more edges in the associated graph and often lead to a higher chromatic number. On the other hand, fewer dependencies produce graphs with a much lower chromatic number (see Figure 3, where the graph associated with the circulant matrix is a collection of vertex-disjoint cycles and has chromatic number 3 if it contains an odd-length cycle and 2 otherwise).
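
For the circulant case the structure of this graph can be written down explicitly. In the sketch below we assume, for illustration only, an n x n circulant matrix whose row i holds the Gaussian variable g[(j - i) mod n] in column j; then for two rows i1 and i2 every column couples two variables whose indices differ by d = (i2 - i1) mod n, so the graph is a disjoint union of gcd(n, d) cycles of length n // gcd(n, d), and it is 2-colorable exactly when that length is even. The function name and the square-matrix simplification are our own.

```python
from math import gcd

def circulant_graph_chromatic_number(n, i1, i2):
    """Chromatic number of the dependency graph of rows i1 != i2 of an n x n circulant
    matrix: the graph splits into gcd(n, d) cycles of length n // gcd(n, d), where
    d = (i2 - i1) mod n, so it is 2-colorable iff that cycle length is even."""
    d = (i2 - i1) % n
    cycle_len = n // gcd(n, d)
    return 2 if cycle_len % 2 == 0 else 3

print(circulant_graph_chromatic_number(12, 0, 2))  # 2 cycles of length 6 (even) -> 2
print(circulant_graph_chromatic_number(9, 0, 3))   # 3 cycles of length 3 (odd)  -> 3
```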

4.3 Concentration inequalities for structured hashing with sign function

We present now our main theoretical results. The proofs are deferred to the Supplementary material. We focus on the concentration results regarding produced hashes that are crucial for practical applications of the proposed scheme.

We start with a short description of the methods used and then rigorously formulate all the results. If all the rows of the projection matrix were independent then standard concentration inequalities could be used. This is however not the case in our setting since the matrix is structured. We still want to argue that any two rows are “close” to independent Gaussian vectors, and that will give us bounds regarding the variance of the distance between the hashes (in general, we observe that any system of rows is “close” to a system of independent Gaussian vectors and obtain bounds involving th moments). We proceed as follows:

  • We take two rows and project them onto the linear space spanned by given vectors: and .

  • We consider the four coordinates obtained in this way (two for each vector). They are obviously Gaussian, but what is crucial, they are “almost independent”.

  • The latter observation is implied by the fact that these are the coordinates of the projection of a fixed Gaussian vector onto “almost orthogonal” directions.

  • We use the property of the Gaussian vector that its projections onto orthogonal directions are independent.

  • To prove that the directions considered in our setting are close to orthogonal with high probability, we compute their dot product. This is the place where the structure of the matrix, the chromatic number of the underlying graph, and the fact that in our hashing scheme we use random diagonal matrices come into action. We decompose each dot product into, roughly speaking, components (where is the chromatic number), such that each component is a sum of independent random variables with mean . Now we can use standard concentration inequalities to get tight concentration results.

  • The Hadamard matrix used in the extended model preprocesses input vectors to distribute their mass uniformly over all the coordinates, while not changing distances (it is a unitary matrix). Balanced vectors lead to much stronger concentration results.

Now we are ready to rigorously state our results. By we denote a function for some . The following theorems guarantee strong concentration of around its mean and therefore justify theoretically the effectiveness of the structured hashing method.

Let us consider first the extended -regular hashing model.

Theorem 4.1.

Consider extended -regular hashing model with independent Gaussian random variables: , each of distribution . Let be the size of the dataset . Denote by the size of the hash and by the dimensionality of the data. Let be an arbitrary positive function. Let be the angular distance between vectors . Then for , , and large enough:

where and .

Note how the upper bound on the probability of failure depends on the -chromatic number. The theorem above guarantees strong concentration of around its mean and therefore theoretically justifies the effectiveness of the structured hashing method. This becomes clearer below.

As a corollary, we obtain the following result:

Corollary 4.1.

Consider the extended -regular hashing model . Assume that the projection matrix is Toeplitz Gaussian. Let be as above and denote by the angular distance between vectors . Then the following is true for large enough:

Figure 4: The dependence of the upper bound on the variance of the normalized approximate angle on (left:) an angle when the size of the hash is fixed (the upper bound scales as and is almost independent of ), (right:) the size of the hash when the true angular distance is fixed (the upper bound converges to as ).

Corollary 4.1 follows from Theorem 4.1 by taking , , for a small enough constant , and noticing that every Toeplitz Gaussian matrix is -regular and that the corresponding -chromatic number is at most .

The term is related to the balancedness property. To clarify, the goal of multiplying by in the preprocessing step is to make each input vector balanced, or in other words to spread out the mass of the vector across all the dimensions in an approximately uniform way. This property is required to obtain our theoretical results (note also that it was unnecessary in the unstructured setting) and does not depend on the number of projected dimensions.
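
The balancedness effect is easy to see numerically: a maximally unbalanced unit vector (all mass on one coordinate) becomes perfectly spread out after the random sign flip and the normalized Hadamard transform, while its norm is unchanged. The snippet below reuses the `hadamard` and `rademacher_diag` helpers from the pipeline sketch; the dimension is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1024
x = np.zeros(n); x[0] = 1.0                  # unit vector with all mass on one coordinate
z = hadamard(n) @ (rademacher_diag(n, rng) * x)
print(np.abs(x).max(), np.abs(z).max())      # 1.0 vs. about 1/sqrt(n): the mass is spread out
print(np.linalg.norm(x), np.linalg.norm(z))  # both 1.0: the transform is orthogonal
```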

Let us now consider the short -regular hashing model. The theorem presented below is an application of Chebyshev's inequality preceded by a careful analysis of the variance of .

Theorem 4.2.

Consider short -regular hashing model , where is a Toeplitz Gaussian matrix. Denote by the size of the hash. Let be the angular distance between vectors , where is the dataset. Then the following is true

(5)

and thus for any and :

Figure 4 shows the dependence of the upper bound on the variance of the normalized approximate angle on the true angular distance (with the size of the hash fixed) and on the size of the hash (with the true angular distance fixed).

The rate that appears in the theoretical results we obtained, and the nonlinear decay of the variance with the hash size shown in Figure 4 (right), are consequences of the structured setting, where the quality of the nonlinear embedding is affected by the existence of dependencies between entries of the structured matrix.

5 Numerical experiments

In this section we demonstrate that all considered structured matrices achieve reasonable performance in comparison to fully random matrices. Specifically, we show: i) the dependence of the performance on the size of the hash and the reduction factor for different structured matrices, and ii) the performance of different structured matrices when used with neural networks and a k-NN classifier. The experiments confirm our novel theoretical results.

Figure 5: Fully-connected network with a randomized input layer (red edges correspond to the structured matrix). The input is multiplied by a random diagonal matrix with diagonal entries chosen independently from the binary set {−1, +1}, each value being chosen with probability 1/2, and then by a structured matrix. The figure should be viewed in color.

We performed experiments on the MNIST dataset downloaded from http://yann.lecun.com/exdb/mnist/. The data was preprocessed according to the short hashing scheme (the extended hashing scheme gave results with no statistically significant difference) before being given to the input of the network; the preprocessing is discussed in Section 3. We first considered a simple model of the fully-connected feed-forward neural network with two hidden layers, where the first hidden layer had units that use the sign nonlinearity (we explored ), and the second hidden layer had units that use the ReLU nonlinearity. The size of the second hidden layer was chosen as follows. We first investigated the dependence of the test error on this size in the case when and the inputs, instead of being randomly projected, are multiplied by the identity (this is equivalent to eliminating the first hidden layer). We then chose as the size the threshold below which test performance was rapidly deteriorating.
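
A PyTorch sketch of this architecture is given below (the authors' experiments were implemented in Torch7, so this is a re-creation, not their code). The first layer stores the "preprocessed" structured matrix, i.e. the structured matrix times the random ±1 diagonal, as a frozen buffer and applies the sign nonlinearity; only the second hidden layer and the output layer are trained. The layer sizes and the class count below are placeholders.

```python
import numpy as np
import torch
import torch.nn as nn

class HashedNet(nn.Module):
    """Feed-forward net whose first (hashing) layer is a frozen structured projection
    followed by a sign nonlinearity; only the subsequent layers are trained."""
    def __init__(self, structured_matrix, hidden2=100, num_classes=10):
        super().__init__()
        k, n = structured_matrix.shape
        D = torch.randint(0, 2, (n,)).float() * 2 - 1                    # random +/-1 diagonal
        W = torch.as_tensor(structured_matrix, dtype=torch.float32) * D  # "preprocessed" matrix
        self.register_buffer("W", W)          # buffer, not a parameter: never updated by SGD
        self.fc2 = nn.Linear(k, hidden2)
        self.out = nn.Linear(hidden2, num_classes)

    def forward(self, x):
        h = torch.sign(x @ self.W.t())        # hashing layer: a fixed sign feature map
        return self.out(torch.relu(self.fc2(h)))

# usage sketch: only fc2 and out appear in model.parameters(), so plain SGD trains them alone
# model = HashedNet(circulant_gaussian(256, 784, np.random.default_rng(0)))
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```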

Figure 6: Mean test error versus a) the size of the hash () (zoomed plot; the original plot is in the Supplement), b) the size of the reduction () for the network. Baseline corresponds to .
Table 1: Mean and std of the test error versus the size of the hash () / size of the reduction () for the network. Columns: Circulant, Random, BinPerm, BinCirc, HalfShift, Toeplitz, VerHorShift.

Table 2: Memory complexity and number of required random values for structured matrices and the Random matrix. Columns: Random, Circulant, BinPerm, HalfShift, VerHorShift, BinCirc, Toeplitz; rows: number of random values, memory complexity.

The first hidden layer contains random untrained weights, and we only train the parameters of the second layer and the output layer. The network we consider is shown in Figure 5. Each experiment was initialized from a random set of parameters sampled uniformly within the unit cube, and was repeated times. All networks were trained for epochs using SGD (Bottou, 1998). Experiments with a constant learning rate are reported (we also explored learning rate decay, but obtained similar results), where the learning rate was chosen from the set to minimize the test error. The weights of the first hidden layer correspond to the entries of the “preprocessed” structured matrix. We explored seven kinds of random matrices (the first six are structured): Circulant, Toeplitz, HalfShift, VerHorShift, BinPerm, BinCirc, and Random (entries are independent and drawn from a Gaussian distribution ). All code was implemented in Torch7.

Figure 7: Mean test error versus a) the size of the hash () (zoomed plot; the original plot is in the Supplement), b) the size of the reduction () for k-NN. Baseline corresponds to .

Figure 6a shows how the mean test error is affected by the size of the hash, and Figure 6b shows how the mean test error changes with the size of the reduction, where the size of the reduction is defined as the ratio . In Table 1 we report both the mean and the standard deviation (std) of the test error across our experiments. Training results are reported in the Supplementary material.

Baseline refers to the network with one hidden layer containing hidden units, where all parameters are trained.

Experimental results show how the performance is affected by using structured hashed projections to reduce data dimensionality. Figure 6b and Table 1 show a close to linear dependence between the error and the size of the reduction. Simultaneously, this approach leads to computational savings and a reduction of memory storage, i.e. a reduction of the number of input weights for the hidden layer (for example, for the Circulant matrix this reduction is of the order ; the memory required for storing the Circulant matrix itself is negligible compared to the number of weights). Memory complexity, i.e. the memory required to store the matrix, and the number of required random values for different structured matrices and the Random matrix are summarized in Table 2.

Experiments show that using a fully random matrix gives the best performance, as predicted by theory. The BinPerm matrix exhibits performance comparable to the Random matrix, which might be explained by the fact that applying a permutation itself adds an additional source of randomness. The next best performer is HalfShift, whose generation uses less randomness than that of BinPerm or Random; thus its performance, as expected, is worse than for these two other matrices. However, as opposed to the BinPerm and Random matrices, the HalfShift matrix can be stored in linear space. The results also show that in general all structured matrices perform relatively well for medium-size reductions. Finally, all structured matrices except for BinPerm lead to the biggest memory savings and require the smallest “budget of randomness”. Moreover, they often lead to computational efficiency, e.g. Toeplitz matrix-vector multiplications can be efficiently implemented via the Fast Fourier Transform (Yu et al., 2014). But, as mentioned before, faster-than-naive matrix-vector products can also be computed for BinPerm, HalfShift, and VerHorShift.

Finally, we also report how the performance of the k-NN algorithm is affected by using structured hashed projections for dimensionality reduction. We obtained plots similar to those for the neural networks; they are shown in Figure 7. The table showing the mean and the standard deviation of the test error for the experiments with k-NN is included in the Supplementary material.
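
For completeness, a k-NN classifier in the hashed space reduces to a Hamming-distance nearest-neighbor search over the ±1 hashes, as in the following sketch (the function name and its parameters are illustrative; the labels are assumed to be a NumPy array).

```python
import numpy as np
from collections import Counter

def knn_predict(train_hashes, train_labels, query_hash, k_neighbors=5):
    """Predict the label of a query by majority vote among the k training hashes
    that are closest to it in Hamming distance."""
    dists = np.sum(train_hashes != query_hash, axis=1)  # Hamming distance to every training hash
    nearest = np.argsort(dists)[:k_neighbors]
    return Counter(train_labels[nearest].tolist()).most_common(1)[0][0]
```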

6 Conclusions

This paper shows that structured hashed projections preserve the angular distance between input data instances well. Our theoretical results consider mapping the data to a lower-dimensional space using various structured matrices, where the structured linear projections are followed by the sign nonlinearity. This nonlinear operation was not considered for such a wide range of structured matrices in previous related theoretical works. The theoretical setting naturally applies to the multilayer network framework, where the basic components of the architecture perform matrix-vector multiplication followed by a nonlinear mapping. We empirically verify our theoretical findings and show how using structured hashed projections for dimensionality reduction affects the performance of a neural network and a nearest neighbor classifier.

References

  • Achlioptas (2003) Achlioptas, D. Database-friendly random projections: Johnson-lindenstrauss with binary coins. J. Comput. Syst. Sci., 66(4):671–687, 2003.
  • Altman (1992) Altman, N. S. An Introduction to Kernel and Nearest-Neighbor Nonparametric Regression. The American Statistician, 46(3):175–185, 1992.
  • Bingham & Mannila (2001) Bingham, E. and Mannila, H. Random projection in dimensionality reduction: Applications to image and text data. In KDD, 2001.
  • Blum (2006) Blum, A. Random projection, margins, kernels, and feature-selection. In SLSFS, 2006.
  • Boedecker et al. (2009) Boedecker, J., Obst, O., Mayer, N. M., and Asada, M. Initialization and self-organized optimization of recurrent neural network connectivity. HFSP Journal, 3(5):340–9, 2009.
  • Bottou (1998) Bottou, L. Online algorithms and stochastic approximations. In Online Learning and Neural Networks. Cambridge University Press, 1998.
  • Charikar (2002) Charikar, Moses. Similarity estimation techniques from rounding algorithms. In Proceedings of the 34th Annual ACM Symposium on Theory of Computing, May 19-21, 2002, Montréal, Québec, Canada, pp. 380–388, 2002. doi: 10.1145/509907.509965. URL http://doi.acm.org/10.1145/509907.509965.
  • Chen et al. (2015) Chen, W., Wilson, J. T., Tyree, S., Weinberger, K. Q., and Chen, Y. Compressing neural networks with the hashing trick. CoRR, abs/1504.04788, 2015.
  • Cheng et al. (2015) Cheng, Yu, Yu, Felix X., Feris, Rogério Schmidt, Kumar, Sanjiv, Choudhary, Alok N., and Chang, Shih-Fu. An exploration of parameter redundancy in deep networks with circulant projections. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pp. 2857–2865, 2015. doi: 10.1109/ICCV.2015.327. URL http://dx.doi.org/10.1109/ICCV.2015.327.
  • Choromanska et al. (2013) Choromanska, A., Choromanski, K., Jagannathan, G., and Monteleoni, C. Differentially-private learning of low dimensional manifolds. In ALT, 2013.
  • Choromanska et al. (2015) Choromanska, A., Henaff, M., Mathieu, M., Arous, G. Ben, and LeCun, Y. The loss surfaces of multilayer networks. In AISTATS, 2015.
  • Choromanski & Sindhwani (2016) Choromanski, Krzysztof and Sindhwani, Vikas. Recycling randomness with structure for sublinear time kernel expansions. ICML2016, abs/1605.09049, 2016. URL http://arxiv.org/abs/1605.09049.
  • Choromanski et al. (2016) Choromanski, Krzysztof, Fagan, Francois, Gouy-Pailler, Cédric, Morvan, Anne, Sarlós, Tamás, and Atif, Jamal. Triplespin - a generic compact paradigm for fast machine learning computations. CoRR, abs/1605.09046, 2016. URL http://arxiv.org/abs/1605.09046.
  • Dasgupta (1999) Dasgupta, S. Learning mixtures of gaussians. In FOCS, 1999.
  • Dasgupta (2000) Dasgupta, S. Experiments with random projection. In UAI, 2000.
  • Dasgupta & Freund (2008) Dasgupta, S. and Freund, Y. Random projection trees and low dimensional manifolds. In STOC, 2008.
  • Denil et al. (2013) Denil, M., Shakibi, B., Dinh, L., Ranzato, M., and Freitas, N. D. Predicting parameters in deep learning. In NIPS. 2013.
  • Denton et al. (2014) Denton, E., Zaremba, W., Bruna, J., LeCun, Y., and Fergus, R. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS. 2014.
  • Fern & Brodley (2003) Fern, X. Z. and Brodley, C. E. Random projection for high dimensional data clustering: A cluster ensemble approach. In ICML, 2003.
  • Ganguli & Sompolinsky (2012) Ganguli, S. and Sompolinsky, H. Compressed sensing, sparsity, and dimensionality in neuronal information processing and data analysis. Annual Review of Neuroscience, 35:485–508, 2012.
  • Giryes et al. (2015) Giryes, R., Sapiro, G., and Bronstein, A. M. Deep neural networks with random gaussian weights: A universal classification strategy? CoRR, abs/1504.08291, 2015.
  • Gong et al. (2013a) Gong, Y., Lazebnik, S., Gordo, A., and Perronnin, F. Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval. IEEE Trans. Pattern Anal. Mach. Intell., 35(12):2916–2929, 2013a.
  • Gong et al. (2012) Gong, Yunchao, Kumar, Sanjiv, Verma, Vishal, and Lazebnik, Svetlana. Angular quantization-based binary codes for fast similarity search. In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. Proceedings of a meeting held December 3-6, 2012, Lake Tahoe, Nevada, United States., pp. 1205–1213, 2012.
  • Gong et al. (2013b) Gong, Yunchao, Kumar, Sanjiv, Rowley, Henry A., and Lazebnik, Svetlana. Learning binary codes for high-dimensional data using bilinear projections. In 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, June 23-28, 2013, pp. 484–491, 2013b. doi: 10.1109/CVPR.2013.69. URL http://dx.doi.org/10.1109/CVPR.2013.69.
  • Haupt et al. (2010) Haupt, J., Bajwa, W. U., Raz, G., and Nowak, R. Toeplitz compressed sensing matrices with applications to sparse channel estimation. Information Theory, IEEE Transactions on, 56(11):5862–5875, 2010.
  • Hinrichs & Vybíral (2011) Hinrichs, A. and Vybíral, J. Johnson-lindenstrauss lemma for circulant matrices. Random Struct. Algorithms, 39(3):391–398, 2011.
  • Huang et al. (2006) Huang, G.-B., Zhu, Q.-Y., and Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing, 70(1–3):489 – 501, 2006.
  • Jacques et al. (2011) Jacques, L., Laska, J. N., Boufounos, P., and Baraniuk, R. G. Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. CoRR, abs/1104.3160, 2011.
  • Jaeger & Haas (2004) Jaeger, H. and Haas, H. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science, pp. 78–80, 2004.
  • Jafarpour et al. (2009) Jafarpour, S., Xu, W., Hassibi, B., and Calderbank, R. Efficient and Robust Compressed Sensing Using Optimized Expander Graphs. Information Theory, IEEE Transactions on, 55(9):4299–4308, 2009.
  • Jarrett et al. (2009) Jarrett, K., Kavukcuoglu, K., Ranzato, M., and LeCun, Y. What is the best multi-stage architecture for object recognition? In ICCV, 2009.
  • Krahmer et al. (2014) Krahmer, F., Mendelson, S., and Rauhut, H. Suprema of chaos processes and the restricted isometry property. Communications on Pure and Applied Mathematics, 67(11):1877–1904, 2014.
  • Li et al. (2006) Li, P., Hastie, T. J., and Church, K. W. Very sparse random projections. In KDD, 2006.
  • Liu et al. (2006) Liu, K., Kargupta, H., and Ryan, J. Random projection-based multiplicative data perturbation for privacy preserving distributed data mining. IEEE Trans. on Knowl. and Data Eng., 18(1):92–106, 2006.
  • Mathieu et al. (2014) Mathieu, M., Henaff, M., and LeCun, Y. Fast training of convolutional networks through ffts. In ICLR, 2014.
  • Pinto & Cox (2010) Pinto, N. and Cox, D. D. An Evaluation of the Invariance Properties of a Biologically-Inspired System for Unconstrained Face Recognition. In BIONETICS, 2010.
  • Pinto et al. (2009) Pinto, N., Doukhan, D., DiCarlo, J. J., and Cox, D. D. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS Computational Biology, 5(11), 2009.
  • Plan & Vershynin (2014) Plan, Y. and Vershynin, R. Dimension reduction by random hyperplane tessellations. Discrete & Computational Geometry, 51(2):438–461, 2014.
  • Raginsky & Lazebnik (2009) Raginsky, M. and Lazebnik, S. Locality-sensitive binary codes from shift-invariant kernels. In NIPS. 2009.
  • Rauhut et al. (2010) Rauhut, H., Romberg, J. K., and Tropp, J. A. Restricted isometries for partial random circulant matrices. CoRR, abs/1010.1847, 2010.
  • Salakhutdinov & Hinton (2009) Salakhutdinov, R. and Hinton, G. Semantic hashing. Int. J. Approx. Reasoning, 50(7):969–978, 2009.
  • Saxe et al. (2011) Saxe, A., Koh, P. W., Chen, Z., Bhand, M., Suresh, B., and Ng, A. On random weights and unsupervised feature learning. In ICML, 2011.
  • Shaw & Jebara (2009) Shaw, Blake and Jebara, Tony. Structure preserving embedding. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009, Montreal, Quebec, Canada, June 14-18, 2009, pp. 937–944, 2009. doi: 10.1145/1553374.1553494. URL http://doi.acm.org/10.1145/1553374.1553494.
  • Sindhwani et al. (2015) Sindhwani, V., Sainath, T., and Kumar, S. Structured transforms for small-footprint deep learning. In NIPS, 2015.
  • Sivakumar (2002) Sivakumar, D. Algorithmic derandomization via complexity theory. In STOC, 2002.
  • Szlam et al. (2012) Szlam, Arthur, Gregor, Karol, and LeCun, Yann. Fast approximations to structured sparse coding and applications to object classification. In Computer Vision - ECCV 2012 - 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part V, pp. 200–213, 2012. doi: 10.1007/978-3-642-33715-4˙15. URL http://dx.doi.org/10.1007/978-3-642-33715-4_15.
  • Vybíral (2011) Vybíral, J. A variant of the johnson–lindenstrauss lemma for circulant matrices. Journal of Functional Analysis, 260(4):1096 – 1105, 2011.
  • Wang et al. (2012) Wang, Jun, Kumar, Sanjiv, and Chang, Shih-Fu. Semi-supervised hashing for large-scale search. IEEE Trans. Pattern Anal. Mach. Intell., 34(12):2393–2406, 2012. doi: 10.1109/TPAMI.2012.48. URL http://dx.doi.org/10.1109/TPAMI.2012.48.
  • Weiss et al. (2008) Weiss, Y., Torralba, A., and Fergus, R. Spectral hashing. In NIPS, 2008.
  • White et al. (2004) White, O. L., Lee, D. D., and Sompolinsky, H. Short-term memory in orthogonal neural networks. Physical review letters, 92(14), 2004.
  • Yap et al. (2011) Yap, H.L., Eftekhari, A., Wakin, M.B., and Rozell, C.J. The restricted isometry property for block diagonal matrices. In CISS, 2011.
  • Yi et al. (2015) Yi, X., Caramanis, C., and Price, E. Binary embedding: Fundamental limits and fast algorithm. CoRR, abs/1502.05746, 2015.
  • Yu et al. (2014) Yu, F. X., Kumar, S., Gong, Y., and Chang, S.-F. Circulant binary embedding. In ICML, 2014.
  • Yu et al. (2015) Yu, Felix X., Bhaskara, Aditya, Kumar, Sanjiv, Gong, Yunchao, and Chang, Shih-Fu. On binary embedding using circulant matrices. CoRR, abs/1511.06480, 2015. URL http://arxiv.org/abs/1511.06480.

7 Proof of Theorem 4.1

We start with the following technical lemma:

Lemma 7.1.

Let be the set of independent random variables defined on such that each has the same distribution and . Let be the set of events, where each is in the -field defined by (in particular does not depend on the -field ). Assume that there exists such that: for . Let be the set of random variables such that and for , where stands for the random variable truncated to the event . Assume furthermore that for . Denote . Then the following is true.

(6)
Proof.

Let us consider the event = . Note that may be represented by the union of the so-called -blocks, i.e.

(7)

where stands for the complement of event . Let us fix now some . Denote

(8)

note that . It follows directly from the Bernoulli scheme.

Denote . From what we have just said and from the definition of we conclude that for any given the following holds:

(9)

Note also that from the assumptions of the lemma we trivially get: .

Let us consider now the expression .

We get: .

From (9) we get:

(10)

Let us consider now the expression:

(11)

We have:

(12)

From Stirling's formula we get: . Thus we obtain:

(13)

for large enough.

Now we will use the following version of standard Azuma’s inequality:

Lemma 7.2.

Let be independent random variables such that . Assume that for . Then the following is true:

Now, using Lemma 7.2 for and we obtain:

(14)

Combining (13) and (14), we obtain the statement of the lemma.

Our next lemma explains the role the Hadamard matrix plays in the entire extended -regular hashing mechanism.

Lemma 7.3.

Let denote the data dimensionality and let be an arbitrary positive function. Let be the set of all -normalized data points, where no two data points are identical. Assume that . Consider the hyperplanes spanned by pairs of different vectors from . Then after applying the linear transformation each hyperplane is transformed into another hyperplane . Furthermore, the probability that for every there exist two orthonormal vectors in such that: satisfies:

Proof.

We have already noted in the proof of Lemma 4.1 that is an orthogonal matrix. Thus, as an isometry, it clearly transforms each -dimensional hyperplane into another -dimensional hyperplane. For every pair , let us consider an arbitrary fixed orthonormal pair spanning . Denote . Let us denote by vector obtained from after applying transformation . Note that the coordinate of is of the form:

(15)

where are independent random variables satisfying:

(16)

The latter comes straightforwardly from the form of the -normalized Hadamard matrix (i.e. a Hadamard matrix where each row and column is -normalized).

But then, from Lemma 7.2, and the fact that , we get for any :

(17)

Similar analysis is correct for . Note that is orthogonal to since and are orthogonal. Furthermore, both and are -normalized. Thus is an orthonormal pair.

To complete the proof, it suffices to take and apply the union bound over all vectors , for all hyperplanes. ∎

From the lemma above we see that applying the Hadamard matrix enables us to assume, with high probability, that for every hyperplane there exists an orthonormal basis consisting of vectors with elements of absolute value at most . We call this event . Note that whether holds or not is determined only by , and the initial dataset .

Let us proceed with the proof of Theorem 4.1. Let us assume that event holds. Without loss of generality we may assume that we have the short -regular hashing mechanism with an extra property that every has an orthonormal basis consisting of vectors with elements of absolute value at most . Fix two vectors from the dataset . Denote by the orthonormal basis of with the above property. Let us fix the th row of and denote it as . After being multiplied by the diagonal matrix we obtain another vector:

(18)

where:

(19)

We have already noted in the proof of Lemma 4.1 that it is the projection of into that determines whether the value of the associated random variable is or . To be more specific, we showed that iff the projection is in the region . Let us write down the coordinates of the projection of into in the -coordinate system. The coordinates are the dot products of with and respectively; thus in the -coordinate system we can write as:

(20)

Note that both coordinates are Gaussian random variables and they are independent since they were constructed by projecting a Gaussian vector onto two orthogonal vectors. Now note that from our assumption about the structure of we can conclude that both coordinates may be represented as sums of weighted Gaussian random variables for , i.e.: