1 Introduction
Learning representations that are invariant to irrelevant transformations of the input is an important step towards building recognition systems automatically. Invariance is a key property of some cells in the mammalian visual cortex. Cells in high-level areas of the visual cortex respond to object categories and are invariant to a wide range of variations of the object (pose, illumination, conformation, instance, etc.). The simplest known example of invariant representations in the visual cortex are the complex cells of V1, which respond to edges of a given orientation but are activated by a wide range of positions of the edge. Many artificial object recognition systems have built-in invariances, such as the translational invariance of convolutional networks [1] or SIFT descriptors [2]. An important question is how useful invariant representations of the visual world can be learned from unlabeled samples.
In this paper we introduce an algorithm for learning features that are invariant (or robust) to common image transformations that typically occur between successive frames of a video, or statistically within a single frame. While the method is quite simple, it is also computationally efficient and possesses provable bounds on the speed of inference. The first component of the model is a layer of sparse coding. Sparse coding [3] constructs a dictionary matrix W so that input vectors can be represented by a linear combination of a small number of columns of the dictionary matrix. Inference of the feature vector z representing an input vector x is performed by finding the z that minimizes the following energy function

E(x, z) = (1/2) ||x − W z||² + α ||z||₁,   (1)

where α is a positive constant. The dictionary matrix W is learned by minimizing min_z E(x, z) averaged over a set of training samples x, while constraining the columns of W to have norm 1.
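Inference in (1) is a standard ℓ1-regularized least-squares problem, so any proximal-gradient solver applies. Below is a minimal ISTA sketch of this inference step; the sizes, the sparsity constant, and the iteration count are illustrative choices, not values from the paper:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (the "shrinkage" function).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def sparse_code(x, W, alpha, n_iter=200):
    """Minimize E(x, z) = 0.5 ||x - W z||^2 + alpha ||z||_1 over z by ISTA."""
    L = np.linalg.norm(W, 2) ** 2          # Lipschitz constant of the smooth part
    z = np.zeros(W.shape[1])
    for _ in range(n_iter):
        grad = W.T @ (W @ z - x)           # gradient of 0.5 ||x - W z||^2
        z = soft_threshold(z - grad / L, alpha / L)
    return z
```

With a dictionary whose columns are normalized to unit norm, the returned code is exactly sparse for moderate α, since the shrinkage step zeroes sub-threshold components.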
The first idea of the proposed method is to accumulate the sparse feature vectors representing successive frames of a video, or versions of an image distorted by transformations that do not affect the nature of its content:

z = Σ_{t=1}^{T} z_t,   z_t = argmin_z E(x_t, z),   (2)

where the sum runs over the distorted images x_1, …, x_T. The second idea is to connect a second sparse coding layer on top of the first one that captures dependencies between components of the accumulated sparse code vector. This second layer models the vector z using an invariant code u, which is the minimizer of the following energy function

E(z, u) = Σ_i e^{−(A u)_i} |z_i| + β ||u||₁,   u ≥ 0,   (3)

where ||u||₁ denotes the ℓ1 norm of u, A is a matrix, and β is a positive constant controlling the sparsity of u. Unlike in traditional sparse coding, in this method the dictionary matrix A interacts multiplicatively with the input z. As in traditional sparse coding, the matrix A is trained by gradient descent to minimize the average energy for the optimal u over a training set of vectors z obtained as stated above. The columns of A are constrained to be normalized to 1. Essentially, the matrix A will connect a component of u to a set of components of z if these components of z co-occur frequently. When a component of u turns on, it has the effect of lowering the sparsity coefficients of the components of z to which it is strongly connected through the matrix A. To put it another way, if a set of components of z often turn on together, the matrix A will connect them to a component of u. Turning on this component of u lowers the overall energy (if β is small enough), because the whole set of components of z will see their coefficients lowered (the exponential terms). Hence, each unit of u will connect units of z that often turn on together within a sequence of images. These units will typically represent distorted versions of a feature.
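One way to read (3) is as a weighted ℓ1 penalty on z whose per-component weight e^{−(Au)_i} drops when an invariant unit connected to component i is active. A tiny numeric sketch (the group structure, sizes, and constants are made up for illustration):

```python
import numpy as np

def second_layer_energy(z, u, A, beta):
    """E(z, u) = sum_i exp(-(A u)_i) |z_i| + beta ||u||_1, with u >= 0."""
    return np.exp(-(A @ u)) @ np.abs(z) + beta * np.sum(np.abs(u))

# Two groups of simple units; column j of A lists the group that invariant unit j pools.
A = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])
z = np.array([0.9, 0.8, 0.0, 0.0])   # only the first group is active

e_off = second_layer_energy(z, np.array([0.0, 0.0]), A, beta=0.1)
e_on = second_layer_energy(z, np.array([1.0, 0.0]), A, beta=0.1)
```

Turning on the invariant unit that pools the active group lowers the energy whenever β is small enough: here exp(−1)·1.7 + 0.1 < 1.7.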
The energies (1) and (3) can be naturally combined into a single model of x, z, and u, as explained in section 2, where the second layer essentially modulates the sparsity of the first layer. A single model of the image is more natural. For the invariance properties we did not find much qualitative difference between the two, and since the split model has provable inference bounds, we present the invariance results for separate training. However, the two-layer model should capture the statistics of an image better. To demonstrate this, we compared the inpainting capability of the one- and two-layer models and found that the two-layer model does a better job; for these experiments, the combined two-layer model is necessary. We also found that, even though the assumptions behind the fast inference bounds are not satisfied for the two-layer model, empirically the inference is fast in this case as well.
1.1 Prior work on Invariant Feature Learning
The first way to implement invariance is to take a known invariance, such as translational invariance in images, and put it directly into the architecture. This has been highly successful in convolutional neural networks [1] and SIFT descriptors [2] and their derivatives. The major drawback of this approach is that it works only for known invariances, not for unknown ones such as invariance to the instance of an object; a system that discovers invariances on its own would be preferable. A second type of invariance implementation is considered in the framework of sparse coding or independent component analysis. The idea is to change the cost function on hidden units in a way that encourages co-occurring units to be close together in some space [4, 5]. This is achieved by pooling units that are close in that space, which groups different inputs together, producing a form of invariance. The drawback of this approach is that it requires some sort of embedding in space and that the filters have to arrange themselves. In the third approach, rather than forcing units to arrange themselves, we let them learn whatever representations they want and instead figure out which units to pool together. In [6, 7], this was achieved by modulating the covariance of the simple units with complex units.
The fourth approach to invariance uses the following idea: if the inputs follow one another in time, they are likely a consequence of the same cause. We would like to discover that cause, and therefore look for representations that are common to all frames. This has been achieved in several ways. In slow feature analysis [8, 9, 10] one forces the representation to change slowly. In the temporal product network [11] one breaks the input into two representations: one that is common to all frames and one that is complementary. In [12] the idea is similar, but in addition the complementary representation specifies movement. In the simplest instance of hierarchical temporal memory [13] one forms groups based on the transition matrix between states. Finally, [14] is a structured model of video.
Many approaches to learning invariance are inspired by the fact that the neocortex learns to create invariant representations; consequently, these approaches are not focused on creating efficient algorithms. In this paper, we give an efficient learning algorithm that falls into the framework of the third and fourth approaches. The basic idea is to modulate the sparsity of sparse coding units using higher-level units that are also sparse. The fourth approach is implemented by using the same higher-level representation for several consecutive time frames. In form, our model is similar to that of [6, 7] but a little simpler. In a sense, comparing our model to [6, 7] is similar to comparing sparse coding to independent component analysis: independent component analysis is a probabilistic model, whereas sparse coding attempts to reconstruct the input in terms of a few active hidden units. The advantage of sparse coding is that it is simpler and easier to optimize. There exist several very efficient inference and learning algorithms [15, 16, 17, 18, 19], and sparse coding has been applied to a large number of problems. It is this simplicity that allows efficient training of our model. The inference algorithm is closely derived from the fast iterative shrinkage-thresholding algorithm (FISTA) [15] and has a convergence rate of O(1/k²), where k is the number of iterations.
2 The Model
The model described above comprises two separately trained modules whose inference is performed separately. However, one can devise a unified model with a single energy function that is conceptually simpler:

E(x_1, …, x_T, z_1, …, z_T, u) = Σ_{t=1}^{T} [ (1/2) ||x_t − W z_t||² + α Σ_i h((A u)_i) |z_{t,i}| ] + β ||u||₁,   (5)

where h is a positive, decreasing modulation function playing the role of the exponential of the split model above.
Given a set of inputs, the goal of training is to minimize min_{z,u} E averaged over the training set. We do this by choosing one input sequence at a time, minimizing (5) over z_1, …, z_T and u with W and A fixed, then fixing the resulting codes, and taking a step in the negative gradient direction of W and A (stochastic gradient descent). An algorithm for finding the minimizing z_1, …, z_T and u is given in section 4. It consists of taking steps in z and u separately, each of which lowers the energy. Note: the sparsity-modulation function in (5) is different from the exponential of the simple (split) model. The reason is that, in our experiments, either the u units lowered the sparsity of z too much, not resulting in a sparse code, or the u units did not turn on at all.
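The training procedure just described alternates inference with a stochastic gradient step on the dictionary, renormalizing its columns after each step. A sketch of this loop for the first layer alone (the learning rate, sizes, and iteration counts are our assumptions):

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def infer_z(x, W, alpha, n_iter=100):
    # ISTA inference for the first-layer code (see (1)).
    L = np.linalg.norm(W, 2) ** 2
    z = np.zeros(W.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z - W.T @ (W @ z - x) / L, alpha / L)
    return z

def train_dictionary(samples, m, alpha=0.1, lr=0.05, epochs=5, seed=0):
    """One-sample-at-a-time gradient descent on W, columns kept at unit norm."""
    rng = np.random.default_rng(seed)
    n = samples.shape[1]
    W = rng.normal(size=(n, m))
    W /= np.linalg.norm(W, axis=0)
    for _ in range(epochs):
        for x in samples:
            z = infer_z(x, W, alpha)            # minimize over the code ...
            W -= lr * np.outer(W @ z - x, z)    # ... then step on dE/dW = (Wz - x) z^T
            W /= np.maximum(np.linalg.norm(W, axis=0), 1e-8)  # renormalize columns
    return W
```

The same loop extends to the matrix A by inferring u and stepping on the gradient of the second-layer energy with respect to A.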
2.1 A Toy Example
We now describe a toy example that illustrates the main idea of the model [20]. The input is an image patch consisting of a subset of a set of parallel lines of four different orientations and ten different positions per orientation. For any given input, only lines of the same orientation can be present, Figure 6a (different orientations have equal probability, and for a given orientation a line of this orientation is present with probability 0.2, independently of the others). This is a toy example of a texture. Training sparse coding on this input results in filters similar to those in Figure 6b. We see that a given simple unit responds to one particular line. The noisy filters correspond to simple units that are inactive; this happens because there are only 40 discrete inputs. In realistic data such as natural images, we have a continuum and typically all units are used. Clearly, sparse coding cannot capture all the statistics present in the data: the simple units are not independent. We would like to learn that the units corresponding to lines of a given orientation usually turn on simultaneously. We trained (5) on this data, resulting in the filters in Figure 6b,c. The filters of the simple units of this full model are similar to those obtained by training just the sparse coding. The invariant units pool together simple units with filters corresponding to lines of the same orientation. This makes the invariant units invariant to the pattern of lines and dependent only on the orientation. Only four invariant units were active, corresponding to the four groups. As in sparse coding, on realistic data such as natural images all invariant units become active and distribute themselves with overlapping filters, as we will see below.
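The toy data are easy to regenerate. Here is a sketch assuming 10×10 patches, so that four orientations with ten positions each give the 40 discrete line patterns; the patch size is our choice, and the diagonal lines near the corners come out shorter than ten pixels:

```python
import numpy as np

def line_patterns(size=10):
    """40 binary patterns: lines of 4 orientations x 10 positions, flattened."""
    pats = []
    for p in range(size):
        h = np.zeros((size, size)); h[p, :] = 1.0
        v = np.zeros((size, size)); v[:, p] = 1.0
        pats.append(h)                                   # horizontal
        pats.append(v)                                   # vertical
        pats.append(np.eye(size, k=p - size // 2))       # one diagonal (shorter at corners)
        pats.append(np.fliplr(np.eye(size, k=p - size // 2)))  # other diagonal
    return np.array(pats).reshape(4 * size, -1)

def sample_patch(rng, size=10, p_line=0.2):
    """Pick one orientation; include each of its lines independently with prob. 0.2."""
    pats = line_patterns(size)
    ori = rng.integers(4)
    mask = rng.random(size) < p_line
    if not mask.any():
        return np.zeros(size * size)
    idx = [4 * p + ori for p in range(size)]   # the ten patterns of that orientation
    return np.clip(pats[idx][mask].sum(axis=0), 0.0, 1.0)
```

Training the sparse-coding loop on many such patches should reproduce the single-line filters described above.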
Let us now discuss the motivation behind introducing a sequence of inputs x_1, …, x_T in (5). Inputs that follow one another in time are usually a consequence of the same cause. We would like to discover that cause. This cause is something that is present in all frames, and therefore we look for a single representation u in (5) that is common to all the frames.
Another interesting point about the model (5) is that a nonzero u lowers the sparsity coefficients of the units of z that belong to a group, making them more likely to become activated. This means that the model can utilize higher-level information (which group is present) to modulate the activity of the lower layer. This is a desirable property for multi-layer systems, because different parts of the system should propagate their beliefs to other parts. In our invariance experiments the results for the unified model were very similar to those of the simple (split) model. Below we show the results of this simple model, because it is simple and because we provably know an efficient inference algorithm for it. However, in section 4 we will revisit the full system, generalize it to an n-layer system, give an algorithm for training it, and prove that under some assumptions of convexity the algorithm again has provably efficient inference. In the final section we use the full system for inpainting and show that it generalizes better than a single-layer system.
3 Efficient Inference for the Simplified Model
Here we discuss how to find z and u efficiently, and give the numerical results of the paper. The results for the full model (5) were similar.
3.1 FISTA training algorithm
The advantage of (3) compared to (5) is that the fast iterative shrinkage-thresholding algorithm (FISTA) [15] applies to it directly. FISTA applies to problems of the form min_u F(u) = f(u) + g(u), where:

f is continuously differentiable, convex, and has a Lipschitz gradient, that is, ||∇f(u₁) − ∇f(u₂)|| ≤ L(f) ||u₁ − u₂||. The constant L(f) is the Lipschitz constant of ∇f.

g is continuous, convex, and possibly nonsmooth.

The problem is assumed to have a solution u*. In our case f(u) = Σ_i e^{−(A u)_i} |z_i| and g(u) = β ||u||₁, which satisfies these assumptions (u is initialized with nonnegative entries, which stay nonnegative during the algorithm without a need to force it). The iterates converge with the bound F(u_k) − F(u*) ≤ C/k², where u_k is the value of u at the k-th iteration and C is a constant. The cost of each iteration is linear in the product of the input and output sizes; more precisely, the cost is one matrix multiplication each by A and by Aᵀ, plus a cost linear in the number of units. We used the backtracking version of the algorithm to find L, which adds a fixed number of operations (independent of the desired error). It is standard knowledge, and easy to see, that the algorithm applies to the sparse coding problem (1) as well.
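Concretely, for the second layer f(u) = Σ_i |z_i| e^{−(Au)_i} and g(u) = β||u||₁ with u ≥ 0, so one FISTA iteration is a gradient step on f, a nonnegative shrinkage, and a momentum extrapolation. A sketch with a fixed step size in place of backtracking (the sizes and the step are illustrative):

```python
import numpy as np

def fista_invariant(z, A, beta, step, n_iter=300):
    """Minimize f(u) + g(u), f(u) = sum_i |z_i| exp(-(A u)_i), g = beta ||u||_1, u >= 0."""
    u = np.zeros(A.shape[1])
    y = u.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = -A.T @ (np.abs(z) * np.exp(-(A @ y)))          # gradient of f at y
        u_new = np.maximum(y - step * grad - step * beta, 0.0)  # nonnegative shrinkage
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = u_new + ((t - 1.0) / t_new) * (u_new - u)           # momentum extrapolation
        u, t = u_new, t_new
    return u
```

On the two-group toy structure from section 1, only the invariant unit pooling the active components turns on.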
3.2 Results
The input to the network was prepared as follows. We converted all the images of the Berkeley dataset into grayscale. We locally removed the mean of each pixel by subtracting a Gaussian-weighted average of the nearby pixels. Then we locally normalized the contrast by dividing each pixel by the Gaussian-weighted standard deviation of the nearby pixels (with a small cutoff to prevent blow-ups); the two Gaussians had the same width. Then we picked a window in the image and, for a randomly chosen direction and magnitude of displacement, moved it over several frames and extracted them. A very large collection of such triplets of frames was extracted. We trained the sparse coding algorithm on each individual frame (not on the concatenated frames). After training, we found the sparse codes for each frame and trained the layer of invariant units u on them. For a larger system, see the supplementary material. The results are shown in Figure 2; see the caption for a description. We see that many invariant cells learn to group together filters of similar orientation and frequency but at several positions, and thus learn invariance with respect to translations. However, there are other types of filters as well. Remember that the algorithm learns statistical co-occurrence between features, whether in time or in space.
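The preprocessing is local mean removal followed by divisive local contrast normalization, both with Gaussian weighting. A numpy-only sketch; the Gaussian width and the cutoff value are our choices, since the exact values are not recoverable from the text:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian filter implemented with numpy only."""
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    pad = np.pad(img, r, mode='reflect')
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode='valid'), 0, tmp)

def preprocess(img, sigma=5.0, eps=1e-2):
    """Local mean subtraction, then divisive local contrast normalization."""
    img = np.asarray(img, dtype=float)
    centered = img - gaussian_blur(img, sigma)          # remove Gaussian-weighted mean
    local_sd = np.sqrt(gaussian_blur(centered ** 2, sigma))  # Gaussian-weighted std. dev.
    return centered / np.maximum(local_sd, eps)         # cutoff prevents blow-ups
```

A constant image maps to zero, and the output has roughly unit local variance in textured regions.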
The values of the weights give us important information about the properties of the system. However, ultimately we are interested in how the system responds to an input. We study the response of the units to a commonly occurring input: an edge. Specifically, the inputs are given by the following function.
(6)  
(7)  
(8) 
where p is the position of a pixel from the center of a patch, d is a real number specifying the distance of the edge from the center, and θ is the orientation of the edge relative to the x axis. This is not an edge function, but the function obtained from an edge after local mean subtraction.
The responses of the simple units and the invariant units are shown in Figure 3; see the caption for a description. As expected, the sparse coding units respond to edges in a narrow range of positions and a relatively narrow range of orientations. The invariant cells, on the other hand, are able to pool different sparse coding units together and become invariant to a larger range of positions. Thus the invariant units do indeed have the desired invariance properties.
Note that for large sparsities the pooling regions have clear boundaries and are quite sharp. This is similar to the standard implementation of a convolutional net, where the pooling regions are squares (with clear boundaries). It is probably preferable to have regions that overlap, as happens at lower sparsities, since one would prefer smooth responses rather than jumps across boundaries.
4 Theoretical analysis of the full model and its multilayer generalization
In this section we return to the full model (5). We generalize it to an n-layer system, give an inference algorithm, and outline the proof that under certain assumptions of convexity the algorithm has the fast O(1/k²) convergence of FISTA, where k is the iteration number.
The basic idea of minimizing (5) over z and u is to alternate between taking an energy-lowering step in z while fixing u, and taking an energy-lowering step in u while fixing z. Note that both of the restricted problems (the problem in z with u fixed, and the problem in u with z fixed) satisfy the conditions of the FISTA algorithm. This will allow us to take steps of appropriate size that are guaranteed to lower the total energy. Before that, however, we generalize the problem, which reveals its structure and does not introduce any additional complexity.
Consider a system consisting of n layers with m_a units in the a-th layer, a = 1, …, n. We define z = (z¹, …, zⁿ), that is, all the layer vectors concatenated. We define two sets of functions. Let f_j^a(z^{a−1}, z^a) be continuously differentiable, convex functions with Lipschitz gradients; there can be several such functions per layer, which is denoted by the index j. Let g^a(z^a) be continuous and convex functions, not necessarily smooth. For convenience we define z⁰ = x, f = Σ_{a,j} f_j^a, and g = Σ_a g^a.
We define the energy of the system to be

E(x, z) = Σ_{a=1}^{n} [ Σ_j f_j^a(z^{a−1}, z^a) + g^a(z^a) ] = f(z) + g(z),   (9)

where in the second equality we drop the x from the notation for simplicity; we will omit writing the x for the rest of the paper. The equation (5) is a special case of (9) with n = 2, z¹ = (z_1, …, z_T), z² = u, f¹(x, z¹) = Σ_t (1/2) ||x_t − W z_t||², f²(z¹, z²) = α Σ_{t,i} h((A u)_i) |z_{t,i}| (with the codes constrained to be nonnegative, so that f² is differentiable), g¹ = 0, and g²(z²) = β ||u||₁.
Now observe that, given z, the problem in z^a keeping the other variables fixed satisfies the conditions of the FISTA algorithm. We can define a step in the variable z^a to be (in analogy to [15], eq. (2.6)):

p_L^a(z) = argmin_v { ⟨v − z^a, ∇_{z^a} f(z)⟩ + (L/2) ||v − z^a||² + g^a(v) } = sh_{λ_a/L}( z^a − (1/L) ∇_{z^a} f(z) ),   (10)

where the latter equality holds if g^a(z^a) = λ_a ||z^a||₁. Here sh is the shrinkage function, sh_τ(v)_i = sign(v_i) max(|v_i| − τ, 0). In the case where z^a is restricted to be nonnegative, we need to use sh⁺_τ(v)_i = max(v_i − τ, 0) instead of the shrinkage function.
Let us describe the algorithm for minimizing (9) with respect to z (we will write it explicitly below). In order from a = 1 to n, take the step (10). Repeat until the desired accuracy is reached. The constants L_a have to be chosen so that

f(p_{L_a}^a(z)) ≤ f(z) + ⟨p_{L_a}^a(z) − z^a, ∇_{z^a} f(z)⟩ + (L_a/2) ||p_{L_a}^a(z) − z^a||².   (11)

This can be assured by taking L_a ≥ L(∇_{z^a} f), where the latter denotes the Lipschitz constant of its argument. Otherwise, as used in our simulations, L_a can be found by backtracking; see below. This assures that each step lowers the total energy (9), and hence the overall procedure keeps lowering it. In fact, the step with such a chosen L_a is in some sense a step with an ideal step size. Let us now write the algorithm explicitly:
Hierarchical (F)ISTA.

Step 0. Take y₁ = z₀ ∈ R^{m₁ + ⋯ + m_n}, some L₀^a > 0 and η > 1. Set t₁ = 1.

Step k (k ≥ 1).

Loop a = 1:n { Backtracking

{

Find the smallest nonnegative integer i_k such that, with L̄ = η^{i_k} L_{k−1}^a,

f(p_{L̄}^a(y_k)) ≤ f(y_k) + ⟨p_{L̄}^a(y_k) − y_k^a, ∇_{z^a} f(y_k)⟩ + (L̄/2) ||p_{L̄}^a(y_k) − y_k^a||².   (13)

Set L_k^a = η^{i_k} L_{k−1}^a }

Compute

z_k^a = p_{L_k^a}^a(y_k),   (14)

t_{k+1} = (1 + √(1 + 4 t_k²)) / 2,   (15)

y_{k+1}^a = z_k^a + ((t_k − 1)/t_{k+1}) (z_k^a − z_{k−1}^a).   (16)

}

The algorithm described at the beginning of this section is this algorithm with the choice t_{k+1} = 1 in the second-to-last line.
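With the choice t_{k+1} = 1, every per-layer step is guaranteed not to raise the energy, which is easy to check numerically. Below is a sketch for a two-layer special case of (9), with the code z constrained to be nonnegative so that the cross term is smooth (a simplification for illustration); the u-step uses backtracking on the energy itself, and all sizes are made up:

```python
import numpy as np

def energy(x, z, u, W, A, alpha, beta):
    """Two-layer special case of (9), with z >= 0 and u >= 0."""
    return (0.5 * np.sum((x - W @ z) ** 2)
            + alpha * np.exp(-(A @ u)) @ z
            + beta * np.sum(u))

def layer_step(var, grad, l1, step):
    # Gradient step followed by the nonnegative shrinkage sh^+.
    return np.maximum(var - step * grad - step * l1, 0.0)

def hierarchical_ista(x, W, A, alpha, beta, n_iter=50):
    z = np.zeros(W.shape[1])
    u = np.zeros(A.shape[1])
    energies = []
    step_z = 1.0 / np.linalg.norm(W, 2) ** 2   # exact Lipschitz step for the z-block
    for _ in range(n_iter):
        # Layer 1: the cross term is linear in z (z >= 0), so it only shifts the gradient.
        gz = W.T @ (W @ z - x) + alpha * np.exp(-(A @ u))
        z = np.maximum(z - step_z * gz, 0.0)
        # Layer 2: backtracking, accepting the step only if the energy does not rise.
        gu = -alpha * A.T @ (np.exp(-(A @ u)) * z)
        e0 = energy(x, z, u, W, A, alpha, beta)
        step_u = 1.0
        while step_u > 1e-10:
            u_new = layer_step(u, gu, beta, step_u)
            if energy(x, z, u_new, W, A, alpha, beta) <= e0:
                u = u_new
                break
            step_u *= 0.5
        energies.append(energy(x, z, u, W, A, alpha, beta))
    return z, u, energies
```

The recorded energies are monotonically nonincreasing, which is the guarantee the t = 1 choice provides.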
Let us discuss the choice of the t_k's. For a single-layer system (n = 1), the choice t_k = 1 is called ISTA and has a convergence bound of O(1/k). The other choice of t_k above gives the FISTA algorithm. The convergence rate of FISTA is much better than that of ISTA, having k² in the denominator.
For the hierarchical system, the choice t_k = 1 guarantees that each step lowers the energy. The question is whether introducing the other choice of t_k would speed up convergence to the FISTA rate. The trouble is that in general the product terms f_j^a are jointly nonconvex, which is the case for (5). For example, we can readily see that if the function has more than one local minimum, this convergence certainly cannot be guaranteed (imagine starting at a non-minimal point with zero derivative). The effect of t_k is that of momentum, and this momentum increases towards one as k increases. With such a large momentum the system is in danger of "running around". It might be effective to introduce this momentum but to regularize it (say, bound it by a number smaller than one). In any case, one can always use the algorithm with t_k = 1.
In the special case when all the f_j^a are jointly convex, however, we give an outline of the proof that the algorithm converges at the FISTA rate. For this purpose we define the full step in z, p_L(z), to be the result of the sequence of steps (10) from a = 1 to n; that is, p_L(z) = p_L^n(⋯ p_L^1(z) ⋯). We assume that all the L_a's are the same (this is always possible by setting all L_a equal to the largest value).
The core of the proof is to show the analog of Lemma 2.3 of [15]:
Lemma 2.3: Assume that F = f + g, where f is continuously differentiable and convex with a Lipschitz gradient, g is continuous and convex, and p_L(y) is defined by the sequence of steps (10) as described above. Then for any y, z,

F(z) − F(p_L(y)) ≥ (L/2) ||p_L(y) − y||² + L ⟨y − z, p_L(y) − y⟩.   (17)
The proof in [15] shows that if the algorithm consists of applying the sequence of steps p_L, and these steps satisfy Lemma 2.3, then the algorithm converges at the rate O(1/k²). Thus we need to prove Lemma 2.3. We start with the analog of Lemma 2.2 of [15].
Lemma 2.2: For any y, one has z = p_L^a(y) if and only if there exists γ(y) ∈ ∂g^a(z), the subdifferential of g^a at z, such that

∇_{z^a} f(y) + L (z − y^a) + γ(y) = 0.   (18)

This lemma follows trivially from the definition of p_L^a, as in [15].
Proof of Lemma 2.3: Define z' = p_L(y). From convexity we have

f(z) ≥ f(y) + ⟨z − y, ∇f(y)⟩,   g^a(z^a) ≥ g^a(z'^a) + ⟨z^a − z'^a, γ^a(y)⟩.   (19)

Next we have the property (22). However, the y should be primed (y') because the earlier layers have already been updated. Due to space limitations we will not write out all the calculations, but we specify the sequence of operations; the details are written out in the supplementary material. We take the first term on the left side of (20), F(z), and express it in terms of (9). Then we replace the terms using the convexity inequalities and substitute the γ's using Lemma 2.2. Then we take the second term of the left side of (20), F(p_L(y)), again express it using (9), and use the inequalities (22). Putting it all together, all the gradient terms cancel and the remaining terms combine to give Lemma 2.3. This completes the proof.
5 Conclusions
We introduced a simple and efficient algorithm for learning invariant representations from unlabeled data. The method takes advantage of temporal consistency in sequential image data. In the future we plan to feed the invariant features discovered by the method into hierarchical vision architectures, and to apply them to recognition problems.
References
[1] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989.

[2] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.

[3] B. A. Olshausen and D. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996.

[4] A. Hyvarinen and P. O. Hoyer. A two-layer sparse coding model learns simple and complex cell receptive fields and topography from natural images. Vision Research, 41(18):2413–2423, 2001.

[5] K. Kavukcuoglu, M. Ranzato, R. Fergus, and Y. LeCun. Learning invariant features through topographic filter maps. In Proc. International Conference on Computer Vision and Pattern Recognition (CVPR'09). IEEE, 2009.

[6] Y. Karklin and M. S. Lewicki. Learning higher-order structures in natural images. Network: Computation in Neural Systems, 14(3):483–499, 2003.

[7] Y. Karklin and M. S. Lewicki. Emergence of complex cell properties by learning to generalize in natural scenes. Nature, 457(7225):83–86, 2008.

[8] L. Wiskott and T. J. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4):715–770, 2002.

[9] P. Berkes and L. Wiskott. Slow feature analysis yields a rich repertoire of complex cell properties. Journal of Vision, 5(6):579–602, 2005.

[10] J. Bergstra and Y. Bengio. Slow, decorrelated features for pretraining complex cell-like networks. In Advances in Neural Information Processing Systems 22 (NIPS'09), 2009.

[11] K. Gregor and Y. LeCun. Emergence of complex-like cells in a temporal product network. arXiv:1006.0448, 2010.

[12] C. Cadieu and B. Olshausen. Learning transformational invariants from natural movies. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 209–216, 2009.

[13] D. George and J. Hawkins. Towards a mathematical theory of cortical micro-circuits. 2009.

[14] P. Berkes, R. E. Turner, and M. Sahani. A structured model of video reproduces primary visual cortical organisation. 2009.

[15] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.

[16] Y. Li and S. Osher. Coordinate descent optimization for l1 minimization with application to compressed sensing: a greedy algorithm. Inverse Problems and Imaging, 3(3):487–503, 2009.

[17] E. T. Hale, W. Yin, and Y. Zhang. Fixed-point continuation for l1-minimization: Methodology and convergence. SIAM Journal on Optimization, 19:1107, 2008.

[18] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. In NIPS'06, 2006.

[19] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding. In ICML'09, 2009.

[20] Example suggested by Geoffrey Hinton, prior to the current authors.
Appendix A Supplementary material for Efficient Learning of Sparse Invariant Representations
1) We give the details of the proof of Lemma 2.3.

2) We show all of the invariant filters for a system with 20×20-pixel input patches, 1000 simple units, and 400 invariant units.
A.1 Lemma 2.3
Lemma 2.3: Assume that F = f + g, where f is continuously differentiable and convex with a Lipschitz gradient, g is continuous and convex, and p_L(y) is defined by the sequence of steps p_L^a in the paper. Then for any y, z,

F(z) − F(p_L(y)) ≥ (L/2) ||p_L(y) − y||² + L ⟨y − z, p_L(y) − y⟩.   (20)
Proof of Lemma 2.3:

Define z' = p_L(y). We first collect the inequalities that we will need.

From convexity we have

f(z) ≥ f(y) + ⟨z − y, ∇f(y)⟩,   g^a(z^a) ≥ g^a(z'^a) + ⟨z^a − z'^a, γ^a(y)⟩.   (21)

Next we have the property of the step that guarantees that the energy is lowered at each step:

F(z') ≤ f(y) + ⟨z' − y, ∇f(y)⟩ + (L/2) ||z' − y||² + g(z').   (22)

Finally, we have Lemma 2.2:

∇_{z^a} f(y) + L (z'^a − y^a) + γ^a(y) = 0.   (23)
Now we can put these inequalities together. The steps are: write out the left side of (20) in terms of the definition of E; use the inequalities (21) and (22); eliminate the γ's using (23); simplify. Here are the details:

(24)

which is the formula (20). Note that in line 5, and in the first term of line 6, we shifted the layer index by one. This completes the proof.