Deep Predictive Coding Networks

01/16/2013 ∙ by Rakesh Chalasani, et al. ∙ University of Florida

The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model, which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied to natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise.





1 Introduction

The performance of machine learning algorithms is dependent on how the data is represented. In most methods, the quality of a data representation is itself dependent on prior knowledge imposed on the representation. Such prior knowledge can be imposed using domain-specific information, as in SIFT [1], HOG [2], etc., or in learning representations using fixed priors like sparsity [3], temporal coherence [4], etc. The use of fixed priors became particularly popular while training deep networks [5, 6, 7, 8]. In spite of the success of these general-purpose priors, they are not capable of adjusting to the context in the data. On the other hand, there are several advantages to having a model that can "actively" adapt to the context in the data. One way of achieving this is to empirically alter the priors in a dynamic and context-sensitive manner. This will be the main focus of this work, with emphasis on visual perception.

Here we propose a predictive coding framework, where a deep locally-connected generative model uses "top-down" information to empirically alter the priors used in the lower layers to perform "bottom-up" inference. The centerpiece of the proposed model is extracting sparse features from time-varying observations using a linear dynamical model. To this end, we propose a novel procedure to infer sparse states (or features) of a dynamical system. We then extend this feature extraction block to introduce a pooling strategy to learn invariant feature representations from the data. In line with other "deep learning" methods, we use these basic building blocks to construct a hierarchical model using greedy layer-wise unsupervised learning. The hierarchical model is built such that the output from one layer acts as an input to the layer above. In other words, the layers are arranged in a Markov chain such that the states at any layer are only dependent on the representations in the layers below and above, and are independent of the rest of the model. The overall goal of the dynamical system at any layer is to make the best prediction of the representation in the layer below using the top-down information from the layers above and the temporal information from the previous states. Hence the name deep predictive coding networks (DPCN).

1.1 Related Work

The DPCN proposed here is closely related to the models proposed in [9, 10], where predictive coding is used as a statistical model to explain cortical functions in the mammalian brain. Similar to the proposed model, they construct hierarchical generative models that seek to infer the underlying causes of the sensory inputs. While Rao and Ballard [9] use an update rule similar to the Kalman filter for inference, Friston [10] proposed a general framework considering all the higher-order moments in a continuous-time dynamic model. However, neither of the models is capable of extracting discriminative information, namely a sparse and invariant representation, from an image sequence that is helpful for high-level tasks like object recognition. Unlike these models, here we propose an efficient inference procedure to extract locally invariant representations from image sequences and progressively extract more abstract information at higher levels in the model.

Other methods used for building deep models, like the restricted Boltzmann machine (RBM) [11], auto-encoders [12, 8] and predictive sparse decomposition [13], are also related to the model proposed here. All these models are constructed on similar underlying principles: (1) like ours, they also use greedy layer-wise unsupervised learning to construct a hierarchical model and (2) each layer consists of an encoder and a decoder. The key to these models is to learn both encoding and decoding concurrently (with some regularization like sparsity [13], denoising [8] or weight sharing [11]), while building the deep network as a feed-forward model using only the encoder. The idea is to approximate the latent representation using only the feed-forward encoder, while avoiding the decoder, which typically requires a more expensive inference procedure. However, in DPCN there is no encoder. Instead, DPCN relies on an efficient inference procedure to get a more accurate latent representation. As we will show below, the use of reciprocal top-down and bottom-up connections makes the proposed model more robust to structured noise during recognition and also allows it to perform low-level tasks like image denoising.

To scale to large images, several convolutional models have also been proposed in a similar deep learning paradigm [5, 7, 6]. Inference in these models is applied over an entire image, rather than over small parts of the input. DPCN can also be extended to form a convolutional network, but this will not be discussed here.

2 Model

In this section, we begin with a brief description of the general predictive coding framework and proceed to discuss the details of the architecture used in this work. The basic block of the proposed model, pervasive across all layers, is a generalized state-space model of the form:

    x_t = f(x_{t-1}, u_t) + v_t
    y_t = g(x_t, u_t) + n_t                                        (1)

where y_t is the data and f(·) and g(·) are some functions that can be parameterized, say by θ. The terms u_t are called the unknown causes. Since we are usually interested in obtaining abstract information from the observations, the causes are encouraged to have a non-linear relationship with the observations. The hidden states, x_t, then "mediate the influence of the cause on the output and endow the system with memory" [10]. The terms n_t and v_t are stochastic and model uncertainty. Several such state-space models can now be stacked, with the output from one acting as an input to the layer above, to form a hierarchy. Such an L-layered hierarchical model at any time t can be described as (when l = 1, i.e., at the bottom layer, u_t^(0) = y_t, the input data):

    u_t^(l-1) = g_l(x_t^(l), u_t^(l)) + n_t^(l)
    x_t^(l)   = f_l(x_{t-1}^(l), u_t^(l)) + v_t^(l),    l = 1, ..., L      (2)

The terms n_t^(l) and v_t^(l) form stochastic fluctuations at the higher layers and enter each layer independently. In other words, this model forms a Markov chain across the layers, simplifying the inference procedure. Notice how the causes at the lower layer form the "observations" to the layer above: the causes form the link between the layers, and the states link the dynamics over time. The important point in this design is that the higher-level predictions influence the lower levels' inference. The predictions from a higher layer non-linearly enter into the state-space model by empirically altering the prior on the causes. In summary, the top-down connections and the temporal dependencies in the state space influence the latent representation at any layer.
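As a purely illustrative sketch, the single-layer generative model in (1) can be simulated with linear choices for f and g; the matrices A, D, C below and the additive role of the causes are assumptions for illustration, not the paper's learned model (in the paper the causes enter non-linearly, through the prior):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative, not from the paper): observation, state and cause dims.
p, n, d = 8, 16, 4

# Hypothetical linear choices for the generic functions f and g in (1).
A = 0.9 * np.eye(n)                           # state-transition part of f
D = 0.1 * rng.standard_normal((n, d))         # assumed additive drive from the causes
C = rng.standard_normal((p, n)) / np.sqrt(n)  # observation map g

def step(x_prev, u, sigma_v=0.01, sigma_n=0.01):
    """One draw from the single-layer generative model in (1)."""
    x = A @ x_prev + D @ u + sigma_v * rng.standard_normal(n)  # x_t = f(x_{t-1}, u_t) + v_t
    y = C @ x + sigma_n * rng.standard_normal(p)               # y_t = g(x_t, u_t) + n_t
    return x, y

x, u = np.zeros(n), rng.standard_normal(d)
for t in range(5):
    x, y = step(x, u)
print(x.shape, y.shape)
```

Stacking several such blocks, with y replaced by the causes from the layer below, gives the hierarchy in (2).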

In the following sections, we will first describe a basic computational network, as in (1), with a particular form of the functions f and g. Specifically, we will consider a linear dynamical model with sparse states for encoding the inputs and the state transitions, followed by a non-linear pooling function to infer the causes. Next, we will discuss how to stack several of these basic networks and learn the resulting hierarchical model. Also, we will discuss how to incorporate the top-down information during inference in the hierarchical model.

Figure 1: (a) Shows a single-layered network on a group of small overlapping patches of the input video. The green bubbles indicate a group of inputs (y), the red bubbles indicate their corresponding states (x) and the blue bubbles indicate the causes (u) that pool all the states within the group. (b) Shows a two-layered hierarchical model constructed by stacking several such basic blocks. For visualization no overlap is shown between the image patches here, but overlapping patches are considered during the actual implementation.

2.1 Dynamic network

To begin with, we consider a dynamic network to extract features from a small part of a video sequence. Let y_t be a p-dimensional input at time t: the vectorized form of a square patch extracted from the same location across all the frames in a video. To process this, our network consists of two distinctive parts (see Figure 1(a)): feature extraction (inferring states) and pooling (inferring causes). For the first part, sparse coding is used in conjunction with a linear state-space model to map the inputs y_t onto an over-complete dictionary of n filters, C, to get sparse states x_t. To keep track of the dynamics in the latent states we use a linear function with state-transition matrix A. More formally, inference of the features is performed by finding the representation x_t that minimizes the energy function:

    E_1(x_t, y_t) = (1/2)||y_t - C x_t||_2^2 + λ||x_t - A x_{t-1}||_1 + Σ_k γ_k |[x_t]_k|    (3)

Notice that the second term, involving the state transition, is also constrained to be sparse to make the state-space representation consistent.
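A minimal sketch of evaluating such a state energy, assuming the quadratic-reconstruction-plus-two-L1-terms form described above (the 1/2 factor and the constant weights `lam` and `gamma` are illustrative choices):

```python
import numpy as np

def state_energy(x, x_prev, y, C, A, lam=0.1, gamma=0.1):
    """Energy of the sparse dynamical states: quadratic reconstruction error,
    a sparse (L1) state-transition term and a sparse-state term.
    Constant sparsity weights are used here for simplicity."""
    recon = 0.5 * np.sum((y - C @ x) ** 2)
    transition = lam * np.sum(np.abs(x - A @ x_prev))
    sparsity = gamma * np.sum(np.abs(x))
    return recon + transition + sparsity

rng = np.random.default_rng(1)
p, n = 8, 16
C = rng.standard_normal((p, n)) / np.sqrt(n)
A = 0.9 * np.eye(n)
x_prev = np.zeros(n)
x = rng.standard_normal(n)
y = C @ x  # perfect reconstruction: only the L1 terms contribute
print(state_energy(x, x_prev, y, C, A) > 0.0)
```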

Now, to take advantage of the spatial relationships in a local neighborhood, a small group of states x_t^(j), j ∈ N, where N represents a set of contiguous patches w.r.t. their position in the image space, are added (or sum pooled) together. Such pooling of the states may lead to local translation invariance. On top of this, d-dimensional causes u_t are inferred from the pooled states to obtain a representation that is invariant to more complex local transformations like rotation, spatial frequency, etc. In line with [14], this invariant function is learned such that it can capture the dependencies between the components in the pooled states. Specifically, the causes are inferred by minimizing the energy function:

    E_2(u_t) = γ_0 Σ_k (1 + exp(-[B u_t]_k)) [z_t]_k + β||u_t||_1,    z_t = Σ_{j∈N} |x_t^(j)|    (4)

where γ_0 is some constant and γ_k = γ_0 (1 + exp(-[B u_t]_k)) plays the role of the sparsity weight on the k-th pooled component. Notice that here u_t multiplicatively interacts with the accumulated states through B, modeling the shape of the sparse prior on the states. Essentially, the invariant matrix B is adapted such that each component of u_t connects to a group of components in the accumulated states that co-occur frequently. In other words, whenever a component in u_t is active, it lowers the coefficients γ_k of a set of components in the accumulated states, making them more likely to be active. Since co-occurring components typically share some common statistical regularity, such activity of u_t typically leads to a locally invariant representation [14].
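The multiplicative interaction can be sketched as follows; the exact functional form γ_0(1 + exp(-B u)) used here is an assumption consistent with the description above, not a form confirmed beyond this text:

```python
import numpy as np

def gamma_weights(u, B, gamma0=0.1):
    """Cause-modulated sparsity weights on the pooled states. The form
    gamma0 * (1 + exp(-B u)) is an assumed instance of the multiplicative
    interaction through the invariant matrix B."""
    return gamma0 * (1.0 + np.exp(-(B @ u)))

rng = np.random.default_rng(2)
n, d = 16, 4
B = np.abs(rng.standard_normal((n, d)))  # non-negative invariant matrix
u_off, u_on = np.zeros(d), np.ones(d)
# An active cause lowers the sparsity penalty on its group of states,
# making those states more likely to be active.
print(np.all(gamma_weights(u_on, B) < gamma_weights(u_off, B)))
```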

Though the two cost functions are presented separately above, we can combine both to devise a unified energy function of the form:

    E(x_t, u_t, θ) = (1/2)||y_t - C x_t||_2^2 + λ||x_t - A x_{t-1}||_1
                     + γ_0 Σ_k (1 + exp(-[B u_t]_k)) |[x_t]_k| + β||u_t||_1    (5)

where θ = {A, B, C}. As we will discuss next, both x_t and u_t can be inferred concurrently from (5) by alternately updating one while keeping the other fixed, using an efficient proximal gradient method.

2.2 Learning

To learn the parameters θ in (5), we alternately minimize E using a procedure similar to block coordinate descent. We first infer the latent variables while keeping the parameters fixed and then update the parameters while keeping the variables fixed. This is done until the parameters converge. We now discuss separately the inference procedure and how we update the parameters using a gradient descent method with the variables fixed.

2.2.1 Inference

We jointly infer both x_t and u_t from (5) using proximal gradient methods, taking alternating gradient descent steps to update one while holding the other fixed. In other words, we alternate between updating x_t and u_t using a single update step to minimize E_1 and E_2, respectively. However, updating x_t is relatively more involved. So, keeping aside the causes, we first focus on inferring the sparse states alone from (3), and then go back to discuss the joint inference of both the states and the causes.

Inferring States: Inferring sparse states, given the parameters, from a linear dynamical system forms the crux of our model. This is performed by finding the solution that minimizes the energy function in (3) with respect to the states (while keeping the sparsity parameters γ_k fixed). Here there are two priors on the states: the temporal dependence and the sparsity term. Although this energy function is convex in x_t, the presence of two non-smooth terms makes it hard to use the standard optimization techniques designed for sparse coding alone. A similar problem is solved using dynamic programming [15], homotopy [16] and Bayesian sparse coding [17]; however, the optimization used in these models is computationally expensive for use in large-scale problems like object recognition.

To overcome this, inspired by the method proposed in [18] for structured sparsity, we propose an approximate solution that is consistent and able to use efficient solvers like the fast iterative shrinkage-thresholding algorithm (FISTA) [19]. The key to our approach is to first use Nesterov's smoothness method [20, 18] to approximate the non-smooth state-transition term. The resulting energy function is a convex and continuously differentiable function in x_t with a sparsity constraint, and hence can be efficiently solved using proximal methods like FISTA.
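To illustrate Nesterov smoothing on a single L1 term before the derivation: the smoothed function is a Huber-like function whose gradient is a clipped version of the residual (a sketch under the standard quadratic smoothing function, not the paper's exact derivation):

```python
import numpy as np

def smoothed_l1(e, mu=0.1):
    """Nesterov smoothing of sum(|e_k|): max over ||a||_inf <= 1 of
    a.e - (mu/2)*||a||^2. The optimal dual variable is clip(e/mu, -1, 1),
    giving a Huber-like value and a well-defined gradient."""
    a = np.clip(e / mu, -1.0, 1.0)  # optimal dual variable
    return np.sum(a * e - 0.5 * mu * a ** 2), a

e = np.array([-0.5, 0.01, 0.3])
val, grad = smoothed_l1(e)
# The smooth value lower-bounds the true L1 norm, within mu/2 per component.
print(np.sum(np.abs(e)) - val <= len(e) * 0.05)
```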

To begin, let h(x_t) = λ||e_t||_1, where e_t = x_t - A x_{t-1}. The idea is to find a smooth approximation to this function in e_t. Notice that, since e_t is a linear function of x_t, the approximation will also be smooth w.r.t. x_t. Now, we can re-write h(x_t) using the dual norm of the ℓ1-norm as

    h(x_t) = λ max_{||α||_∞ ≤ 1} α^T e_t

where α is the dual vector. Using the smoothing approximation from Nesterov [20] on h:

    h_μ(x_t) = λ max_{||α||_∞ ≤ 1} { α^T e_t - μ d(α) }    (6)

where d(α) = (1/2)||α||_2^2 is a smoothing function and μ is a smoothness parameter. From Nesterov's theorem [20], it can be shown that h_μ is convex and continuously differentiable in e_t, and the gradient of h_μ with respect to e_t takes the form

    ∇_{e_t} h_μ = λ α*    (7)

where α* is the optimal solution to (6) (please refer to the supplementary material for the exact form of α*). This implies, by using the chain rule, that h_μ is also convex and continuously differentiable in x_t, with the same gradient.

With this smoothing approximation, the overall cost function from (3) can now be re-written as

    x̂_t = arg min_{x_t} f(x_t) + Σ_k γ_k |[x_t]_k|    (8)

with the smooth part f(x_t) = (1/2)||y_t - C x_t||_2^2 + h_μ(x_t), whose gradient with respect to x_t is given by

    ∇_{x_t} f(x_t) = C^T (C x_t - y_t) + λ α*    (9)

Using the gradient information in (9), we solve for x_t in (8) using FISTA [19].
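Putting the pieces together, a minimal FISTA solver for the smoothed state objective might look as follows; the Lipschitz bound, iteration count and all constants are illustrative assumptions, and a constant sparsity weight stands in for the cause-modulated γ_k:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_states(y, C, A, x_prev, lam=0.1, gamma=0.1, mu=0.1, n_iter=100):
    """FISTA on the smoothed state objective: quadratic reconstruction term
    plus the Huber-smoothed transition term, with an L1 prox for the sparse
    states."""
    n = C.shape[1]
    L = np.linalg.norm(C, 2) ** 2 + lam / mu  # Lipschitz bound of the smooth part
    x = z = np.zeros(n)
    t = 1.0
    for _ in range(n_iter):
        alpha = np.clip((z - A @ x_prev) / mu, -1.0, 1.0)  # dual variable of the smoothed term
        grad = C.T @ (C @ z - y) + lam * alpha
        x_new = soft_threshold(z - grad / L, gamma / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(3)
p, n = 16, 32
C = rng.standard_normal((p, n)) / np.sqrt(p)
x_true = np.zeros(n)
x_true[:3] = 1.0
y = C @ x_true
x_hat = fista_states(y, C, np.eye(n), np.zeros(n))
print(np.sum((y - C @ x_hat) ** 2) < np.sum(y ** 2))
```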

Inferring Causes: Given a group of inferred state vectors, u_t can be inferred by minimizing E_2 in (4), where we define a generative model that modulates the sparsity of the pooled state vector z_t. Here we observe that FISTA can be readily applied to infer u_t, as the smooth part of the function E_2,

    f(u_t) = γ_0 Σ_k (1 + exp(-[B u_t]_k)) [z_t]_k    (10)

is convex, continuously differentiable and Lipschitz in u_t [21] (the matrix B is initialized with non-negative entries and continues to be non-negative without any additional constraints [21]). Following [19], it is easy to obtain a bound on the convergence rate of the solution.

Joint Inference: We showed thus far that both x_t and u_t can be inferred from their respective energy functions using a first-order proximal method, FISTA. However, for joint inference we have to minimize the combined energy function in (5) over both x_t and u_t. We do this by alternately updating x_t and u_t while holding the other fixed, using a single FISTA update step at each iteration. It is important to point out that the internal FISTA step-size parameters are maintained between iterations. This procedure is equivalent to alternating minimization using gradient descent. Although this procedure no longer guarantees convergence of both x_t and u_t to the optimal solution, in all of our simulations it led to a reasonably good solution. Please refer to Algorithm 1 (in the supplementary material) for details. Note that, with the alternating update procedure, each x_t is now influenced by the feed-forward observations, the temporal predictions and the feedback connections from the causes.
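A sketch of the alternating scheme, using plain proximal steps with fixed step sizes in place of the full FISTA bookkeeping; all constants, the modulation form and the single-group "pooling" z = |x| are illustrative simplifications:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def joint_inference(y, C, A, B, x_prev, lam=0.1, beta=0.1, gamma0=0.1,
                    mu=0.1, eta=0.05, n_iter=200):
    """Alternate single proximal-gradient updates on the states x and the
    causes u, holding the other fixed at each step."""
    n, d = C.shape[1], B.shape[1]
    x, u = np.zeros(n), np.zeros(d)
    Lx = np.linalg.norm(C, 2) ** 2 + lam / mu
    for _ in range(n_iter):
        # One step on x with the cause-modulated sparsity weights held fixed.
        g = gamma0 * (1.0 + np.exp(-(B @ u)))
        alpha = np.clip((x - A @ x_prev) / mu, -1.0, 1.0)
        grad_x = C.T @ (C @ x - y) + lam * alpha
        v = x - grad_x / Lx
        x = np.sign(v) * np.maximum(np.abs(v) - g / Lx, 0.0)
        # One step on u with the pooled states held fixed.
        z = np.abs(x)
        grad_u = -gamma0 * (B.T @ (np.exp(-(B @ u)) * z))
        u = soft_threshold(u - eta * grad_u, eta * beta)
    return x, u

rng = np.random.default_rng(4)
p, n, d = 16, 32, 6
C = rng.standard_normal((p, n)) / np.sqrt(p)
B = np.abs(rng.standard_normal((n, d)))
x, u = joint_inference(rng.standard_normal(p), C, 0.9 * np.eye(n), B, np.zeros(n))
print(x.shape, u.shape)
```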

2.2.2 Parameter Updates

With x_t and u_t fixed, we update the parameters by minimizing E in (5) with respect to θ = {A, B, C}. Since the inputs here form a time-varying sequence, the parameters are updated using dual estimation filtering [22]; i.e., we put an additional constraint on the parameters such that they follow a state-space equation of the form:

    θ_t = θ_{t-1} + w_t    (11)

where w_t is Gaussian transition noise over the parameters. This keeps track of their temporal relationships. Along with this constraint, we update the parameters using gradient descent. Notice that with x_t and u_t fixed, each of the parameter matrices can be updated independently. The matrices C and B are column normalized after every update to avoid any trivial solution.
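For the dictionary C, one such gradient step with column normalization might look like this; the batch reconstruction objective and the learning rate are illustrative assumptions:

```python
import numpy as np

def update_dictionary(C, Y, X, lr=0.01):
    """One gradient step on 0.5*||Y - C X||_F^2 w.r.t. the dictionary C,
    followed by column normalization to rule out trivial scaling solutions.
    Y and X hold a mini-batch of inputs and inferred states column-wise."""
    grad = -(Y - C @ X) @ X.T
    C = C - lr * grad
    C = C / np.maximum(np.linalg.norm(C, axis=0, keepdims=True), 1e-8)
    return C

rng = np.random.default_rng(5)
p, n, T = 8, 16, 50
C = rng.standard_normal((p, n))
X = rng.standard_normal((n, T)) * (rng.random((n, T)) < 0.2)  # sparse states
Y = C @ X + 0.01 * rng.standard_normal((p, T))
C_new = update_dictionary(C, Y, X)
print(np.allclose(np.linalg.norm(C_new, axis=0), 1.0))
```

Without the normalization, scaling C up while scaling the states down would lower the sparsity penalty for free, which is the trivial solution the text refers to.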

Mini-Batch Update: To get faster convergence, the parameters are updated after performing inference over a large sequence of inputs instead of at every time instance. With this "batch" of signals, more sophisticated gradient methods, like conjugate gradient, can be used, leading to more accurate and faster convergence.

2.3 Building a hierarchy

So far the discussion has focused on encoding a small part of a video frame using a single-stage network. To build a hierarchical model, we use this single-stage network as a basic building block and arrange the blocks to form a tree structure (see Figure 1(b)). To learn this hierarchical model, we adopt a greedy layer-wise procedure like many other deep learning methods [11, 6, 8]. Specifically, we use the following strategy to learn the hierarchical model.

For the first (or bottom) layer, we learn a dynamic network as described above over a group of small patches from a video. We then take this learned network and replicate it at several places on a larger part of the input frames (similar to weight sharing in a convolutional network [23]). The outputs (causes) from each of these replicated networks are considered as inputs to the layer above. Similarly, in the second layer the inputs are again grouped together (depending on the spatial proximity in the image space) and are used to train another dynamic network. A similar procedure can be followed to build higher layers.

We again emphasize that the model is learned in a layer-wise manner, i.e., there is no top-down information while learning the network parameters. Also note that, because of the pooling of the states at each layer, the receptive field of the causes becomes progressively larger with the depth of the model.

2.4 Inference with top-down information

With the parameters fixed, we now shift our focus to inference in the hierarchical model with top-down information. As we discussed above, the layers in the hierarchy are arranged in a Markov chain, i.e., the variables at any layer are only influenced by the variables in the layer below and the layer above. Specifically, the states and the causes at layer l are inferred from the outputs of the layer below, u_t^(l-1), and are influenced by the layer above through a top-down prediction of the causes (the suffixes indicating the group are considered implicit here to simplify the notation). Ideally, to perform inference in this hierarchical model, all the states and the causes have to be updated simultaneously, depending on the present state of all the other layers, until the model reaches equilibrium [10]. However, such a procedure can be very slow in practice. Instead, we propose an approximate inference procedure that only requires a single top-down flow of information and then a single bottom-up inference using this top-down information.

For this we consider that at any layer l a group of inputs {u_t^(l-1,j)} is encoded using a group of states {x_t^(l,j)} and the causes u_t^(l) by minimizing the following energy function:

    E_l = Σ_{j∈N} [ (1/2)||u_t^(l-1,j) - C_l x_t^(l,j)||_2^2 + λ||x_t^(l,j) - A_l x_{t-1}^(l,j)||_1 ]
          + γ_0 Σ_k (1 + exp(-[B_l u_t^(l)]_k)) [z_t^(l)]_k + β||u_t^(l)||_1 + (η/2)||u_t^(l) - û_t^(l)||_2^2    (12)

where z_t^(l) is the pooled state vector and η ≥ 0 is a constant. Notice the additional quadratic term involving û_t^(l) when compared to (5). This comes from the top-down information, where we call û_t^(l) the top-down prediction of the causes of layer l, obtained using the previous states in layer l+1. Specifically, before the "arrival" of a new observation at time t, at each layer (starting from the top layer) we first propagate the most likely causes to the layer below using the states at the previous time instance, x_{t-1}^(l+1), and the predicted causes û_t^(l+1). More formally, the top-down prediction for layer l is obtained as

    û_t^(l) = C_{l+1} x̂_t^(l+1),    x̂_t^(l+1) = arg min_x { λ||x - A_{l+1} x_{t-1}^(l+1)||_1 + Σ_k γ_k(û_t^(l+1)) |[x]_k| }    (13)
At the top-most layer, l = L, a "bias" is set such that û_t^(L) = u_{t-1}^(L), i.e., the top layer induces some temporal coherence on the final outputs. From (13), it is easy to show that the predicted states for layer l+1 can be obtained component-wise as

    [x̂_t^(l+1)]_k = [A_{l+1} x_{t-1}^(l+1)]_k  if γ_k(û_t^(l+1)) < λ, and 0 otherwise.    (14)
These predicted causes are substituted in (12) and a single layer-wise bottom-up inference is performed as described in Section 2.2.1 (the additional quadratic term in the energy function only leads to a minor modification in the inference procedure, namely its gradient has to be added to that of the smooth part in (10)). The combined prior now imposed on the causes, β||u_t||_1 + (η/2)||u_t - û_t||_2^2, is similar to the elastic net prior discussed in [24], leading to a smoother and biased estimate of the causes.
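The effect of this combined prior on a single proximal step can be sketched in closed form: the quadratic pull toward the top-down prediction shifts and rescales the point before the usual L1 shrinkage (the step size and weights below are illustrative assumptions):

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_elastic_causes(v, u_hat, step=0.1, beta=0.1, eta=1.0):
    """Proximal step for beta*||u||_1 + (eta/2)*||u - u_hat||^2 at point v:
    an elastic-net-style estimate biased toward the top-down prediction u_hat."""
    shifted = (v + step * eta * u_hat) / (1.0 + step * eta)
    return soft_threshold(shifted, step * beta / (1.0 + step * eta))

v = np.array([0.05, 0.5, -0.5])
u_hat = np.ones(3)
with_td = prox_elastic_causes(v, u_hat)
without_td = prox_elastic_causes(v, np.zeros(3))
# The top-down prediction biases every component of the estimate toward u_hat.
print(np.all(with_td >= without_td))
```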

3 Experiments

3.1 Receptive fields of causes in the hierarchical model

(a) Layer 1 invariant matrix, B_1
(b) Layer 2 invariant matrix, B_2
Figure 2: Visualization of the receptive fields of the invariant units learned in (a) layer 1 and (b) layer 2 when trained on natural videos. The receptive fields are constructed as a weighted combination of the dictionary of filters at the bottom layer.

Firstly, we would like to test the ability of the proposed model to learn complex features in the higher layers of the model. For this we train a two-layer network on a natural video. Each frame in the video was first contrast normalized as described in [13]. Then, we train the first layer of the model on overlapping contiguous patches from this video; this layer has 400-dimensional states and 100-dimensional causes, where the causes pool the states related to all the patches in a group. The separation between the overlapping patches determines the receptive field of the causes in the first layer. Similarly, the second layer is trained on causes from the first layer, obtained from overlapping patches of the video, so its receptive field is correspondingly larger. The second layer contains 200-dimensional states and 50-dimensional causes that pool the states related to all the patches.

Figure 2 shows the visualization of the receptive fields of the invariant units (columns of the matrix B) at each layer. We observe that each dimension of the causes in the first layer represents a group of primitive features (like inclined lines) which are localized in orientation or position (please refer to the supplementary material for more results). The causes in the second layer, on the other hand, represent more complex features, like corners, angles, etc. These filters are consistent with previously proposed methods like Lee et al. [5] and Zeiler et al. [7].

3.2 Role of top-down information

In this section, we show the role of the top-down information during inference, particularly in the presence of structured noise. Video sequences consisting of objects of three different shapes (refer to Figure 3) were constructed. The objective is to classify each frame as coming from one of the three different classes. For this, several 100-frame-long sequences were made using two objects of the same shape bouncing off each other and the "walls". Several such sequences were then concatenated to form a 30,000-frame-long sequence. We train a two-layer network using this sequence. First, we divided each frame into patches with neighboring patches overlapping by 4 pixels; each frame is divided into 16 patches. The bottom layer was trained such that the patches were used as inputs and were encoded using a 100-dimensional state vector. Contiguous neighboring patches were pooled to infer the causes, which have 40 dimensions. The second layer was trained with first-layer causes as inputs, which were themselves inferred from contiguous overlapping blocks of the video frames. The states here are 60-dimensional and the causes have only 3 dimensions. It is important to note here that the receptive field of the second-layer causes encompasses the entire frame.

We test the performance of the DPCN in two conditions. The first case is with 300 frames of clean video, with 100 frames per shape, constructed as described above. We consider this as a single video without considering any discontinuities. In the second case, we corrupt the clean video with "structured" noise, where we randomly pick a number of objects from the same three shapes with a Poisson distribution (with mean 1.5) and add them to each frame independently at random locations. There is no correlation between any two consecutive frames regarding where the "noisy objects" are added (see Figure 3).


First we consider the clean video and perform inference with only bottom-up information, i.e., during inference we switch off the top-down prediction term. Figure 4(a) shows the scatter plot of the three-dimensional causes at the top layer. Clearly, there are 3 clusters, one for each of the three shapes in the video sequence. Figure 4(b) shows the scatter plot when the same procedure is applied on the noisy video. We observe that the 3 shapes here cannot be clearly distinguished. Finally, we use top-down information along with the bottom-up inference, as described in Section 2.4, on the noisy data. We argue that, since the second layer learned class-specific information, the top-down information can help the bottom-layer units to disambiguate the noisy objects from the true objects. Figure 4(c) shows the scatter plot for this case. Clearly, with the top-down information, in spite of the largely corrupted sequence, the DPCN is able to separate the frames belonging to the three shapes (the trace from one cluster to the other is because of the temporal coherence imposed on the causes at the top layer).

(a) Clean Sequences
(b) Corrupted Sequences
Figure 3: Shows part of the (a) clean and (b) corrupted video sequences constructed using three different shapes. Each row indicates one sequence.
Figure 4: Shows the scatter plot of the 3 dimensional causes at the top-layer for (a) clean video with only bottom-up inference, (b) corrupted video with only bottom-up inference and (c) corrupted video with top-down flow along with bottom-up inference. At each point, the shape of the marker indicates the true shape of the object in the frame.

4 Conclusion

In this paper we proposed the deep predictive coding network, a generative model that empirically alters the priors in a dynamic and context-sensitive manner. The model comprises two main components: (a) linear dynamical models with sparse states used for feature extraction, and (b) top-down information that adapts the empirical priors. The dynamic model captures the temporal dependencies and reduces the instability usually associated with sparse coding (please refer to the supplementary material for more details), while the task-specific information from the top layers helps to resolve ambiguities in the lower layers, improving the data representation in the presence of noise. We believe that our approach can be extended with convolutional methods, paving the way for implementation of high-level tasks like object recognition on large-scale videos or images.


This work is supported by the Office of Naval Research (ONR) grant #N000141010375. We thank Austin J. Brockmeier and Matthew Emigh for their comments and suggestions.


  • Lowe [1999] David G. Lowe. Object recognition from local scale-invariant features. In Proceedings of the International Conference on Computer Vision, ICCV '99, pages 1150–1157, 1999. ISBN 0-7695-0164-8.
  • Dalal and Triggs [2005] Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), pages 886–893, 2005. ISBN 0-7695-2372-2.
  • Olshausen and Field [1996] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, June 1996. ISSN 0028-0836.
  • Wiskott and Sejnowski [2002] L. Wiskott and T.J. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural computation, 14(4):715–770, 2002.
  • Lee et al. [2009] Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 609–616, 2009. ISBN 978-1-60558-516-1.
  • Kavukcuoglu et al. [2010a] K. Kavukcuoglu, P. Sermanet, Y.L. Boureau, K. Gregor, M. Mathieu, and Y. LeCun. Learning convolutional feature hierarchies for visual recognition. Advances in Neural Information Processing Systems, pages 1090–1098, 2010a.
  • Zeiler et al. [2010] M.D. Zeiler, D. Krishnan, G.W. Taylor, and R. Fergus. Deconvolutional networks. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2528–2535. IEEE, 2010.
  • Vincent et al. [2010] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371–3408, 2010.
  • Rao and Ballard [1997] Rajesh P. N. Rao and Dana H. Ballard. Dynamic model of visual recognition predicts neural response properties in the visual cortex. Neural Computation, 9:721–763, 1997.
  • Friston [2008] Karl Friston. Hierarchical models in the brain. PLoS Comput Biol, 4(11):e1000211, 11 2008.
  • Hinton et al. [2006] Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, July 2006.
  • Bengio et al. [2007] Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep networks. In NIPS, 2007.
  • Kavukcuoglu et al. [2010b] Koray Kavukcuoglu, Marc’Aurelio Ranzato, and Yann LeCun. Fast inference in sparse coding algorithms with applications to object recognition. CoRR, abs/1010.3467, 2010b.
  • Karklin and Lewicki [2005] Yan Karklin and Michael S. Lewicki. A hierarchical bayesian model for learning nonlinear statistical regularities in nonstationary natural signals. Neural Computation, 17:397–423, 2005.
  • Angelosante et al. [2009] D. Angelosante, G.B. Giannakis, and E. Grossi. Compressed sensing of time-varying signals. In Digital Signal Processing, 2009 16th International Conference on, pages 1–8, July 2009.
  • Charles et al. [2011] A. Charles, M.S. Asif, J. Romberg, and C. Rozell. Sparsity penalties in dynamical system estimation. In Information Sciences and Systems (CISS), 2011 45th Annual Conference on, pages 1–6, March 2011.
  • Sejdinovic et al. [2010] D. Sejdinovic, C. Andrieu, and R. Piechocki. Bayesian sequential compressed sensing in sparse dynamical systems. In Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on, pages 1730–1736, Sept. 29 – Oct. 1, 2010. doi: 10.1109/ALLERTON.2010.5707125.
  • Chen et al. [2012] X. Chen, Q. Lin, S. Kim, J.G. Carbonell, and E.P. Xing. Smoothing proximal gradient method for general structured sparse regression. The Annals of Applied Statistics, 6(2):719–752, 2012.
  • [19] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, March 2009. ISSN 19364954. doi: 10.1137/080716542.
  • Nesterov [2005] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
  • Gregor and LeCun [2011] Karol Gregor and Yann LeCun. Efficient Learning of Sparse Invariant Representations. CoRR, abs/1105.5307, 2011.
  • Nelson [2000] Alex Nelson. Nonlinear estimation and modeling of noisy time-series by dual Kalman filtering methods. PhD thesis, 2000.
  • LeCun et al. [1989] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Comput., 1(4):541–551, December 1989. ISSN 0899-7667. doi: 10.1162/neco.1989.1.4.541.
  • Zou and Hastie [2005] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320, 2005.

A Supplementary material for Deep Predictive Coding Networks

a.1 From section 2.2.1, computing the optimal dual variable

The optimal solution of (6) is obtained in closed form as a projection S(·) onto an ℓ∞-ball: the scaled argument is clipped, elementwise, to lie in [−1, 1]. This is of the form:

S(y) = y, if −1 ≤ y ≤ 1;  S(y) = sign(y), otherwise.
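As an illustration, the ℓ∞-ball projection is an elementwise clip; a minimal numpy sketch (the names `project_linf`, `Gamma`, and `mu`, and all values, are illustrative, following the smoothing construction of Chen et al. [2012]):

```python
import numpy as np

def project_linf(y):
    """Project y elementwise onto the l-infinity ball of radius 1:
    entries in [-1, 1] are unchanged; the rest are clipped to +/-1."""
    return np.clip(y, -1.0, 1.0)

# Illustrative use: the optimal dual variable is the projection of the
# scaled argument (here Gamma @ x / mu, with made-up values) onto the ball.
Gamma = np.array([[1.0, -1.0, 0.0],
                  [0.0, 1.0, -1.0]])
x = np.array([0.5, 2.0, -1.0])
mu = 0.5
alpha_star = project_linf(Gamma @ x / mu)
```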

a.2 Algorithm for joint inference of the states and the causes.

1: Take the observation at the current time and an initial smoothing parameter.
2: Initialize the states and the causes, and set the internal (momentum) variables.
3: Set the step-size parameters.
4: while not converged do
5:     Update the auxiliary variables.
6:     for each inner iteration do
7:         Line search: find the best step size for the state update.
8:         Compute the smoothed gradient term from (15).
9:         Update the states using the gradient from (9) with a soft-thresholding function.
10:        Update the internal variables with the step-size parameter, as in [19].
11:    end for
12:    Compute the gradient term for the causes.
13:    Line search: find the best step size for the cause update.
14:    Update the causes using the gradient of (10) with a soft-thresholding function.
15:    Update the internal variables with the step-size parameter, as in [19].
16:    Update the momentum parameter.
17:    Check for convergence.
18: end while
19: return the inferred states and causes.
Algorithm 1 Updating the states and the causes simultaneously using a FISTA-like procedure [19].
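The state and cause updates are FISTA-style proximal-gradient steps with soft thresholding. As a hedged illustration of that building block (a generic sparse least-squares sub-problem, not the paper's exact objective; all names are illustrative):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(C, y, lam, n_iter=500):
    """Minimize 0.5 * ||y - C x||^2 + lam * ||x||_1 with FISTA [19]."""
    L = np.linalg.norm(C, 2) ** 2        # Lipschitz constant of the smooth part
    x = np.zeros(C.shape[1])
    z, t = x.copy(), 1.0                 # internal (momentum) variables
    for _ in range(n_iter):
        grad = C.T @ (C @ z - y)         # gradient of the smooth part at z
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

In Algorithm 1 the same momentum bookkeeping is interleaved for the states and the causes, with a line search replacing the fixed step size 1/L.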

a.3 Inferring sparse states with known parameters

Figure 5: Performance of the inference algorithm with fixed parameters, compared with sparse coding and Kalman filtering. For this experiment we first simulate a state sequence with only 20 non-zero elements in a 500-dimensional state vector, evolving with a permutation matrix that is different at every time instant, followed by a scaling matrix to generate a sequence of observations. Both the permutation and the scaling matrices are assumed to be known a priori. The observation noise is zero-mean Gaussian. We consider sparse state-transition noise, simulated by choosing a subset of active elements in the state vector (the number of elements is chosen randomly via a Poisson distribution with mean 2) and switching each of them with a randomly chosen element (with uniform probability over the state vector). This resembles a sparse innovation in the states. We use these generated observation sequences as inputs and use the a priori known parameters to infer the states from the dynamic model. Figure 5 shows the results obtained, where we compare the states inferred by the different methods with the true states in terms of relative mean squared error (rMSE). The steady-state error (rMSE) after 50 time instances is plotted against the dimensionality of the observation sequence. Each point is obtained after averaging over 50 runs. We observe that our model is able to converge to the true solution even for low-dimensional observations, where other methods like sparse coding fail. We argue that the temporal dependencies considered in our model are able to drive the solution to the right attractor basin, insulating it from the instabilities typically associated with sparse coding [24].
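The data-generation process described above can be sketched as follows (a hedged reconstruction: the noise variance, the scaling matrix, and the rMSE normalization are illustrative assumptions, since their exact values are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, T = 500, 20, 50              # state dimension, active elements, time steps

# Initial sparse state: 20 non-zero elements out of 500.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

states = []
for _ in range(T):
    # Sparse innovation: a Poisson(2) number of active elements each
    # switch places with a uniformly chosen element of the state vector.
    active = np.flatnonzero(x)
    n_switch = min(rng.poisson(2), active.size)
    for i in rng.choice(active, size=n_switch, replace=False):
        j = rng.integers(n)
        x[i], x[j] = x[j], x[i]
    # Evolve with a permutation that is different at every time instant.
    x = x[rng.permutation(n)]
    states.append(x.copy())

# Observations through a known scaling matrix, plus Gaussian noise
# (sigma2 and the diagonal scaling are illustrative choices).
sigma2 = 0.01
D = np.diag(rng.uniform(0.5, 1.5, size=n))
observations = [D @ s + np.sqrt(sigma2) * rng.standard_normal(n) for s in states]

def rmse(x_hat, x_true):
    """Relative mean squared error (assumed normalization)."""
    return np.sum((x_hat - x_true) ** 2) / np.sum(x_true ** 2)
```

Note that switching pairs of entries preserves the number of non-zero elements, so the state stays exactly 20-sparse throughout the sequence.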

a.4 Visualizing first layer of the learned model

(a) Observation matrix (Bases)
(b) State-transition matrix
Figure 6: Visualization of the parameters of the model described in section 3.1. (A) The learned observation matrix: each square block shows one column of the matrix, reshaped as a pixel block. (B) The state-transition matrix, visualized through its connection strengths with the observation matrix. On the left are the bases corresponding to a single active element in the state at one time instant, and on the right are the bases corresponding to the five most “active” elements in the predicted state at the next time instant (ordered in decreasing order of magnitude).
(a) Connections
(b) Centers and Orientations
(c) Orientations and Frequencies
Figure 7: Connections between the invariant units and the basis functions. (A) The connections between the bases and the invariant units. Each row corresponds to one invariant unit and shows the set of bases that are strongly correlated with it, arranged in decreasing order of magnitude. (B) Spatially localized grouping of the invariant units. We first fit a Gabor function to each basis function; each subplot is then obtained by plotting a line indicating the center and orientation of that Gabor function. The colors indicate the connection strength with an invariant unit, red indicating stronger connections and blue indicating almost zero strength. We randomly select a subset of 25 invariant units. We observe that each invariant unit groups bases that are local in spatial center and orientation. (C) Similarly, the corresponding orientation and spatial-frequency selectivity of the invariant units: each plot shows the orientation and frequency of each Gabor function, color coded according to the connection strengths with the invariant units. Each subplot is a half-polar plot, with the orientation plotted along the angle of the half circle and the distance from the center indicating the frequency. Again, we observe that the invariant units group bases that have similar orientations.