Pylearn2: a machine learning research library

Pylearn2 is a machine learning research library. This does not just mean that it is a collection of machine learning algorithms that share a common API; it means that it has been designed for flexibility and extensibility in order to facilitate research projects that involve new or unusual use cases. In this paper we give a brief history of the library, an overview of its basic philosophy, a summary of the library's architecture, and a description of how the Pylearn2 community functions socially.




1 Introduction

Pylearn2 is a machine learning research library developed by LISA at Université de Montréal. The goal of the library is to facilitate machine learning research. This means that the library has a focus on flexibility and extensibility, in order to make sure that nearly any research idea is feasible to implement in the library. The target user base is machine learning researchers. Being “user friendly” for a research user means that it should be easy to understand exactly what the code is doing and configure it very precisely for any desired experiment. Sometimes this may come at the cost of requiring the user to be an expert practitioner, who must understand how the algorithm works in order to accomplish basic data analysis tasks. This is different from other notable machine learning libraries, such as scikit-learn [39] or the learning algorithms provided as part of OpenCV [7], the STAIR Vision Library [23], etc. Such machine learning libraries aim to provide good performance to users who do not necessarily understand how the underlying algorithm works. Pylearn2 has a different user base, and thus different design goals.

In this paper, we give a general sense of the library’s design and how the community functions. We begin with a brief history of the library, then give an overview of the library’s philosophy, the architecture of the library itself, and the development workflow that the Pylearn2 community uses to improve the library.

Table 1: Other Pylearn2 resources

  • GitHub repository
  • User mailing list
  • Developer mailing list

2 History

Pylearn2 is LISA’s third major effort to design a flexible machine learning research library, the former two being PLearn and Pylearn. It is built on top of the lab’s mathematical expression compiler, Theano [5, 3]. In late 2010, a series of committees of LISA lab members met to plan how to fulfill LISA’s software development needs. These committees determined that no existing publicly available machine learning library had design goals that would satisfy the requirements imposed by the kind of research done at LISA. The committees decided to create Pylearn2, and drafted some basic design ideas and the guiding philosophy of the library. The first implementation work on Pylearn2 began as a class project in early 2011. The library was used for research work mostly within LISA over the next two years. In this time the structure of the library changed several times but eventually became stable. The addition of Continuous Integration from Travis-CI [2] with the development workflow from GitHub [1] helped to greatly improve the stability of the library. GitHub provides a useful interface for reviewing code before it is merged, and Travis-CI tells reviewers whether the code passes the tests.

In late 2011 Pylearn2 was used to win a transfer learning contest. After this, a handful of researchers outside LISA became interested in using it to reproduce the results from this challenge. However, the majority of Pylearn2 users were still LISA members. Pylearn2 first gained a significant user base outside LISA in the first half of 2013. This was in part due to the attention the library received after it was used to set the state of the art on several computer vision benchmark tasks [21], and in part due to many Kagglers starting to use the library after the baseline solutions to some Kaggle contests were provided in Pylearn2 format [20].

Today, over 250 GitHub users watch the repository, nearly 200 subscribe to the mailing list, and over 100 have made their own fork to work on new features. Over 30 GitHub users have contributed to the library.

3 License and citation information

Pylearn2 is released under the 3-clause BSD license, so it may be used for commercial purposes. The license does not require anyone to cite Pylearn2, but if you use Pylearn2 in published research work we encourage you to cite this article.

4 Philosophy

Development of Pylearn2 is guided by several principles:

  • Pylearn2 is a machine learning research library–its users are researchers. This means the Pylearn2 framework should not impose many restrictions on what is possible to do with the library, and it is acceptable to assume that the user has some technical sophistication and knowledge of machine learning.

  • Pylearn2 is built from re-usable parts, that can be used in many combinations or independently. In particular, no user should be forced to learn all parts of the library. If a user wants only to use a Pylearn2 Model, it should be possible to do so without learning about Pylearn2 TrainingAlgorithms, Costs, etc.

  • Pylearn2 avoids over-planning. Each feature is designed with an eye toward allowing more features to be developed in the future. We do enough planning to ensure that our designs are modular and easy to extend. We generally do not do much more planning than that. This avoids paralysis and over-engineering.

  • Pylearn2 provides a domain-specific language that provides a compact way of specifying all hyperparameters for an experiment.

    Pylearn2 accomplishes this using YAML with a few extensions. A brief YAML file can instantiate a complex experiment without exposing any implementation-specific detail. This makes it easier for researchers who do not use Pylearn2 to read the specification of a Pylearn2 experiment and reproduce it using their own software.
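To give a flavor of this DSL, here is a sketch of what such an experiment description looks like. The class paths and hyperparameter names below are illustrative and written from memory; consult the library for the exact ones.

```
!obj:pylearn2.train.Train {
    dataset: !obj:pylearn2.datasets.mnist.MNIST { which_set: 'train' },
    model: !obj:pylearn2.models.mlp.MLP {
        nvis: 784,
        layers: [
            !obj:pylearn2.models.mlp.Sigmoid {
                layer_name: 'h0', dim: 500, irange: 0.05 },
            !obj:pylearn2.models.mlp.Softmax {
                layer_name: 'y', n_classes: 10, irange: 0.05 },
        ],
    },
    algorithm: !obj:pylearn2.training_algorithms.sgd.SGD {
        learning_rate: 0.01,
        batch_size: 100,
    },
}
```

Each `!obj:` tag names a Python class to instantiate, and the mapping that follows gives its constructor arguments, so the file doubles as a complete, human-readable record of the experiment's hyperparameters.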

5 Library overview

Pylearn2 consists of several components that can be combined to form complete learning algorithms. Most components do not actually execute any numerical code–they just provide symbolic expressions. This is possible because Pylearn2 is built on top of Theano [5, 3]. Theano provides a language for describing expressions independent of how they are actually implemented, so a single Pylearn2 class provides both CPU and GPU functionality. Another advantage of using symbolic representations as the main arguments to Pylearn2 methods is that it is possible to compute many functions of a symbolic expression that can not be computed from a numerical value alone. For example, it is possible to compute the derivative of a Theano expression, while it is not possible to compute the derivative of the process that generated a numerical value given only the value itself. This means that many interfaces are simpler–few expressions need to be passed between objects, because the recipient can create its own modifications of the expression it is passed, rather than needing an interface for each modified value it requires.
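The following is a minimal sketch of the idea, not Theano itself: because the recipient of a symbolic expression holds the whole expression graph, it can derive new expressions (such as gradients) from the one it is handed, instead of needing a separate interface for each derived quantity.

```python
# Toy symbolic-expression classes (illustrative only, not Theano's API).
class Var:
    """A symbolic variable (leaf node)."""
    def eval(self, env):
        return env[self]
    def grad(self, wrt):
        return Const(1.0) if self is wrt else Const(0.0)

class Const:
    def __init__(self, value):
        self.value = value
    def eval(self, env):
        return self.value
    def grad(self, wrt):
        return Const(0.0)

class Add:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def eval(self, env):
        return self.a.eval(env) + self.b.eval(env)
    def grad(self, wrt):
        return Add(self.a.grad(wrt), self.b.grad(wrt))

class Mul:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def eval(self, env):
        return self.a.eval(env) * self.b.eval(env)
    def grad(self, wrt):
        # Product rule: d(ab) = a'b + ab'
        return Add(Mul(self.a.grad(wrt), self.b),
                   Mul(self.a, self.b.grad(wrt)))

x = Var()
y = Mul(x, x)             # y = x**2, built symbolically
dy = y.grad(x)            # a NEW symbolic expression, derived by the consumer
print(dy.eval({x: 3.0}))  # 6.0
```

A numerical value alone could not support this: given only the number 9.0 there is no way to recover that it came from squaring 3.0, let alone differentiate that process.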

5.1 Core components

The main way Pylearn2 achieves flexibility and extensibility is decomposition into reusable parts. The three key components used to implement most features are the Dataset, Model, and TrainingAlgorithm classes. A Dataset provides the data to be trained on. A Model stores parameters and can generate Theano expressions that perform some useful task given input data (e.g., estimate the probability density at a point in space, infer a class label given input features, etc.). A TrainingAlgorithm adapts a Model to a particular Dataset. Generally each of these objects is in turn modular (Datasets have modular preprocessing, many Model classes are organized into Layers, TrainingAlgorithms can minimize a modular Cost and can have their behavior modified by various modular callbacks and a modular TerminationCriterion, etc.). This modularity means that if a researcher has an innovative idea to test out, and that idea only affects one component, the researcher can simply replace or subclass the component in question. The vast majority of the learning system can still be used with the new idea.
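A hypothetical skeleton of this decomposition, with illustrative method names rather than Pylearn2's actual API, might look like the following. Swapping out the TrainingAlgorithm leaves the Model and Dataset untouched, and vice versa.

```python
class Dataset:
    """Provides the data to be trained on."""
    def __init__(self, examples):
        self.examples = examples
    def iterator(self, batch_size):
        for i in range(0, len(self.examples), batch_size):
            yield self.examples[i:i + batch_size]

class Model:
    """Stores parameters and computes things; does not know how to train."""
    def __init__(self):
        self.w = 0.0
    def predict(self, x):
        return self.w * x

class TrainingAlgorithm:
    """Adapts a Model to a Dataset."""
    def train(self, model, dataset):
        for batch in dataset.iterator(batch_size=2):
            for x, y in batch:
                # Least-mean-squares as a stand-in update rule.
                model.w += 0.1 * (y - model.predict(x)) * x

data = Dataset([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (1.0, 2.0)])
model = Model()
TrainingAlgorithm().train(model, data)
print(model.w)  # moves toward the true slope of 2.0
```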

This modularity is in contrast to most other machine learning libraries, where the Model generally does most of the work. A scikit-learn model is generally accompanied by a fit method that is a complete training algorithm and that can’t be applied to any other model. Some libraries are more modular but don’t divide the labor between models and training algorithms as sharply as Pylearn2 does. For example, in Torch [9] or DistBelief [14] the Models are modular, but to train a layer, the layer needs to implement at the very least a backpropagation method for computing derivatives. In Pylearn2, the Model is only responsible for creating symbolic expressions, which the TrainingAlgorithm may or may not symbolically differentiate at a later time. (Individual Theano ops must still implement a grad method, but a comparatively small number of basic ops can be used to implement the comparatively large number of more complex models that appear in most machine learning libraries.)

However, another aspect of Pylearn2’s design philosophy is that no user should be forced to learn the entire framework. It’s possible to just implement a train_batch method and have the DefaultTrainingAlgorithm do nothing but serve the Model batches of data from the Dataset, or to ignore TrainingAlgorithms altogether and just pass a Dataset to the Model’s train_all method.

To facilitate code reuse, whenever possible, individual components that are shipped with the library aim to be as modular and orthogonal as possible, relying on other existing components – e.g., most Models that become part of the library will defer their learning functionality to an existing TrainingAlgorithm and/or Cost unless sufficiently specialized as to be infeasible.

5.2 The Dataset class

Datasets are essentially interfaces between sources of data in arbitrary format and the in-memory array formats that Pylearn2 expects. All Pylearn2 Datasets provide the same interface but can be implemented to use any back-end format. Currently, all Pylearn2 Datasets just read data from disk, but in principle a Dataset could access live streaming data from the network or a peripheral device like a webcam.

Datasets allow the data to be presented in many formats, regardless of how it is stored. For example, a minibatch of 64 RGB images could be presented as a design matrix with one flattened image per row, or it could be presented as a 4-D tensor. The choice of which attribute to put on which axis can change to support different pieces of software too (for example, Theano convolution prefers the axis order (batch size, channels, rows, columns), while cuda-convnet [30] (which is wrapped in Pylearn2) prefers (channels, rows, columns, batch size)). The data can also be presented in many different orders, allowing iteration in sequential or different types of random order. Datasets can implement as many or as few iteration schemes as the implementor wants to–depending on the back end of the data, not all iteration schemes are efficient or even possible (for example, if the iterator needs to read from a network drive, random iteration may be very slow, and if the iterator reads live video from a webcam, there is no way for it to visit the future).
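The axis-order conversions above amount to simple reshapes and transposes. The sketch below assumes an image size of 32×32 for illustration; the array contents are made up.

```python
import numpy as np

batch, rows, cols, channels = 64, 32, 32, 3  # image size chosen for illustration
data = np.zeros((batch, rows, cols, channels), dtype='float32')

# Design-matrix view: one flattened example per row.
design = data.reshape(batch, rows * cols * channels)

# Theano-convolution order: (batch, channels, rows, cols).
theano_order = data.transpose(0, 3, 1, 2)

# cuda-convnet order: (channels, rows, cols, batch).
cc_order = data.transpose(3, 1, 2, 0)

print(design.shape, theano_order.shape, cc_order.shape)
# (64, 3072) (64, 3, 32, 32) (3, 32, 32, 64)
```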

Most Datasets used in the deep learning community can be represented as a design matrix stored in dense matrix format. For these datasets, implementing an appropriate Dataset object is very easy. The implementer only needs to subclass the DenseDesignMatrix class and implement a constructor that loads the desired data. If the data is already stored in NumPy [37] or pickle format, it is not even necessary to implement any new Python code to use the dataset.
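The subclassing pattern looks roughly like this. DenseDesignMatrix is stubbed out below so the example is self-contained; the real class lives in the Pylearn2 datasets package and its constructor takes more options than this stand-in.

```python
import numpy as np

class DenseDesignMatrix:
    """Stand-in for the Pylearn2 base class (illustrative only)."""
    def __init__(self, X, y=None):
        self.X, self.y = X, y

class MyDataset(DenseDesignMatrix):
    """A new Dataset only needs a constructor that loads the data."""
    def __init__(self, rng_seed=0):
        rng = np.random.RandomState(rng_seed)
        X = rng.randn(100, 10).astype('float32')  # pretend these were read from disk
        y = (X.sum(axis=1) > 0).astype('float32')
        super(MyDataset, self).__init__(X=X, y=y)

ds = MyDataset()
print(ds.X.shape, ds.y.shape)  # (100, 10) (100,)
```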

Some datasets can be described as dense design matrices but are too big to fit into memory. Pylearn2 supports this use case via the DenseDesignMatrixPyTables. To use this class, the data must be stored in HDF5 format on disk.

Most Datasets also support some kind of preprocessing that can modify the data after it has been loaded.

5.2.1 Implemented Datasets

Pylearn2 currently contains wrappers for several datasets. These include the datasets used for DARPA’s unsupervised and transfer learning challenge [25], the dataset used for the NIPS 2011 workshops challenge in representation learning [32], the CIFAR-10 and CIFAR-100 datasets, the MNIST dataset [34], some of the MNIST variations datasets [31], the NORB dataset [35], the Street View House Numbers dataset [36], the Toronto Faces Database [50], some of the UCI repository datasets [17, 16], and a dataset of 3D animations of fish [18]. Additionally, there are many kinds of preprocessing, such as PCA [38], ZCA [4], various kinds of local contrast normalization [46], as well as helper functions to set up the entire preprocessing pipeline from some well-known successful and documented systems [8, 45].

5.3 The Model class

A Model is any object that stores parameters (for the purpose of Pylearn2, a “non-parametric” model is just one with a variable number of parameters). The basic Model class has very few interface elements. Subclasses of the Model class define richer interfaces. These interfaces define different quantities that the Model can compute. For example, the MLP class provides an fprop method that provides a symbolic expression for forward propagation in a multilayer perceptron. If the final layer of the MLP is a softmax layer representing distributions p(y | x) over classes y, and the fprop method is passed a Theano variable representing inputs x, the output will be a Theano variable representing p(y | x).
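As a worked example of the final softmax step of such a forward pass, the numerical computation (done here with plain numpy, whereas Pylearn2 would return a symbolic Theano expression) is:

```python
import numpy as np

def softmax(z):
    """Map each row of pre-activations to a distribution over classes."""
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

pre_activations = np.array([[1.0, 2.0, 0.5],
                            [0.0, 0.0, 0.0]])
p = softmax(pre_activations)
print(p.sum(axis=1))  # each row sums to 1: a valid distribution over classes
```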

The Model class is not required to know how to train itself, though many models do.

5.3.1 Linear operators, spaces, and convolution

Linear operations are key parts of many machine learning models. The distinction between many important classes of machine learning models is often nothing more than what specific structure of linear transformation they use. For example, both MLPs and convolutional networks apply linear operators followed by a nonlinearity to transform inputs into outputs. In the MLP, the linear operation is multiplication by a dense matrix. In a convolutional network, the linear operation is discrete convolution with finite support. This operation can be viewed as matrix multiplication by a sparse matrix with several elements of the matrix constrained to be equal to each other. The point is that both use a linear transformation. Pylearn2’s LinearTransform class provides a generic representation of linear operators. Pylearn2 functionality written using this class can thus be written once and then extended to do dense matrix multiplication, convolution, tiled convolution, local connections, etc. simply by providing different implementations of the linear operator. This idea grew out of James Bergstra’s Theano-linear module, which has since been incorporated into Pylearn2.
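A hypothetical sketch of this idea follows; the method names are illustrative and not Pylearn2's actual interface. Code written against the abstract operator works unchanged for any concrete linear map.

```python
import numpy as np

class LinearTransform:
    """Abstract linear operator (illustrative stand-in)."""
    def lmul(self, x):
        raise NotImplementedError

class MatrixMul(LinearTransform):
    """Dense matrix multiplication."""
    def __init__(self, W):
        self.W = W
    def lmul(self, x):
        return x @ self.W

class Scale(LinearTransform):
    """Another linear operator: x -> a * x."""
    def __init__(self, a):
        self.a = a
    def lmul(self, x):
        return self.a * x

def affine_layer(op, x, b):
    """Written once against the abstract operator."""
    return op.lmul(x) + b

x = np.ones((2, 3))
out1 = affine_layer(MatrixMul(np.eye(3)), x, 1.0)  # dense multiplication
out2 = affine_layer(Scale(2.0), x, 1.0)            # different operator, same code
print(out1.sum(), out2.sum())  # 12.0 18.0
```

A convolution-backed implementation of `lmul` would slot into `affine_layer` the same way, which is the point of the abstraction.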

Different linear operator implementations require their inputs to be formatted in different ways. For example, convolution applied to an image requires a format that indicates the 2D position of each element of the input, while dense matrix multiplication just requires a linearized vector representation of the image. In Pylearn2, classes called Spaces represent these different views of the same underlying data. Dense matrix multiplication acts on data that lives in a VectorSpace, while 2D convolution acts on data that lives in a Conv2DSpace. Spaces generally know how to convert between each other, when possible. For example, an image in a Conv2DSpace can be flattened into a vector in a VectorSpace.
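The conversion between these two views is lossless in both directions, as this simplified sketch shows (the helper functions are stand-ins, not the real Space interface):

```python
import numpy as np

def conv2d_to_vector(batch):
    """(batch, rows, cols, channels) -> (batch, rows*cols*channels)."""
    return batch.reshape(batch.shape[0], -1)

def vector_to_conv2d(batch, image_shape):
    """Inverse view, given the target (rows, cols, channels) shape."""
    return batch.reshape((batch.shape[0],) + image_shape)

images = np.arange(2 * 4 * 4 * 3, dtype='float32').reshape(2, 4, 4, 3)
flat = conv2d_to_vector(images)                 # VectorSpace view
back = vector_to_conv2d(flat, (4, 4, 3))        # Conv2DSpace view again
print(flat.shape, np.array_equal(back, images))  # (2, 48) True
```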

Several linear operators (and related convolutional network operations like spatial max pooling) in Pylearn2 are implemented as wrappers that add Theano semantics on top of the extremely fast cuda-convnet library [30], making Pylearn2 a very practical library to use for convolutional network research.

5.3.2 Implemented models

Because of the philosophy that Pylearn2 developers should write features only when they are needed, and because most Pylearn2 developers so far have been deep learning researchers, Pylearn2 mostly contains deep learning models or models that are used as building blocks for deep architectures. These include autoencoders [6], RBMs [47] including RBMs with Gaussian visible units [54], DBMs [45], MLPs [44], convolutional networks [33], and local coordinate coding [56]. However, Pylearn2 is not restricted to deep learning functionality. We encourage submissions of other machine learning models. Often, LISA researchers work on problems whose scale exceeds that of the typical machine learning library user, so we occasionally implement features for simpler algorithms, such as SVMs [10] with reduced memory consumption or k-means [49] with fast multicore training.

Pylearn2 has implementations of several models that were developed at LISA, including denoising auto-encoders (DAEs) [53], contractive auto-encoders (CAEs) [42] including higher-order CAEs [43], spike-and-slab RBMs (ssRBMs) [11] including ssRBMs with pooling [12], reconstruction sampling autoencoders [13], and deep sparse rectifier nets [19]. Pylearn2 also contains models that were developed not just at LISA but originally developed using Pylearn2, such as spike-and-slab sparse coding with parallel variational inference [22] and maxout units for neural nets [21].

5.4 The TrainingAlgorithm class

The role of the TrainingAlgorithm class is to adjust the parameters stored in a Model in order to adapt the model to a given Dataset. The TrainingAlgorithm is also responsible for a few less important tasks, such as setting up the Monitor to record various values throughout training (essentially, to make learning curves). The TrainingAlgorithm is one of the very few Pylearn2 classes that actually performs numerical computation. It gathers Theano expressions assembled by the Model and other classes, synthesizes them into expressions for learning rules, compiles the learning rules into Theano functions, and executes those functions to accomplish the learning. In fact, Pylearn2 TrainingAlgorithms are not even required to use Theano at all. Some use a mixture of Theano and generic Python code; for example, a line search may have its control logic done with basic Python loops and branching but its numerical computation done by Theano.
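The division of labor can be sketched as follows, with plain Python standing in for Theano's symbolic compilation: the model contributes only expressions for the loss and its gradient, and the training algorithm turns them into an update rule and runs the numerical loop.

```python
def make_sgd_step(grad_fn, lr):
    """'Compile' the learning rule: bake grad_fn and lr into one update fn."""
    def step(w, x, y):
        return w - lr * grad_fn(w, x, y)
    return step

# The "model" contributes only an expression for the gradient
# (here, least squares for a 1-D linear model).
def grad(w, x, y):
    return 2.0 * (w * x - y) * x

step = make_sgd_step(grad, lr=0.1)
w = 0.0
for _ in range(100):                      # control logic stays in ordinary Python
    for x, y in [(1.0, 3.0), (2.0, 6.0)]:
        w = step(w, x, y)                 # numerical work happens inside step
print(round(w, 3))  # converges to the true slope 3.0
```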

Most TrainingAlgorithms support constrained optimization by asking the Model to project the result of each learning update back into an allowed region. Many Pylearn2 models impose non-negativity constraints on parameters that represent, for example, the conditional variance of some random variable, and most Pylearn2 neural network layers support max norm constraints on the weights.


5.4.1 The Cost class

Many training algorithms can be expressed as procedures for iteratively minimizing a cost function. This provides another opportunity for sharing code between algorithms. The Cost class represents a cost function independent of the algorithm used to minimize it, and each TrainingAlgorithm is free to use this representation or not, depending on what is most appropriate. A Cost is essentially just a class for generating a Theano expression describing the cost function, but it has a few extra pieces of functionality. For example, it can add monitoring channels that are relevant to the cost being minimized (for example, one popular cost is the negative log likelihood of class labels under a softmax model–this cost automatically adds a monitoring channel that tracks the misclassification rate of the classifier). One extremely important aspect of the Cost is that it has a get_gradients method. Unlike Theano’s grad method, this method is not guaranteed to return accurate gradients. This allows many algorithms that use approximate gradients to be implemented using the same machinery as algorithms that use the exact gradient. For example, the persistent contrastive divergence algorithm [55, 51] minimizes an intractable cost with intractable gradients–the log likelihood of a Boltzmann machine. The Pylearn2 Cost for persistent contrastive divergence returns None for the value of the cost function itself, to express that the cost function can’t be computed. However, the standard SGD class is still able to perform stochastic gradient descent on the cost, because the get_gradients method for the cost returns a sampling-based approximation to the gradient. No special optimization class is needed to handle this seemingly exotic case.
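The contract can be sketched as follows. The class and method names mirror the text, but the bodies are illustrative: an optimizer that only ever asks for gradients can minimize a cost whose value is intractable, as long as a (possibly noisy, approximate) gradient is available.

```python
import random

class ApproxCost:
    """A cost whose value is 'intractable' but whose gradient is approximable."""
    def expr(self, w):
        return None  # the cost value cannot be computed
    def get_gradients(self, w):
        # Noisy, unbiased stand-in for the exact gradient of (w - 5)**2.
        return 2.0 * (w - 5.0) + random.gauss(0.0, 0.01)

def sgd(cost, w, lr, steps):
    """A generic optimizer: never needs cost.expr(), only its gradients."""
    for _ in range(steps):
        w = w - lr * cost.get_gradients(w)
    return w

random.seed(0)
w = sgd(ApproxCost(), w=0.0, lr=0.1, steps=200)
print(round(w, 1))  # close to the minimizer 5.0
```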

Other costs implemented in the library include dropout [27], contrastive divergence [26], noise contrastive estimation [24], score matching [28], denoising score matching [52], softmax log likelihood for classification with MLPs, and Gaussian log likelihood for regression with MLPs. Many additional simpler costs can be combined with these primary costs via the SumOfCosts class to add regularization, such as weight decay or sparsity regularization.

5.4.2 Implemented TrainingAlgorithms

Currently, Pylearn2 contains three main TrainingAlgorithm classes. The DefaultTrainingAlgorithm does nothing but serve minibatches of data to the Model’s default minibatch learning rule. The SGD class does stochastic gradient descent on a Cost. This class supports extensions including Polyak averaging [41], momentum [44], and early stopping. The BGD class does batch gradient descent (in practice, large minibatches), also known as the method of steepest descent [15]. The BGD class is able to accumulate contributions to the gradient from several minibatches before making an update, thereby enabling it to use batches that are too large to fit in memory. Optional flags enable the BGD class to implement other similar algorithms such as nonlinear conjugate gradient descent [40].
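The gradient-accumulation trick attributed to BGD above can be sketched as follows: contributions from several minibatches are summed before a single update, so the effective batch never has to fit in memory at once. The update rule here (1-D least squares) is a stand-in.

```python
def grad_on_minibatch(w, minibatch):
    """Gradient of mean squared error for a 1-D linear model on one minibatch."""
    return sum(2.0 * (w * x - y) * x for x, y in minibatch) / len(minibatch)

def bgd_step(w, minibatches, lr):
    total, n = 0.0, 0
    for mb in minibatches:                    # accumulate; do not update yet
        total += grad_on_minibatch(w, mb) * len(mb)
        n += len(mb)
    return w - lr * total / n                 # one update for the whole batch

data = [(float(x), 2.0 * x) for x in range(1, 9)]
minibatches = [data[:4], data[4:]]            # pretend the full batch is too big
w = 0.0
for _ in range(200):
    w = bgd_step(w, minibatches, lr=0.01)
print(round(w, 3))  # approaches the true slope 2.0
```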

6 Development workflow and user community

Pylearn2 has many kinds of users and developers. One need not be a Pylearn2 developer to do research with Pylearn2. Pylearn2 is a valuable research tool even for people who do not need to develop any new algorithms. The wide array of reference implementations available in Pylearn2 make it useful for studying how existing algorithms behave under various conditions, or for obtaining baseline results on new tasks.

Researchers who wish to implement new algorithms with Pylearn2 do not necessarily need to become Pylearn2 developers either. It’s common to develop experimental features privately in an offline repository. It’s also perfectly fine to share Pylearn2 classes as part of a 3rd party repository rather than having them merged to the main Pylearn2 repository.

For those who do wish to contribute to Pylearn2, thank you! The process is designed to make sure the library is as stable as possible. Developers should first write to the pylearn-dev mailing list to plan how to implement their feature. If the feature requires a change to existing APIs, it’s important to follow the best practices guide. Once a plan is in place, developers should write the feature in their own fork of Pylearn2 on GitHub, then submit a pull request to the main repository. Our automated test suite will run on the pull request and indicate whether it is safe to merge. Pylearn2 developers will also review the pull request. When both the automatic tests and the reviewers are satisfied, one of us will merge the pull request. Be sure to write to pylearn-dev to find a reviewer for your pull request.

All kinds of pull requests are welcome–new features (provided that they have tests), config files for important results, bug fixes, and tests for existing features.

7 Conclusion

This article has described the Pylearn2 library, including its history, design philosophy and goals, basic architecture, and developer workflow. We hope you find Pylearn2 useful in your research and welcome your potential contributions to it.


  • git [2013] (2013). GitHub.
  • tra [2013] (2013). Travis CI.
  • Bastien et al. [2012] Bastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I., Bergeron, A., Bouchard, N., and Bengio, Y. (2012). Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop.
  • Bell and Sejnowski [1997] Bell, A. and Sejnowski, T. J. (1997). The independent components of natural scenes are edge filters. Vision Research, 37, 3327–3338.
  • Bergstra et al. [2010] Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., and Bengio, Y. (2010). Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy). Oral Presentation.
  • Bourlard and Kamp [1988] Bourlard, H. and Kamp, Y. (1988). Auto-association by multilayer perceptrons and singular value decomposition. Biological Cybernetics, 59, 291–294.
  • Bradski [2000] Bradski, G. (2000). The OpenCV Library. Dr. Dobb’s Journal of Software Tools.
  • Coates et al. [2011] Coates, A., Lee, H., and Ng, A. Y. (2011). An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2011).
  • Collobert et al. [2011] Collobert, R., Kavukcuoglu, K., and Farabet, C. (2011). Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop.
  • Cortes and Vapnik [1995] Cortes, C. and Vapnik, V. (1995). Support vector networks. Machine Learning, 20, 273–297.
  • Courville et al. [2011a] Courville, A., Bergstra, J., and Bengio, Y. (2011a). A spike and slab restricted Boltzmann machine. In G. Gordon, D. Dunson, and M. Dudìk, editors, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, volume 15 of JMLR W&CP. Recipient of People’s Choice Award.
  • Courville et al. [2011b] Courville, A., Bergstra, J., and Bengio, Y. (2011b). Unsupervised models of images by spike-and-slab RBMs. In Proceedings of the Twenty-eighth International Conference on Machine Learning (ICML’11).
  • Dauphin et al. [2011] Dauphin, Y., Glorot, X., and Bengio, Y. (2011). Large-scale learning of embeddings with reconstruction sampling. In Proceedings of the Twenty-eighth International Conference on Machine Learning (ICML’11).
  • Dean et al. [2012] Dean, J., Corrado, G., Monga, R., Chen, K., Devin, M., Le, Q., Mao, M., Ranzato, M., Senior, A., Tucker, P., Yang, K., and Ng, A. Y. (2012). Large scale distributed deep networks. In NIPS’2012.
  • Debye [1954] Debye, P. (1954). The collected papers of Peter J.W. Debye. Ox Bow Press.
  • Diaconis and Efron [1983] Diaconis, P. and Efron, B. (1983). Computer-intensive methods in statistics. 248(5), 116–126, 128, 130.
  • Fisher [1936] Fisher, R. A. (1936). The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7, 179–188.
  • Franzius et al. [2008] Franzius, M., Wilbert, N., and Wiskott, L. (2008). Invariant object recognition with slow feature analysis. In Proceedings of the 18th international conference on Artificial Neural Networks, Part I, ICANN ’08, pages 961–970, Berlin, Heidelberg. Springer-Verlag.
  • Glorot et al. [2011] Glorot, X., Bordes, A., and Bengio, Y. (2011). Deep sparse rectifier neural networks. In JMLR W&CP: Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2011).
  • Goodfellow et al. [2013a] Goodfellow, I., Erhan, D., Carrier, P.-L., Courville, A., Mirza, M., Hamner, B., Cukierski, W., Tang, Y., Thaler, D., Lee, D.-H., Zhou, Y., Ramaiah, C., Feng, F., Li, R., Wang, X., Athanasakis, D., Shawe-Taylor, J., Milakov, M., Park, J., Ionescu, R., Popescu, M., Grozea, C., Bergstra, J., Xie, J., Romaszko, L., Xu, B., Chuang, Z., and Bengio, Y. (2013a). Challenges in representation learning: A report on three machine learning contests. In International Conference On Neural Information Processing.
  • Goodfellow et al. [2013b] Goodfellow, I., Warde-Farley, D., Mirza, M., Courville, A., and Bengio, Y. (2013b). Maxout networks. In S. Dasgupta and D. McAllester, editors, ICML’13, page 1319–1327.
  • Goodfellow et al. [2013c] Goodfellow, I., Courville, A., and Bengio, Y. (2013c). Scaling up spike-and-slab models for unsupervised feature learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1902–1914.
  • Gould et al. [2010] Gould, S., Russakovsky, O., Goodfellow, I., Baumstarck, P., Ng, A. Y., and Koller, D. (2010). The stair vision library.
  • Gutmann and Hyvarinen [2010] Gutmann, M. and Hyvarinen, A. (2010). Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of The Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS’10).
  • Guyon et al. [2011] Guyon, I., Dror, G., Lemaire, V., Taylor, G., and Aha, D. W. (2011). Unsupervised and transfer learning challenge. In Proc. Int. Joint Conf. on Neural Networks.
  • Hinton [2000] Hinton, G. E. (2000). Training products of experts by minimizing contrastive divergence. Technical Report GCNU TR 2000-004, Gatsby Unit, University College London.
  • Hinton et al. [2012] Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2012). Improving neural networks by preventing co-adaptation of feature detectors. Technical report, arXiv:1207.0580.
  • Hyvärinen [2005] Hyvärinen, A. (2005). Estimation of non-normalized statistical models using score matching. Journal of Machine Learning Research, 6, 695–709.
  • Krizhevsky and Hinton [2009] Krizhevsky, A. and Hinton, G. (2009). Learning multiple layers of features from tiny images. Technical report, University of Toronto.
  • Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., and Hinton, G. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25 (NIPS’2012).
  • Larochelle et al. [2007] Larochelle, H., Erhan, D., Courville, A., Bergstra, J., and Bengio, Y. (2007). An empirical evaluation of deep architectures on problems with many factors of variation. In ICML’07, pages 473–480. ACM.
  • Le et al. [2011] Le, Q. V., Ranzato, M., Salakhutdinov, R., Ng, A., and Tenenbaum, J. (2011). NIPS Workshop on Challenges in Learning Hierarchical Models: Transfer Learning and Optimization.
  • LeCun and Bengio [1995] LeCun, Y. and Bengio, Y. (1995). Convolutional networks for images, speech, and time-series. In M. A. Arbib, editor, The Handbook of Brain Theory and Neural Networks, pages 255–257. MIT Press.
  • LeCun et al. [1998] LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.
  • LeCun et al. [2004] LeCun, Y., Huang, F.-J., and Bottou, L. (2004). Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR’04), volume 2, pages 97–104, Los Alamitos, CA, USA. IEEE Computer Society.
  • Netzer et al. [2011] Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. (2011). Reading digits in natural images with unsupervised feature learning. Deep Learning and Unsupervised Feature Learning Workshop, NIPS.
  • Oliphant [2007] Oliphant, T. E. (2007). Python for scientific computing. Computing in Science and Engineering, 9, 10–20.
  • Pearson [1901] Pearson, K. (1901). On lines and planes of closest fit to systems of points in space. Philosophical Magazine, 2(6), 559–572.
  • Pedregosa et al. [2011] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12, 2825–2830.
  • Polak and Ribiere [1969] Polak, E. and Ribiere, G. (1969). Note sur la convergence de méthodes de directions conjuguées. Revue Française d’Informatique et de Recherche Opérationnelle, 16, 35–43.
  • Polyak and Juditsky [1992] Polyak, B. and Juditsky, A. (1992). Acceleration of stochastic approximation by averaging. SIAM J. Control and Optimization, 30(4), 838–855.
  • Rifai et al. [2011a] Rifai, S., Vincent, P., Muller, X., Glorot, X., and Bengio, Y. (2011a). Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the Twenty-Eighth International Conference on Machine Learning (ICML’11).
  • Rifai et al. [2011b] Rifai, S., Mesnil, G., Vincent, P., Muller, X., Bengio, Y., Dauphin, Y., and Glorot, X. (2011b). Higher order contractive auto-encoder. In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD).
  • Rumelhart et al. [1986] Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing, volume 1, chapter 8, pages 318–362. MIT Press, Cambridge.
  • Salakhutdinov and Hinton [2009] Salakhutdinov, R. and Hinton, G. (2009). Deep Boltzmann machines. In AISTATS’2009, pages 448–455.
  • Sermanet et al. [2012] Sermanet, P., Chintala, S., and LeCun, Y. (2012). Convolutional neural networks applied to house numbers digit classification. In International Conference on Pattern Recognition (ICPR 2012).
  • Smolensky [1986] Smolensky, P. (1986). Information processing in dynamical systems: Foundations of harmony theory. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing, volume 1, chapter 6, pages 194–281. MIT Press, Cambridge.
  • Srebro and Shraibman [2005] Srebro, N. and Shraibman, A. (2005). Rank, trace-norm and max-norm. In Proceedings of the 18th Annual Conference on Learning Theory, pages 545–560. Springer-Verlag.
  • Steinhaus [1957] Steinhaus, H. (1957). Sur la division des corps matériels en parties. In Bull. Acad. Polon. Sci., pages 801–804.
  • Susskind et al. [2010] Susskind, J., Anderson, A., and Hinton, G. E. (2010). The Toronto face dataset. Technical Report UTML TR 2010-001, U. Toronto.
  • Tieleman [2008] Tieleman, T. (2008). Training restricted Boltzmann machines using approximations to the likelihood gradient. In W. W. Cohen, A. McCallum, and S. T. Roweis, editors, ICML 2008, pages 1064–1071. ACM.
  • Vincent [2011] Vincent, P. (2011). A connection between score matching and denoising autoencoders. Neural Computation, 23(7), 1661–1674.
  • Vincent et al. [2008] Vincent, P., Larochelle, H., Bengio, Y., and Manzagol, P.-A. (2008). Extracting and composing robust features with denoising autoencoders. In ICML’08, pages 1096–1103. ACM.
  • Welling et al. [2005] Welling, M., Rosen-Zvi, M., and Hinton, G. E. (2005). Exponential family harmoniums with an application to information retrieval. In NIPS’04, volume 17, Cambridge, MA. MIT Press.
  • Younes [1999] Younes, L. (1999). On the convergence of Markovian stochastic algorithms with rapidly decreasing ergodicity rates. Stochastics and Stochastic Reports, 65(3), 177–228.
  • Yu et al. [2009] Yu, K., Zhang, T., and Gong, Y. (2009). Nonlinear learning using local coordinate coding. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 2223–2231.