
Efficient Learning in Non-Stationary Linear Markov Decision Processes
We study episodic reinforcement learning in non-stationary linear (a.k.a...

Implicit Regularization in Deep Learning: A View from Function Space
We approach the problem of implicit regularization in deep learning from...

Stochastic Hamiltonian Gradient Methods for Smooth Games
The success of adversarial formulations in machine learning has brought ...

Sharp Analysis of Smoothed Bellman Error Embedding
The Smoothed Bellman Error Embedding algorithm, known as SBEED, w...

Adversarial Example Games
The existence of adversarial examples capable of fooling trained neural ...

Revisiting Loss Modelling for Unstructured Pruning
By removing parameters from deep neural networks, unstructured pruning m...

Do sequence-to-sequence VAEs learn global features of sentences?
A longstanding goal in NLP is to compute global sentence representations...

Stable Policy Optimization via Off-Policy Divergence Regularization
Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization...

An Empirical Study of Batch Normalization and Group Normalization in Conditional Computation
Batch normalization has been widely used to improve optimization in deep...

A Closer Look at the Optimization Landscapes of Generative Adversarial Networks
Generative adversarial networks have been very successful in generative ...

Stochastic Neural Network with Kronecker Flow
Recent advances in variational inference enable the modelling of highly ...

SVRG for Policy Evaluation with Fewer Gradient Evaluations
Stochastic variance-reduced gradient (SVRG) is an optimization method or...

Reducing Uncertainty in Undersampled MRI Reconstruction with Active Acquisition
The goal of MRI reconstruction is to restore a high fidelity image from ...

fastMRI: An Open Dataset and Benchmarks for Accelerated MRI
Accelerating Magnetic Resonance Imaging (MRI) by taking fewer measuremen...

Fast Approximate Natural Gradient Descent in a Kronecker-factored Eigenbasis
Optimization algorithms that leverage gradient covariance information, s...

Randomized Value Functions via Multiplicative Normalizing Flows
Randomized value functions offer a promising approach towards the challe...

A Variational Inequality Perspective on Generative Adversarial Nets
Stability has been a recurrent issue in training generative adversarial ...

Improving Landmark Localization with Semi-Supervised Learning
We present two techniques to improve landmark localization from partiall...

Parametric Adversarial Divergences are Good Task Losses for Generative Modeling
Generative modeling of high dimensional data like images is a notoriousl...

Learning to Compute Word Embeddings On the Fly
Words in natural language follow a Zipfian distribution whereby some wor...

Convergent Tree-Backup and Retrace with Function Approximation
Off-policy learning is key to scaling up reinforcement learning as it al...

Learning to Generate Samples from Noise through Infusion Training
In this work, we investigate a novel training procedure to learn a gener...

A Cheap Linear Attention Mechanism with Fast Lookups and Fixed-Size Representations
The softmax content-based attention mechanism has proven to be very bene...

Exact gradient updates in time independent of output size for the spherical loss family
An important class of problems involves training deep neural networks wi...

Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component th...

Theano: A Python framework for fast computation of mathematical expressions
Theano is a Python library that allows one to define, optimize, and evaluate...

The Z-loss: a shift and scale invariant classification loss belonging to the Spherical Family
Despite being the standard loss function to train multi-class neural net...

Recombinator Networks: Learning Coarse-to-Fine Feature Aggregation
Deep neural networks with alternating convolutional, max-pooling and dec...

An Exploration of Softmax Alternatives Belonging to the Spherical Loss Family
In a multi-class classification problem, it is standard to model the out...

Artificial Neural Networks Applied to Taxi Destination Prediction
We describe our first-place solution to the ECML/PKDD discovery challeng...

Clustering is Efficient for Approximate Maximum Inner Product Search
Efficient Maximum Inner Product Search (MIPS) is an important task that ...

Dropout as data augmentation
Dropout is typically interpreted as bagging a large number of models sha...

GSNs: Generative Stochastic Networks
We introduce a novel training principle for probabilistic models that is...

EmoNets: Multimodal deep learning approaches for emotion recognition in video
The task of the emotion recognition in the wild (EmotiW) Challenge is to...

Efficient Exact Gradient Update for training Deep Networks with Very Large Sparse Targets
An important class of problems involves training deep neural networks wi...

Generalized Denoising Auto-Encoders as Generative Models
Recent work has shown how denoising and contractive autoencoders implici...

High-dimensional sequence transduction
We investigate the problem of transforming an input sequence into a high...

A Generative Process for Sampling Contractive Auto-Encoders
The contractive autoencoder learns a representation of the input data t...

Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription
We investigate the problem of modeling symbolic sequences of polyphonic ...

Representation Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data rep...

Learning invariant features through local space contraction
We present in this paper a novel approach for training deterministic aut...

Adding noise to the input of a model trained with a regularized objective
Regularization is a well studied problem in the context of neural networ...
Pascal Vincent
Associate Professor, Montreal Institute for Learning Algorithms (MILA), Department of Computer Science and Operational Research, University of Montreal; Research Scientist at Facebook AI Research