
Understanding the Role of Training Regimes in Continual Learning
Catastrophic forgetting affects the training of neural networks, limitin...

Pointer Graph Networks
Graph neural networks (GNNs) are typically applied to static graphs that...

A Deep Neural Network's Loss Surface Contains Every Low-dimensional Pattern
The work "Loss Landscape Sightseeing with Multi-Point Optimization" (Sko...

Continual Unsupervised Representation Learning
Continual learning aims to improve the ability of modern learning system...

Improving the Gating Mechanism of Recurrent Neural Networks
Gating mechanisms are widely used in neural network models, where they a...

Stabilizing Transformers for Reinforcement Learning
Owing to their ability to both effectively integrate information over lo...

Meta-Learning with Warped Gradient Descent
A versatile and effective approach to meta-learning is to infer a gradie...

Task Agnostic Continual Learning via Meta Learning
While neural networks are powerful function approximators, they suffer f...

Meta-learning of Sequential Strategies
In this report we review memory-based meta-learning as a tool for buildi...

Information asymmetry in KL-regularized RL
Many real world tasks exhibit rich structure that is repeated across dif...

Ray Interference: a Source of Plateaus in Deep Reinforcement Learning
Rather than proposing a new method, this paper investigates an issue pre...

A RAD approach to deep mixture models
Flow based models such as Real NVP are an extremely powerful approach to...

Exploiting Hierarchy for Learning and Transfer in KL-regularized RL
As reinforcement learning agents are tasked with solving more challengin...

Distilling Policy Distillation
The transfer of knowledge from one policy to another is an important too...

Functional Regularisation for Continual Learning using Gaussian Processes
We introduce a novel approach for supervised continual learning based on...

Adapting Auxiliary Losses Using Gradient Similarity
One approach to deal with the statistical inefficiency of neural network...

Meta-Learning with Latent Embedding Optimization
Gradient-based meta-learning techniques are both widely applicable and p...

Relational Deep Reinforcement Learning
We introduce an approach for deep reinforcement learning (RL) that impro...

Relational recurrent neural networks
Memory-based neural networks model temporal data by leveraging an abilit...

Mix&Match - Agent Curricula for Reinforcement Learning
We introduce Mix&Match (M&M) - a training framework designed to facilita...

Relational inductive biases, deep learning, and graph networks
Artificial intelligence (AI) has undergone a renaissance recently, makin...

Hyperbolic Attention Networks
We introduce hyperbolic attention networks to endow neural networks with...

Been There, Done That: Meta-Learning with Episodic Recall
Meta-learning agents excel at rapidly learning new tasks from open-ended...

Progress & Compress: A scalable framework for continual learning
We introduce a conceptually simple and scalable framework for continual ...

Low-pass Recurrent Neural Networks - A memory architecture for longer-term correlation discovery
Reinforcement learning (RL) agents performing complex tasks must be able...

Block Mean Approximation for Efficient Second Order Optimization
Advanced optimization algorithms such as Newton method and AdaGrad benef...

Learning Deep Generative Models of Graphs
Graphs are fundamental data structures which concisely capture the relat...

Memory-based Parameter Adaptation
Deep neural networks have excelled on a wide range of problems, from vis...

Model compression via distillation and quantization
Deep neural networks (DNNs) continue to make significant advances, solvi...

Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture f...

Learning model-based planning from scratch
Conventional wisdom holds that model-based planning is a powerful approa...

Distral: Robust Multitask Reinforcement Learning
Most deep reinforcement learning algorithms are data inefficient in comp...

Visual Interaction Networks
From just a glance, humans can make rich predictions about the future st...

A simple neural network module for relational reasoning
Relational reasoning is a central component of generally intelligent beh...
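The module this paper proposes has a simple compositional form I am confident of from the published work: a function g scores every pair of objects and a function f maps the summed pair representations to an output, RN(O) = f(Σᵢⱼ g(oᵢ, oⱼ)). A toy NumPy sketch follows; the hand-written g and f below stand in for the learned MLPs of the actual model and are purely illustrative:

```python
import numpy as np

def relation_network(objects, g, f):
    """Relation Network composition: RN(O) = f( sum over all pairs of g(o_i, o_j) ).

    objects -- list of object feature vectors
    g       -- computes a representation of one object pair
    f       -- aggregates the summed pair representations into an output
    """
    pair_sum = sum(g(oi, oj) for oi in objects for oj in objects)
    return f(pair_sum)

# Toy example: elementwise-product pair function, sum readout.
objs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
out = relation_network(objs, g=lambda a, b: a * b, f=lambda s: s.sum())
print(out)
```

Because g is applied to all pairs and the results are summed before f, the output is invariant to the order of the objects, which is the relational inductive bias the module is built around.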

Metacontrol for Adaptive Imagination-Based Optimization
Many machine learning systems are built to solve the hardest examples of...

Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectu...

Overcoming catastrophic forgetting in neural networks
The ability to learn tasks in a sequential fashion is crucial to the dev...
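The method introduced in this paper (elastic weight consolidation) protects parameters important to earlier tasks with a quadratic penalty weighted by the diagonal Fisher information, (λ/2)·Σᵢ Fᵢ(θᵢ − θ*ᵢ)². A minimal NumPy sketch of that penalty term, with illustrative variable names (the full method also specifies how the Fisher diagonal is estimated, which is omitted here):

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher_diag, lam=1.0):
    """Quadratic EWC penalty: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta       -- current parameters (while training on the new task)
    theta_star  -- parameters learned on the previous task
    fisher_diag -- diagonal Fisher information from the previous task,
                   i.e. how important each parameter was to it
    """
    return 0.5 * lam * np.sum(fisher_diag * (theta - theta_star) ** 2)

# A parameter with high Fisher information (10.0) is anchored strongly;
# one with low Fisher information (0.1) is nearly free to change.
theta_star = np.array([1.0, -2.0])
fisher = np.array([10.0, 0.1])
print(ewc_penalty(np.array([1.5, 0.0]), theta_star, fisher))
```

This penalty is simply added to the loss of the new task, so gradient descent trades off new-task performance against drift in the parameters the old task relied on.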

Interaction Networks for Learning about Objects, Relations and Physics
Reasoning about objects, relations, and physics is central to human inte...

Local minima in training of neural networks
There has been a lot of recent interest in trying to characterize the er...

Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an...

Theano: A Python framework for fast computation of mathematical expressions
Theano is a Python library that allows one to define, optimize, and evaluate...

Natural Neural Networks
We introduce Natural Neural Networks, a novel family of algorithms that ...

On the saddle point problem for non-convex optimization
A central challenge to many fields of science and engineering involves m...

On the Number of Linear Regions of Deep Neural Networks
We study the complexity of functions computable by deep feedforward neur...

On the number of response regions of deep feed forward networks with piecewise linear activations
This paper explores the complexity of deep feedforward networks with lin...

How to Construct Deep Recurrent Neural Networks
In this paper, we explore different ways to extend a recurrent neural ne...

Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks
In this paper we propose and investigate a novel nonlinear unit, called ...

Pylearn2: a machine learning research library
Pylearn2 is a machine learning research library. This does not just mean...

Revisiting Natural Gradient for Deep Networks
We evaluate natural gradient, an algorithm originally proposed in Amari ...
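The algorithm revisited in this paper preconditions the gradient with the inverse Fisher information matrix, updating θ ← θ − η F⁻¹∇L. A minimal NumPy sketch of one such step, assuming for illustration a small, full, invertible Fisher matrix (the paper itself is concerned with making this practical for deep networks, which this sketch does not address):

```python
import numpy as np

def natural_gradient_step(theta, grad, fisher, lr=0.1):
    """One natural-gradient update: theta - lr * F^{-1} grad.

    fisher -- Fisher information matrix; solving F x = grad gives the
              natural gradient direction without forming F^{-1} explicitly.
    """
    return theta - lr * np.linalg.solve(fisher, grad)

# With a diagonal Fisher, high-curvature directions take smaller steps.
theta = np.array([1.0, 1.0])
grad = np.array([2.0, 2.0])
F = np.diag([4.0, 1.0])
print(natural_gradient_step(theta, grad, F))
```

Compared with plain gradient descent, the step is rescaled by the local information geometry, which is what makes the update invariant to smooth reparameterizations of the model.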

Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm ...
Razvan Pascanu
Research Scientist at Google DeepMind, PhD Student in Machine Learning at Université de Montréal, Developer at Université de Montréal