Informational parsimony – i.e., using the minimal information required f...
Data efficiency is a key challenge for deep reinforcement learning. We a...
Self-supervised learning has made unsupervised pretraining relevant agai...
While deep reinforcement learning excels at solving tasks where large am...
Our work is based on the hypothesis that a model-free agent whose repres...
We propose an approach to self-supervised representation learning based ...
While recent progress has spawned very powerful machine learning systems...
We introduce a deep generative model for functions. Our model provides a...
Learning inter-domain mappings from unpaired data can improve performanc...
In recent years, significant progress has been made in solving challengi...
We propose a recurrent neural model that generates natural-language ques...
We present NewsQA, a challenging machine comprehension dataset of over 1...
Natural language generation plays a critical role in spoken dialogue sys...
We propose a novel neural attention architecture to tackle machine compr...
We connect a broad class of generative models through their shared relia...
We formalize the notion of a pseudo-ensemble, a (possibly infinite) coll...
Locally adapted parameterizations of a model (such as locally weighted r...