
MLP-Mixer: An all-MLP Architecture for Vision
Convolutional Neural Networks (CNNs) are the go-to model for computer vi...

What Do Neural Networks Learn When Trained With Random Labels?
We study deep neural networks (DNNs) trained on natural image data with ...

Predicting Neural Network Accuracy from Weights
We study the prediction of the accuracy of a neural network given only i...

When can unlabeled data improve the learning rate?
In semi-supervised classification, one is given access both to labeled a...

Practical and Consistent Estimation of f-Divergences
The estimation of an f-divergence between two probability distributions ...

GeNet: Deep Representations for Metagenomics
We introduce GeNet, a method for shotgun metagenomic classification from...

Clustering Meets Implicit Generative Models
Clustering is a cornerstone of unsupervised learning which can be though...

On the Latent Space of Wasserstein Auto-Encoders
We study the role of latent space dimensionality in Wasserstein auto-enc...

Wasserstein Auto-Encoders
We propose the Wasserstein Auto-Encoder (WAE), a new algorithm for buil...

Differentially Private Database Release via Kernel Mean Embeddings
We lay theoretical foundations for new database release mechanisms that ...

Probabilistic Active Learning of Functions in Structural Causal Models
We consider the problem of learning the functions computing children fro...

From optimal transport to generative modeling: the VEGAN cookbook
We study unsupervised generative modeling in terms of the optimal transp...

AdaGAN: Boosting Generative Models
Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are an e...

Consistent Kernel Mean Estimation for Functions of Random Variables
We provide a theoretical foundation for non-parametric estimation of fun...

Minimax Lower Bounds for Realizable Transductive Classification
Transductive learning considers a training set of m labeled samples and ...

Permutational Rademacher Complexity: a New Complexity Measure for Transductive Learning
Transductive learning considers situations when a learner observes m lab...

Towards a Learning Theory of Cause-Effect Inference
We pose causal inference as the problem of learning to classify probabil...

Localized Complexities for Transductive Learning
We show two novel concentration inequalities for suprema of empirical pr...
Ilya Tolstikhin
Postdoc at the Max Planck Institute for Intelligent Systems since 2014; Senior Research Developer at Kaspersky Lab, 2010–2014; Ph.D. student and Junior Researcher at the Computing Centre of the Russian Academy of Sciences (CCRAS), 2010–2014; short research visit at Université Laval, 2013.