
Adversarial score matching and improved sampling for image generation
Denoising score matching with Annealed Langevin Sampling (DSM-ALS) is a ...

Stochastic Hamiltonian Gradient Methods for Smooth Games
The success of adversarial formulations in machine learning has brought ...

Accelerating Smooth Games by Manipulating Spectral Shapes
We use matrix iteration theory to characterize acceleration in smooth ga...

Adversarial target-invariant representation learning for domain generalization
In many applications of machine learning, the training and test set data...

Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs
We generalize the concept of maximum-margin classifiers (MMCs) to arbitr...

Lower Bounds and Conditioning of Differentiable Games
Many recent machine learning tools rely on differentiable game formulati...

A Tight and Unified Analysis of Extragradient for a Whole Spectrum of Differentiable Games
We consider differentiable games: multi-objective minimization problems,...

Reducing the variance in online optimization by transporting past gradients
Most stochastic optimization methods use gradients once before discardin...

State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations
Machine learning promises methods that generalize well from finite label...

SysML: The New Frontier of Machine Learning Systems
Machine learning (ML) techniques are enjoying rapidly increasing adoptio...

Multi-objective training of Generative Adversarial Networks with multiple discriminators
Recent literature has demonstrated promising results for training Genera...

A Modern Take on the Bias-Variance Tradeoff in Neural Networks
We revisit the bias-variance tradeoff for neural networks in light of mo...

h-detach: Modifying the LSTM Gradient Towards Better Optimization
Recurrent neural networks are known for their notorious exploding and va...

Negative Momentum for Improved Game Dynamics
Games generalize the optimization paradigm by introducing different obje...

Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations
Deep networks have achieved impressive results across a variety of impor...

Deep Learning at 15PF: Supervised and Semi-Supervised Classification for Scientific Data
This paper presents the first 15-PetaFLOP Deep Learning system for solv...

Improving Gibbs Sampler Scan Quality with DoGS
The pairwise influence matrix of Dobrushin has long been used as an anal...

Accelerated Stochastic Power Iteration
Principal component analysis (PCA) is one of the most powerful tools in ...
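The truncated abstract above concerns accelerating power iteration. As a generic illustration of the underlying idea (not the paper's algorithm), power iteration with a heavy-ball momentum term can be sketched as follows; the matrix `A`, the momentum value `beta`, and the iteration count are illustrative assumptions.

```python
# Sketch: power iteration with a heavy-ball momentum term,
#   x_{t+1} = A x_t - beta * x_{t-1},
# rescaling both iterates by the same factor so the linear recurrence
# is preserved up to scale. All constants are illustrative.

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def norm(x):
    return sum(v * v for v in x) ** 0.5

def power_iteration_momentum(A, beta, iters=50):
    x_prev = [0.0] * len(A)
    x = [1.0] + [0.0] * (len(A) - 1)  # arbitrary start with mass on e1
    for _ in range(iters):
        x_next = [av - beta * xp for av, xp in zip(matvec(A, x), x_prev)]
        s = norm(x_next)
        x_prev = [v / s for v in x]
        x = [v / s for v in x_next]
    return x

# Symmetric matrix with eigenvalues 3 and 1; top eigenvector is (1, 1)/sqrt(2).
A = [[2.0, 1.0], [1.0, 2.0]]
v = power_iteration_momentum(A, beta=0.25)  # beta chosen near lambda_2^2 / 4
```

With `beta` near λ₂²/4 the subdominant eigen-direction decays like a repeated root of the momentum recurrence, which is the acceleration mechanism this family of methods builds on.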

Learning Representations and Generative Models for 3D Point Clouds
Threedimensional geometric data offer an excellent domain for studying ...

YellowFin and the Art of Momentum Tuning
Hyperparameter tuning is one of the big costs of deep learning. State-of...

Parallel SGD: When does averaging help?
Consider a number of workers running SGD independently on the same pool ...
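A toy illustration of the setting in this abstract, assuming nothing from the paper itself: several workers run SGD independently on the same noisy objective and their final iterates are averaged one-shot. The quadratic objective, step size, noise level, and worker count below are all invented for the sketch.

```python
import random

# Toy one-shot averaging: each worker independently runs SGD on
# f(w) = 0.5 * (w - 1)^2 with Gaussian gradient noise, then the
# final iterates are averaged. All constants are illustrative.

def sgd_worker(seed, steps=200, lr=0.1, noise=1.0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        grad = (w - 1.0) + rng.gauss(0.0, noise)  # noisy gradient of f
        w -= lr * grad
    return w

finals = [sgd_worker(seed) for seed in range(20)]
w_avg = sum(finals) / len(finals)  # averaging shrinks the noise floor ~ 1/K
```

Each worker's final iterate hovers around the optimum w* = 1 with a variance set by the step size and gradient noise; averaging K independent runs cuts that variance by roughly 1/K, which is the regime in which averaging helps.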

Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much
Gibbs sampling is a Markov Chain Monte Carlo sampling technique that ite...

Asynchrony begets Momentum, with an Application to Deep Learning
Asynchronous methods are widely used in deep learning, but have limited ...

Memory Limited, Streaming PCA
We consider streaming, one-pass principal component analysis (PCA), in t...
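The classical memory-limited baseline for this setting is Oja's rule, which maintains a single direction and updates it once per sample. The sketch below is a generic illustration on synthetic 2-D data, not the estimator from the paper; the data model, step size, and sample count are assumptions.

```python
import random

# Generic streaming PCA sketch via Oja's rule (not the paper's algorithm):
# one pass over the data stream, O(d) memory. The data model, step size,
# and sample count are illustrative assumptions.
rng = random.Random(0)
eta = 0.05
w = [1.0, 0.0]  # current estimate of the top principal direction

for _ in range(2000):
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 0.1)
    x = [z1 + z2, z1 - z2]  # dominant variance along the (1, 1) direction
    proj = x[0] * w[0] + x[1] * w[1]
    w = [wi + eta * proj * xi for wi, xi in zip(w, x)]
    s = (w[0] ** 2 + w[1] ** 2) ** 0.5
    w = [wi / s for wi in w]  # renormalize to keep the iterate on the sphere

alignment = abs(w[0] + w[1]) / 2 ** 0.5  # |<w, (1,1)/sqrt(2)>|
```

After one pass, `w` aligns (up to sign) with the top eigenvector of the stream's covariance, using memory proportional to the dimension rather than the sample count.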
Ioannis Mitliagkas
I work on topics in optimization, dynamics, and learning, with a focus on modern machine learning. I have also done work at the intersection of systems and theory. Some recent topics:
- Min-max optimization and the dynamics of games
- Generalization and domain adaptation
- Optimization for deep learning
- Statistical learning and inference