
Multi-Source Causal Inference Using Control Variates
While many areas of machine learning have benefited from the increasing ...

On the Stability of Nonlinear Receding Horizon Control: A Geometric Perspective
The widespread adoption of nonlinear Receding Horizon Control (RHC) stra...

A Variational Inequality Approach to Bayesian Regression Games
Bayesian regression games are a special class of two-player general-sum ...

Representation Matters: Assessing the Importance of Subgroup Allocations in Training Data
Collecting more diverse and representative training data is often touted...

Interleaving Computational and Inferential Thinking: Data Science for Undergraduates at Berkeley
The undergraduate data science curriculum at the University of Californi...

Multi-Stage Decentralized Matching Markets: Uncertain Preferences and Strategic Behaviors
Matching markets are often organized in a multi-stage and decentralized ...

Private Prediction Sets
In real-world settings involving consequential decision-making, the depl...

Distribution-Free, Risk-Controlling Prediction Sets
While improving prediction accuracy has been the focus of machine learni...

Stochastic Approximation for Online Tensorial Independent Component Analysis
Independent component analysis (ICA) has been a popular dimension reduct...

Online Learning Demands in Max-min Fairness
We describe mechanisms for the allocation of a scarce resource among mul...

Bandit Learning in Decentralized Matching Markets
We study two-sided matching markets in which one side of the market (the...

Optimal Mean Estimation without a Variance
We study the problem of heavy-tailed mean estimation in settings where t...

Post-Selection Inference via Algorithmic Stability
Modern approaches to data analysis make extensive use of data-driven mod...

On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces
The classical theory of reinforcement learning (RL) has focused on tabul...

Do Offline Metrics Predict Online Performance in Recommender Systems?
Recommender systems operate in an inherently dynamical setting. Past rec...

Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization
The use of min-max optimization in adversarial training of deep neural n...

Resource Allocation in Multi-armed Bandit Exploration: Overcoming Nonlinear Scaling with Adaptive Parallelism
We study exploration in stochastic multi-armed bandits when we have acce...

Learning Strategies in Decentralized Matching Markets under Uncertain Preferences
We study two-sided decentralized matching markets in which participants ...

Uncertainty Sets for Image Classifiers using Conformal Prediction
Convolutional image classifiers can achieve high predictive accuracy, bu...

Learning from eXtreme Bandit Feedback
We study the problem of batch learning from bandit feedback in the setti...

Exploration in two-stage recommender systems
Two-stage recommender systems are widely adopted in industry due to thei...

ROOT-SGD: Sharp Nonasymptotics and Asymptotic Efficiency in a Single Algorithm
The theory and practice of stochastic optimization has focused on stocha...

On Localized Discrepancy for Domain Adaptation
We propose the discrepancy-based generalization theories for unsupervise...

Covariance estimation with nonnegative partial correlations
We study the problem of high-dimensional covariance estimation under the...

Transferable Calibration with Lower Bias and Variance in Domain Adaptation
Domain Adaptation (DA) enables transferring a learning machine from a la...

Optimal Robust Linear Regression in Nearly Linear Time
We study the problem of high-dimensional robust linear regression where ...

Finding Equilibrium in Multi-Agent Games with Payoff Uncertainty
We study the problem of finding equilibrium strategies in multi-agent ga...

Manifold Learning via Manifold Deflation
Nonlinear dimensionality reduction methods provide a valuable means to v...

Accelerated Message Passing for Entropy-Regularized MAP Inference
Maximum a posteriori (MAP) inference in discrete-valued Markov random fi...

On Projection Robust Optimal Transport: Sample Complexity and Model Misspecification
Optimal transport (OT) distances are increasingly used as loss functions...

On the Theory of Transfer Learning: The Importance of Task Diversity
We provide new statistical guarantees for transfer learning via represen...

Active Learning for Nonlinear System Identification with Guarantees
While the identification of nonlinear dynamical systems is a fundamental...

Projection Robust Wasserstein Distance and Riemannian Optimization
Projection robust Wasserstein (PRW) distance, or Wasserstein projection ...

Instability, Computational Efficiency and Statistical Accuracy
Many statistical estimators are defined as the fixed point of a data-dep...

Lower bounds in multiple testing: A framework based on derandomized proxies
The large bulk of work in multiple testing has focused on specifying pro...

Mechanism Design with Bandit Feedback
We study a multi-round welfare-maximising mechanism design problem, wher...

On Learning Rates and Schrödinger Operators
The learning rate is perhaps the single most important parameter in the ...

On Dissipative Symplectic Integration with Applications to Gradient-Based Optimization
Continuous-time dynamical systems have proved useful in providing concep...

On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and Non-Asymptotic Concentration
We undertake a precise study of the asymptotic and non-asymptotic proper...

Is Temporal Difference Learning Optimal? An Instance-Dependent Analysis
We address the problem of policy evaluation in discounted Markov decisio...

Post-Estimation Smoothing: A Simple Baseline for Learning with Side Information
Observational data are often accompanied by natural structural indices, ...

Robustness Guarantees for Mode Estimation with an Application to Bandits
Mode estimation is a classical problem in statistics with a wide range o...

Optimization with Momentum: Dynamical, Control-Theoretic, and Symplectic Perspectives
We analyze the convergence rate of various momentum-based optimization a...

Provable Meta-Learning of Linear Representations
Meta-learning, or learning-to-learn, seeks to design algorithms that can...

On Thompson Sampling with Langevin Algorithms
Thompson sampling is a methodology for multi-armed bandit problems that ...

Finite-Time Last-Iterate Convergence for Multi-Agent Learning in Games
We consider multi-agent learning via online gradient descent (OGD) in a ...

Robust Optimization for Fairness with Noisy Protected Groups
Many existing fairness criteria for machine learning involve equalizing ...

Decision-Making with Auto-Encoding Variational Bayes
To make decisions based on a model fit by Auto-Encoding Variational Baye...

Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Adaptivity is an important yet under-studied property in modern optimiza...

Revisiting Fixed Support Wasserstein Barycenter: Computational Hardness and Efficient Algorithms
We study the fixed-support Wasserstein barycenter problem (FS-WBP), whic...
Michael I. Jordan
Michael Irwin Jordan is an American scientist and a professor of machine learning, statistics, and artificial intelligence at the University of California, Berkeley. He is one of the leading figures in machine learning, and in 2016 Science reported him as the most influential computer scientist in the world.
Jordan received his BS magna cum laude in Psychology from Louisiana State University in 1978, his MS in Mathematics from Arizona State University in 1980, and his PhD in Cognitive Science from the University of California, San Diego in 1985. At UC San Diego, Jordan was a student of David Rumelhart and a member of the PDP Group in the 1980s.
Jordan is currently a full professor in the Department of Statistics and the Department of EECS at the University of California, Berkeley. From 1988 to 1998 he was a professor in the Department of Brain and Cognitive Sciences at MIT.
Jordan began to develop recurrent neural networks as a cognitive model in the 1980s. In recent years, his work has been less driven by a cognitive point of view and more by traditional statistics.
In the machine-learning community, Jordan popularized Bayesian networks and is known for pointing out links between machine learning and statistics. He was also prominent in formalizing variational methods for approximate inference and in popularizing the expectation-maximization (EM) algorithm in machine learning.
In 2001, Jordan and others resigned from the editorial board of the journal Machine Learning. In a public letter, they advocated less-restrictive access and pledged their support to a new open-access journal, the Journal of Machine Learning Research, created by Leslie Kaelbling to support the development of machine learning.
Jordan has earned numerous awards, including the ACM/AAAI Allen Newell Award, the IEEE Neural Networks Pioneer Award, and an NSF Presidential Young Investigator Award, as well as a best-paper award at the International Conference on Machine Learning. In 2010 he was named a Fellow of the Association for Computing Machinery "for contributions to the theory and application of machine learning." Jordan is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences.
He was named a Neyman Lecturer and a Medallion Lecturer by the Institute of Mathematical Statistics. He was awarded the David E. Rumelhart Prize in 2015 and received the ACM/AAAI Allen Newell Award in 2009.
In 2016, an analysis of the published literature by the Semantic Scholar project identified Jordan as the "most influential computer scientist."