We consider the sampling problem from a composite distribution whose pot...
Continual learning on sequential data is critical for many machine learn...
Quasar convexity is a condition that allows some first-order methods to ...
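For reference, the usual definition (a sketch of the standard formulation, not taken from this truncated abstract): a differentiable f with minimizer x^* is gamma-quasar convex, for gamma in (0, 1], if

    f(x^*) \ge f(x) + \frac{1}{\gamma} \langle \nabla f(x),\, x^* - x \rangle \quad \text{for all } x.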
We study the Inexact Langevin Algorithm (ILA) for sampling using estimat...
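The abstract is cut off, but ILA as usually stated replaces the exact score \nabla \log \pi with an estimate s in the Langevin update; a minimal sketch of that iteration, assuming step size \eta:

    x_{k+1} = x_k + \eta\, s(x_k) + \sqrt{2\eta}\, \xi_k, \qquad \xi_k \sim \mathcal{N}(0, I), \quad s \approx \nabla \log \pi.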
Distributed machine learning (DML) can be an important capability for mo...
We consider a setting in which a model needs to adapt to a new domain under ...
Hamiltonian Monte Carlo (HMC) is a popular method in sampling. While the...
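For orientation only (the truncated abstract does not specify which variant is analyzed), here is a minimal Python sketch of one HMC step with a leapfrog integrator and a Metropolis correction; logp, grad_logp, step, and n_leapfrog are illustrative names rather than anything from the paper:

    import numpy as np

    def hmc_step(x, logp, grad_logp, step=0.1, n_leapfrog=20, rng=np.random.default_rng(0)):
        """One HMC step for a target density proportional to exp(logp(x))."""
        p = rng.standard_normal(x.shape)            # resample momentum
        x_new, p_new = x.copy(), p.copy()
        # leapfrog integration of the Hamiltonian dynamics
        p_new = p_new + 0.5 * step * grad_logp(x_new)
        for _ in range(n_leapfrog - 1):
            x_new = x_new + step * p_new
            p_new = p_new + step * grad_logp(x_new)
        x_new = x_new + step * p_new
        p_new = p_new + 0.5 * step * grad_logp(x_new)
        # Metropolis accept/reject based on the change in the Hamiltonian
        h_old = -logp(x) + 0.5 * np.dot(p, p)
        h_new = -logp(x_new) + 0.5 * np.dot(p_new, p_new)
        return x_new if rng.uniform() < np.exp(h_old - h_new) else x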
Heavy Ball (HB) is nowadays one of the most popular momentum methods in ...
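For context (not from the abstract itself), the heavy-ball iteration with step size \eta and momentum parameter \beta is commonly written as

    x_{k+1} = x_k - \eta \nabla f(x_k) + \beta\, (x_k - x_{k-1}).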
In this paper we study two-player bilinear zero-sum games with constrain...
We study the proximal sampler of Lee, Shen, and Tian (2021) and obtain n...
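As a reminder of the scheme being referenced (a sketch of the standard formulation, with step size \eta and target \pi \propto e^{-f}), the proximal sampler alternates

    y_k \sim \mathcal{N}(x_k, \eta I), \qquad x_{k+1} \sim \pi^{X \mid Y = y_k}(x) \propto \exp\!\big(-f(x) - \tfrac{1}{2\eta}\|x - y_k\|^2\big),

where the second step is implemented by a restricted Gaussian oracle.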
Distributed machine learning (DML) over time-varying networks can be an ...
The technique of modifying the geometry of a problem from Euclidean to H...
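The continuation is cut off; for orientation, one standard instance of changing the geometry in this way is mirror descent, where a convex mirror map \phi (whose Hessian defines the local geometry) and its conjugate \phi^* replace the Euclidean update:

    x_{k+1} = \nabla \phi^* \big( \nabla \phi(x_k) - \eta \nabla f(x_k) \big).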
We consider the dynamics of two-player zero-sum games, with the goal of ...
We study the Proximal Langevin Algorithm (PLA) for sampling from a proba...
We study the problem of finding min-max solutions for smooth two-input o...
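The method studied is not visible from the truncated abstract; as a standard point of reference for smooth min-max problems \min_x \max_y f(x, y), the extragradient update with step size \eta is

    x_{k+1/2} = x_k - \eta \nabla_x f(x_k, y_k), \qquad y_{k+1/2} = y_k + \eta \nabla_y f(x_k, y_k),
    x_{k+1} = x_k - \eta \nabla_x f(x_{k+1/2}, y_{k+1/2}), \qquad y_{k+1} = y_k + \eta \nabla_y f(x_{k+1/2}, y_{k+1/2}).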
We prove a convergence guarantee on the unadjusted Langevin algorithm fo...
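For concreteness, a minimal Python sketch of the unadjusted Langevin algorithm for a target \pi \propto e^{-f}; ula, grad_f, step, and n_iters are illustrative names, and the fixed step size is not tuned to any particular guarantee:

    import numpy as np

    def ula(grad_f, x0, step=1e-2, n_iters=10_000, seed=0):
        """Unadjusted Langevin algorithm for a target pi(x) proportional to exp(-f(x))."""
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        traj = [x.copy()]
        for _ in range(n_iters):
            # gradient step on the potential plus injected Gaussian noise
            x = x - step * grad_f(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
            traj.append(x.copy())
        return np.array(traj)

    # example: standard Gaussian target, f(x) = ||x||^2 / 2, so grad_f(x) = x
    samples = ula(grad_f=lambda x: x, x0=np.zeros(2))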
We study the convexity of mutual information as a function of time along...
We study sampling as optimization in the space of measures. We focus on ...
We study the convexity of mutual information along the evolution of the ...
Accelerated gradient methods play a central role in optimization, achiev...
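As background (not from the abstract), Nesterov's accelerated gradient method for smooth convex f, in its common simplified form, is

    x_k = y_{k-1} - \eta \nabla f(y_{k-1}), \qquad y_k = x_k + \tfrac{k-1}{k+2}\,(x_k - x_{k-1}),

which attains the O(1/k^2) rate in function value.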
We analyze a reweighted version of the Kikuchi approximation for estimat...
We consider derivative-free algorithms for stochastic and non-stochastic...
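For orientation, derivative-free (zeroth-order) methods typically build on gradient estimates formed from function values alone; a minimal Python sketch of the Gaussian-smoothing two-point estimator (the names and the smoothing radius mu are illustrative, and x is assumed to be a NumPy array):

    import numpy as np

    def two_point_grad_estimate(f, x, mu=1e-3, rng=np.random.default_rng(0)):
        """Two-point gradient estimator using only function evaluations.

        Returns ((f(x + mu*u) - f(x)) / mu) * u with u ~ N(0, I), an unbiased
        estimate of the gradient of the Gaussian-smoothed surrogate of f.
        """
        u = rng.standard_normal(x.shape)
        return (f(x + mu * u) - f(x)) / mu * u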
We present SDA-Bayes, a framework for (S)treaming, (D)istributed, (A)syn...
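The combination rule underlying streaming and distributed posterior updates of this kind follows from Bayes' rule when the data batches D_1, ..., D_B are conditionally independent given \theta (a sketch of the identity, not a claim about the algorithmic details hidden by the truncation):

    p(\theta \mid D_1, \dots, D_B) \;\propto\; p(\theta) \prod_{b=1}^{B} \frac{p(\theta \mid D_b)}{p(\theta)},

with approximate sub-posteriors standing in for the exact p(\theta \mid D_b) in practice.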