There is an increasing concern that generative AI models may produce out...
We examine the relationship between the mutual information between the o...
We study best-of-both-worlds algorithms for bandits with switching cost,...
We consider the question of adaptive data analysis within the framework ...
We study to what extent stochastic gradient descent (SGD) may be underst...
We consider linear prediction with a convex Lipschitz loss, or more generally...
We study the generalization performance of full-batch optimization algorithms...
We consider the problem of online classification under a privacy constra...
Which classes can be learned properly in the online model? – that is, by...
We give a new separation result between the generalization performance o...
PAC-Bayes is a useful framework for deriving generalization bounds which...
The notion of implicit bias, or implicit regularization, has been sugges...
We prove that every concept class with finite Littlestone dimension can ...
We revisit the fundamental problem of prediction with expert advice, in ...
A basic question in learning theory is to identify if two distributions ...
We study the expressive power of kernel methods and the algorithmic feasibility...
We introduce two mathematical frameworks for foolability in the context ...
We show that every approximately differentially private learning algorit...
This work introduces a model of distributed learning in the spirit of Ya...
It is well-known that neural networks are computationally hard to train....
We consider deep neural networks, in which the output of each node is a ...