Errors of machine learning models are costly, especially in safety-criti...
Finetuning image-text models such as CLIP achieves state-of-the-art accu...
As the scope of machine learning broadens, we observe a recurring theme ...
SGD (with momentum) and AdamW are the two most used optimizers for fine-...
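For reference, a minimal PyTorch sketch of how these two optimizers are typically instantiated for fine-tuning; the backbone and hyperparameters here are illustrative placeholders, not the settings studied in the paper.

    import torch
    import torchvision

    # Placeholder: any pretrained backbone being fine-tuned.
    model = torchvision.models.resnet50(weights="IMAGENET1K_V2")

    # SGD with momentum: one global learning rate plus a momentum buffer.
    sgd = torch.optim.SGD(model.parameters(), lr=1e-3,
                          momentum=0.9, weight_decay=1e-4)

    # AdamW: per-parameter adaptive step sizes with decoupled weight decay.
    adamw = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)

In practice only one of the two would be constructed; AdamW is usually run at a smaller base learning rate than SGD because of its adaptive scaling.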
Language models (LMs) are becoming the foundation for almost all major l...
A common approach to transfer learning under distribution shift is to fi...
Recent work has observed that pre-trained models have higher out-of-dist...
We often see undesirable tradeoffs in robust machine learning where out-...
Contrastive learning is a highly effective method which uses unlabeled d...
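For concreteness, a generic InfoNCE-style contrastive loss over two augmented views of the same unlabeled batch, in the style of SimCLR; this is a standard formulation used as background, not necessarily the exact objective analyzed in the paper.

    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.1):
        """Generic InfoNCE loss: z1[i] and z2[i] are embeddings of two
        augmentations of the same unlabeled example (a positive pair);
        every other pairing in the batch serves as a negative."""
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature           # cosine similarities
        labels = torch.arange(z1.size(0), device=z1.device)
        return F.cross_entropy(logits, labels)       # match view i to view i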
We consider unsupervised domain adaptation (UDA), where labeled data fro...
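One standard family of UDA approaches is self-training: pseudo-label the unlabeled target data with the current model and train on the confident predictions. The sketch below is generic background for the setting, not the method proposed in the paper; the confidence threshold is a hypothetical choice.

    import torch
    import torch.nn.functional as F

    def pseudo_label_step(model, opt, x_target, threshold=0.95):
        """One self-training step: treat confident predictions on
        unlabeled target inputs as labels and fit the model to them."""
        with torch.no_grad():
            probs = F.softmax(model(x_target), dim=1)
            conf, pseudo = probs.max(dim=1)
            mask = conf >= threshold
        if mask.any():
            loss = F.cross_entropy(model(x_target[mask]), pseudo[mask])
            opt.zero_grad()
            loss.backward()
            opt.step()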
When transferring a pretrained model to a downstream task, two popular m...
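Two transfer strategies commonly contrasted in this setting are full fine-tuning and linear probing (training only a new head on frozen features); the sketch below assumes that pairing purely as an illustration, since the sentence above is truncated.

    import torch
    import torchvision

    model = torchvision.models.resnet50(weights="IMAGENET1K_V2")  # placeholder backbone

    # Linear probing: freeze the backbone, train only a new final layer.
    for p in model.parameters():
        p.requires_grad = False
    model.fc = torch.nn.Linear(model.fc.in_features, 10)  # e.g., 10 downstream classes
    probe_opt = torch.optim.AdamW(model.fc.parameters(), lr=1e-3)

    # Full fine-tuning: every parameter is updated, typically at a smaller lr.
    for p in model.parameters():
        p.requires_grad = True
    ft_opt = torch.optim.AdamW(model.parameters(), lr=1e-5)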
Machine learning systems deployed in the wild are often trained on a sou...
Out-of-distribution detection is an important component of reliable ML s...
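As one concrete baseline, the maximum softmax probability score (Hendrycks & Gimpel, 2017) flags low-confidence inputs as potentially out-of-distribution; it is shown here as standard background, not as the method of this paper.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def msp_score(model, x):
        """Maximum softmax probability: inputs scoring below a chosen
        threshold are flagged as possibly out-of-distribution."""
        probs = F.softmax(model(x), dim=1)
        return probs.max(dim=1).values  # higher = more likely in-distribution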
Consider a prediction setting where a few inputs (e.g., satellite images...
Selective classification, in which models are allowed to abstain on unce...
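A minimal confidence-threshold abstention rule, a standard baseline for selective classification; the paper may study a different selection mechanism, and the threshold value here is illustrative.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def selective_predict(model, x, threshold=0.9):
        """Predict only when softmax confidence exceeds the threshold;
        otherwise abstain (marked with -1 for that example)."""
        probs = F.softmax(model(x), dim=1)
        conf, preds = probs.max(dim=1)
        preds[conf < threshold] = -1  # -1 marks abstention
        return preds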
In unsupervised domain adaptation, existing theory focuses on situations...
Machine learning systems must adapt to data distributions that evolve ov...
Applications such as weather forecasting and personalized medicine deman...
This paper addresses the problem of evaluating learning systems in safet...
Stochastic video prediction is usually framed as an extrapolation proble...
Given a finite set of points P ⊆ R^d, we would like to find a small subse...
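One classical heuristic for selecting a small representative subset of points is greedy farthest-point (k-center) traversal; the sketch below is a generic illustration of the problem setting, not the construction studied in the paper.

    import numpy as np

    def farthest_point_subset(P, k, seed=0):
        """Greedy k-center: repeatedly add the point farthest from the
        current subset. P is an (n, d) array; returns indices of k points."""
        rng = np.random.default_rng(seed)
        idx = [int(rng.integers(len(P)))]
        dists = np.linalg.norm(P - P[idx[0]], axis=1)
        for _ in range(k - 1):
            nxt = int(dists.argmax())
            idx.append(nxt)
            dists = np.minimum(dists, np.linalg.norm(P - P[nxt], axis=1))
        return idx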