We introduce OpenFlamingo, a family of autoregressive vision-language mo...
Pre-training has been widely adopted in deep learning to improve model p...
Massive web datasets play a key role in the success of large vision-lang...
Natural language processing and 2D vision models have attained remarkabl...
Large language models are now tuned to align with the goals of their cre...
We propose Neural Priming, a technique for adapting large pretrained mod...
Large multimodal datasets have been instrumental in recent breakthroughs...
We introduce new methods for 1) accelerating and 2) stabilizing training...
Weird, unusual, and uncanny images pique the curiosity of observers beca...
The transfer learning paradigm of model pre-training and subsequent fine...
“Effective robustness” measures the extra out-of-distribution (OOD) robu...
Does progress on ImageNet transfer to real-world datasets? We investigat...
Massive data corpora like WebText, Wikipedia, Conceptual Captions, WebIm...
Scaling up neural networks has led to remarkable performance across a wi...
Changing how pre-trained models behave – e.g., improving their performan...
Researchers have proposed many methods for fair and robust machine learn...
We conduct a large empirical evaluation to investigate the landscape of ...
When fine-tuning large neural networks, it is common to use multiple nod...
Groundbreaking language-vision architectures like CLIP and DALL-E proved...
We investigate the ability of language models to perform compositional r...
Open-vocabulary models like CLIP achieve high accuracy across many image...
Web-crawled datasets have enabled remarkable generalization capabilities...
The U.S. criminal legal system increasingly relies on software output to...
Contrastively trained image-text models such as CLIP, ALIGN, and BASIC h...
Households across the world contain arbitrary objects: from mate gourds ...
The conventional recipe for maximizing model accuracy is to (1) train mu...
Large pre-trained models such as CLIP offer consistent accuracy across a...
Although the fairness community has recognized the importance of data, r...
Deep Metric Learning (DML) aims to find representations suitable for zer...
For machine learning systems to be reliable, we must understand their pe...
Recent work has shown that the performance of machine learning models ca...
In the past few years, we have witnessed remarkable breakthroughs in sel...
We study how robust current ImageNet models are to distribution shifts a...
We build four new test sets for the Stanford Question Answering Dataset ...
We investigate the connections between neural networks and simple buildi...
We introduce a systematic framework for quantifying the robustness of cl...
We demonstrate, theoretically and empirically, that adversarial robustne...
Excessive reuse of test data has become commonplace in today's machine l...
We show through theory and experiment that gradient-based explanations o...
Machine learning is currently dominated by largely experimental work foc...
Machine learning models are often susceptible to adversarial perturbatio...
We study the problem of robustly learning multi-dimensional histograms. ...
Sparsity-based methods are widely used in machine learning, statistics, ...
We introduce Graph-Sparse Logistic Regression, a new algorithm for class...
Recent work has shown that neural network-based vision classifiers exhib...
A fundamental, and still largely unanswered, question in the context of ...
Recent work has demonstrated that neural networks are vulnerable to adve...
Empirical risk minimization (ERM) is ubiquitous in machine learning and ...