People form judgments and make decisions based on the information that t...
The goal of data attribution is to trace model predictions back to train...
Distribution shifts are a major source of failure of deployed machine le...
We study the problem of (learning) algorithm comparison, where the goal ...
Existing methods for isolating hard subpopulations and spurious correlat...
We present a conceptual framework, datamodeling, for analyzing the behav...
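As an illustration of the datamodeling idea, here is a minimal, hedged sketch: sample random training subsets, retrain a simple model on each, and fit a sparse linear surrogate that maps subset-inclusion indicators to the model's output on a target example. The toy data, the use of logistic regression as the trained model, and Lasso as the surrogate are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of datamodeling: regress a model's output on a fixed target
# example against binary indicators of training-set inclusion.
import numpy as np
from sklearn.linear_model import LogisticRegression, Lasso

rng = np.random.default_rng(0)
n_train, n_subsets, frac = 200, 500, 0.5   # frac: subset sampling rate

# Toy training pool and a single target example to explain.
X = rng.normal(size=(n_train, 10))
y = (X[:, 0] + 0.1 * rng.normal(size=n_train) > 0).astype(int)
x_target = rng.normal(size=(1, 10))

masks = rng.random((n_subsets, n_train)) < frac   # which points are included
outputs = np.empty(n_subsets)
for i, m in enumerate(masks):
    clf = LogisticRegression().fit(X[m], y[m])
    # Record a continuous output (here, the class-1 logit) on the target.
    outputs[i] = clf.decision_function(x_target)[0]

# The datamodel: a sparse linear map from inclusion indicators to the output.
datamodel = Lasso(alpha=0.01).fit(masks.astype(float), outputs)

# Large |coefficient| flags training points with outsized influence on the
# target prediction.
top = np.argsort(-np.abs(datamodel.coef_))[:5]
print("most influential training indices:", top)
```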
We identify properties of universal adversarial perturbations (UAPs) tha...
We present a methodology for modifying the behavior of a classifier by d...
We give an O(m^{3/2 - 1/762} log(U+W)) time algorithm for minimum cost flo...
To improve model generalization, model designers often restrict the feat...
We show how fitting sparse linear models over learned deep feature repre...
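A hedged sketch of the general recipe this points at: freeze a network's penultimate-layer features and fit an L1-regularized linear head, so each class decision rests on only a few features. The random matrix below stands in for embeddings you would extract from a pretrained network; it is not the paper's code.

```python
# Sketch: fit a sparse (L1-regularized) linear classifier on top of frozen
# "deep" features. Random features are a placeholder for real embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 512))          # placeholder penultimate features
labels = (feats[:, :3].sum(axis=1) > 0).astype(int)

sparse_head = LogisticRegression(
    penalty="l1", solver="saga", C=0.05, max_iter=5000
).fit(feats, labels)

# Sparsity is what makes the head inspectable: the decision depends on a
# handful of features rather than all 512.
active = np.flatnonzero(sparse_head.coef_[0])
print(f"{active.size} of {feats.shape[1]} features used:", active[:10])
```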
As machine learning systems grow in scale, so do their training data req...
We develop a methodology for assessing the robustness of models to subpo...
We assess the tendency of state-of-the-art object recognition models to ...
We study the roots of algorithmic progress in deep policy gradient algor...
Building rich machine learning datasets in a scalable manner often neces...
Dataset replication is a useful tool for assessing whether improvements ...
We present an m^{11/8+o(1)} log W-time algorithm for solving the minimum co...
Learning rate schedule has a major impact on the performance of deep lea...
Adaptive attacks have (rightfully) become the de facto standard for eval...
Deep neural networks have been demonstrated to be vulnerable to backdoor...
We show that the basic classification framework alone can be used to tac...
Many applications of machine learning require models that are human-alig...
Adversarial examples have attracted significant attention in machine lea...
Correctly evaluating defenses against adversarial examples has proven to...
We study how the behavior of deep policy gradient algorithms reflects th...
A recent line of work has uncovered a new form of data poisoning: so-cal...
We explore the concept of co-design in the context of neural network ver...
We introduce a framework that unifies the existing work on black-box adv...
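For context, a minimal sketch of the primitive underlying most query-based black-box attacks: estimate the loss gradient from function evaluations alone (NES-style antithetic sampling), then take a signed step. The quadratic `loss` is a stand-in for a model one can only query; none of this is any particular paper's implementation.

```python
# Sketch: query-only gradient estimation plus signed descent steps.
import numpy as np

rng = np.random.default_rng(0)
d, sigma, n_queries, step = 50, 0.01, 100, 0.05
target = rng.normal(size=d)

def loss(x):
    # Black-box objective: in a real attack, each call is one model query.
    return 0.5 * np.sum((x - target) ** 2)

def estimate_grad(x):
    g = np.zeros(d)
    for _ in range(n_queries // 2):
        u = rng.normal(size=d)
        # Antithetic pair: two queries per direction reduce estimator variance.
        g += (loss(x + sigma * u) - loss(x - sigma * u)) / (2 * sigma) * u
    return g / (n_queries // 2)

x = np.zeros(d)
for _ in range(200):
    x -= step * np.sign(estimate_grad(x))   # signed step, as in l_inf attacks
print("final loss:", loss(x))
```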
We provide a new understanding of the fundamental nature of adversariall...
Batch Normalization (BatchNorm) is a widely adopted technique that enabl...
Machine learning models are often susceptible to adversarial perturbatio...
Sparsity-based methods are widely used in machine learning, statistics, ...
Recent work has shown that neural network-based vision classifiers exhib...
We present an O((log k)^2)-competitive randomized algorithm for the k-serve...
A fundamental, and still largely unanswered, question in the context of ...
Recent work has demonstrated that neural networks are vulnerable to adve...
We study theoretical runtime guarantees for a class of optimization prob...