
Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing
Tuning hyperparameters is a crucial but arduous part of the machine lear...

Finding and Fixing Spurious Patterns with Explanations
Machine learning models often use spurious patterns such as "relying on ...

Sanity Simulations for Saliency Methods
Saliency methods are a popular class of feature attribution tools that a...

Rethinking Neural Operations for Diverse Tasks
An important goal of neural architecture search (NAS) is to automate awa...

Towards Connecting Use Cases and Methods in Interpretable Machine Learning
Despite increasing interest in the field of Interpretable Machine Learni...

Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability
We empirically demonstrate that full-batch gradient descent on neural ne...

On Data Efficiency of Meta-learning
Meta-learning has enabled learning statistical models that can be quickl...

A Learning Theoretic Perspective on Local Explainability
In this paper, we explore connections between interpretable machine lear...

Geometry-Aware Gradient Algorithms for Neural Architecture Search
Many recent state-of-the-art methods for neural architecture search (NAS...

Model-Agnostic Characterization of Fairness Trade-offs
There exist several inherent trade-offs in designing a fair model, such ...

Explaining Groups of Points in Low-Dimensional Representations
A common workflow in data exploration is to learn a low-dimensional repr...

FedDANE: A Federated Newton-Type Method
Federated learning aims to jointly learn statistical models over massive...

Differentially Private Meta-Learning
Parameter-transfer is a well-known and versatile approach for meta-learn...

Federated Learning: Challenges, Methods, and Future Directions
Federated learning involves training statistical models over remote devi...

Learning Fair Representations for Kernel Models
Fair representations are a powerful tool for establishing criteria like ...

Adaptive Gradient-Based Meta-Learning Methods
We build a theoretical framework for understanding practical meta-learni...

Regularizing Black-box Models for Improved Interpretability (HILL 2019 Version)
Most of the work on interpretable machine learning has focused on design...

SysML: The New Frontier of Machine Learning Systems
Machine learning (ML) techniques are enjoying rapidly increasing adoptio...

On the support recovery of marginal regression
Leading methods for support recovery in high-dimensional regression, suc...

Exploiting Reuse in Pipeline-Aware Hyperparameter Tuning
Hyperparameter tuning of multi-stage pipelines introduces a significant ...

Provable Guarantees for Gradient-Based Meta-Learning
We study the problem of meta-learning through the lens of online convex ...

Random Search and Reproducibility for Neural Architecture Search
Neural architecture search (NAS) is a promising research direction that ...

Regularizing Black-box Models for Improved Interpretability
Most work on interpretability in machine learning has focused on designi...

Expanding the Reach of Federated Learning by Reducing Client Resource Requirements
Communication on heterogeneous edge networks is a fundamental bottleneck...

On the Convergence of Federated Optimization in Heterogeneous Networks
The burgeoning field of federated learning involves training machine lea...

LEAF: A Benchmark for Federated Settings
Modern federated networks, such as those comprised of wearable devices, ...

Massively Parallel Hyperparameter Tuning
Modern learning models are characterized by large hyperparameter spaces....

Supervised Local Modeling for Interpretability
Model interpretability is an increasingly important component of practic...

Parle: parallelizing stochastic gradient descent
We propose a new algorithm called Parle for parallel training of deep ne...

Federated Multi-Task Learning
Federated learning poses new statistical and systems challenges in train...

Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization
Performance of machine learning algorithms depends critically on identif...

MLlib: Machine Learning in Apache Spark
Apache Spark is a popular open-source platform for large-scale data proc...

Nonstochastic Best Arm Identification and Hyperparameter Optimization
Motivated by the task of hyperparameter optimization, we introduce the n...

Matrix Coherence and the Nystrom Method
The Nystrom method is an efficient technique used to speed up large-scal...

Distributed Low-rank Subspace Segmentation
Vision problems ranging from image clustering to motion segmentation to ...

The Big Data Bootstrap
The bootstrap provides a simple and powerful means of assessing the qual...

A Scalable Bootstrap for Massive Data
The bootstrap provides a simple and powerful means of assessing the qual...

Distributed Matrix Completion and Robust Factorization
If learning methods are to scale to the massive sizes of modern datasets...

On the Estimation of Coherence
Low-rank matrix approximations are often used to help scale standard mac...
Ameet Talwalkar
Assistant Professor in the Machine Learning Department at Carnegie Mellon University, and co-founder and Chief Scientist at Determined AI. He was previously an Assistant Professor at the University of California, Los Angeles.