
Label Noise SGD Provably Prefers Flat Global Minimizers
In overparametrized models, the noise in stochastic gradient descent (SG...

Joint System-Wise Optimization for Pipeline Goal-Oriented Dialog System
Recent work (Takanobu et al., 2020) proposed the system-wise evaluation ...

Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss
Recent works in self-supervised learning have advanced the state-of-the...

Why Do Local Methods Solve Nonconvex Problems?
Nonconvex optimization is ubiquitous in modern machine learning. Resear...

Fine-Grained Gap-Dependent Bounds for Tabular MDPs via Adaptive Multi-Step Bootstrap
This paper presents a new model-free algorithm for episodic finite-horiz...

Provable Model-based Nonlinear Bandit and Reinforcement Learning: Shelve Optimism, Embrace Virtual Curvature
This paper studies model-based bandit and reinforcement learning (RL) wi...

In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
Consider a prediction setting where a few inputs (e.g., satellite images...

Meta-learning Transferable Representations with a Single Target Domain
Recent works found that fine-tuning and joint training—two popular appro...

Beyond Lazy Training for Overparameterized Tensor Decomposition
Overparametrization is an important technique in training neural networ...

Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling
Document-level relation extraction (RE) poses new challenges compared to...

Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data
Self-training algorithms, which train a model to fit pseudolabels predic...

Entity and Evidence Guided Relation Extraction for DocRED
Document-level relation extraction is a challenging task which requires ...

Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK
We consider the dynamic of gradient descent for learning a two-layer neu...

Simplifying Models with Unlabeled Output Data
We focus on prediction problems with high-dimensional outputs that are s...

Heteroskedastic and Imbalanced Deep Learning with Adaptive Regularization
Real-world large-scale datasets are heteroskedastic and imbalanced – lab...

Active Online Domain Adaptation
Online machine learning systems need to adapt to domain shifts. Meanwhil...

Individual Calibration with Randomized Forecasting
Machine learning applications often require calibrated predictions, e.g....

Self-training Avoids Using Spurious Features Under Domain Shift
In unsupervised domain adaptation, existing theory focuses on situations...

Federated Accelerated Stochastic Gradient Descent
We propose Federated Accelerated Stochastic Gradient Descent (FedAc), a ...

Model-based Adversarial Meta-Reinforcement Learning
Meta-reinforcement learning (meta-RL) aims to learn from multiple traini...

Shape Matters: Understanding the Implicit Bias of the Noise Covariance
The noise in stochastic gradient descent (SGD) provides a crucial implic...

MOPO: Model-based Offline Policy Optimization
Offline reinforcement learning (RL) refers to the problem of learning po...

Robust and On-the-fly Dataset Denoising for Image Classification
Memorization in overparameterized neural networks could severely hurt g...

Optimal Regularization Can Mitigate Double Descent
Recent empirical and theoretical studies have shown that many learning a...

The Implicit and Explicit Regularization Effects of Dropout
Dropout is a widely-used regularization technique, often required to obt...

Understanding Self-Training for Gradual Domain Adaptation
Machine learning systems must adapt to data distributions that evolve ov...

Variable-Viewpoint Representations for 3D Object Recognition
For the problem of 3D object recognition, researchers using deep learnin...

Bootstrapping the Expressivity with Model-based Planning
We compare the model-free reinforcement learning with the model-based ap...

Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin
For linear classifiers, the relationship between (normalized) output mar...

Verified Uncertainty Calibration
Applications such as weather forecasting and personalized medicine deman...

Learning Self-Correctable Policies and Value Functions from Demonstrations with Negative Sampling
Imitation learning, followed by reinforcement learning algorithms, is a ...

A Model-based Approach for Sample-efficient Multi-task Reinforcement Learning
The aim of multi-task reinforcement learning is twofold: (1) efficientl...

Towards Explaining the Regularization Effect of Initial Large Learning Rate in Training Neural Networks
Stochastic gradient descent with a large initial learning rate is a wide...

Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss
Deep learning algorithms can fare poorly when the training dataset suffe...

On the Performance of Thompson Sampling on Logistic Bandits
We study the logistic bandit, in which rewards are binary with success p...

Data-dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation
Existing Rademacher complexity bounds for neural networks rely only on n...

Fixup Initialization: Residual Learning Without Normalization
Normalization layers are a staple in state-of-the-art deep neural networ...

On the Margin Theory of Feedforward Neural Networks
Past works have shown that, somewhat surprisingly, overparametrization ...

Algorithmic Framework for Model-based Reinforcement Learning with Theoretical Guarantees
While model-based reinforcement learning has empirically been shown to s...

Approximability of Discriminators Implies Diversity in GANs
While Generative Adversarial Networks (GANs) have empirically produced i...

Seeing Neural Networks Through a Box of Toys: The Toybox Dataset of Visual Object Transformations
Deep convolutional neural networks (CNNs) have enjoyed tremendous succes...

Optimal Design of Process Flexibility for General Production Systems
Process flexibility is widely adopted as an effective strategy for respo...

A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors
Motivations like domain adaptation, transfer learning, and feature learn...

Algorithmic Regularization in Over-parameterized Matrix Sensing and Neural Networks with Quadratic Activations
We show that the (stochastic) gradient descent algorithm provides an imp...

Algorithmic Regularization in Over-parameterized Matrix Recovery
We study the problem of recovering a low-rank matrix X^⋆ from linear meas...

Learning One-hidden-layer Neural Networks with Landscape Design
We consider the problem of learning a one-hidden-layer neural network: w...

On the Optimization Landscape of Tensor Decompositions
Nonconvex optimization with local search heuristics has been widely use...

Generalization and Equilibrium in Generative Adversarial Nets (GANs)
We show that training of generative adversarial network (GAN) may not ha...

Provable learning of Noisy-or Networks
Many machine learning applications use latent variable models to explain...

Identity Matters in Deep Learning
An emerging design principle in deep learning is that each layer of a de...
Tengyu Ma
Assistant Professor of Computer Science and Statistics at Stanford University