Distributionally Robust Learning in Heterogeneous Contexts

05/18/2021
by Muhammad Osama, et al.

We consider the problem of learning from training data obtained in different contexts, where the test data is subject to distributional shifts. We develop a distributionally robust method that focuses on excess risks and achieves a more appropriate trade-off between performance and robustness than the conventional and overly conservative minimax approach. The proposed method is computationally feasible and provides statistical guarantees. We demonstrate its performance using both real and synthetic data.
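The abstract contrasts the proposed excess-risk objective with the conventional minimax approach. Below is a minimal, hypothetical sketch of that contrast, not the authors' algorithm: it trains a linear model on synthetic heterogeneous contexts with a simple subgradient-descent / multiplicative-weights solver, once against raw per-context risks (minimax) and once against excess risks (risk minus the per-context optimum). All names (fit_dro, risk, base_risk), the toy data generator, and the step sizes are illustrative assumptions.

```python
# Hypothetical sketch of distributionally robust learning over contexts.
# Conventional minimax:  min_theta max_c R_c(theta)
# Excess-risk variant:   min_theta max_c [R_c(theta) - R_c(theta_c*)]
# where theta_c* is the best model for context c alone.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heterogeneous contexts: each context has its own optimum
# and its own (heteroscedastic) noise level.
n_contexts, n, d = 3, 200, 5
theta_shared = rng.normal(size=d)
contexts = []
for c in range(n_contexts):
    X = rng.normal(size=(n, d))
    theta_c = theta_shared + 0.3 * rng.normal(size=d)  # context-specific optimum
    y = X @ theta_c + (0.5 + 2 * c) * rng.normal(size=n)  # rising noise scale
    contexts.append((X, y))

def risk(theta, X, y):
    """Mean-squared-error risk of theta on one context."""
    r = X @ theta - y
    return np.mean(r ** 2)

def grad_risk(theta, X, y):
    """Gradient of the MSE risk with respect to theta."""
    return 2.0 * X.T @ (X @ theta - y) / len(y)

# Per-context optimal risks R_c(theta_c*): subtracting them makes contexts
# with very different noise floors comparable (the "excess risk").
base_risk = [risk(np.linalg.lstsq(X, y, rcond=None)[0], X, y)
             for X, y in contexts]

def fit_dro(excess=True, steps=2000, lr=0.05, lr_w=0.5):
    """min over theta, max over a context mixture w (multiplicative weights)."""
    theta = np.zeros(d)
    w = np.full(n_contexts, 1.0 / n_contexts)
    for _ in range(steps):
        obj = np.array([risk(theta, X, y) - (base_risk[c] if excess else 0.0)
                        for c, (X, y) in enumerate(contexts)])
        w = w * np.exp(lr_w * obj)   # ascent step: upweight the worst contexts
        w /= w.sum()
        g = sum(w[c] * grad_risk(theta, X, y)
                for c, (X, y) in enumerate(contexts))
        theta -= lr * g              # descent step on the model parameters
    return theta

for excess in (False, True):
    theta = fit_dro(excess=excess)
    ex = [risk(theta, X, y) - b for (X, y), b in zip(contexts, base_risk)]
    label = "excess-risk objective" if excess else "minimax-risk objective"
    print(label, "-> worst-case excess risk: %.3f" % max(ex))
```

The intended illustration: raw minimax lets the noisiest context dominate even though its risk is largely irreducible noise, while the excess-risk objective guards only against the part of the risk the learner can actually control, which is the less conservative trade-off the abstract describes.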

