Distributionally Robust Learning in Heterogeneous Contexts

05/18/2021
by Muhammad Osama, et al.

We consider the problem of learning from training data obtained in different contexts, where the test data is subject to distributional shifts. We develop a distributionally robust method that focuses on excess risks and achieves a more appropriate trade-off between performance and robustness than the conventional and overly conservative minimax approach. The proposed method is computationally feasible and provides statistical guarantees. We demonstrate its performance using both real and synthetic data.
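The abstract contrasts the conventional worst-case (minimax) criterion with one based on excess risk, i.e., risk measured relative to the best achievable risk in each context. A minimal synthetic sketch of that distinction, assuming a toy two-context linear-regression setup (the data, contexts, and parameter grid here are illustrative assumptions, not the paper's method or experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two training contexts with different slopes and noise levels
# (purely illustrative; not the paper's experimental setup).
contexts = []
for slope, noise in ((2.0, 0.1), (1.0, 1.0)):
    x = rng.uniform(-1.0, 1.0, 200)
    y = slope * x + noise * rng.standard_normal(200)
    contexts.append((x, y))

def risk(theta, x, y):
    """Empirical mean-squared error of the predictor y_hat = theta * x."""
    return float(np.mean((y - theta * x) ** 2))

# Best achievable risk within each context (closed-form least squares).
best = [risk(np.dot(x, y) / np.dot(x, x), x, y) for x, y in contexts]

thetas = np.linspace(0.0, 4.0, 401)
# Conventional minimax: minimize the worst-case risk across contexts.
worst_risk = [max(risk(t, x, y) for x, y in contexts) for t in thetas]
# Excess-risk criterion: subtract each context's irreducible risk first.
worst_excess = [max(risk(t, x, y) - b for (x, y), b in zip(contexts, best))
                for t in thetas]

theta_minimax = thetas[int(np.argmin(worst_risk))]
theta_excess = thetas[int(np.argmin(worst_excess))]
print("minimax-risk theta:  ", round(theta_minimax, 2))
print("minimax-excess theta:", round(theta_excess, 2))
```

In this toy setting the noisy context dominates the raw worst-case risk, so the minimax solution is pulled toward that context's optimum, while the excess-risk criterion discounts each context's unavoidable noise and balances the two contexts. This only illustrates why minimax can be overly conservative; the paper's actual method and guarantees are not reproduced here.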
