Model-Based Domain Generalization

02/25/2021
by George J. Pappas, et al.

We consider the problem of domain generalization, in which a predictor is trained on data drawn from a family of related training domains and then tested on a distinct, unseen test domain. While a variety of approaches have been proposed for this setting, it was recently shown that no existing algorithm can consistently outperform empirical risk minimization (ERM) over the training domains. Motivated by this observation, in this paper we propose a novel approach to the domain generalization problem called Model-Based Domain Generalization. In our approach, we first use unlabeled data from the training domains to learn multi-modal domain transformation models that map data from one training domain to any other domain. Next, we propose a constrained optimization formulation of domain generalization which enforces that a trained predictor be invariant to the distributional shifts induced by the underlying domain transformation models. Finally, we propose a novel algorithmic framework for efficiently solving this constrained optimization problem. In our experiments, we show that this approach outperforms both ERM and existing domain generalization algorithms on numerous well-known, challenging datasets, including WILDS, PACS, and ImageNet. In particular, our algorithms improve on the current state-of-the-art methods on the recently proposed WILDS benchmark by up to 20 percentage points.
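To make the constrained formulation above concrete, here is a minimal, hypothetical sketch in PyTorch of one way such an invariance constraint could be enforced during training. The predictor architecture, the placeholder transformation model `G`, the margin, and the primal-dual penalty update are all illustrative assumptions for exposition, not the authors' exact algorithm or hyperparameters.

```python
# Hypothetical sketch: ERM subject to an invariance constraint under a
# domain transformation model, enforced with a simple primal-dual scheme.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Predictor(nn.Module):
    """Toy classifier standing in for the trained predictor."""
    def __init__(self, dim=32, classes=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, classes))

    def forward(self, x):
        return self.net(x)

def primal_dual_step(model, G, x, y, dual, opt, margin=0.05, dual_lr=0.01):
    """One update of: minimize ERM loss s.t. d(f(x), f(G(x))) <= margin.

    `G` is a frozen domain transformation model mapping a batch to another
    training domain; here it is any callable tensor -> tensor (assumed).
    The constraint is enforced via a nonnegative Lagrange multiplier `dual`.
    """
    opt.zero_grad()
    logits = model(x)
    erm_loss = F.cross_entropy(logits, y)

    with torch.no_grad():
        x_t = G(x)  # transformed view of the batch in another domain

    # Constraint term: divergence between predictions on x and on G(x).
    invariance = F.kl_div(
        F.log_softmax(model(x_t), dim=-1),
        F.softmax(logits.detach(), dim=-1),
        reduction="batchmean",
    )

    # Primal step: ERM loss plus dual-weighted constraint violation.
    (erm_loss + dual * invariance).backward()
    opt.step()

    # Dual ascent on the violation, projected onto the nonnegative orthant.
    dual = max(0.0, dual + dual_lr * (invariance.item() - margin))
    return erm_loss.item(), invariance.item(), dual

if __name__ == "__main__":
    torch.manual_seed(0)
    model = Predictor()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Placeholder for a learned multi-modal domain transformation model.
    G = lambda x: x + 0.1 * torch.randn_like(x)
    dual = 0.0
    for _ in range(5):
        x = torch.randn(16, 32)
        y = torch.randint(0, 4, (16,))
        erm, inv, dual = primal_dual_step(model, G, x, y, dual, opt)
```

In this sketch the transformation model is treated as given; in the paper's pipeline it would first be learned from unlabeled data across the training domains before the predictor is trained against it.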
