Cost-effective Framework for Gradual Domain Adaptation with Multifidelity
In domain adaptation, prediction performance degrades when the distance between the source and target domains is large. Gradual domain adaptation addresses this issue by assuming access to intermediate domains that shift gradually from the source to the target domain. Previous works assumed that the number of samples in the intermediate domains is sufficiently large, so that self-training is possible without labeled data; if access to an intermediate domain is restricted, self-training will fail. In practice, the cost of sampling from intermediate domains varies, and it is natural to assume that the closer an intermediate domain is to the target domain, the more expensive its samples are to obtain. To resolve this trade-off between cost and accuracy, we propose a framework that combines multifidelity and active domain adaptation. The effectiveness of the proposed method is evaluated in experiments with both artificial and real-world datasets. Code is available at https://github.com/ssgw320/gdamf.
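To make the setting concrete, below is a minimal sketch of the gradual self-training baseline the abstract refers to, not the proposed multifidelity framework itself. The classifier choice, data shapes, and the toy drifting-Gaussian data generator are illustrative assumptions, not details from the paper.

```python
# Sketch of gradual self-training over intermediate domains (baseline
# setting from the abstract, NOT the proposed GDAMF method).
import numpy as np
from sklearn.linear_model import LogisticRegression

def gradual_self_train(x_source, y_source, intermediate_domains):
    """Adapt a classifier from source to target by pseudo-labeling each
    unlabeled intermediate domain in order (source -> target)."""
    model = LogisticRegression().fit(x_source, y_source)
    for x_inter in intermediate_domains:
        pseudo_labels = model.predict(x_inter)  # self-training step
        model = LogisticRegression().fit(x_inter, pseudo_labels)
    return model

# Toy example (assumed for illustration): class means drift gradually.
rng = np.random.default_rng(0)

def make_domain(shift, n=200):
    y = rng.integers(0, 2, n)
    x = rng.normal(loc=(2 * y - 1) + shift, scale=0.5, size=(n, 1))
    return x, y

x_src, y_src = make_domain(0.0)
inters = [make_domain(s)[0] for s in (0.5, 1.0, 1.5)]  # unlabeled
model = gradual_self_train(x_src, y_src, inters)
x_tgt, y_tgt = make_domain(2.0)
print("target accuracy:", model.score(x_tgt, y_tgt))
```

This baseline succeeds only when each intermediate domain has enough samples for reliable pseudo-labels, which is exactly the assumption the proposed cost-aware framework relaxes.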