Do We Need to Directly Access the Source Datasets for Domain Generalization?
Domain generalization (DG) aims to learn a generalizable model from multiple known source domains for unknown target domains. Nowadays, tremendous amounts of data are distributed across many places and devices and cannot be directly accessed due to privacy protection, especially in crucial areas such as finance and medical care. However, most existing DG algorithms assume that all source datasets are accessible and can be mixed to extract domain-invariant semantics, an assumption that may fail in real-world applications. In this paper, we introduce a challenging setting: training a generalizable model from distributed source datasets without directly accessing them. We propose a novel method for this setting, which first trains a model on each source dataset and then conducts data-free model fusion, fusing the trained models layer by layer according to their semantic similarities and thereby indirectly aggregating different levels of semantics from the distributed sources. The fused model is then transmitted to and trained on each source dataset; we further introduce cross-layer semantic calibration to enhance domain-invariant semantics, which aligns feature maps between the fused model and a fixed local model via an attention mechanism. Extensive experiments on multiple DG datasets demonstrate the effectiveness of our method in this challenging setting, with performance on par with or even superior to that of state-of-the-art DG approaches in the standard DG setting.
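To make the layer-wise, data-free fusion step concrete, below is a minimal sketch, not the paper's exact algorithm. It assumes all source models share the same architecture and approximates per-layer "semantic similarity" with cosine similarity between flattened layer weights (the paper's similarity measure may differ); the function names such as `fuse_models` are hypothetical.

```python
# Minimal sketch of data-free, layer-wise model fusion across distributed sources.
# Assumption: similarity between layer weights stands in for semantic similarity.

import torch
import torch.nn.functional as F


def layer_similarity_weights(layer_params: list[torch.Tensor]) -> torch.Tensor:
    """Weight each source model for one layer by its average cosine similarity
    to the other sources, so dissimilar (outlier) layers contribute less."""
    flat = torch.stack([p.flatten() for p in layer_params])                 # (n_sources, n_params)
    sims = F.cosine_similarity(flat.unsqueeze(1), flat.unsqueeze(0), dim=-1)  # (n, n)
    avg_sim = (sims.sum(dim=1) - 1.0) / (len(layer_params) - 1)             # drop self-similarity
    return torch.softmax(avg_sim, dim=0)                                     # fusion weights


def fuse_models(state_dicts: list[dict]) -> dict:
    """Fuse source-trained models layer by layer without accessing any data."""
    fused = {}
    for name in state_dicts[0]:
        layer_params = [sd[name].float() for sd in state_dicts]
        if layer_params[0].numel() > 1:
            w = layer_similarity_weights(layer_params)
        else:  # scalar entries: fall back to a plain average
            w = torch.full((len(layer_params),), 1.0 / len(layer_params))
        fused[name] = sum(wi * p for wi, p in zip(w, layer_params))
    return fused


if __name__ == "__main__":
    # Toy usage: three "source" models with identical architecture.
    sources = [torch.nn.Linear(8, 4) for _ in range(3)]
    fused_state = fuse_models([m.state_dict() for m in sources])
    fused_model = torch.nn.Linear(8, 4)
    fused_model.load_state_dict(fused_state)
```

In this sketch, each source only shares its trained weights, so the fusion itself never touches raw data; the subsequent per-source training and cross-layer semantic calibration described in the abstract would then refine the fused model locally.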