Generic network for domain adaptation based on self-supervised learning and deep clustering

03/23/2022
by Adu Asare Baffour, et al.

Domain adaptation methods train a model to find similar feature representations between a source and a target domain. Recent methods leverage self-supervised learning to discover analogous representations of the two domains. However, prior self-supervised methods have three significant drawbacks: (1) leveraging pretext tasks that are susceptible to learning low-level representations, (2) aligning the two domains with an adversarial loss without considering whether the extracted features are low-level representations, and (3) lacking the flexibility to accommodate varying proportions of target labels, i.e., assuming target labels are always available. This paper presents a Generic Domain Adaptation Network (GDAN) to address these issues. First, we introduce a criterion based on instance discrimination to select appropriate pretext tasks for learning high-level, domain-invariant representations. Then, we propose a semantic neighbor cluster to align the features of the two domains. The semantic neighbor cluster applies a clustering technique in a feature embedding space to form clusters according to high-level semantic similarities. Finally, we present a weighted target loss function to balance the model weights according to the target labels. This loss function makes GDAN flexible for semi-supervised scenarios, i.e., partly labeled target data. We evaluate the proposed methods on four domain adaptation benchmark datasets. The experimental results show that the proposed methods align the two domains well and achieve competitive results.
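The abstract does not give the exact form of the weighted target loss, but one minimal PyTorch-style sketch, under the assumption that the target supervision term is scaled by the fraction of labeled target samples, could look as follows. The function name, arguments, and weighting scheme below are illustrative assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F


def weighted_target_loss(source_logits, source_labels,
                         target_logits, target_labels,
                         target_label_mask, label_fraction):
    """Hypothetical weighted target loss for semi-supervised adaptation.

    label_fraction: proportion of target samples that carry labels
    (an assumed weighting factor; the paper's exact scheme may differ).
    """
    # Standard supervised loss on the fully labeled source domain.
    loss_src = F.cross_entropy(source_logits, source_labels)

    # Supervised loss only on the subset of labeled target samples.
    if target_label_mask.any():
        loss_tgt = F.cross_entropy(target_logits[target_label_mask],
                                   target_labels[target_label_mask])
    else:
        loss_tgt = torch.zeros((), device=source_logits.device)

    # Down-weight the target term when few target labels are available,
    # so the objective degrades gracefully toward the unsupervised case.
    return loss_src + label_fraction * loss_tgt
```

With label_fraction = 0 the objective reduces to source-only supervision, and with label_fraction = 1 it treats both domains as fully labeled, which is one way to realize the flexibility across label proportions described above.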
