Emotional Semantics-Preserved and Feature-Aligned CycleGAN for Visual Emotion Adaptation
Thanks to large-scale labeled training data, deep neural networks (DNNs) have obtained remarkable success in many vision and multimedia tasks. However, because of the presence of domain shift, the knowledge learned by well-trained DNNs does not generalize well to new domains or datasets that have few labels. Unsupervised domain adaptation (UDA) studies the problem of transferring models trained on one labeled source domain to another unlabeled target domain. In this paper, we focus on UDA in visual emotion analysis for both emotion distribution learning and dominant emotion classification. Specifically, we design a novel end-to-end cycle-consistent adversarial model, termed CycleEmotionGAN++. First, we generate an adapted domain to align the source and target domains at the pixel level by improving CycleGAN with a multi-scale structured cycle-consistency loss. During the image translation, we propose a dynamic emotional semantic consistency loss to preserve the emotion labels of the source images. Second, we train a transferable task classifier on the adapted domain with feature-level alignment between the adapted and target domains. We conduct extensive UDA experiments on the Flickr-LDL and Twitter-LDL datasets for distribution learning and on the ArtPhoto and FI datasets for emotion classification. The results demonstrate the significant improvements yielded by the proposed CycleEmotionGAN++ compared to state-of-the-art UDA approaches.
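
The abstract names two key ingredients: a multi-scale structured cycle-consistency loss for pixel-level alignment and a dynamic emotional semantic consistency loss that preserves the emotion labels of source images during translation. The following is a minimal, hypothetical PyTorch sketch of how such losses could be composed; the function names, the toy classifier, and the loss weight are illustrative assumptions, not the authors' released implementation.

# Hypothetical sketch of two losses described in the CycleEmotionGAN++ abstract.
# All names and weights below are assumptions made for illustration.
import torch
import torch.nn.functional as F

def multiscale_cycle_loss(x, x_rec, scales=(1, 2, 4)):
    # L1 cycle-consistency between an image and its reconstruction,
    # averaged over several spatial scales (coarser scales via average pooling).
    loss = 0.0
    for s in scales:
        xs = F.avg_pool2d(x, kernel_size=s) if s > 1 else x
        rs = F.avg_pool2d(x_rec, kernel_size=s) if s > 1 else x_rec
        loss = loss + F.l1_loss(rs, xs)
    return loss / len(scales)

def emotional_semantic_consistency(emotion_clf, x_src, x_adapted):
    # KL divergence between emotion predictions on the source image and on its
    # adapted (translated) version, encouraging the translation to keep the
    # emotional semantics unchanged.
    with torch.no_grad():
        p_src = F.softmax(emotion_clf(x_src), dim=1)
    log_p_adapted = F.log_softmax(emotion_clf(x_adapted), dim=1)
    return F.kl_div(log_p_adapted, p_src, reduction="batchmean")

# Toy usage with random tensors and a dummy emotion classifier (illustrative only).
clf = torch.nn.Sequential(
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(3, 8)
)
x_src = torch.rand(4, 3, 64, 64)      # source image batch
x_adapted = torch.rand(4, 3, 64, 64)  # source images translated to target style
x_rec = torch.rand(4, 3, 64, 64)      # adapted images translated back to source style
total = multiscale_cycle_loss(x_src, x_rec) \
        + 0.1 * emotional_semantic_consistency(clf, x_src, x_adapted)

In a full training loop these terms would be added to the usual adversarial losses of the two CycleGAN generators and discriminators, with the feature-level alignment applied when training the task classifier on the adapted domain.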