When computer vision systems are deployed in the real world, they are exposed to changing environments and non-stationary input distributions that pose major challenges. For example, a deep network optimized using images collected on sunny days with clear skies may fail drastically at night under different lighting conditions. In fact, it has been recently observed that deep networks demonstrate severe instability even under small changes to the input distribution, let alone when confronted with dynamically changing streams of information.
The problem of domain shift can be avoided by collecting sufficient training data to cover all possible input distributions that occur at test time. However, the expense of collecting and manually annotating data makes this infeasible in many applications. This is particularly true for detailed visual understanding tasks like object detection and semantic segmentation, where image annotation is labor-intensive. It is worth noting that humans are capable of “lifelong learning,” in which new tasks and environments are analyzed using accumulated knowledge from the past. However, achieving the same goal in deep neural networks is non-trivial as (i) new data domains arrive in real time without labels, and (ii) deep networks suffer from catastrophic forgetting, in which performance drops on previously learned tasks when optimizing for new tasks.
We consider the lifelong learning problem of adapting a pre-trained model to dynamically changing environments, whose distributions reflect disparate lighting and weather conditions. In particular, we assume access to image-label pairs from an original source environment, and only unlabeled images from new target environments that are not observed in the training data. Furthermore, we consider the difficulties posed by learning over time, in which target environments appear sequentially.
We focus on the specific task of semantic segmentation due to its practical applications in autonomous driving, where a visual recognition system is expected to deal with changing weather and illumination conditions. This application enables us to leverage the convenience of collecting data from different distributions using graphic rendering tools [43, 42].
To this end, we introduce ACE, a framework which adapts a pre-trained segmentation model to a stream of new tasks that arrive in a sequential manner, while storing historical style information in a compact memory to avoid forgetting. In particular, given a new task, we use an image generator to align the distribution of (labeled) source data with the distribution of (unlabeled) incoming target data at the pixel-level. This produces labeled images with color and texture properties that closely reflect the target domain, which are then directly used for training the segmentation network on the new target domain. Style transfer is achieved by renormalizing feature maps of source images so they have first- and second-order feature statistics that match target images [19, 60]. These renormalized feature maps are then fed into a generator network that produces stylized images.
What makes ACE unique is its ability to learn over a lifetime. To prevent forgetting, ACE contains a compact and light-weight memory that stores the feature statistics of different styles. These statistics are sufficient to regenerate images in any of the historical styles without the burden of storing a library of historical images. Using the memory, historical images can be re-generated and used for training throughout time, thus stopping the deleterious effects of catastrophic forgetting. The entire generation and segmentation framework can be trained in a joint end-to-end manner with SGD. Finally, we consider the topic of using adaptive meta-learning to facilitate faster adaptation to new environments when they are encountered.
Our main contributions are summarized as follows: (1) we present a lightweight framework for semantic segmentation, which is able to adapt to a stream of incoming distributions using simple and fast optimization; (2) we introduce a memory that stores feature statistics for efficient style replay, which facilitates generalization on new tasks without forgetting knowledge from previous tasks; (3) we consider meta-learning strategies to speed up the rate of adaptation to new problem domains; (4) we conduct extensive experiments on two subsets of Synthia, which demonstrate the effectiveness of our method when adapting to a sequence of tasks with different weather and lighting conditions.
2 Related Work
Unsupervised Domain Adaptation. Our work relates to unsupervised domain adaptation, which aims to improve the generalization of a pre-trained model when testing on novel distributions without accessing labels. Existing approaches along this line of research reduce domain differences at either the feature or the pixel level. In particular, feature-level adaptation focuses on aligning the feature representations used for the target task (e.g., classification or segmentation) by minimizing a notion of distance between source and target domains. Such a notion of distance can take the form of explicit metrics such as Maximum Mean Discrepancy (MMD) [31, 4], covariances [8, 9], domain confusion, or Generative Adversarial Networks [58, 16, 17, 45, 18].
On the other hand, pixel-level adaptation transforms images from different domains to look as if they were drawn from the same distribution by using a mapping that reduces inconsistencies in texture and lighting [3, 52, 55, 29]. There are also recent methods seeking to align both pixel-level and feature-level representations simultaneously [15, 62, 69]. In addition, Zhang et al. introduce a curriculum strategy that uses global label distributions and local super-pixel distributions for adaptation, and Saleh et al. handle foreground classes using detection methods when addressing domain shift. Our framework differs from previous work in that we adapt to a stream of testing domains that arrive sequentially, rather than to a single fixed one, which is challenging as it requires the network to perform well on both the current and all previous domains. Note that although we mainly focus on pixel-level alignment, our method could further benefit from feature-level alignment in the segmentation network, but at the cost of saving raw images as opposed to only feature statistics. Further, our approach is related to [63, 2, 14], which perform sequential adaptation for classification tasks by aligning at the feature level, while ours focuses on semantic segmentation with alignment at the pixel level.
Image Synthesis and Stylization. There is a growing interest in synthesizing images with Generative Adversarial Networks (GANs) [65, 38, 29], which are formulated as a minimax game between a generator and a discriminator. To control the generation process, various kinds of additional information have been incorporated, including labels, text, attributes, and images [21, 25]. GANs have also been used in the context of image-to-image translation, which transfers the style of an image to that of a reference image using either cycle-consistency or a mapping into a shared feature space [28, 20]. Without knowing the joint distribution of the domains, these approaches attempt to learn conditional distributions from the marginal distributions of each domain. However, generating high-resolution images with GANs remains a difficult problem and is computationally intensive. In contrast, methods for neural style transfer [10, 19, 59, 37, 22] usually avoid the difficulties of generative modeling, and simply match the feature statistics of Gram matrices [10, 22] or perform channel-wise alignment of mean and variance [27, 19]. In our work, we build upon style transfer to synthesize new images in the style of images from the current task while preserving the contents of the source image.
Lifelong Learning. Our work is also related to lifelong learning, or continual learning, which learns progressively and adapts to new tasks using knowledge accumulated throughout the past. Most existing work focuses on mitigating catastrophic forgetting when learning new tasks [24, 67, 40, 50, 51, 32, 5]. Several recent approaches propose to dynamically increase model capacities when new tasks arrive [66, 64]. Our work focuses on how to adapt a learned segmentation model in an unsupervised manner to a stream of new tasks, each with image distributions different from those originally used for training. In addition, to avoid forgetting knowledge learned in the past, styles are represented and cataloged using their feature statistics. Because this representation is much smaller than raw images, the framework is scalable.
Meta-Learning. Meta-learning [48, 56], also known as learning to learn, is a setting where an agent ingests a set of tasks, each a learning problem on its own, and then establishes a model that can be quickly adapted to unseen tasks from the same distribution. There are three categories of meta-learners: (i) model-based, with an external memory [47, 34]; (ii) metric-based; and (iii) optimization-based [7, 35]. Existing approaches mainly focus on few-shot classification, regression, and reinforcement learning problems, while our approach focuses on how to adapt segmentation models efficiently.
3 Method
The goal of ACE is to adapt a segmentation model from a source task to a number of sequentially presented target tasks with disparate image distributions. The method transfers labeled source images into target domains to create synthetic training data for the segmentation model, while memorizing style information to be used for style replay to prevent forgetting.
More formally, let $\mathcal{T}_0$ denote the source task and $\{\mathcal{T}_t\}_{t=1}^{T}$ represent the target tasks that arrive sequentially. We further use $X_0 = \{(x_i, y_i)\}_{i=1}^{N_0}$ to represent the images and their corresponding labels used for the source task. The label $y_i \in \{0,1\}^{H \times W \times C}$ contains a one-hot label vector for each pixel in the image; we denote the $i$-th image sample as $x_i \in \mathbb{R}^{H \times W \times 3}$ and $y_i$ represents the corresponding label maps, with $H$ and $W$ being the height and width respectively and $C$ denoting the number of classes.
For each subsequent target task, we assume access to only images rather than image-label pairs as in the source task. We further denote the number of target tasks as $T$ and use $X_t = \{x_i\}_{i=1}^{N_t}$ for $t \in [1, T]$ to represent the image set for the $t$-th incoming task, which has $N_t$ images of the same resolution as the source data.
ACE contains four key components: an encoder, a generator, a memory, and a segmentation network. The encoder network converts a source image into a feature representation $z$ (in our case, a stack of 512 output feature maps). The generator network converts feature representations into images. The style of the resulting image can be controlled/manipulated by modifying the statistics (the mean and standard deviation of each feature map) of $z$ before it is handed to the generator. The memory unit remembers the feature statistics (1024 scalar values per style, corresponding to the mean and standard deviation of each of the 512 feature maps) for each image style/domain. A source image can be stylized into any previously seen domain by retrieving the relevant style statistics from the memory unit, renormalizing the feature maps of the source image to have those statistics, and then handing the renormalized features to the generator to create an image.
Stylization via the encoder and generator. When a new task is presented, labeled images are created in the new task domain by transferring source images (and their accompanying labels) to the target domain. To do this, we jointly train a generator network for producing target-stylized images, and a segmentation network for processing images in the target domain.
The image generation pipeline begins with an encoder that extracts feature maps from images. We use a pre-trained VGG19 network as the encoder, taking the output from relu4 to define $f_{\text{enc}}$. Following [26, 19], the weights of the encoder are frozen during training so that it extracts fixed representations $z_0 = f_{\text{enc}}(x_0)$ and $z_t = f_{\text{enc}}(x_t)$ from the source image $x_0$ and a target image $x_t$, respectively.
The image generator $f_{\text{gen}}$, parameterized by weights $\psi_{\text{gen}}$, de-convolves feature maps into images. The style of the output image can be borrowed from a target image with AdaIN, which renormalizes the feature maps (i.e., channels) of a source image to have the same per-channel mean and standard deviation as the maps of a selected target image $x_t$:
$$\hat{z} = \sigma(z_t)\,\frac{z_0 - \mu(z_0)}{\sigma(z_0)} + \mu(z_t).$$
Here, $\mu(\cdot)$ and $\sigma(\cdot)$ compute the mean and standard deviation of each channel, respectively. The normalized feature maps $\hat{z}$ can be fed into the generator to synthesize a new image $x' = f_{\text{gen}}(\hat{z})$. If the parameters $\psi_{\text{gen}}$ are properly tuned, the resulting image will have the contents of $x_0$ but the style of $x_t$.
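As a concrete illustration, the renormalization step can be sketched in a few lines of NumPy. This is a minimal sketch of AdaIN on a single (C, H, W) feature map; the function name and shapes are ours for illustration, not taken from the paper's code.

```python
import numpy as np

def adain(z_src, z_tgt, eps=1e-5):
    """Renormalize each channel of z_src (C, H, W) so its spatial mean and
    standard deviation match those of the corresponding channel of z_tgt."""
    mu_s = z_src.mean(axis=(1, 2), keepdims=True)
    sd_s = z_src.std(axis=(1, 2), keepdims=True)
    mu_t = z_tgt.mean(axis=(1, 2), keepdims=True)
    sd_t = z_tgt.std(axis=(1, 2), keepdims=True)
    return sd_t * (z_src - mu_s) / (sd_s + eps) + mu_t
```

The spatial structure of `z_src` (and hence the content) is untouched; only the first- and second-order channel statistics are shifted toward the target style.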
We train the generator so that it acts as an inverse for the encoder: the encoder should map the decoded image (approximately) back onto the features that produced it. We enforce this by minimizing the following loss function:
$$\ell_{\text{gen}} = \big\|f_{\text{enc}}(x') - \hat{z}\big\|_2 + \big\|\mu(f_{\text{enc}}(x')) - \mu(z_t)\big\|_2 + \big\|\sigma(f_{\text{enc}}(x')) - \sigma(z_t)\big\|_2.$$
Here, the first term (the content loss) measures the difference between the features of the generated image and the aligned features $\hat{z}$ of the source image, with the aim of preserving the contents of the source image. The remaining two terms force the generated image into the style of $x_t$ by matching the mean and variance of the feature maps per channel. Note that some authors match Gram matrices [10, 62] to make styles consistent. We match the means and variances of feature maps as in [27, 59], since these statistics are simple to optimize and contain enough information for good stylization. In contrast to using several layers for alignment [27, 19], we simply match one layer of feature maps from the VGG encoder, which is faster yet sufficient. More importantly, this facilitates lightweight style replay, as described below.
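In the same spirit, the three terms of the generator objective can be sketched directly on feature maps (a NumPy sketch; the squared-error form and the `lam` weighting are illustrative choices of ours, not the paper's exact formulation):

```python
import numpy as np

def channel_stats(z):
    # per-channel mean and std over the spatial dimensions of a (C, H, W) map
    return z.mean(axis=(1, 2)), z.std(axis=(1, 2))

def generator_loss(z_regen, z_aligned, z_target, lam=1.0):
    """Content term: features of the regenerated image, f_enc(f_gen(z_aligned)),
    should match the aligned features z_aligned.  Style terms: its per-channel
    mean/std should match those of the target features z_target."""
    content = np.mean((z_regen - z_aligned) ** 2)
    mu_r, sd_r = channel_stats(z_regen)
    mu_t, sd_t = channel_stats(z_target)
    style = np.mean((mu_r - mu_t) ** 2) + np.mean((sd_r - sd_t) ** 2)
    return content + lam * style
```

The loss is zero exactly when the encoder reproduces the aligned features, which already carry the target statistics by construction.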
The segmentation network. The newly synthesized image $x'$ is handed to the segmentation network $f_{\text{seg}}$, parameterized by weights $\psi_{\text{seg}}$. This network produces a map of label vectors and is trained by minimizing a multi-class cross-entropy loss summed over pixels. In addition, since the synthesized image might lose certain details of the original image that could degrade the performance of the segmentation network, we further constrain the outputs of the segmentation network on the synthetic image to be close to its predictions on the original image before stylization. This is achieved by measuring the KL-divergence between the two outputs, similar in spirit to knowledge distillation with the outputs from the original image serving as the teacher. The segmentation loss takes the following form:
$$\ell_{\text{seg}} = -\sum_{p} y_p^{\top} \log f_{\text{seg}}(x')_p \;+\; \sum_{p} \mathrm{KL}\big(f_{\text{seg}}(x_0)_p \,\big\|\, f_{\text{seg}}(x')_p\big),$$
where the sums run over pixels $p$ and $\psi_{\text{seg}}$ denotes the parameters of the segmentation network. Note that the segmentation loss implicitly depends on the generator parameters $\psi_{\text{gen}}$, because segmentation is performed on the output of the generator.
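A minimal NumPy sketch of this objective, with pixels flattened into a single axis for simplicity (the `lam` weight is an illustrative knob of ours, not a value from the paper):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def seg_loss(logits_stylized, logits_original, labels, lam=1.0, eps=1e-12):
    """logits_*: (P, C) arrays over P pixels; labels: (P,) integer class ids.
    Cross-entropy is computed on the stylized image; the KL term keeps its
    predictions close to those on the original image (the 'teacher')."""
    p = softmax(logits_stylized)
    q = softmax(logits_original)
    ce = -np.mean(np.log(p[np.arange(len(labels)), labels] + eps))
    kl = np.mean(np.sum(q * (np.log(q + eps) - np.log(p + eps)), axis=-1))
    return ce + lam * kl
```

Because the stylized logits come from the generator's output, gradients of this loss flow into both the segmentation weights and the generator weights, which is what allows joint end-to-end training.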
Memory unit and style replay. Optimizing the segmentation loss reduces the discrepancy between the source task and the target task, yet it is unclear how to continually adapt the model to a sequence of incoming tasks with potentially different image distributions without forgetting knowledge learned in the past. A simple way is to store a library of historical images from previous tasks, and then randomly sample images from the library for replay when learning new tasks. However, this requires a large working memory, which might not be viable, particularly for segmentation tasks, where images are usually of high resolution (e.g., images in Cityscapes).
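A back-of-the-envelope comparison makes the storage gap concrete. We assume 2048×1024 RGB images (the Cityscapes resolution) stored as uint8, and one cached style of 1024 float32 values; these numbers are ours, for illustration.

```python
# One raw replay image vs. one cached style (mean + std of 512 channels).
raw_bytes_per_image = 2048 * 1024 * 3   # ~6.3 MB of uint8 pixels
style_bytes = 1024 * 4                  # 1024 float32 values = 4 KB
ratio = raw_bytes_per_image / style_bytes
print(ratio)  # each cached style is over a thousand times smaller
```

At this ratio, caching hundreds of styles per task still costs less memory than storing a single replay image.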
Fortunately, the proposed alignment process synthesizes images from the target distribution using only a source image and the per-channel mean and variance of the feature maps of a target image. Therefore, we only need to save these feature statistics (512-D vectors for both mean and variance) in the memory for efficient replay. When learning the $t$-th task $\mathcal{T}_t$, we select a sample of target images and store their feature statistics in the memory. When adapting to the next task $\mathcal{T}_{t+1}$, in addition to sampling from $X_{t+1}$, we also randomly access the memory, which contains style information from previous tasks, to synthesize images that resemble previously seen tasks on-the-fly for replay.
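The memory unit itself reduces to a small keyed store of statistics vectors. A possible sketch (the class name and the uniform sampling policy are our assumptions):

```python
import numpy as np

class StyleMemory:
    """Caches per-channel (mean, std) vectors per task and samples one
    uniformly at random for style replay."""
    def __init__(self):
        self.styles = {}  # task id -> list of (mu, sd) pairs

    def store(self, task, feat):
        # feat: (C, H, W) encoder output of one target image
        mu = feat.mean(axis=(1, 2))
        sd = feat.std(axis=(1, 2))
        self.styles.setdefault(task, []).append((mu, sd))

    def sample(self, rng):
        # pick a random past task, then a random cached style from it
        task = list(self.styles)[rng.integers(len(self.styles))]
        pairs = self.styles[task]
        return pairs[rng.integers(len(pairs))]
```

A sampled (mu, sd) pair is plugged into the AdaIN renormalization in place of fresh target-image statistics, so a historical style can be regenerated without any stored image.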
Faster adaptation via adaptive meta-learning. Recent methods in meta-learning [7, 35, 39] produce flexible models with meta-parameters that can be quickly adapted to a new task using just a few SGD updates. While standard SGD offers good performance when optimizing the segmentation loss for a sufficient number of epochs, we now explore whether adaptive meta-learning can produce models that speed up adaptation.
For this purpose, we use Reptile, an inexpensive approximation of the MAML method. Reptile updates the parameters of a meta-model by first selecting a task at random and performing multiple steps of SGD to fine-tune the model for that task. Then a “meta-gradient” step is taken in the direction of the fine-tuned parameters. The next iteration proceeds with a different task, and so on, generating a meta-model whose parameters are only a small perturbation away from the optimal parameters for a wide range of tasks.
To be precise, the Reptile meta-gradient step is:
$$\theta \leftarrow \theta + \gamma\,\big(\phi_k(\theta) - \theta\big),$$
where $\phi_k(\theta)$ denotes the parameters after $k$ steps of standard SGD for a randomly selected task, and $\gamma$ is the meta step size. To achieve fast adaptation, we sample from the current task as well as the memory to perform meta-updates using meta-gradients from the whole history of tasks. The meta-model is then fine-tuned on the current task to evaluate performance. The algorithm is summarized in Alg. 1.
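The update rule can be demonstrated on a toy problem in pure NumPy; here each "task" is a 1-D quadratic loss $(\theta - c)^2$ with a task-specific centre $c$, standing in for the real segmentation objective (all names and hyperparameters are illustrative):

```python
import numpy as np

def sgd_k_steps(theta, grad_fn, k=5, lr=0.1):
    # inner loop: k steps of plain SGD on one sampled task
    for _ in range(k):
        theta = theta - lr * grad_fn(theta)
    return theta

def reptile(theta, centers, meta_lr=0.5, rounds=200, seed=0):
    """Each round: sample a task, fine-tune for k steps, then move the
    meta-parameters a fraction of the way toward the fine-tuned ones."""
    rng = np.random.default_rng(seed)
    for _ in range(rounds):
        c = centers[rng.integers(len(centers))]
        theta_k = sgd_k_steps(theta, lambda th: 2.0 * (th - c))
        theta = theta + meta_lr * (theta_k - theta)  # meta-gradient step
    return theta
```

With two tasks centred at 0 and 2, the meta-parameters settle between the two optima, a starting point from which either task is reachable with a handful of SGD steps.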
4 Experiments
In this section, we first introduce the experimental setup and implementation details. Then, we report results of our proposed framework on two datasets and provide discussion.
4.1 Experimental Setup
Datasets and evaluation metrics. Since our approach is designed to process different input distributions sharing the same label space for segmentation tasks, we use data with various weather and lighting conditions from Synthia, a large-scale synthetic dataset generated with rendering engines for semantic segmentation of urban scenes. We use Synthia-Seqs, a subset of Synthia showing the viewpoint of a virtual car captured across different seasons. This dataset can be broken down into various weather and illumination conditions, including “summer”, “winter”, “rain”, and “winter night” (see Table 1). We consider two places from Synthia-Seqs for evaluation, Highway and NYC-like City, which contain 9 and 10 video sequences with different lighting conditions, respectively. We treat each sequence as a task, and each task is further split evenly into a training set and a validation set.
We first train a segmentation model using labeled images in the “dawn” scenario, and then adapt the learned model to the remaining tasks in each of the sequences in an unsupervised setting. During the adaptation process, following [68, 16], we only access labeled images from the first task (i.e., “dawn”) and unlabeled images from the current task. To evaluate the performance of the segmentation model, we report mean intersection-over-union (mIoU) on the validation set of each task as well as the mean mIoU across all tasks.
Network architectures. We use a pretrained VGG19 network as the encoder, and the architecture of the decoder is detailed in the supplemental material. We evaluate the performance of our framework with three different segmentation architectures, FCN-8s-ResNet101, DeepLab V3 , and ResNet50-PSPNet , which have demonstrated great success on standard benchmarks. FCN-8s-ResNet101 is an extension of FCN-8s-VGG network  that uses ResNet101 with dilations as the backbone, rather than VGG19. ResNet50-PSPNet contains a pyramid pooling module to derive representations at different levels that encompass sufficient context information . DeepLab V3  introduces a decoder to refine the segmentation results along object boundaries.
Training details. We use PyTorch for implementation and SGD as the optimizer with weight decay and momentum. We train with standard SGD on both source and target tasks, and for fast adaptation we perform a small number of meta-update steps with meta-gradients. We sample three source images per mini-batch, and for each of these images we randomly sample two reference images, one from the current target task and one from the memory, as style references for generating new images. For style replay, the memory caches the feature statistics of 100 target images per task.
4.2 Results and Discussion
Effectiveness of adapting to new tasks. Table 1 presents the results of ACE and comparisons with source-only methods, which directly apply the model trained on the source task to target tasks without any adaptation. We observe that the performance of the source model degrades drastically when the distribution of a target task differs significantly from the source task (e.g., large drops from “dawn” to “winter” and from “dawn” to “winter night” with FCN-8s-ResNet101). In contrast, ACE effectively aligns feature distributions between the source task and target tasks, outperforming source-only methods by clear margins with FCN-8s-ResNet101 on both Highway and NYC-like City. In addition, we see similar trends with both ResNet50-PSPNet and DeepLab V3, confirming that the framework is applicable to different top-performing segmentation networks. Comparing across networks, ResNet50-PSPNet offers the best mean mIoU on both datasets after adaptation. Although DeepLab V3 achieves the best results on the source task, its generalization ability is limited, with a large performance drop when applied to the “winter night” task; ACE successfully recovers much of this performance through adaptation. Furthermore, performance on Highway is higher than on NYC-like City across networks, which results from the fact that city scenes are more cluttered with small objects like “traffic signs”, in contrast to highways. Figure 4 further visualizes the prediction maps generated by ACE and source-only methods using ResNet50-PSPNet on Highway.
(Table 2: columns are Method, Styles per task, Highway, and NYC-like City.)
Effectiveness of style replay. We now investigate the performance of style replay using different numbers of feature vectors per task in the memory. Table 2 presents the results. The accuracy of ACE degrades by 2.4% and 2.9% on Highway and NYC-like City, respectively, when no samples are used for replay, which confirms that style replay can indeed help revisit previously learned knowledge to prevent forgetting. ACE without replay is still better than source-only methods, since the segmentation network is still being updated with inputs in different styles. When storing more exemplar feature vectors (e.g., 200 per task) in the memory, ACE improves slightly on both Highway and NYC-like City. Here we simply use random sampling to regenerate images in any of the historical styles, and we believe the sampling could be further improved with more advanced strategies.
Comparisons with prior art. We now compare with several recently proposed approaches based on FCN-8s-ResNet101: (1) Source-Reverse transfers testing images to the style of source images and then directly applies the segmentation model; (2) IADA aligns the feature distributions of the current task to those of the source task in a sequential manner using adversarial loss functions, such that the feature distributions can no longer be differentiated by a trained critic; (3) ADDA-Replay stores previous samples and prediction scores and uses a matching loss to constrain the segmentation outputs from previous tasks to remain constant as adaptation progresses. The results are summarized in Table 3. ACE achieves the best results, outperforming the other methods by clear margins, particularly on NYC-like City.
Although Source-Reverse is a straightforward way to align feature distributions, its performance is worse than directly applying the source model. We suspect this drop occurs because of small but systematic differences between the original source data, on which the segmentation engine was trained, and the style-transferred data, on which no training ever occurs. In contrast, ACE trains the segmentation network on synthesized images and constrains the segmentation output on generated images to be compatible with the output on the original source image. IADA improves slightly over the source-only model by aligning feature distributions sequentially; however, it relies on an adversarial loss function that is hard to optimize. More importantly, while IADA proves successful for classification tasks, for tasks like segmentation, where multiple classifiers are used for deep supervision [70, 30] at different distance scales, it is hard to know which feature maps to align to achieve the best performance. Further, ADDA-Replay offers better results than IADA by using a memory for replay; however, this requires storing all samples from previous tasks.
Note that ADDA focuses on aligning distributions at the feature level rather than at the pixel level as in our approach. Our approach is thus complementary to approaches that explore feature-level alignment in the segmentation network, at the cost of storing image samples for replay. When combining ADDA with ACE, further improvements are achieved on both Highway and NYC-like City.
Fast adaptation with meta-updates. ACE achieves good results by batch training on each task using tens of thousands of SGD updates. We are also interested in adapting to the target task quickly by leveraging recent advances in meta-learning. We propose a meta-update method (ACE-Meta), which uses Reptile to learn meta-parameters that are then fine-tuned to a specific task using only a small number of SGD iterations. We compare to ACE-Fast, which uses the same small number of iterations per task but without meta-learning, and to ACE-Slow, which uses full batch training with SGD. The results are summarized in Figure 3. ACE-Meta achieves better performance than ACE-Fast, trained under the same settings, for almost all target tasks on both Highway and NYC-like City, with clear gains when applying the model to “winter” and “winter night”. Moreover, the results of ACE-Meta are on par with full batch training, demonstrating that meta-updates are able to learn the structure shared among different tasks.
Image generation with GANs. We compare images generated by ACE to MUNIT in Figure 5. MUNIT learns to transfer the style of images from one domain to another by learning a shared space regularized by cycle consistency; compared to CycleGAN, it is able to synthesize a diverse set of results with a style encoder and a content encoder that disentangle the generation of style and content. Note that MUNIT also relies on AdaIN to control style, but uses a GAN loss for generation. We can see that the image generated with our approach preserves more detailed content (e.g., the facade of the building) and successfully transfers the snow to the walkway, while there are artifacts (e.g., blurred regions) in the image generated with MUNIT.
5 Conclusion
We presented ACE, a framework that dynamically adapts a pre-trained model to a sequential stream of unlabeled tasks that suffer from domain shift. ACE leverages style replay to generalize well on new tasks without forgetting knowledge acquired in the past. In particular, given a new task, we introduced an image generator that aligns distributions at the pixel level by synthesizing new images with the contents of the source task but in the style of the target task, so that label maps from source images can be directly used for training the segmentation network. To prevent forgetting, we also introduced a memory unit that stores the feature statistics needed to produce different image styles, and replays these styles over time. We further studied how meta-learning strategies can accelerate adaptation. Extensive experiments on Synthia demonstrate that the proposed framework can effectively adapt to a sequence of tasks with shifting weather and lighting conditions. Future directions for research include how to handle distribution changes that involve significant geometry mismatch.
-  M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017.
-  A. Bobu, E. Tzeng, J. Hoffman, and T. Darrell. Adapting to continuously shifting domains. In ICLR Workshop, 2018.
-  K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In CVPR, 2017.
-  K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krishnan, and D. Erhan. Domain separation networks. In NIPS, 2016.
-  F. M. Castro, M. J. Marin-Jimenez, N. Guil, C. Schmid, and K. Alahari. End-to-end incremental learning. In ECCV, 2018.
-  L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, 2018.
-  C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.
-  Y. Ganin and V. S. Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, 2015.
-  Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. JMLR, 2016.
-  L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
-  D. Hendrycks and T. Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In ICLR, 2019.
-  G. E. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. CoRR, 2015.
-  J. Hoffman, T. Darrell, and K. Saenko. Continuous manifold based adaptation for evolving visual domains. In CVPR, 2014.
-  J. Hoffman, E. Tzeng, T. Park, J.-Y. Zhu, P. Isola, K. Saenko, A. A. Efros, and T. Darrell. Cycada: Cycle-consistent adversarial domain adaptation. In ICML, 2018.
-  J. Hoffman, D. Wang, F. Yu, and T. Darrell. Fcns in the wild: Pixel-level adversarial and constraint-based adaptation. CoRR, 2016.
-  W. Hong, Z. Wang, M. Yang, and J. Yuan. Conditional generative adversarial network for structured domain adaptation. In CVPR, 2018.
-  H. Huang, Q. Huang, and P. Krahenbuhl. Domain transfer through deep activation matching. In ECCV, 2018.
-  X. Huang and S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017.
-  X. Huang, M.-Y. Liu, S. Belongie, and J. Kautz. Multimodal unsupervised image-to-image translation. In ECCV, 2018.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
-  J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
-  T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of gans for improved quality, stability, and variation. In ICLR, 2018.
-  J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. PNAS, 2017.
-  C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, 2017.
-  Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang. Universal style transfer via feature transforms. In NIPS, 2017.
-  Y. Li, N. Wang, J. Liu, and X. Hou. Demystifying neural style transfer. In IJCAI, 2018.
-  M.-Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. In NIPS, 2017.
-  M.-Y. Liu and O. Tuzel. Coupled generative adversarial networks. In NIPS, 2016.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
-  M. Long, Y. Cao, J. Wang, and M. I. Jordan. Learning transferable features with deep adaptation networks. In ICML, 2015.
-  D. Lopez-Paz et al. Gradient episodic memory for continual learning. In NIPS, 2017.
-  M. McCloskey and N. J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation. 1989.
-  T. Munkhdalai and H. Yu. Meta networks. In ICML, 2017.
-  A. Nichol, J. Achiam, and J. Schulman. On first-order meta-learning algorithms. CoRR, 2018.
-  A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. In ICML, 2017.
-  G. Perarnau, J. van de Weijer, B. Raducanu, and J. M. Álvarez. Invertible conditional GANs for image editing. In NIPS Workshop, 2016.
-  A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
-  S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
-  S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert. iCaRL: Incremental classifier and representation learning. In CVPR, 2017.
-  S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. In ICML, 2016.
-  S. R. Richter, V. Vineet, S. Roth, and V. Koltun. Playing for data: Ground truth from computer games. In ECCV, 2016.
-  G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez. The SYNTHIA dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In CVPR, 2016.
-  G. Ros, S. Stent, P. F. Alcantarilla, and T. Watanabe. Training constrained deconvolutional networks for road scene semantic segmentation. CoRR, 2016.
-  K. Saito, K. Watanabe, Y. Ushiku, and T. Harada. Maximum classifier discrepancy for unsupervised domain adaptation. In CVPR, 2018.
-  F. S. Saleh, M. S. Aliakbarian, M. Salzmann, L. Petersson, and J. M. Alvarez. Effective use of synthetic data for urban scene semantic segmentation. In ECCV. Springer, 2018.
-  A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. Meta-learning with memory-augmented neural networks. In ICML, 2016.
-  J. Schmidhuber, J. Zhao, and M. Wiering. Shifting inductive bias with success-story algorithm, adaptive levin search, and incremental self-improvement. Machine Learning, 1997.
-  W. Shen and R. Liu. Learning residual images for face attribute manipulation. In CVPR, 2017.
-  H. Shin, J. K. Lee, J. Kim, and J. Kim. Continual learning with deep generative replay. In NIPS, 2017.
-  K. Shmelkov, C. Schmid, and K. Alahari. Incremental learning of object detectors without catastrophic forgetting. In ICCV, 2017.
-  A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb. Learning from simulated and unsupervised images through adversarial training. In CVPR, 2017.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
-  B. Sun and K. Saenko. Deep CORAL: Correlation alignment for deep domain adaptation. In ECCV, 2016.
-  Y. Taigman, A. Polyak, and L. Wolf. Unsupervised cross-domain image generation. In ICLR, 2017.
-  S. Thrun and L. Pratt. Learning to learn: Introduction and overview. In Learning to learn. 1996.
-  E. Tzeng, J. Hoffman, T. Darrell, and K. Saenko. Simultaneous deep transfer across domains and tasks. In ICCV, 2015.
-  E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell. Adversarial discriminative domain adaptation. In CVPR, 2017.
-  D. Ulyanov, V. Lebedev, A. Vedaldi, and V. S. Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In ICML, 2016.
-  D. Ulyanov, A. Vedaldi, and V. S. Lempitsky. Instance normalization: The missing ingredient for fast stylization. CoRR, 2016.
-  O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. Matching networks for one shot learning. In NIPS, 2016.
-  Z. Wu, X. Han, Y.-L. Lin, M. Gokhan Uzunbas, T. Goldstein, S. Nam Lim, and L. S. Davis. DCAN: Dual channel-wise alignment networks for unsupervised scene adaptation. In ECCV, 2018.
-  M. Wulfmeier, A. Bewley, and I. Posner. Incremental adversarial domain adaptation for continually changing environments. In ICRA, 2018.
-  J. Xu and Z. Zhu. Reinforced continual learning. In NIPS, 2018.
-  D. Yoo, N. Kim, S. Park, A. S. Paek, and I.-S. Kweon. Pixel-level domain transfer. In ECCV, 2016.
-  J. Yoon, E. Yang, J. Lee, and S. J. Hwang. Lifelong learning with dynamically expandable networks. In ICLR, 2018.
-  F. Zenke, B. Poole, and S. Ganguli. Continual learning through synaptic intelligence. In ICML, 2017.
-  Y. Zhang, P. David, and B. Gong. Curriculum domain adaptation for semantic segmentation of urban scenes. In ICCV, 2017.
-  Y. Zhang, Z. Qiu, T. Yao, D. Liu, and T. Mei. Fully convolutional adaptation networks for semantic segmentation. In CVPR, 2018.
-  H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In CVPR, 2017.
-  J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017.