Models trained on synthetic images often face degraded generalization to real data. As a convention, these models are often initialized with ImageNet pre-trained representations. Yet the role of ImageNet knowledge is seldom discussed, despite common practices that leverage this knowledge to maintain generalization ability. An example is the careful hand-tuning of early stopping and layer-wise learning rates, which is shown to improve synthetic-to-real generalization but is also laborious and heuristic. In this work, we explicitly encourage the synthetically trained model to maintain similar representations to the ImageNet pre-trained model, and propose a learning-to-optimize (L2O) strategy to automate the selection of layer-wise learning rates. We demonstrate that the proposed framework can significantly improve synthetic-to-real generalization performance without seeing or training on real data, while also benefiting downstream tasks such as domain adaptation. Code is available at: https://github.com/NVlabs/ASG
Synthetic data has enabled a wide range of vision and robotics tasks, e.g., depth/flow estimation (Dosovitskiy et al., 2015; Mayer et al., 2016; Gaidon et al., 2016), object detection (Johnson-Roberson et al., 2016), visual navigation (Savva et al., 2019), and grasping (Coumans and Bai, 2016). When there is label scarcity, a popular approach is to resort to training with synthetic images, where full supervision can be obtained at a low cost. This finds applications in label-scarce domains such as robotics and autonomous driving, where simulation can play an important role.
However, training with synthetic images poses many challenges. Models trained on synthetic images often suffer from degraded generalization on the real domain. Such a domain gap is usually caused by limitations in rendering quality, including unrealistic texture, appearance, illumination, and scene layout. As a result, networks are prone to overfitting to the synthetic domain, with learned representations that differ from those obtained on real images. To this end, domain generalization methods (Li et al., 2017; Pan et al., 2018; Yue et al., 2019) have been proposed to overcome the above domain gaps and improve model generalization on real target domains.
Synthetic-to-real transfer learning involves training a model only on synthetic images (source domain) without seeing any real ones, and targets generalization performance on unseen real images (target domain). Recent synthetic-to-real generalization algorithms often start with an ImageNet pre-trained model. To achieve the best generalization performance, it is common practice to fine-tune the pre-trained model on synthetic images for only a few epochs (i.e., early stopping) with a small learning rate. Figure 1 illustrates the evaluation dynamics of several popular heuristic solutions on the VisDA-17 dataset (Peng et al., 2017). One can clearly see the high performance in early epochs, and the improvement of fine-tuning with a small learning rate (or even a fixed backbone) over training with a large one (red dashed line). Similar behavior exists in recent works (e.g., IBN-Net (Pan et al., 2018)). This observation implies an important clue: all of these heuristics try to retain the ImageNet domain knowledge during synthetic-to-real transfer learning. It explains why the heuristic solutions in Figure 1 work: they allow the classifier to quickly adjust from ImageNet to the task defined by the synthetic images, while preventing the ImageNet pre-trained representations of natural images from being “washed out” by catastrophic forgetting.
Unfortunately, existing solutions (e.g., IBN-Net) still face degraded generalization and are highly dependent on manual selection of training epochs and schedules (learning rates). Motivated by this open issue, we propose an Automated Synthetic-to-real Generalization (ASG) framework to improve synthetic-to-real transfer learning. This method is automated in two aspects: (1) it stably improves generalization during transfer learning, avoiding the difficulty of choosing an epoch to stop at; (2) it automates the complicated tuning of layer-wise learning rates towards better generalization. The core of our work is the intuition that a good synthetically trained model should share similar representations with ImageNet models, and we leverage this intuition as a proxy guidance to search layer-wise training schedules through learning-to-optimize (L2O).
We examine the behaviors of various training heuristics in order to study the role of ImageNet domain knowledge in synthetic-to-real generalization, which, to the best of our knowledge, has not been thoroughly discussed in the literature.
We provide a novel perspective on synthetic-to-real generalization by formulating it as a lifelong learning problem. We enforce the representation similarity between the synthetically trained model and the ImageNet pre-trained model, and treat this similarity as a proxy guidance for generalization performance. The overall design is illustrated in Figure 2.
We demonstrate that proxy guidance not only dramatically improves generalization performance, but can also be easily integrated into existing transfer learning frameworks as a simple drop-in module, without requiring any additional training beyond synthetic images. Experiments also demonstrate the cross-task generalizability of our proxy guidance, which magnifies the strength of synthetic-to-real transfer learning.
We design a reinforcement learning based learning-to-optimize (RL-L2O) approach to make synthetic-to-real generalization practically more convenient by automating the complicated heuristic design of layer-wise learning rates. We demonstrate that our RL-L2O method outperforms hand-crafted decisions and learns an explainable learning rate strategy.
In our work, we propose an automated framework to address synthetic-to-real transfer learning, dubbed Automated Synthetic-to-real Generalization (ASG). We assume an ImageNet pre-trained model as our starting point. Our target is to maximize the performance of the model on a target domain consisting of unseen real images, by utilizing only synthetic images from the source domain.
Access to a model pre-trained on ImageNet (Deng et al., 2009) implicitly provides the domain knowledge of real images. As we are transferring a model trained on synthetic data to unseen real images, retaining the ImageNet domain knowledge is potentially beneficial to generalization. Motivated by this, we force the model to memorize how to capture the representations learned from ImageNet while training on synthetic images, thereby maintaining both the domain knowledge of real images and the task-specific information provided by the synthetic data.
We start with an ImageNet pre-trained model M and formulate our transfer learning as a lifelong learning problem: training on synthetic images is the new task, while still memorizing the old ImageNet classification task. While updating M with synthetic images, we also keep a copy M_o of the original ImageNet pre-trained model, which is frozen during training. In addition to the cross-entropy loss L_CE calculated on the synthetic dataset, we also forward the synthetic images through M_o and minimize the KL divergence between the outputs of M and M_o. Formally, we leverage the minimization of L_KL as a proxy guidance during our transfer learning process:

L = L_CE + λ · L_KL, (2)
L_CE = -(1/N) Σ_{i=1}^{N} log p(y_i | x_i; θ, θ_new), (3)
L_KL = (1/N) Σ_{i=1}^{N} KL( M_o(x_i; θ_o, θ_old) ‖ M(x_i; θ, θ_old) ). (4)

Here, λ is a balancing factor that controls how much ImageNet domain knowledge the model should retain. θ_new denotes the parameters for the synthetic-to-real transfer learning (i.e., the classifier layers for the new task), and θ_old denotes the parameters of the ImageNet classifier, which outputs the predicted probabilities on the ImageNet domain. θ denotes the parameters of the feature extractor (a.k.a. backbone) updated for the new task, and θ_o denotes the parameters of the feature extractor frozen with ImageNet pre-trained weights. θ and θ_o share the same structure. N is the current batch size, and x_i and y_i are a sample and its ground truth from the new task in the current batch. This synthetic-to-real transfer learning with proxy guidance is illustrated in Figure 2. The new task and the old ImageNet task are jointly optimized during training.
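The joint objective above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: `logits_new` are the synthetic-task logits, `logits_imnet` / `logits_imnet_frozen` are hypothetical ImageNet-head outputs of the updated and frozen models, and the default λ of 0.1 is an arbitrary placeholder.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def proxy_guidance_loss(logits_new, labels, logits_imnet, logits_imnet_frozen, lam=0.1):
    """Cross-entropy on the synthetic task plus a lambda-weighted KL term
    anchoring the ImageNet-head predictions to the frozen pre-trained model."""
    n = logits_new.shape[0]
    p_task = softmax(logits_new)
    ce = float(-np.log(p_task[np.arange(n), labels] + 1e-12).mean())
    p = softmax(logits_imnet)          # updated model, ImageNet head
    q = softmax(logits_imnet_frozen)   # frozen ImageNet pre-trained model
    kl = float((q * (np.log(q + 1e-12) - np.log(p + 1e-12))).sum(axis=-1).mean())
    return ce + lam * kl, ce, kl
```

When the updated model's ImageNet-head outputs coincide with the frozen model's, the KL term vanishes and only the task loss drives the update.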
Cross-task proxy guidance: It is important to note that the new task is not necessarily limited to image classification. For some models in semantic segmentation (e.g., ResNet-based FCN (Long et al., 2015a)), a pixel-wise L_CE provides much denser supervision than the image-wise L_KL in Eq. 4. To spatially balance L_CE and L_KL, we also make L_KL denser by applying it on cropped feature map patches:

L_KL = (1/(N·P)) Σ_{i=1}^{N} Σ_{j=1}^{P} KL( M_o(x_i^(j); θ_o, θ_old) ‖ M(x_i^(j); θ, θ_old) ). (5)

Here, x_i^(1), …, x_i^(P) are cropped patches from the feature map of x_i. Later, in Section 3.4, we will demonstrate that this formulation also works well for cross-task training.
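A small sketch of this densified KL, under the simplifying assumption that `classify` is any head mapping a feature-map patch to class probabilities; the patch grid and the head below are hypothetical stand-ins for the paper's cropping scheme and frozen ImageNet classifier.

```python
import numpy as np

def crop_patches(fmap, grid=2):
    """Split an (H, W, C) feature map into a grid x grid set of crops."""
    H, W, _ = fmap.shape
    hs, ws = H // grid, W // grid
    return [fmap[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
            for i in range(grid) for j in range(grid)]

def dense_kl(fmap_new, fmap_frozen, classify, grid=2):
    """Eq. 5-style densification: average the KL over per-patch predictions."""
    kls = []
    for a, b in zip(crop_patches(fmap_new, grid), crop_patches(fmap_frozen, grid)):
        p, q = classify(a), classify(b)  # patch-level class probabilities
        kls.append(float((q * (np.log(q) - np.log(p))).sum()))
    return float(np.mean(kls))
```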
As observed in Figure 1, different convolution blocks contribute differently to generalizability. This raises a question: do different layers in a deep network require different training strategies towards optimal synthetic-to-real generalization performance during transfer learning?
To avoid manually tuning these hyperparameters, we propose a reinforcement learning based learning-to-optimize (RL-L2O) framework to automatically adjust layer-wise learning rates. In the RL-L2O framework, we aim to learn a parameterized policy π to dynamically control the learning rates given the training statistics of our model during transfer learning.
Generally, the goal of the reinforcement learning algorithm is to learn a policy π that maximizes the total expected reward over time. More precisely,

max_π E_{(s_t, a_t) ~ π} [ Σ_t r_t ],

where the expectation is taken over the sequence of states (or observations) s_t and actions a_t. In short, an action a_t produced by π will update the learning rates for M in the RL-L2O framework. A state s_t contains optimization-related statistics of the model M during the transfer learning, and the reward r_t measures how well the optimization performs.
Design of Optimization Coordinates: One challenge in applying reinforcement learning in our setting is that we want to control the training schedules of a deep network of up to a hundred layers (ResNet-101), each requiring an action from our policy. As layers may have strong correlations during optimization (Ghiasi et al., 2018), the policy may fall into sub-optimal solutions in this large action space. To avoid this difficulty and simplify policy training, we leverage the underlying structures of current deep networks. Specifically, layers with similar input resolution are grouped into a block, called an optimization coordinate. Taking the ResNet family as an example, we group layers into a new coordinate whenever the feature map resolution is reduced. This grouping strategy keeps the policy's action space small and speeds up the L2O training.
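One way to realize this grouping is a simple scan that opens a new coordinate whenever the feature resolution drops; this is a sketch under the assumption that each layer is described by its name and output resolution.

```python
def group_into_coordinates(layers):
    """layers: list of (name, output_resolution) pairs in network order.
    A new coordinate starts whenever the resolution is reduced, mirroring
    the stage boundaries of the ResNet family."""
    coords, current, last_res = [], [], None
    for name, res in layers:
        if last_res is not None and res < last_res:
            coords.append(current)
            current = []
        current.append(name)
        last_res = res
    if current:
        coords.append(current)
    return coords
```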
Design of Action Space: Intuitively, our policy could directly output a learning rate for each coordinate. However, the model can be very sensitive to the learning rate (as observed in Figure 1), which usually resides in a small value range. Directly predicting the value of the learning rate can therefore be very unstable. Instead, we propose a learning rate scaling factor as the action. We first provide the policy a base learning rate η_0. In the following steps, π outputs discrete coordinate-wise learning rate scale factors a = (a_1, …, a_C) as its actions, where C is the number of optimization coordinates in M. We formulate a as categorical actions, where each learning rate scale factor a_c is chosen from a small discrete set. The learning rate for each coordinate is set to η_c = a_c · η_0, and we leverage the gradients and momentums calculated by stochastic gradient descent (SGD) (Rumelhart et al., 1986) to update the parameters in M.
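The action-to-learning-rate mapping can be sketched as follows; the discrete scale set here is illustrative (the paper's exact values are not shown in this excerpt), and `base_lr` plays the role of the base learning rate given to the policy.

```python
def coordinate_lrs(base_lr, actions, scale_set=(0.0, 0.5, 1.0, 2.0)):
    """Map categorical actions (indices into an assumed scale set) to
    per-coordinate learning rates: lr_c = scale_set[a_c] * base_lr."""
    return [scale_set[a] * base_lr for a in actions]
```

A selected scale of 0 freezes a coordinate for that step, which is exactly how the policy can choose to protect ImageNet features in particular blocks.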
Design of Observation Space and Reward: At each step, the state (observation) for π includes: the current L_CE (Eq. 3) and L_KL (Eq. 4, Eq. 5), the training progress of M (i.e., the fraction of completed steps, where the total number of training steps equals “total epochs” × “iterations per epoch”), the mean and standard deviation of the weights of the classifiers (θ_new and θ_old), and finally the scale factors from the last step. The policy learning is guided by the reward r.
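The observation vector can be assembled as in the sketch below; the exact feature set and ordering are an assumption based on the description above, not the authors' released code.

```python
import numpy as np

def build_state(ce, kl, step, total_steps, clf_new_w, clf_old_w, last_actions):
    """Concatenate loss values, training progress, classifier weight statistics,
    and the previous step's scale factors into one observation vector."""
    stats = [ce, kl, step / total_steps,
             float(np.mean(clf_new_w)), float(np.std(clf_new_w)),
             float(np.mean(clf_old_w)), float(np.std(clf_old_w))]
    return np.asarray(stats + list(last_actions), dtype=np.float32)
```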
Policy Training: We update our policy π via the REINFORCE algorithm (Williams, 1992) to minimize:

L(π) = - Σ_{t=1}^{U} r_t · log π(a_t | s_t),

where U is the unroll length for π. Algorithm 1 illustrates the procedure of our RL-L2O framework.
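For a single unroll, this objective reduces to a reward-weighted sum of log-probabilities; below is a minimal sketch, assuming the per-step rewards and log π values have been collected during the unroll.

```python
import numpy as np

def reinforce_loss(log_probs, rewards):
    """Loss whose gradient matches the REINFORCE estimator (Williams, 1992):
    minimizing -sum_t r_t * log pi(a_t | s_t) ascends the expected reward."""
    log_probs = np.asarray(log_probs, dtype=float)
    rewards = np.asarray(rewards, dtype=float)
    return float(-(rewards * log_probs).sum())
```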
Once we obtain the learned policy π, we freeze it and apply it to the synthetic-to-real transfer learning of M together with SGD, as illustrated in Figure 3.
VisDA-17 (Peng et al., 2017) We perform ablation study on the VisDA-17 image classification benchmark. The VisDA-17 dataset provides three subsets (domains), each with the same 12 object categories. Among them, the training set (source domain) is collected from synthetic renderings of 3D models under different angles and lighting conditions, whereas the validation set (target domain) contains real images cropped from the Microsoft COCO dataset (Lin et al., 2014).
GTA5 (Richter et al., 2016) is a vehicle-egocentric image dataset collected in a computer game with pixel-wise semantic labels. It contains 24,966 images with a resolution of 1052×1914. There are 19 classes that are compatible with the Cityscapes dataset.
Cityscapes (Cordts et al., 2016) contains urban street images taken from a vehicle in European cities. There are 5,000 images with pixel-wise annotations. The images have a resolution of 1024×2048 and are labeled into 19 semantic categories.
Image classification: For VisDA-17, we choose ResNet-101 (He et al., 2016) as the backbone and one fully-connected layer as the classifier. The backbone is pre-trained on ImageNet (Deng et al., 2009) and then fine-tuned on the source domain, with learning rate = , weight decay = , momentum = 0.9, and batch size = 32. The model is trained for 30 epochs, and λ for L_KL is set as . In Section 3.3, we additionally study how to choose λ.
Semantic segmentation: We study both FCN with ResNet-50 and FCN with VGG-16 (Long et al., 2015a). Backbones are pre-trained on ImageNet. Our learning rate is , weight decay is , momentum is 0.9, and batch size is 6. We crop the images into patches of 512×512 and train the model with multi-scale augmentation (0.75–1.25) and horizontal flipping. The model is trained for 50 epochs, and λ for L_KL is set as . Note that λ in segmentation is considerably larger, since L_KL is a pixel-wise dense loss.
RL-L2O policy: We set the learning rate for policy training as . The size of the hidden state vector is set to 20, and the unroll length U is fixed. We train π for 50 epochs. For the ResNet family, we follow the convention (He et al., 2016) to group the layers into coordinates by stage. For VGG-16 (Long et al., 2015a), we also group the layers into coordinates, with the remaining layers as the final coordinate.
Proxy guidance: For all backbones we studied (ResNet-50, ResNet-101, and VGG-16), we forward the feature maps extracted by the last convolutional group into the ImageNet classifier (parameterized by θ_old) to calculate L_KL.
We first perform ablation studies on the VisDA-17 image classification task. (There is no previous synthetic-to-real transfer work on the VisDA-17 classification task, only domain adaptation works.)
Generalization with Proxy Guidance. To evaluate the effect of our proxy guidance, we apply our loss to the different learning rate settings studied in Figure 1. As demonstrated in Figure 2, once we force the model to memorize the ImageNet domain knowledge, we achieve stably increasing and eventually better generalization performance for every setting explored in Figure 1. The relative ranking among the different learning rate settings still holds, while the degraded generalizability is addressed. Early stopping is no longer needed, as models enjoy improved generalization given sufficient training epochs. This ablation study validates the contribution of retaining the ImageNet domain knowledge during synthetic-to-real transfer learning. It is also worth noting that our proxy guidance can be applied to different networks (e.g., IBN-Net (Pan et al., 2018), green line in Figure 2), which demonstrates that our approach integrates easily as a simple drop-in module with existing synthetic-to-real generalization works, without requiring any additional training beyond synthetic images.
Moreover, a vital conclusion from Figure 2 is that only reporting the (final) performance as a single number is far from sufficient for analyzing and comparing synthetic-to-real transfer learning methods. Instead, the curve of target performance during training better demonstrates a model's generalizability. Meanwhile, a stably increasing training curve implies that the model is both leveraging synthetic images and retaining ImageNet domain knowledge, instead of overfitting to synthetic appearance and leaving the domain gap an open issue.
How to choose λ: We also study the effect of different strengths of the proxy guidance loss by adjusting λ in Equation 2 for a ResNet-101 model trained with a small learning rate for the backbone and a large one for the classification layer (blue line in Figure 2). In Table 1, we adjust λ in a wide range from 0.01 to 1. While one setting obtains the best generalization accuracy, we can see that our proxy guidance is very robust to the strength of λ. Therefore, choosing λ is much easier than tuning hyperparameters in heuristic solutions, such as the number of training epochs.
Automated Syn-to-Real Generalization. We next evaluate the performance of our RL-L2O framework. Specifically, we want to verify that the policy learned by RL-L2O performs better than both a random policy and the best hand-tuned learning rate policy explored in Figure 2. A random policy means that the controller always randomly picks an action as the learning rate scale factor. All three settings start from the same base learning rate η_0. Figure 6 demonstrates that, although the hand-tuned learning rate strategy is better than a random policy, RL-L2O still outperforms it (blue line).
Additional Ablation Study on VisDA-17. We conduct additional ablation studies on VisDA-17 to further analyze the learning behaviors of ASG. Specifically, as both the proxy guidance and the RL-L2O frameworks are motivated to carefully preserve the ImageNet representations while incorporating updates from the new tasks on synthetic data, it is interesting and important to examine the relation between the level of retained ImageNet knowledge and synthetic-to-real generalization. In our experiment, we compute the ImageNet validation accuracy as well as the generalization performance on the VisDA-17 target domain for the classification task.
Table 2 supports two conclusions: 1) heuristic solutions that retain more ImageNet domain knowledge achieve higher synthetic-to-real generalization (#3 versus #1), i.e., hand-crafted small learning rates prevent the ImageNet pre-trained representations of natural images from being “washed out” by catastrophic forgetting; 2) by leveraging Proxy Guidance, the generalization performance on VisDA-17 is dramatically improved, while the ImageNet accuracy is maintained with almost no drop. Interestingly, Proxy Guidance leads to learned model parameters that achieve high accuracy simultaneously on both ImageNet and VisDA-17. In contrast, naively freezing the backbone and only fine-tuning the classifier layer (“Oracle”, #5) results in inferior synthetic-to-real generalization despite high ImageNet performance.
| # | Method | VisDA-17 acc. (%) | ImageNet acc. (%) |
|---|--------|-------------------|-------------------|
| 1 | Large LR for all layers | 28.2 | 0.8 |
| 2 | + our Proxy Guidance | 58.7 (+30.5) | 76.2 (+75.4) |
| 3 | Small LR for backbone and large LR for FC | 49.3 | 33.1 |
| 4 | + our Proxy Guidance | 60.2 (+10.9) | 76.5 (+43.4) |
| 5 | Oracle on ImageNet | 53.3 (+4.0) | 77.4 |
| 6 | ROAD (Chen et al., 2018) | 57.1 (+7.8) | 77.4 |
| 7 | Vanilla L2 distance | 56.4 (+7.1) | 49.1 |
| 8 | SI (Zenke et al., 2017) | 57.6 (+8.3) | 53.9 |
Oracle is obtained by freezing the ResNet-101 backbone while only training the new fully-connected classification layer on the VisDA-17 source domain (the FC layer for ImageNet remains unchanged). We use the official PyTorch ImageNet pre-trained ResNet-101 model.
In addition, we compare ASG with several other lifelong learning algorithms, including both feature-level regularization (Chen et al., 2018) and weight-level importance-reweighted constraints (Zenke et al., 2017). Rows #5–8 in Table 2 show that although these methods indeed retain ImageNet domain knowledge while improving over the baseline (49.3%), they do not perform as well as the proxy guidance (60.2%) under the same LR policy.
We also conduct comprehensive experiments to evaluate the synthetic-to-real generalization performance of ASG on the semantic segmentation task. In particular, we treat GTA5 as the synthetic source domain and train segmentation models on it. We then treat the Cityscapes validation/test sets as target domains where we directly evaluate the segmentation performance of the synthetically trained models.
Figure 7 shows the dynamics of evaluation accuracy on the Cityscapes validation set. Again, ASG demonstrates significantly improved generalization performance on semantic segmentation over naive synthetic training. In addition, integrating proxy guidance with RL-L2O consistently outperforms baselines where proxy guidance is integrated with other policy strategies. Note that in this case, the frozen model and L_KL are oriented to the classification task, while the trained model and L_CE are designed for segmentation. This showcases the ability of ASG to generalize across different tasks.
In Table 3, we compare our method with prior domain generalization methods for semantic segmentation. One can see that ASG achieves the best performance gain. Among the compared methods, IBN-Net (Pan et al., 2018) improves domain generalization by fine-tuning the mixed IN-BN residual building blocks, while (Yue et al., 2019) transfers the styles of ImageNet images to synthetic images. It is worth noting that (Yue et al., 2019) requires ImageNet images during training and implicitly leverages ImageNet label information (i.e., “Auxiliary Domains”), which brings potential advantages. In contrast, our method requires minimal extra information without using any additional images or labels, and can therefore be conveniently applied to existing frameworks as a drop-in training strategy.
| Methods | mIoU (%) |
|---------|----------|
| Yue et al. (2019) | 37.42 |
| Yue et al. (2019) | 36.11 |
Policy Behaviors. Figure 8 shows clear and explainable behavior patterns of our policy for FCN-VGG16 on the segmentation task. In FCN-VGG16, the convolutional groups belong to the ImageNet pre-trained backbone, while the remaining layers act as the classifier for the dense predictions. The feature map captured by the last backbone group is forwarded into the ImageNet classifier θ_old to calculate L_KL. As this group is close to the calculation of L_KL, fixing it (i.e., selecting action 0, which represents a learning rate scale factor of 0) can effectively minimize L_KL and retain the ImageNet domain knowledge. As parameters of earlier groups lie gradually farther from this supervision, the corresponding selected actions also increase.
On the other hand, to perform dense prediction in semantic segmentation, the extracted feature maps are first forwarded to the classifier layers and then to L_CE. A similar trend holds for the classifier part: the group closest to L_CE is assigned the highest learning rate scale factor.
The proposed ASG framework not only improves synthetic-to-real generalization performance, but can also considerably benefit downstream tasks such as unsupervised domain adaptation. Here we present synthetic-to-real domain adaptation results on VisDA-17 (Peng et al., 2017) in Table 4, where the model trained by ASG (which did not use any real target images during training) is leveraged as the source model (i.e., the starting point for unsupervised domain adaptation training), and the CBST/CRST frameworks are adopted exactly following (Zou et al., 2018, 2019) for fair comparison.
Starting from a much better initialization (our 61.1% compared with 51.6% in (Zou et al., 2019)), we significantly boost the adaptation performance, by over 6% compared with CBST/CRST, achieving 84.6% on VisDA-17. It is important to emphasize that this improvement is obtained without any extra supervision or external knowledge. The only difference lies in smarter synthetic-to-real source training, which ultimately leads to improved adaptation.
| Method | DA | Accuracy (%) |
|--------|----|--------------|
| Source (Saito et al., 2017) | ✗ | 52.4 |
| DANN (Ganin et al., 2016) | ✓ | 57.4 |
| MCD (Saito et al., 2018b) | ✓ | 71.9 |
| ADR (Saito et al., 2017) | ✓ | 74.8 |
| SimNet-Res152 (Pinheiro, 2018) | ✓ | 72.9 |
| Source-Res101 (Zou et al., 2019) | ✗ | 51.6 |
| CBST (Zou et al., 2018) | ✓ | 76.4 (±0.9) |
| CRST (MRKLD) (Zou et al., 2019) | ✓ | 77.9 (±0.5) |
| CRST (MRKLD + LRENT) (Zou et al., 2019) | ✓ | 78.1 (±0.2) |
| ASG + CBST | ✓ | 82.5 (±0.7) |
| ASG + CRST (MRKLD) | ✓ | 84.6 (±0.4) |
| ASG + CRST (MRKLD + LRENT) | ✓ | 84.5 (±0.4) |
Feature visualization. We show the t-SNE visualization of the feature embeddings extracted by the backbone (ResNet-101) of different models in Fig. 10. Compared with both CBST (Zou et al., 2018) and CRST (MRKLD + LRENT) (Zou et al., 2019), the feature embeddings obtained by ASG + CRST form purer clusters in terms of semantic labels.
Domain generalization considers the problem of generalizing a model to an unseen target domain without leveraging any target domain images (Gan et al., 2016; Muandet et al., 2013; Yuan et al., 2020). Muandet et al. (2013) proposed to use MMD (Maximum Mean Discrepancy) to align the distributions from different domains and train the network with adversarial learning. Li et al. (2017) built separate networks for each source domain and used the shared parameters at test time. Li et al. (2018) improved generalization performance by using a meta-learning approach on split training sets. Pan et al. (2018) boosted a CNN's generalization by carefully integrating Instance Normalization and Batch Normalization as building blocks.
Unsupervised domain adaptation (UDA) trains a model towards a specific target domain, where (unlabeled) images from the target domain are available for training. One major idea is to learn domain-invariant embeddings by minimizing the distribution divergence between the source and target domains (Long et al., 2015b; Sun and Saenko, 2016; Tzeng et al., 2014). Hoffman et al. (2017) reduced the domain gap by first translating source images into the target style with a cycle consistency loss, and then aligning the network's feature maps across domains through adversarial training. Other works that leverage image-level translation to bridge the domain gap include domain stylization (Dundar et al., 2020) and DLOW (Gong et al., 2019). Beyond image-level translation, a number of works also perform adversarial learning at the feature level (Saito et al., 2018a; Chen et al., 2019; Liu et al., 2019) or output level (Tsai et al., 2018) for improved domain adaptation performance. In addition, Zou et al. (2018; 2019) proposed an expectation-maximization-like UDA framework based on an iterative self-training process, where the loss with latent variables is minimized. This is achieved by alternately generating pseudo labels on target data and re-training the model with the mixed source and pseudo target labels.
In contrast to the above existing domain generalization and adaptation methods, we resort to leveraging the ImageNet pre-trained model as a proxy guidance during the synthetic-to-real transfer learning, without any extra adversarial training or modification to model architecture.
Lifelong learning (Thrun, 1998) focuses on flexibly appending new tasks to a model's training schedule, while maintaining the knowledge captured from previous old tasks. Li and Hoiem (2017) leverage only new-task data to train the network while preserving its original capabilities by matching the outputs of the old network and the newly learned one. Lopez-Paz and Ranzato (2017) proposed Gradient Episodic Memory (GEM) to alleviate knowledge forgetting while transferring knowledge from previous tasks. Shin et al. (2017) developed a Deep Generative Replay framework, which samples training data from previous tasks when training the new task. A number of other works on lifelong learning with related applications include (Zenke et al., 2017; Kirkpatrick et al., 2017; Shafahi et al., 2019), where lifelong learning is shown to avoid catastrophic forgetting and benefit tasks such as incremental task learning, domain adaptation, and adversarial defense. One work particularly related to our synthetic-to-real generalization theme is (Chen et al., 2018), where the authors propose a spatially aware adaptation scheme and also leverage a distillation loss to avoid overfitting to synthetic data. Our work differs from the above prior works by carefully looking into the important role played by layer-wise learning rate policies in synthetic-to-real transfer learning and accordingly proposing a principled solution to automate the policy search.
Andrychowicz et al. (2016) proposed the first learning-to-optimize framework, where both the optimizee's gradients and loss function values were formulated as input features for a recurrent neural network (RNN) optimizer. Their RNN optimizer adopted coordinate-wise weight sharing to alleviate the dimensionality challenge. Li and Malik (2016) used the gradient history and objective values as observations and step vectors as actions in their reinforcement learning framework. Chen et al. (2017) leveraged an RNN to train a meta-optimizer to optimize black-box functions (e.g., Gaussian process bandits). Recently, Wichrowska et al. (2017) introduced an optimizer with a multi-level hierarchical RNN architecture augmented with additional architectural features, in order to improve the generalizability of the optimization tasks. (Cao et al., 2019; You et al., 2020) further extended learned optimizers to handle Bayesian swarm optimization and graph network training, respectively. In our work, we leverage the learning-to-optimize approach to control the layer-wise learning rates for the training of deep CNNs, where the deep CNN (i.e., the optimizee) is transferred from the synthetic source domain to the real target domain, extending the application range of current learning-to-optimize methods.
In this paper, we presented the Automated Synthetic-to-real Generalization (ASG) method for the synthetic-to-real transfer learning problem. We carefully analyzed the pitfall in existing generalization approaches where the ImageNet domain knowledge is catastrophically forgotten. By leveraging the minimization of the prediction divergence between the ImageNet pre-trained model and the model for the new task as a proxy guidance, the generalization performance is dramatically improved throughout the training process. We further introduced a reinforcement learning based learning-to-optimize strategy to automate layer-wise learning rates towards better generalization performance. Our experiments demonstrate both the superior generalization performance and the automated learning schedules of our ASG framework.
Work done during an internship at NVIDIA. We appreciate the computing power supported by the NVIDIA GPU infrastructure. We also thank the four anonymous reviewers for their discussion and suggestions, and Yang Zou for help with the domain adaptation experiments. The research of Z. Wang was partially supported by NSF Award RI-1755701.
Cordts et al. The Cityscapes dataset for semantic urban scene understanding. In CVPR, 2016.
Coumans and Bai. PyBullet, a Python module for physics simulation for games, robotics and machine learning. http://pybullet.org, 2016.