Most state-of-the-art visual recognition models rely on supervised learning using a large set of manually annotated data (alexnet; He2015DeepRL; maskrcnn; yolov3). As recognition task complexity increases, so does the number of potential real-world variations in visual appearance, and hence the size of the example set needed for sufficient test-time generalization. Unfortunately, large labeled data sets are laborious to acquire (imagenet_cvpr09; zhou2017places), and may even be infeasible for applications with evolving data distributions.
Often, a large portion of the variance within a collection of data is due to task-agnostic factors of variation. For example, the appearance of a street scene will change substantially based on the time of day, weather pattern, and number of traffic lanes, regardless of whether cars or pedestrians are present. Ideally, the ability to recognize cars and pedestrians would not require labeled examples of street scenes for all combinations of times of day, weather conditions, and geographic locations. Rather, it should be sufficient to observe examples from each factor independently and generalize to unseen combinations. However, the available in-domain labeled data often may not even linearly cover all factors of variation. This calls for methods that encourage such sample efficiency by focusing on the individual complexities of the factors of variation, as opposed to their product.
Prior approaches to learning representations which isolate factors of variation in the data have typically regularized the representation itself, with the aim of learning “disentangled” representations (kulkarni2015deep; chen2016infogan; higgins2017beta; higgins2017darla; burgess2018understanding; kimmnih18). Alternatively, one might regularize the way the representation can be manipulated rather than the representational structure itself. Here, we take such an approach by introducing latent canonicalization, in which we constrain the representation such that individual factors can be clamped to an arbitrary, but fixed value (“canonicalized”) by a simple linear transformation of the representation. These canonicalizers can be applied independently or composed together to canonicalize multiple factors.
We assume access to a large collection of source domain data with meta-labels specifying the factors of variation within an image. This may be available from meta-data, attribute labels, or from hyper-parameters used for generation of simulated imagery. Latent canonicalizers are optimized by a pixel loss over pairs of ground-truth canonicalized examples and reconstructions of images with various factors of variation whose representation has been passed through the relevant latent canonicalizers. By requiring the ability for manipulation of the latent space according to factors of variation, latent canonicalization encourages our representation to linearize such factors.
We evaluate our approach on its ability to learn general representations after observing only a subset of all potential combinations of factors of variation. We first consider the simple dSprites dataset, introduced to study disentangled representations (dsprites17), and show qualitatively that we can effectively canonicalize individual factors of variation. We next consider the more realistic, though still tractable, task of digit recognition on Street View House Numbers (SVHN) (svhn) with few labeled examples. Using a simulator, we train our representation with latent canonicalization along multiple axes of variation, such as font type and background color, and then use the factored representation to enable more efficient few-shot training on real SVHN data. Our method substantially increased performance over standard supervised learning and fine-tuning for equivalent amounts of data, achieving digit recognition performance that was only attainable with 5× as much SVHN labeled training data for the best-performing baseline method. Finally, to demonstrate that our approach scales to naturalistic images, we evaluate our method on a subset of ImageNet, again outperforming the best baselines. Our experiments offer promising evidence that encoding structure into the latent representation, guided by known factors of variation within data, can enable more data-efficient learning solutions.
2 Related Work
A number of studies have sought to learn low-dimensional representations of the world, with many aiming to learn “disentangled” representations. In disentangled representations, single latent units independently represent semantically-meaningful factors of variation in the world and can lead to better performance on some tasks (van2019disentangled). This problem has been most commonly studied in an unsupervised setting, often by regularizing latent representations to stay close to an isotropic Gaussian (higgins2017beta; higgins2017darla; burgess2018understanding; kimmnih18). Other popular unsupervised approaches include maximizing the mutual information between the latents and the observations (chen2016infogan) and adversarial approaches (donahue2016adversarial; radford2015unsupervised). When supervision on the sources of variation is available, it is possible to use this in a weak way (kulkarni2015deep).
Many of these works have explicitly endeavored to learn semantically meaningful representations which are both linearly independent and axis-aligned, such that individual latents correspond to individual factors of variation in the world. However, recent work has questioned the importance of axis-aligned representations, showing that many of these methods rely on implicit supervision and finding little correspondence between this strict definition of disentanglement and learning of downstream tasks (rolinek2018variational; challenging_disentangling). Further, while axis-alignment is useful for human interpretability, for the purposes of decodability, any arbitrary rotation of these latents would be equally decodable so long as factors are linearly independent (challenging_disentangling). In this work, we use explicit supervision in a simulated setting to encourage linear, but not necessarily axis-aligned representations.
The setting of near-unlimited simulated data with ground-truth labels and scarce real data occurs often in computer vision and robotics. However, the domain gap between simulated and real data reduces generalization capacity. Many approaches, broadly referred to as sim2real approaches, have been proposed to overcome this difficulty. A simple approach to closing the sim2real gap is to train networks with combinations of real and synthetic data (varol2017learning).
Transfer Learning and Few-shot Learning: Alternatively, synthetic data can be used for pre-training followed by fine-tuning on real data (richter2016playing), a form of transfer learning. Often, the reason one uses simulated data is a shortage of labeled real data. In this situation, one may make use of few-shot learning techniques, which seek to prevent over-fitting to the few examples by using a metric loss between data triplets (koch2015_siamese), comparing similarity between individual examples (vinyals2016_matchingnets), or comparing a prototypical example per category to each instance (snell2017_prototypicalnets). A separate class of techniques uses meta-learning to design a learning solution most amenable to learning from few examples (finn2017_maml). However, these approaches may be sensitive to the ratio of synthetic to real data.
Domain adaptation: With access to a large set of unlabeled real examples, domain adaptation techniques can be used to close the sim2real gap. One class of techniques focuses on matching domains at the feature level (ganin2015unsupervised; long2015_icml; tzeng2017_cvpr), aiming to learn domain-invariant features that can make models more transferrable (zou2018domain; li2018semantic). Image-to-image translation, by contrast, bridges the appearance gap directly in the image domain instead of in feature space (shrivastava2017cvpr; liu2017unsupervised). Domain adaptation can also be used to address the content gap in addition to the appearance gap (kar2019meta). Additional structural constraints, such as cycle consistency, can further refine this image-domain translation (zhu2017unpaired; hoffman2018cycada; mueller2018ganerated). Finally, image stylization methods can also be adapted for style transfer and sim2real adaptation (li2018closed).
Domain randomization: Domain randomization (DR) offers a surprising alternative (tobin2017domain; structDR) by exploiting control of the synthetic data generation pipeline to randomize sources of variation. Random variations will likely be ignored by networks and thus result in invariance to those variations. A particularly interesting instantiation of DR was suggested for pose estimation in (sundermeyer2018implicit). Pose is an example of a factor of variation which can be ambiguous due to occlusions and symmetries. Instead of explicitly regressing the pose angle, the authors propose an implicit latent representation, achieved with an augmented autoencoder, a form of denoising autoencoder that treats all nuisance factors of variation as noise. This idea can be seen as a particular case of our method in which all factors are canonicalized at once instead of learning to canonicalize each individually. Another interesting example is the quotient space approach of (mehr2018_cvpr), which removes pose information from a 3D shape representation by max-pooling encoder features over a sampling of object rotations. It does not, however, consider how to perform canonicalization as a linear transformation in latent space, nor how to compose different canonicalizers.
Our goal is to learn representations which capture the generative structure of a dataset by independently representing the underlying factors of variation present in the data. While many previous works have approached this problem by regularizing the representation itself (kulkarni2015deep; chen2016infogan; higgins2017beta; higgins2017darla; burgess2018understanding; kimmnih18), here we take a different approach: rather than directly encourage the representation to be disentangled, we instead encourage the representation to be structured such that individual factors of variation can be manipulated by a simple linear transformation. In other words, we constrain the way that the representation can be manipulated rather than the structure of the representation itself.
3.1 Latent canonicalization
In our approach, we augment a standard convolutional denoising autoencoder (AE) with latent canonicalizers. A standard AE learns an encoder, Enc, which takes as input a given image, $x$, and produces a corresponding latent vector, $z = \text{Enc}(x)$. The latent vector is then used as input to a decoder, Dec, which produces an output image, $\hat{x} = \text{Dec}(z)$. Both the encoder and decoder are learned according to the following objective, $\mathcal{L}_{\text{rec}}$, which minimizes the difference between the original input image and the reconstructed output image:

$$\mathcal{L}_{\text{rec}} = \big\| x - \text{Dec}(\text{Enc}(x)) \big\|_2^2$$
To encourage noise-robustness, we augment the potential input images following previous work on denoising autoencoders (Vincent:2008:ECR:1390156.1390294; vincent2010stacked), noising each raw input image, $x$, by adding Gaussian noise, blur, and random crops and rotations, leading to our noised input image, $\tilde{x}$.
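As a concrete illustration, the following is a minimal numpy sketch of such a noising step, using Gaussian pixel noise and a simple box blur as stand-ins; the exact noise parameters, blur kernel, and the crop/rotation augmentations are assumptions, not the paper's settings:

```python
import numpy as np

def noise_image(x, rng, sigma=0.1, blur=True):
    """Corrupt a raw image x (H, W, values in [0, 1]) into the noised input x_tilde.

    Adds Gaussian pixel noise and, optionally, a 3x3 box blur as a stand-in
    for the Gaussian blur in the text. Random crops and rotations would be
    applied similarly (omitted here for brevity).
    """
    x_tilde = x + rng.normal(0.0, sigma, size=x.shape)
    if blur:
        # 3x3 box blur via edge padding + neighborhood averaging.
        p = np.pad(x_tilde, 1, mode="edge")
        x_tilde = sum(
            p[i:i + x.shape[0], j:j + x.shape[1]]
            for i in range(3) for j in range(3)
        ) / 9.0
    return np.clip(x_tilde, 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random((32, 32))
x_tilde = noise_image(x, rng)
assert x_tilde.shape == x.shape
```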
In this work, we additionally constrain the structure of the learned latent space using a set of latent canonicalization losses. We define a latent canonicalizer as a learned linear transformation, $C$, which operates on the latent representation, $z$, in order to transform a given factor of variation (e.g., color or scale) from its original value to an arbitrary, but fixed, canonical value. So that individual factors can be manipulated separately, we learn unique canonicalization matrices, $C_i$, for each factor of variation, $i$, present in the dataset. In order to constrain the latent representation according to canonicalization along one factor, $i$, our method yields the following basic form:

$$\hat{x}_{c_i} = \text{Dec}\big(C_i \cdot \text{Enc}(\tilde{x})\big)$$
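To make the mechanics concrete, here is a minimal numpy sketch of canonicalizers as linear maps on the latent; the matrices are random stand-ins for the learned ones, and the factor names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # latent dimensionality, matching the paper's encoder

# One linear canonicalizer per factor of variation; here random stand-ins
# for the matrices that would be learned by gradient descent.
canon = {f: np.eye(d) + rng.normal(0, 0.05, (d, d))
         for f in ("font_color", "background_color", "rotation")}

z = rng.normal(size=d)                      # latent from the encoder, z = Enc(x_tilde)
z_rot = canon["rotation"] @ z               # rotation clamped to its canonical value
z_rot_color = canon["font_color"] @ z_rot   # canonicalizers compose
assert z_rot_color.shape == (d,)
```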
To supervise the learning of latent canonicalizers, we compare the images generated by canonicalized latents, $\hat{x}_{c_i}$, to ground truth images with the appropriate factors of variation set to their canonical values, $x_{c_i}$. Canonicalizers can also be composed together to canonicalize multiple factors (e.g., $C_j C_i z$). During training, each image is passed through a random subset of both individual and pairs of canonicalizers. Given canonicalization paths $P$ for a given image, the corresponding canonicalization loss for that single image (red in Figure 1) is written as:

$$\mathcal{L}_{\text{canon}} = \sum_{p \in P} \big\| x_{c_p} - \text{Dec}\big(C_p \cdot \text{Enc}(\tilde{x})\big) \big\|_2^2$$
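A toy numpy sketch of this per-image loss; the linear decoder and identity canonicalizers below are stand-ins for the learned components:

```python
import numpy as np

rng = np.random.default_rng(0)
d, img = 16, 64                           # toy latent and image sizes
W_dec = rng.normal(0, 0.1, (img, d))      # stand-in linear "decoder"
dec = lambda z: W_dec @ z

def canon_loss(z, paths, targets, dec):
    """Per-image canonicalization loss: each path is a sequence of
    canonicalizer matrices; its decoded output is compared pixel-wise to
    the ground-truth canonicalized image for that path."""
    total = 0.0
    for p, x_target in zip(paths, targets):
        z_c = z
        for C in p:                       # apply the path's canonicalizers in order
            z_c = C @ z_c
        total += np.sum((x_target - dec(z_c)) ** 2)
    return total / len(paths)

Ci, Cj = np.eye(d), np.eye(d)             # identity stand-ins
z = rng.normal(size=d)
paths = [(Ci,), (Cj,), (Ci, Cj)]          # two singles and one composed pair
targets = [dec(z)] * 3                    # with identity canonicalizers, loss is 0
assert canon_loss(z, paths, targets, dec) == 0.0
```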
Since many outputs are canonicalized, it is possible that the decoder will simply learn to generate only the canonical value of a given factor of variation. To prevent this form of input-independent memorization, we also include a “bypass” loss which is equivalent to the standard denoising autoencoder formulation (green in Figure 1) defined in Equation (2), thus forcing information about each factor to be captured in the latent vector, $z$.
Finally, we ensure that our representation not only allows for linear manipulation along factors of variation, but does so while capturing the information necessary to train a classifier to solve our end task. To this end, we add a supervised cross-entropy loss, $\mathcal{L}_{\text{cls}}$, which optimizes our end task using available labeled data (cyan in Figure 1):

$$\mathcal{L}_{\text{cls}} = \text{CE}\big(\text{Cls}(\text{Enc}(\tilde{x})),\, y\big)$$
In practice, two canonicalizers, $C_i$ and $C_j$, are chosen at random for each input batch, and the corresponding latent representation is passed through them, generating four unique canonicalized latents: $C_i z$, $C_j z$, $C_i C_j z$, and $C_j C_i z$. A diagram of the method is shown in Figure 1, illustrating each canonicalization path, the bypass path, and the classification model. We have thus far focused on the single-image loss for simplicity; the full model averages the per-image loss over the mini-batch before making gradient updates.
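The four canonicalization paths for one sampled pair can be sketched as follows (random near-identity matrices stand in for the learned canonicalizers):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64
Ci = np.eye(d) + rng.normal(0, 0.01, (d, d))  # stand-in learned canonicalizers
Cj = np.eye(d) + rng.normal(0, 0.01, (d, d))
z = rng.normal(size=d)

# Two randomly chosen canonicalizers yield four canonicalized latents per image:
# the two singles and both orders of the composed pair.
latents = [Ci @ z, Cj @ z, Ci @ (Cj @ z), Cj @ (Ci @ z)]
assert len(latents) == 4 and all(v.shape == (d,) for v in latents)
```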
Latent canonicalizer constraints:
Our approach relies upon constraining the way a representation can be manipulated. As a result, the specific choice of constraints should have a significant impact on the representations which are ultimately learned. Here, we limit ourselves to only two constraints: the transformations must be linear, and canonicalizers must be composable, at least in pairs. If we were to allow non-linear canonicalizers, there would be little incentive for the encoder to learn an easily manipulable embedding; this would only be exacerbated as the non-linear canonicalizer is made more powerful by, e.g., additional depth. By requiring canonicalizers to be composable, we encourage independence, as each canonicalizer must be applicable without damaging any other. We explore some other potential constraints in Section 4.2.4.
3.2 sim2real evaluation
A main motivation of latent canonicalization is to leverage structure gleaned from a large source of data with rich annotations to better adapt to downstream tasks. A natural such setting for performance evaluation is sim-to-real. Specifically, we make use of the Street View House Numbers (SVHN) dataset and a subset of ImageNet (imagenet_cvpr09) as our real domains. To simulate SVHN, we built a SVHN simulator in which we have full control over many factors of variation such as font, colors and geometric transformations (a detailed description of the simulator is given in Section 4.2.1). To simulate ImageNet, we built a simulator which renders 3D models from ShapeNet (chang2015shapenet) to generate ImageNet-like images (see Section 4.3.1 for details).
We first pre-train on the synthetic data with latent canonicalization. Following this step, we freeze the canonicalizers and investigate whether the learned representations can be leveraged as the input to a linear classifier for few-shot learning on real examples, labeled only with the class of interest (e.g., no meta-labels for additional factors of variation). During this stage, the encoder is also refined.
Because latent canonicalization manipulates the latent representation, we can use canonicalization as a form of “latent augmentation.” In this setting, we can aggregate the predictions of the digit classifier across many canonicalization paths, each of which confers a single “vote.” Critically, such an approach requires the ability to cleanly manipulate the learned representation, and is therefore only possible for our proposed method. For a more detailed exploration of the impact of majority vote, see Section 4.2.4.
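A minimal sketch of such latent-augmentation voting, with a stand-in linear classifier and identity canonicalizers (the real method uses the learned canonicalizers of the supervised factors):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_classes = 64, 10
W_cls = rng.normal(size=(n_classes, d))             # stand-in linear classifier
classify = lambda z: int(np.argmax(W_cls @ z))

def majority_vote(z, canonicalizers, classify):
    """One vote from the raw latent plus one per individually canonicalized
    latent (1 + 6 votes in the paper's SVHN setup); ties go to the smallest label."""
    votes = [classify(z)] + [classify(C @ z) for C in canonicalizers]
    values, counts = np.unique(votes, return_counts=True)
    return int(values[np.argmax(counts)])

z = rng.normal(size=d)
canonicalizers = [np.eye(d) for _ in range(6)]      # identity stand-ins all agree
assert majority_vote(z, canonicalizers, classify) == classify(z)
```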
We compare our proposed latent canonicalization with several baselines. To ensure the fairest comparison possible, we fixed as many hyperparameters as possible: we use the same backbone architecture in all our network modules (a detailed description of our network is given in Appendix B); the same number of epochs at the pre-training stage (learning from synthetic data); and a number of epochs at the refinement stage carefully chosen to match each method's overfitting behavior. For all latent canonicalization experiments, we trained three models on the simulated data and then performed five refinements of each pre-trained model, for a total of fifteen replicates. Results are reported as mean ± standard deviation. For all baseline models, 15 independent replicates were trained.
Our simplest baseline, meant as a lower bound, is a classifier trained only on the low-shot real data. Next is a classifier trained on synthetic data and then refined on low-shot real data. To evaluate pre-training on the synthetic data with a self-supervised method, we use the rotation prediction task from (gidaris2018unsupervised). The vanilla AE is an autoencoder pre-trained on synthetic data, after which a linear classifier is trained on low-shot real data while the encoder is refined. The strongest baseline is an autoencoder augmented with a digit classifier at both pre-training (on simulated data) and refinement. The loss weighting was chosen via a hyper-parameter search individually for each model.
4.1 Latent canonicalization of dSprites
Key to our method is the use of latent canonicalizers: learned linear transformations optimized to eliminate individual factors of variation. As a first test of the effectiveness of latent canonicalization, we evaluated our framework on the toy dataset dSprites (dsprites17), which was designed for the exploration and evaluation of disentanglement methods. Specifically, dSprites is a dataset of images of white shapes on a black background generated by five independent factors of variation (shape, scale, rotation, and x- and y-position). Training our model (Figure 1) on dSprites, we demonstrate the effect of applying different individual canonicalizers to various values of the input factors (Figure 2, left and middle). We also applied a set of three canonicalizers (scale, rotation, and x-position) sequentially, as shown in Figure 2, right. Encouragingly, we found that not only did individual canonicalizers effectively canonicalize their factor of interest, but multiple canonicalizers could also be applied in sequence. Furthermore, although models were trained with only pairs of canonicalizers, triplets of canonicalizers also performed well (Figure 2, right).
4.2 Latent canonicalization of SVHN
To evaluate the impact of latent canonicalization in a more realistic, though still controllable setting, we turned to the well-known Street View House Numbers (SVHN) dataset (svhn). In Section 4.2.1, we discuss our approach to designing a simulator for SVHN, which we use in Section 4.2.2 to pre-train our models and measure the impact of latent canonicalization on sim2real transfer, finding that latent canonicalization substantially improved performance relative to our strongest baseline. In Section 4.2.3, we investigate the structure of the learned representations to measure the representational impact of latent canonicalization. Finally, in Section 4.2.4, we discuss potential directions for improvement which, unfortunately, either harmed or left unchanged sim2real performance relative to our best-performing models.
4.2.1 Simulating SVHN
To support our proposed training procedure, we require a comprehensive dataset with detailed meta-data regarding ground-truth factors of variation. While such meta-data is difficult to obtain for natural datasets, it can be generated for visually realistic, but fairly simplistic, datasets such as SVHN. To this end, we built a procedural image generator that simulates the SVHN dataset by rendering images with digits on a constant-colored background (see examples in Figure A1). Apart from the digit class variation, we also simulate additional factors of variation: font color, background color, font size, font type, rotation, shear, fill color for newly created pixels, scale, number of digit instances, translation, Gaussian noise, and blur. Among these factors, we chose the first six as our supervised factors of variation, noise and blur as a joint noise model, and the rest as additional factors which enrich the variety of the data without supervised canonicalization. Some of the resulting images can be seen in Figure 3 (see Appendix C for a detailed description of the simulator). To enable reproducibility across comparisons, and to minimize unaccounted-for variability in the data, we generated a fixed training set of 75,000 images along with targets for all possible canonicalization paths. We emphasize that this training set represents only a small fraction of the total number of possible combinations of factors; we used such a small fraction of the total space to demonstrate that latent canonicalization is feasible even when the factor space is only sparsely sampled. The simulator along with the generated train set will be made publicly available.
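For illustration, a hypothetical sampler over the supervised factors might look like the following; the factor names follow the text, but the value ranges are invented for this sketch and are not the simulator's actual settings:

```python
import random

# Hypothetical factor space: digit class plus the six supervised factors of
# variation named in the text. All ranges and palettes here are illustrative.
FACTORS = {
    "digit": range(10),
    "font_color": ["red", "green", "blue", "white"],
    "background_color": ["black", "gray", "tan"],
    "font_size": [18, 24, 32],
    "font_type": ["sans", "serif", "mono"],
    "rotation": range(-15, 16),
    "shear": range(-10, 11),
}

def sample_image_spec(rng):
    """Draw one combination of factors; a fixed train set like the paper's
    75,000 images covers only a small fraction of this product space."""
    return {name: rng.choice(list(values)) for name, values in FACTORS.items()}

spec = sample_image_spec(random.Random(0))
assert set(spec) == set(FACTORS)
```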
4.2.2 sim2real SVHN transfer using latent canonicalization
| Model | 10 shot | 20 shot | 50 shot | 100 shot | 1000 shot |
| --- | --- | --- | --- | --- | --- |
| Classifier trained on real only | | | | | |
| Classifier trained on synth | | | | | |
| AE + Classifier | | | | | |
| Ours + majority vote | | | | | |
We want to learn representations which enable models to generalize to novel data with consistent underlying structure. Moreover, the effectiveness of disentangling for the acquisition of downstream tasks has recently been called into question (challenging_disentangling). We therefore evaluate the quality of our learned representations by measuring their ability to adapt to real examples. Specifically, we consider a few-shot setting in which models pre-trained on simulated data are given access to a small refinement set of a few annotated examples per class. We ran this experiment with per-class set sizes of 10, 20, 50, 100, and 1000. To measure sim2real transfer, we train a fresh linear classifier on the representation learned by the encoder pre-canonicalization, $z$, while allowing the encoder to be refined as well. We report accuracy on the unseen SVHN test set. As a qualitative check of the pre-trained model, we include examples of reconstructions generated by canonicalized latents in Figure 3.
When small train sets are used, results may vary substantially depending on the selected set. To account for this, we (a) use the same train set for all methods, and (b) ran each experiment 15 times: 3 different networks were trained on simulated data with different random seeds, and 5 replicate refinements were performed per pre-trained network.
Table 1 shows the sim2real SVHN results for our method along with the five baselines discussed in Section 3.3. For all settings with fewer than one thousand examples per class, our model outperformed the best competing baseline. We further improved our model’s performance by taking a “majority vote” approach, in which we pass the representation through multiple latent canonicalizers in parallel to generate multiple votes, as discussed in Section 3.2. Majority vote consistently boosted performance, with the largest gains coming in the lowest-shot settings.
To contextualize the importance of this improvement, note that to match our reported 20-shot performance, the best baseline (AE + Classifier) requires a five-times-larger training set of 100 examples per class. This demonstrates the potential of our proposed method to better utilize access to meta-labels for adaptation to real data.
4.2.3 Analysis of representations
Linear decodability of factors of variation from representations:
While latent canonicalization encourages representations to be linearly manipulable, it does not explicitly encourage linear decodability. However, since our canonicalizers are constrained to be linear, latent canonicalization may also encourage linear decodability. To test this, we trained linear classifiers on the pre-trained, frozen encoder for each factor of variation. We ran this experiment separately for each factor and compared linear decodability to our best baseline, AE+Cls. For continuous factors of variation, we binned target values, converting the problem into a multi-class classification task. Importantly, the number of bins differs across factors, so chance performance differs per factor. For background color, font color, and rotation angle, the canonicalized representation was noticeably more linearly decodable than the baseline (shown here by higher accuracy on a held-out test set), whereas font type showed a smaller improvement and font size and shear showed no improvement in linear decodability (Figure 4). One possible explanation for this discrepancy across factors is that font color, background color, and rotation are the most visually salient factors with the largest range of variability.
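The binning of continuous factors can be sketched as follows; quantile-based bin edges are an assumption here, as the paper does not specify its binning scheme:

```python
import numpy as np

def bin_factor(values, n_bins):
    """Convert a continuous factor (e.g., rotation angle) into class labels
    for a linear-probe classifier; chance level is then roughly 1 / n_bins."""
    # Interior quantiles as bin edges give approximately balanced classes.
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(values, edges)

rng = np.random.default_rng(0)
angles = rng.uniform(-30, 30, size=1000)   # simulated rotation angles
labels = bin_factor(angles, n_bins=6)
assert labels.min() == 0 and labels.max() == 5
```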
Visualizing the impact of canonicalization:
In the previous experiment, we quantified the linear decoding performance of our representations; here, we attempt to visualize the extent to which they are linear. If these representations were indeed linear, we would expect them to be easily decomposable using principal component analysis (PCA), the components of which we can visualize. Recall that each canonicalizer removes the effect of one source of variation while keeping the others. We therefore compute the principal components (PCs) of $C_i z - z$, i.e., the difference between a canonicalized latent and the pre-canonicalized latent, such that the PCs represent the removed factor of variation. In Figure 5, we show images sorted along the first PC, revealing a clear linear ordering of rotation and font size. This visually demonstrates that our approach extracts latents with strong linearity.
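A numpy sketch of this analysis; the toy "canonicalizer" below simply zeroes one latent unit so that the first principal component of the difference is recoverable by construction:

```python
import numpy as np

def first_pc_projection(Z, C):
    """Project each latent onto the first principal component of C @ z - z,
    the per-image change induced by one canonicalizer. Sorting images by this
    projection should order them along the removed factor of variation."""
    D = Z @ C.T - Z                       # rows: canonicalized minus raw latent
    D = D - D.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    return D @ Vt[0]                      # projection onto the first PC

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 64))            # 200 latents of dimension 64
C = np.eye(64); C[0, 0] = 0.0             # toy canonicalizer: zeroes one unit
proj = first_pc_projection(Z, C)
order = np.argsort(proj)                  # image order along the removed factor
assert proj.shape == (200,)
```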
4.2.4 Alternative Design Decisions
Latent canonicalization opens up many additional avenues for modification to potentially produce better representations and, consequently, better sim2real performance. In the previous section, we showed how incorporating majority vote further increased performance. Here, we discuss several other modifications we explored, which resulted in no change or a decrease in sim2real performance.
Idempotency reconstruction loss:
To encourage composability of canonicalizers, we trained them in pairs with alternating orders for consistency (e.g., $C_i C_j z$ and $C_j C_i z$). We also tried encouraging idempotency, such that the same canonicalizer can be repeatedly applied without changing the reconstruction (e.g., $\text{Dec}(C_i C_i z) = \text{Dec}(C_i z)$). We found that applying this loss actually harmed performance, reducing pre-majority-vote classification accuracy (Table A1, second row).
Classifier location during pre-training:
In our model, the classifier is placed at the output of the encoder, prior to canonicalization (i.e., it operates on $z$). However, one might imagine that latent canonicalization could serve as a form of data augmentation, such that placing the classifier after the canonicalization step might increase performance. In contrast, we found that placing the classifier after the canonicalization step harmed performance, reducing pre-majority-vote classification accuracy (Table A1, third row). Interestingly, however, we found that the majority vote method, which can also be viewed as a form of data augmentation, did in fact increase performance.
Latent consistency and idempotency loss:
The impact of latent canonicalization was supervised at the image level, by comparing the reconstruction to a target image. However, we could also use a self-supervised loss at the latent level, by enforcing consistency (i.e., $C_i C_j z = C_j C_i z$) and idempotency (i.e., $C_i C_i z = C_i z$). To account for the scale of this latent loss, which was much larger than the other loss components, we used a very small scale factor (1e-7), chosen to maximize performance. Even with an appropriately scaled loss, however, the latent loss either had little impact or harmed sim2real performance (Table A1, fourth row).
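These latent-level penalties can be sketched as follows; the squared-error form and the sum reduction are assumptions:

```python
import numpy as np

def latent_losses(z, Ci, Cj, scale=1e-7):
    """Self-supervised latent-level penalties: consistency (order of
    composition should not matter) and idempotency (reapplying a
    canonicalizer should be a no-op), down-weighted by a small scale factor."""
    consistency = np.sum((Ci @ (Cj @ z) - Cj @ (Ci @ z)) ** 2)
    idempotency = sum(np.sum((C @ (C @ z) - C @ z) ** 2) for C in (Ci, Cj))
    return scale * (consistency + idempotency)

d = 8
z = np.ones(d)
# For commuting, idempotent canonicalizers (e.g., the identity) the loss is zero.
assert latent_losses(z, np.eye(d), np.eye(d)) == 0.0
```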
Alternative majority votes:
We found that a simple majority vote containing the pre-canonicalized and individually canonicalized representations (1 and 6 votes, respectively) increased performance. To further augment the vote set, we also tried adding votes via idempotency (6 additional votes) and pairs (30 additional votes). We found that neither of these additions improved performance over the simplest majority vote approach (Table A2).
4.3 Latent canonicalization of ImageNet subset
4.3.1 Simulating ImageNet: SynthImageNet
To demonstrate the few-shot sim2real transfer capability of our method on a more naturalistic, complex dataset, we built a simulator to synthesize images similar to those of ImageNet (imagenet_cvpr09). Our simulator uses 3D models from ShapeNet (chang2015shapenet) to generate plausible images by rendering 3D models of different shapes with varied camera orientations and scales (Figure 6). To evaluate few-shot transfer from our simulated ImageNet images to real ImageNet, we chose a subset of 10 classes which overlap with ShapeNet categories (“ImageNet subset”). For each class, we rendered a total of 5000 frames, each containing a randomly chosen 3D model instance from the category. To increase variability, we also augment the background of each image with a randomly chosen texture from the Describable Textures dataset (cimpoi14describing). We consider 4 factors of variation for this synthetic dataset, which we call SynthImageNet: camera orientation (latitude, longitude), object scale, and background texture.
| Model | 10 shot | 20 shot | 100 shot |
| --- | --- | --- | --- |
| Classifier trained on real only | | | |
| Classifier trained on synth | | | |
| AE + Classifier | | | |
4.3.2 sim2real ImageNet subset transfer using latent canonicalization
Table 2 shows sim2real results on the 10-class subset of ImageNet. Compared with the four baselines described in Section 3.3, our method shows consistent improvement in all few-shot settings. This result demonstrates that latent canonicalization can generalize to more naturalistic, complex data types.
We have introduced the notion of latent canonicalization, in which we train models to manipulate latent representations through constrained transformations which set individual factors of variation to fixed values (“canonicalizers”). We demonstrate that latent canonicalization encourages representations which result in markedly better sim2real transfer than comparable models on both the SVHN dataset and on a subset of ImageNet. This holds even when only a small sample of the possible combination space is used for training. Notably, latent-canonicalized pre-trained models reached few-shot performance which required 5× as much data for comparable baselines. Analyzing the representations themselves, we found that the representation of factors of variation was linearized, as measured by decodability and linear dimensionality reduction (PCA). Finally, we discussed alternative constraints which did not help performance.
Here, we primarily analyzed SVHN, a realistic, but relatively simple dataset. However, we also found that latent canonicalization led to markedly improved performance on a subset of ImageNet. The strong performance on both of these datasets (and ImageNet in particular) is encouraging for larger-scale data with more complex factors of variation. Our results suggest the promise not only of latent canonicalization, but, more broadly, methods which encourage representational structure by constraining representational transformations rather than a particular structure itself.
Appendix A Alternative design decisions – additional material
| Model | 10 shot | 20 shot | 50 shot | 100 shot | 1K shot |
| --- | --- | --- | --- | --- | --- |
| Ours (no maj vote) | | | | | |
| Ours + idem | | | | | |
| Ours + classifier after | | | | | |
| Ours + latent loss 1e-7 | | | | | |
| Model | 10 shot | 20 shot | 50 shot | 100 shot | 1K shot |
| --- | --- | --- | --- | --- | --- |
| Ours with maj vote | | | | | |
| Ours + majority vote idem | | | | | |
| Ours + majority vote all-pairs | | | | | |
Appendix B Network implementation details
All the models in this paper are built from four modules: an encoder (Enc), a decoder (Dec), latent canonicalizers (Can), and a linear classifier (Cls). Specifically, the classifier baseline is the composition Enc+Cls; the AE is the composition Enc+Dec. For AE+Cls, we attach a linear classifier to the AE and supervise with both reconstruction and classification losses. Our proposed network additionally includes latent canonicalizers, which are applied both individually and in pairs.
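The module compositions above can be sketched with toy stand-ins. This is a minimal sketch of how the pieces fit together, not the paper's implementation: the input/latent sizes, the factor names, and the use of plain linear layers as stand-ins for Enc and Dec are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins (hypothetical shapes); the point is how the modules compose.
enc = nn.Linear(12, 64)      # Encoder: image -> 64-d latent (toy)
dec = nn.Linear(64, 12)      # Decoder: latent -> image (toy)
cls = nn.Linear(64, 10)      # linear classifier
# One linear canonicalizer per supervised factor (factor names illustrative).
cans = {f: nn.Linear(64, 64) for f in ["rotation", "color"]}

x = torch.randn(8, 12)
z = enc(x)

# Baselines: classifier = Enc+Cls; autoencoder = Enc+Dec.
logits, recon = cls(z), dec(z)

# Full model: canonicalize a single factor, or compose a pair of canonicalizers.
z_rot = cans["rotation"](z)
z_pair = cans["color"](cans["rotation"](z))

# Loss structure (illustrative): canonicalized latents would be decoded and
# compared against the corresponding canonicalized images; here we only show
# the reconstruction + weighted classification terms.
targets = torch.randint(10, (8,))
loss = F.mse_loss(recon, x) + 50.0 * F.cross_entropy(logits, targets)
```

Because each canonicalizer is just a linear map on the latent, applying them in sequence stays a linear operation, which is what lets them be composed freely.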
We performed a hyper-parameter sweep over the classification loss weight and found 50.0 to yield the strongest baseline. We also used 50.0 as the classification loss weight for our network.
The Encoder comprises 3 modules, each consisting of 3 layers of 3x3 2D convolutions with 64 channels, each followed by batch normalization and a leaky ReLU (negative slope 0.1). A 2x2 max-pool and dropout follow each module, and a final max-pool produces a 64-dimensional latent vector.
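A minimal PyTorch sketch of this encoder, assuming 3-channel 32x32 inputs (as in SVHN). The layer counts and channel widths follow the text; padding, dropout rate, and the use of an adaptive max-pool for the final pooling are our assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch=64, n_layers=3, dropout=0.1):
    """One encoder module: 3x 3x3 conv + BN + LeakyReLU, then max-pool + dropout."""
    layers = []
    for i in range(n_layers):
        layers += [
            nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.1),
        ]
    layers += [nn.MaxPool2d(2), nn.Dropout(dropout)]  # 2x2 max-pool + dropout
    return nn.Sequential(*layers)

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Three modules: 32x32 -> 16x16 -> 8x8 -> 4x4 spatial resolution.
        self.features = nn.Sequential(conv_block(3), conv_block(64), conv_block(64))
        # Final max-pool collapses the remaining 4x4 map to a single vector.
        self.pool = nn.AdaptiveMaxPool2d(1)

    def forward(self, x):
        h = self.pool(self.features(x))  # (B, 64, 1, 1)
        return h.flatten(1)              # (B, 64) latent
```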
The Decoder comprises 3 layers of 3x3 transposed convolutions with channel dimensionalities (64, 64, 32), each followed by batch normalization and a ReLU. Finally, a last transposed convolution maps down to 3 dimensions, followed by a ReLU.
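A corresponding decoder sketch under stated assumptions: the (64, 64, 32) channel widths and final 3-channel output follow the text, but the initial linear projection of the latent to a 4x4 map, the stride-2 upsampling schedule, and the final layer's 32 input channels are choices we made to produce 32x32 images.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Assumed: project the 64-d latent back to a 64-channel 4x4 map.
        self.fc = nn.Linear(64, 64 * 4 * 4)

        def up(in_ch, out_ch):
            # Stride-2 3x3 transposed conv doubles spatial size; BN + ReLU.
            return nn.Sequential(
                nn.ConvTranspose2d(in_ch, out_ch, 3, stride=2,
                                   padding=1, output_padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(),
            )

        # 4x4 -> 8x8 -> 16x16 -> 32x32 with channels (64, 64, 32).
        self.up = nn.Sequential(up(64, 64), up(64, 64), up(64, 32))
        # Final transposed conv to 3 output channels, then ReLU.
        self.out = nn.Sequential(nn.ConvTranspose2d(32, 3, 3, padding=1), nn.ReLU())

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 4, 4)
        return self.out(self.up(h))  # (B, 3, 32, 32)
```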
The networks were implemented using the PyTorch framework (paszke2017automatic). Pre-training on simulated data and refinement on real data were done using the Adam optimizer (kingma2014adam) with default parameters and separate learning rates for the two stages.
The classifier and the latent canonicalizers are simply single linear layers with 64 dimensions (corresponding to the dimensionality of the latent).
Appendix C Simulator Details
To support our training requirements, we built a generator for SVHN-like images. Importantly, we do not aim to mimic the true SVHN dataset statistics. We make only very general, simplistic assumptions, such as a 32x32 image size, that images contain a centralized digit, etc. We build the simulator on the imgaug library (imgaug), which allows easy control of the different factors of variation. The specific factors used to generate the images are given in Table A3 together with their sets of values. The first row of Table A3 shows the factors used for canonicalization. For each of these (except for noise, blur, and crop, which are canonicalized together to serve as the bypass), we generate a copy of the image with that factor set to its canonical value. The canonical values are reported in row 3 of the table. They are chosen arbitrarily but are the same for the entire train set. Matching the scale of the 73,257 real SVHN train set samples (not used in this work), we generated a synthetic set of 75,000 images in total. Some examples of generated images can be seen in Figure A1. The set together with the simulator code will be made publicly available.
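The pairing step described above — one canonicalized copy of each image per supervised factor — can be sketched as a small helper. This is an illustrative sketch of the data pairing only; the factor names and values in the example are placeholders, not the simulator's actual settings.

```python
def canonical_copies(factors, canonical):
    """For each supervised factor, build one copy of the factor assignment
    with that single factor clamped to its canonical value, leaving the
    remaining factors untouched. The copies are what the corresponding
    canonicalized target images would be rendered from."""
    return {name: {**factors, name: value} for name, value in canonical.items()}

# Example with hypothetical factors: one sampled image and its canonical targets.
sampled = {"rotation": 17.0, "shear": 5.0}
targets = canonical_copies(sampled, {"rotation": 0.0})
```

Factors canonicalized jointly (noise, blur, crop in the text) would simply share one entry in the canonical dict, producing a single copy with all three clamped.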
| Supervised factors | Rotation | Font color | Background color | Font scale | Font type | Shear | Noise, Blur, Crop |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Canonical value | | (0, 0, 0) | | | type | | |
| Unsupervised factors | Cval | Scale | Number of digits | Translation | | | |
Domain gap between synthetic and real images:
Having created quite a generic set of digit images with naive assumptions, one may ask whether it is at all useful for the target domain. By taking intermediate checkpoints during training and measuring the classification error on the real test set without fine-tuning, we see in Figure A2 that there is indeed a good correlation between the train set error and the test set accuracy.
Sampling the space of compositions:
Scene complexity grows exponentially with the number of factors. One goal of this work is to show we can get good performance even when sampling a fraction of this space. To get a rough estimate of that fraction, we can coarsely bin just the supervised factors (following the order of Table A3) into discrete values; the product of the bin counts gives the size of the binned combination space. Given the coarse binning of the color space and the fact that we have not even included the unsupervised factors, we conclude that our 75,000-sample set is at least an order of magnitude smaller than the number of possible factor combinations.
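The arithmetic behind this estimate can be made explicit. The bin counts below are hypothetical placeholders (the paper's actual binning values are not reproduced here); they only illustrate how a modest discretization of six factor groups already yields a combination space an order of magnitude larger than the training set.

```python
import math

# Hypothetical bin counts for the six supervised factor groups, in the order of
# Table A3 (rotation, font color, background color, font scale, font type, shear).
# These are illustrative placeholders, not the paper's values.
bins = [20, 12, 12, 8, 5, 8]

combinations = math.prod(bins)   # size of the binned combination space
samples = 75_000                 # size of the generated synthetic set

ratio = combinations / samples   # how sparsely the set covers the space
```

With these placeholder bins the space has 921,600 combinations, so 75,000 samples cover less than a tenth of it, and adding the unsupervised factors would only widen the gap.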