1 Introduction
Knowledge transfer and generalization are hallmarks of human intelligence. From grammatical generalization when learning a new language to visual generalization when interpreting a Picasso, humans have a remarkable ability to recognize and apply learned patterns in new contexts. Current machine learning algorithms pale in comparison, suffering from overfitting, adversarial attacks, and domain specialization
(Lake et al., 2016; Marcus, 2018). We believe that one fruitful approach to improving generalization in machine learning is to learn compositional representations in an unsupervised manner. A compositional representation consists of components that can be recombined, and such recombination underlies generalization. For example, consider a pink elephant. With a representation that composes color and object independently, imagining a pink elephant is trivial. However, a pink elephant may not be within the scope of a representation that mixes color and object. Compositionality comes in a variety of flavors, including feature compositionality (e.g. pink elephant), multi-object compositionality (e.g. elephant next to a penguin), and relational compositionality (e.g. the smallest elephant). In this paper we focus on feature compositionality. Representations with feature compositionality are sometimes referred to as “disentangled” representations (Bengio et al., 2013). However, there is as yet no consensus on the definition of a disentangled representation (Higgins et al., 2018; Locatello et al., 2018; Higgins et al., 2017a). In this paper, when evaluating disentangled representations, we both employ standard metrics (Kim and Mnih, 2017; Chen et al., 2018; Locatello et al., 2018) and (in view of their limitations) emphasize visualization analysis.
Learning disentangled representations from images has recently garnered much attention. However, even in the best understood conditions, finding hyperparameters to robustly obtain such representations still proves quite challenging
(Locatello et al., 2018). Here we present a simple modification of the variational autoencoder (VAE) (Kingma and Welling, 2014; Rezende et al., 2014) decoder architecture that can dramatically improve both the robustness to hyperparameters and the quality of the learned representations in a variety of VAE-based models. We call this the Spatial Broadcast decoder, which we propose as an alternative to the standard MLP/deconvolutional network (which we refer to as the DeConv decoder). See Figure 1 for a schematic. In this paper we show the benefits of using the Spatial Broadcast decoder for image representation learning, including:

- Improvements to both disentangling and reconstruction accuracy on datasets of simple objects.
- Complementarity to (and improvement of) state-of-the-art disentangling techniques.
- Particularly significant benefits when the objects in the dataset are small, a regime that is notoriously difficult for standard VAE architectures.
- Improved representational generalization to out-of-distribution test datasets involving both interpolation and extrapolation in latent space.
We also introduce a simple method for visualizing latent space geometry, which we found helpful for gaining insight when qualitatively assessing models and which may prove useful to others interested in analyzing autoencoder representations.
2 Spatial Broadcast Decoder
When modeling the distribution of images in a dataset with a variational autoencoder (Kingma and Welling, 2014; Rezende et al., 2014), standard architectures use an encoder consisting of a downsampling convolutional network followed by an MLP and a decoder consisting of an MLP followed by an upsampling deconvolutional network. The convolutional and deconvolutional networks share features across space, improving training efficiency for VAEs much like they do for all models that use image data.
However, while convolution surely lends some useful spatial inductive bias to the representations, a standard VAE learns highly entangled representations in an effort to represent the data as closely as possible to its Gaussian prior (e.g. see Figure 7).
A number of new variations of the VAE objective have been developed to alleviate this problem, though all of them introduce additional hyperparameters (Higgins et al., 2017a; Burgess et al., 2018; Kim and Mnih, 2017; Chen et al., 2018). Furthermore, a recent study found them to be extremely sensitive to these hyperparameters (Locatello et al., 2018).
Meanwhile, upsampling deconvolutional networks (like the one in the standard VAE’s decoder) have been found to pose optimization challenges, such as producing checkerboard artifacts (Odena et al., 2016) and spatial discontinuities (Liu et al., 2018), effects that seem likely to raise problems for representation learning in the VAE’s latent space.
Intuitively, asking a deconvolutional network to render an object at a particular position is a tall order — the network’s filters have no explicit spatial information, in other words they don’t “know where they are.” Hence the network must learn to propagate spatial asymmetries down from its highest layers and in from the spatial boundaries of the layers. This requires learning a complicated function, so optimization is difficult. To remedy this, in the Spatial Broadcast decoder we remove all upsampling deconvolutions from the network, instead tiling (broadcasting) the latent vector across space, appending fixed coordinate channels, then applying an unstrided fully convolutional network. This operation is depicted and described in Figure 1. With this architecture, rendering an object at a position becomes a very simple function (essentially just a thresholding operation in addition to the local convolutional features), though the network still has capacity to represent some more complex datasets (e.g. see Figure 5). Such simplicity of computation yields ease of optimization, and indeed we find that the Spatial Broadcast decoder greatly improves performance in a variety of VAEbased models.
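The broadcast-plus-coordinates operation itself takes only a few lines. Here is a minimal NumPy sketch (the channels-last layout and the [-1, 1] coordinate range are our illustrative choices, not necessarily those of the original implementation):

```python
import numpy as np

def spatial_broadcast(z, height, width):
    """Tile a batch of latent vectors across space and append fixed
    coordinate channels, producing the input to an unstrided conv net.

    z: array of shape (batch, latent_dim).
    Returns: array of shape (batch, height, width, latent_dim + 2).
    """
    batch, latent_dim = z.shape
    # Broadcast each latent vector to every spatial location.
    z_tiled = np.tile(z[:, None, None, :], (1, height, width, 1))
    # Fixed x- and y-coordinate channels spanning [-1, 1].
    ys = np.linspace(-1.0, 1.0, height)
    xs = np.linspace(-1.0, 1.0, width)
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    coords = np.stack([xx, yy], axis=-1)                    # (H, W, 2)
    coords = np.broadcast_to(coords, (batch, height, width, 2))
    return np.concatenate([z_tiled, coords], axis=-1)
```

A 1x1-strided convolutional network applied to this tensor can render content at a position essentially by comparing the fixed coordinate channels against the tiled latent values, which is the simple thresholding computation described above.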
Note that the Spatial Broadcast decoder does not provide any additional supervision to the generative model. The model must still learn to encode spatial information in its latent space in order to reconstruct accurately. The Spatial Broadcast decoder only allows the model to use the encoded spatial information in its latent space very efficiently.
In addition to better disentangling, the Spatial Broadcast VAE can, on some datasets, yield better reconstructions (see Figure 4), all with shallower networks and fewer parameters than a standard deconvolutional architecture. However, it is worth noting that if the data does not benefit from access to an absolute coordinate system, the Spatial Broadcast decoder could hurt. A standard DeConv decoder may in some cases more easily place patterns relative to each other or capture more extended spatial correlations. As we show in Section 4.2, we did not find this to be the case in the datasets we explored, but it remains a possible limitation of our model.
While the Spatial Broadcast decoder can be applied to any generative model that renders images from a vector representation, here we only explore its application to VAE models.
3 Related Work
Some of the ideas behind the Spatial Broadcast decoder have been explored extensively in many other bodies of work.
The idea of appending coordinate channels to convolutional layers has recently been highlighted (and named CoordConv) in the context of improving positional generalization (Liu et al., 2018). However, the CoordConv technique had been used beforehand (Zhao et al., 2015; Liang et al., 2015; Watters et al., 2017; Wojna et al., 2017; Perez et al., 2017; Nash et al., 2017; Ulyanov et al., 2017) and its origin is unclear. The CoordConv VAE (Liu et al., 2018) incorporates CoordConv layers into an upsampling deconvolutional network in a VAE (which, as we show in Appendix D, does not yield representation-learning benefits comparable to those of the Spatial Broadcast decoder). To our knowledge no prior work has combined CoordConv with spatially tiling a generative model’s representation as we do here.
Another line of work that uses coordinate channels extensively is language modeling. In this context, many models use fixed or learned position embeddings, combined in various ways with the input sequences, to help compute transformations that depend on the sequence context in a differentiable manner (Gu et al., 2016; Vaswani et al., 2017; Gehring et al., 2017; Devlin et al., 2018).
We can draw parallels between the Spatial Broadcast decoder and generative models that learn “where” to write to an image (Jaderberg et al., 2015; Gregor et al., 2015; Reed et al., 2016), but in comparison these models render local image patches, whereas we spatially tile a learned latent embedding and render an entire image. Work by Dorta et al. (2017) explored using a Laplacian Pyramid arrangement for VAEs, where some parts of the latent distributions would have global effects while others would modify fine-scale details of the reconstruction. Similar constraints on the extent of the spatial neighbourhood affected by the latent distribution have also been explored by Parmar et al. (2018). Other types of generative models already use convolutional latent distributions (Finn et al., 2015; Levine et al., 2015; Eslami et al., 2018), unlike the flat vectors we consider here. In that case the tiling operation is not necessary, but adding coordinate channels might still help.
4 Results
We present here a select subset of our assessment of the Spatial Broadcast decoder. Interested readers can refer to Appendices A–G for a more thorough set of observations and comparisons.
4.1 Performance on colored sprites



[Figure 2 caption fragment: comparison of a DeConv VAE and a Spatial Broadcast VAE on colored sprites; only the lowest-variance latent coordinates are shown in the traversals (the remainder are non-coding coordinates).]
The Spatial Broadcast decoder was designed with object-feature representations in mind, hence to initially showcase its performance we use a dataset of simple objects: colored 2-dimensional sprites. This dataset is described in the literature (Burgess et al., 2018) as a colored version of the dSprites dataset (Matthey et al., 2017). One advantage of the colored sprites dataset is that it has known factors of variation, of which there are 8: X-position, Y-position, size, shape, angle, and three-dimensional color. Thus we can evaluate disentangling performance with metrics that rely on ground-truth factors of variation (Chen et al., 2018; Kim and Mnih, 2017).
To quantitatively evaluate disentangling, we focus primarily on the Mutual Information Gap (MIG) metric (Chen et al., 2018). The MIG metric is defined by first computing the matrix of mutual information between the latent distribution means and the ground-truth factors, then computing, for each ground-truth factor, the difference between the highest and second-highest elements (i.e. the gap), normalized by the factor's entropy, and finally averaging these values over all factors.
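Given the mutual-information matrix, the metric can be sketched in a few lines of NumPy (a simplified rendering of the definition above; estimating the mutual information itself, which Chen et al. (2018) do from the latent means, is omitted here):

```python
import numpy as np

def mig(mi, factor_entropies):
    """Mutual Information Gap.

    mi: (num_latents, num_factors) mutual information between each
        latent dimension and each ground-truth factor.
    factor_entropies: (num_factors,) factor entropies used to normalize
        each gap to [0, 1].
    """
    sorted_mi = np.sort(mi, axis=0)[::-1]   # descending within each factor
    gaps = sorted_mi[0] - sorted_mi[1]      # highest minus second-highest
    return float(np.mean(gaps / factor_entropies))
```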
Despite significant shortcomings (see Section 4.4), we found that the MIG metric can overall be a helpful tool for evaluating representational quality, though it should by no means be trusted blindly. We also used the FactorVAE metric (Kim and Mnih, 2017), but found it to be less consistent than MIG (we report those results in Appendix F).
In Figure 2 we compare a standard DeConv VAE (a VAE with an MLP + deconvolutional network decoder) to a Spatial Broadcast VAE (a VAE with the Spatial Broadcast decoder). We see that the Spatial Broadcast VAE outperforms the DeConv VAE both in terms of the MIG metric and traversal visualizations. The Spatial Broadcast VAE traversals clearly show all 8 factors represented in separate latent factors, seemingly on par with more complicated stateoftheart methods on this dataset (Burgess et al., 2018). The hyperparameters for the models in Figure 2 were chosen over a large sweep to minimize the model’s error, not chosen for any disentangling properties explicitly. See Appendix F for more details about hyperparameter choices and sensitivity.
The Spatial Broadcast decoder is complementary to existing disentangling VAE techniques, hence it improves not only a vanilla VAE but also SOTA models. To demonstrate this, we consider two recently developed models, FactorVAE (Kim and Mnih, 2017) and β-VAE (Higgins et al., 2017a).



Figure 3 shows the Spatial Broadcast decoder improving the disentangling of FactorVAE in terms of both the MIG metric and traversal visualizations. See Appendix G for further results to this effect.


[Figure 4 caption fragment: the shaded region shows the hull of one standard deviation, and white dots indicate values of β. (a) Reconstruction (Negative Log-Likelihood, NLL) vs. KL. Low β yields low NLL and high KL (bottom-right of figure), whereas high β yields high NLL and low KL (top-left of figure). See Alemi et al. (2017) for details. The Spatial Broadcast VAE shows a better rate-distortion curve than the DeConv VAE. (b) Reconstruction vs. MIG metric. Low β values correspond to low NLL and low MIG scores (bottom-left of figure), and high β values correspond to high NLL and high MIG scores (toward the top-right of figure). The Spatial Broadcast VAE is better disentangled (higher MIG scores) than the DeConv VAE.]

Figure 4 shows the performance of a β-VAE with and without the Spatial Broadcast decoder for a range of values of β. Not only does the Spatial Broadcast decoder improve disentangling, it also yields a lower rate-distortion curve, hence learns to represent the data more efficiently than the same model with a DeConv decoder.
4.2 Datasets without positional variation


The colored sprites dataset discussed in Section 4.1 seems particularly well-suited for the Spatial Broadcast decoder because X- and Y-position are factors of variation. However, we also evaluate the architecture on datasets that have no positional factors of variation: the Chairs and 3D Object-in-Room datasets (Aubry et al., 2014; Kim and Mnih, 2017). In the latter, the factors of variation are highly non-local, affecting multiple regions spanning nearly the entire image. We find that on both datasets a Spatial Broadcast VAE learns representations that look very well-disentangled, seemingly as well as SOTA methods on these datasets and without any modification of the standard VAE objective (Kim and Mnih, 2017; Higgins et al., 2017a). See Figure 5 for results (and supplementary Figure 12 for additional traversals).
While this shows that the Spatial Broadcast decoder does not hurt when used on datasets without positional variation, it may seem unlikely that it would help in this context. However, we show in Appendix G that it actually can help to some extent on such datasets. We attribute this to its use of a shallower network and its lack of upsampling deconvolutions (which have been observed to cause optimization difficulties in a variety of settings (Liu et al., 2018; Odena et al., 2016)).
4.3 Datasets with small objects
In exploring datasets with objects varying in position, we often find a (standard) DeConv VAE learns a representation that is discontinuous with respect to object position. This effect is amplified as the size of the object decreases. This makes sense, because the pressure for a VAE to represent position continuously comes from the fact that an object and a position-perturbed version of itself overlap in pixel space (hence it is economical for the VAE to map noise in its latent samples to local translations of an object). However, as an object’s size decreases, this pixel overlap decreases, hence the pressure for a VAE to represent position continuously weakens.
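This intuition is easy to check numerically. The toy computation below (our own illustration, not from the paper) measures how much a square object overlaps a one-pixel-shifted copy of itself:

```python
import numpy as np

def shift_overlap(obj_size, shift=1, canvas=32):
    """Fraction of an object's pixels that still overlap itself after a
    small translation. As the object shrinks, this overlap (and with it
    the pressure to represent position continuously) drops."""
    img = np.zeros((canvas, canvas))
    img[:obj_size, :obj_size] = 1.0          # square object
    shifted = np.roll(img, shift, axis=1)    # translate by `shift` pixels
    return (img * shifted).sum() / img.sum()
```

For example, a 16-pixel-wide square retains 15/16 of its overlap after a one-pixel shift, while a 2-pixel-wide square retains only 1/2.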
4.4 Latent space geometry visualization
[Figure 7 panel titles: dataset distribution, ground-truth factors, and dataset samples; MIG metric values with latent geometry of the worst and best replicas.]



Evaluating the quality of a representation can be challenging and time-consuming. While a number of metrics have been proposed to quantify disentangling, many of them have serious shortcomings and there is as yet no consensus in the literature as to which to use (Locatello et al., 2018). We believe it is impossible to quantify how good a representation is with a single scalar, because there is a fundamental trade-off between how much information a representation contains and how well-structured the representation is. This has been noted by others in the disentangling literature (Ridgeway and Mozer, 2018; Eastwood and Williams, 2018). This disentangling-distortion trade-off is a recapitulation of the rate-distortion trade-off (Alemi et al., 2017) and can be seen firsthand in Figure 4. We would like representations that both reconstruct well and disentangle well, but exactly how to balance these two factors is a matter of subjective preference (and surely depends on the dataset). Any scalar disentangling metric will implicitly favor some arbitrary point on this disentangling-reconstruction trade-off.
In addition to this unavoidable limitation of disentangling metrics, we found that the MIG metric (while perhaps more accurate than other existing metrics) does not capture the intuitive notion of disentangling because:

It depends on a choice of basis for the ground truth factors, and heavily penalizes rotation of the representation with respect to this basis. Yet it is often unclear what the correct basis for the ground truth factors is (e.g. RGB vs HSV vs HSL). For example, see the bottom row of Figure 7.

It is invariant to a folding of the representation space, as long as the folds align with the axes of variation. See the middle row of Figure 7 for an example of a double fold in the latent space which is not penalized by the MIG metric.
Due to the subjective nature of disentangling and the difficulty in defining appropriate metrics, we put heavy emphasis on latent space visualization as a means for representational analysis. Latent space traversals have been extensively used in the literature and can be quite revealing (Higgins et al., 2017a, b). However, in our experience, traversals suffer two shortcomings:

Some latent space entanglement can be difficult for the eye to perceive in traversals. For example, a slight change in brightness in a latent traversal that represents changing position can easily go unnoticed.

Traversals only represent the latent space geometry around one point in space, and crossreferencing corresponding traversals between multiple points is quite timeconsuming.
Consequently, we caution the reader against relying too heavily on traversals when evaluating latent space geometry. We propose an additional method for analyzing latent spaces, which we found very useful in our research. It is applicable when there are known generative factors in the dataset and aims to directly view the embedding of generative factor space in the model’s latent space: we plot in latent space the locations corresponding to a grid of points in generative factor space. While this can only visualize the latent embedding of a 2-dimensional subspace of generative factor space, it can be very revealing of the latent space geometry.
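In code, the procedure amounts to encoding rendered images for a grid of factor values and collecting the coding latent coordinates. A schematic sketch follows, where `render` and `encode_mean` are hypothetical placeholders for a dataset renderer and a trained encoder:

```python
import numpy as np

def latent_geometry_grid(render, encode_mean, xs, ys, coding_dims):
    """Embed a 2D grid of generative factors in latent space.

    render: maps two factor values to an image.
    encode_mean: maps an image to the vector of latent means.
    coding_dims: indices of the two coding latent dimensions to keep.
    Returns: (len(xs), len(ys), 2) array; plotting its rows and columns
    as connected lines shows the (possibly deformed) grid in latent space.
    """
    pts = np.zeros((len(xs), len(ys), 2))
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            mu = encode_mean(render(x, y))
            pts[i, j] = np.asarray(mu)[list(coding_dims)]
    return pts
```

A Euclidean-looking grid indicates a well-behaved embedding of the factor subspace; folds or rotations in the embedding are immediately visible.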
We showcase this analysis method in Figure 7 on a dataset of circles varying in X- and Y-position. (See Appendix G for similar analyses on many more datasets.) We compare the representations learned by a DeConv VAE and a Spatial Broadcast VAE. This reveals a stark contrast: the Spatial Broadcast latent geometry is a near-perfect Euclidean transformation, while the DeConv decoder model’s representations are very entangled.
The MIG metric does not capture this difference well: it gives a score near zero to a Spatial Broadcast model with a latent space rotated with respect to the generative factors, and a greater score to a highly entangled DeConv model. In practice we care primarily about compositionality of (and hence disentangling of) subspaces of generative factor space, not about axis-alignment within these subspaces. For example, rotation within the X-Y position subspace is acceptable, whereas rotation in position-color space is not. This naturally poses a great challenge both for designing disentangling metrics and for formally defining disentangling (cf. Higgins et al., 2018). We believe that additional structure over the factors of variation can be inferred from temporal correlations, the structure of a reinforcement learning agent’s action space, and other effects not necessarily captured by a static-image dataset. Nonetheless, inductive biases like the Spatial Broadcast decoder can be explored in the context of static images, as they will likely also help representation learning in more complete contexts.
4.5 Datasets with dependent factors
[Figure 8 panel titles: dataset distribution, ground-truth factors, and dataset samples; MIG metric values with latent geometry of the worst and best replicas.]



Our primary motivation for unsupervised representation learning is the prospect of improved generalization and transfer, as discussed in Section 1. Consequently, we would like a model to learn compositional representations that can generalize to new feature combinations. We explore this in a controlled setting by holding out a region of generative factor space from the training dataset, then evaluating the representation in this held-out region. Figure 8 shows results to this effect, comparing the Spatial Broadcast VAE to a DeConv VAE on a dataset of circles with a held-out region in the middle of X-Y position space (so the model sees no circles in the middle of the image, indicated by the black dots in the second column). The Spatial Broadcast VAE generalizes almost perfectly in this case, appearing unaffected by the fact that the data generative factors are no longer independent, and extrapolating nearly linearly throughout the held-out region. In contrast, the DeConv VAE’s latent space is highly entangled, even more so than in the case with independent factors in Figure 7. See Appendix G for more analyses of generalization on other datasets.
From the perspective of density modeling, disentangling of the Spatial Broadcast VAE may seem undesirable in this case because it allocates high probability in latent space to a region of low (here, zero) probability in the dataset. However, from the perspective of representation learning and compositional features, such generalization is a highly desirable property.
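One simple way to quantify this kind of generalization (our own diagnostic sketch, not a metric used in the paper) is to fit an affine map from ground-truth factors to latent means on the training region and score its predictions on the held-out region:

```python
import numpy as np

def extrapolation_r2(z_train, f_train, z_test, f_test):
    """R^2 of an affine factors-to-latents fit, evaluated on held-out
    factor combinations. A score near 1 means the latent embedding
    extrapolates nearly linearly into the held-out region."""
    A = np.c_[f_train, np.ones(len(f_train))]        # append bias column
    W, *_ = np.linalg.lstsq(A, z_train, rcond=None)  # least-squares fit
    pred = np.c_[f_test, np.ones(len(f_test))] @ W
    ss_res = ((z_test - pred) ** 2).sum()
    ss_tot = ((z_test - z_test.mean(axis=0)) ** 2).sum()
    return 1.0 - ss_res / ss_tot
```

Under this diagnostic, a representation like the Spatial Broadcast VAE's near-Euclidean embedding would score close to 1, while a folded or entangled embedding would score substantially lower.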
5 Conclusion
Here we present and analyze the Spatial Broadcast decoder in the context of Variational Autoencoders. We demonstrate that it improves learned latent representations, most dramatically for datasets with objects varying in position. It also improves generalization in latent space and can be incorporated into SOTA models to boost their performance in terms of both disentangling and reconstruction accuracy. We rigorously analyze the contribution of the Spatial Broadcast decoder on a wide variety of datasets and models using a range of visualizations and metrics. We hope that this analysis has provided the reader with an intuitive understanding of how and why it improves learned representations.
We believe that learning compositional representations is an important ingredient for flexibility and generalization in many contexts, from supervised learning to reinforcement learning, and the Spatial Broadcast decoder is one step towards robust compositional visual representation learning.
Acknowledgments
We thank Irina Higgins, Danilo Rezende, Matt Botvinick, and Yotam Doron for helpful discussions and insights.
References
 Alemi et al. [2017] Alexander A. Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, and Kevin Murphy. An information-theoretic analysis of deep latent-variable models. CoRR, abs/1711.00464, 2017. URL http://arxiv.org/abs/1711.00464.
 Aubry et al. [2014] M. Aubry, D. Maturana, A. Efros, B. Russell, and J. Sivic. Seeing 3D chairs: exemplar part-based 2D-3D alignment using a large dataset of CAD models. In CVPR, 2014.
 Bengio et al. [2013] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013.
 Burgess et al. [2018] Christopher P. Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in β-VAE. arXiv, 2018.
 Chen et al. [2018] Tian Qi Chen, Xuechen Li, Roger B. Grosse, and David K. Duvenaud. Isolating sources of disentanglement in variational autoencoders. CoRR, abs/1802.04942, 2018. URL http://arxiv.org/abs/1802.04942.
 Devlin et al. [2018] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805.
 Dorta et al. [2017] Garoe Dorta, Sara Vicente, Lourdes Agapito, Neill D.F. Campbell, Simon Prince, and Ivor Simpson. Laplacian pyramid of conditional variational autoencoders. In Proceedings of the 14th European Conference on Visual Media Production (CVMP 2017), CVMP 2017, pages 7:1–7:9, New York, NY, USA, 2017. ACM. ISBN 9781450353298. doi: 10.1145/3150165.3150172. URL http://doi.acm.org/10.1145/3150165.3150172.
 Eastwood and Williams [2018] Cian Eastwood and Christopher K. I. Williams. A framework for the quantitative evaluation of disentangled representations. ICLR, 2018.

 Eslami et al. [2018] S. M. Ali Eslami, Danilo Jimenez Rezende, Frederic Besse, Fabio Viola, Ari S. Morcos, Marta Garnelo, Avraham Ruderman, Andrei A. Rusu, Ivo Danihelka, Karol Gregor, David P. Reichert, Lars Buesing, Theophane Weber, Oriol Vinyals, Dan Rosenbaum, Neil Rabinowitz, Helen King, Chloe Hillier, Matt Botvinick, Daan Wierstra, Koray Kavukcuoglu, and Demis Hassabis. Neural scene representation and rendering. Science, 360(6394):1204–1210, 2018. ISSN 0036-8075. doi: 10.1126/science.aar6170. URL http://science.sciencemag.org/content/360/6394/1204.
 Finn et al. [2015] Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Learning visual feature spaces for robotic manipulation with deep spatial autoencoders. CoRR, abs/1509.06113, 2015. URL http://arxiv.org/abs/1509.06113.
 Gehring et al. [2017] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional sequence to sequence learning. CoRR, abs/1705.03122, 2017. URL http://arxiv.org/abs/1705.03122.
 Gregor et al. [2015] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. CoRR, abs/1502.04623, 2015. URL http://arxiv.org/abs/1502.04623.
 Gu et al. [2016] Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. Incorporating copying mechanism in sequence-to-sequence learning. CoRR, abs/1603.06393, 2016. URL http://arxiv.org/abs/1603.06393.
 Higgins et al. [2017a] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. β-VAE: Learning basic visual concepts with a constrained variational framework. ICLR, 2017a.
 Higgins et al. [2017b] Irina Higgins, Nicolas Sonnerat, Loic Matthey, Arka Pal, Christopher P Burgess, Matthew Botvinick, Demis Hassabis, and Alexander Lerchner. Scan: learning abstract hierarchical compositional visual concepts. arXiv preprint arXiv:1707.03389, 2017b.
 Higgins et al. [2018] Irina Higgins, David Amos, David Pfau, Sébastien Racanière, Loïc Matthey, Danilo J. Rezende, and Alexander Lerchner. Towards a definition of disentangled representations. CoRR, abs/1812.02230, 2018. URL http://arxiv.org/abs/1812.02230.
 Ioffe and Szegedy [2015] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015. URL http://arxiv.org/abs/1502.03167.
 Jaderberg et al. [2015] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. CoRR, abs/1506.02025, 2015. URL http://arxiv.org/abs/1506.02025.
 Kim and Mnih [2017] Hyunjik Kim and Andriy Mnih. Disentangling by factorising. arXiv, 2017.
 Kingma and Ba [2015] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.
 Kingma and Welling [2014] Diederik P. Kingma and Max Welling. Autoencoding variational bayes. ICLR, 2014.
 Lake et al. [2016] Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. Behavioral and Brain Sciences, pages 1–101, 2016.
 Levine et al. [2015] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. Endtoend training of deep visuomotor policies. CoRR, abs/1504.00702, 2015. URL http://arxiv.org/abs/1504.00702.
 Liang et al. [2015] Xiaodan Liang, Yunchao Wei, Xiaohui Shen, Jianchao Yang, Liang Lin, and Shuicheng Yan. Proposal-free network for instance-level object segmentation. CoRR, abs/1509.02636, 2015. URL http://arxiv.org/abs/1509.02636.
 Liu et al. [2018] Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, and Jason Yosinski. An intriguing failing of convolutional neural networks and the CoordConv solution. CoRR, abs/1807.03247, 2018. URL http://arxiv.org/abs/1807.03247.
 Locatello et al. [2018] Francesco Locatello, Stefan Bauer, Mario Lucic, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. arXiv preprint arXiv:1811.12359, 2018.
 Marcus [2018] Gary Marcus. Deep learning: A critical appraisal. CoRR, abs/1801.00631, 2018. URL http://arxiv.org/abs/1801.00631.
 Matthey et al. [2017] Loic Matthey, Irina Higgins, Demis Hassabis, and Alexander Lerchner. dSprites: Disentanglement testing sprites dataset, 2017. URL https://github.com/deepmind/dsprites-dataset/.
 Nash et al. [2017] Charlie Nash, Ali Eslami, Chris Burgess, Irina Higgins, Daniel Zoran, Theophane Weber, and Peter Battaglia. The multi-entity variational autoencoder. NIPS Workshops, 2017.
 Odena et al. [2016] Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts. Distill, 2016. doi: 10.23915/distill.00003. URL http://distill.pub/2016/deconvcheckerboard.
 Parmar et al. [2018] Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, and Alexander Ku. Image transformer. CoRR, abs/1802.05751, 2018. URL http://arxiv.org/abs/1802.05751.
 Perez et al. [2017] Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. Film: Visual reasoning with a general conditioning layer. CoRR, abs/1709.07871, 2017. URL http://arxiv.org/abs/1709.07871.
 Reed et al. [2016] Scott E. Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learning what and where to draw. CoRR, abs/1610.02454, 2016. URL http://arxiv.org/abs/1610.02454.

 Rezende et al. [2014] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. ICML, 32(2):1278–1286, 2014.
 Ridgeway and Mozer [2018] Karl Ridgeway and Michael C Mozer. Learning deep disentangled embeddings with the f-statistic loss. NIPS, 2018.
 Ulyanov et al. [2017] Dmitry Ulyanov, Andrea Vedaldi, and Victor S. Lempitsky. Deep image prior. CoRR, abs/1711.10925, 2017. URL http://arxiv.org/abs/1711.10925.
 Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017. URL http://arxiv.org/abs/1706.03762.
 Watters et al. [2017] Nicholas Watters, Daniel Zoran, Theophane Weber, Peter Battaglia, Razvan Pascanu, and Andrea Tacchetti. Visual interaction networks: Learning a physics simulator from video. In Advances in Neural Information Processing Systems 30, pages 4539–4547. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7040-visual-interaction-networks-learning-a-physics-simulator-from-video.pdf.
 Wojna et al. [2017] Zbigniew Wojna, Alexander N. Gorban, Dar-Shyang Lee, Kevin Murphy, Qian Yu, Yeqing Li, and Julian Ibarz. Attention-based extraction of structured information from street view imagery. CoRR, abs/1704.03549, 2017. URL http://arxiv.org/abs/1704.03549.
 Zhao et al. [2015] Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann LeCun. Stacked what-where autoencoders. arXiv, abs/1506.02351, 2015. URL https://arxiv.org/abs/1506.02351.
Appendix A Experiment Details
For all VAE models we used a Bernoulli decoder distribution, parametrized by its logits. The reconstruction error (negative log likelihood) was computed with respect to this distribution. This could accommodate our datasets, since they were normalized to have pixel values in [0, 1]. We also explored using a Gaussian decoder distribution with fixed variance (for which the NLL is equivalent to a scaled MSE), and found that this produces qualitatively similar results and in fact improves stability. Hence, while a Bernoulli distribution usually works, we suggest that readers wishing to experiment with these models start with a Gaussian decoder distribution whose mean is parameterized by the decoder network output and whose variance is constant.
In all networks we used ReLU activations, weights initialized by a truncated normal (see [Ioffe and Szegedy, 2015]), and biases initialized to zero. We used no other neural network tricks (no BatchNorm or dropout), and all models were trained with the Adam optimizer [Kingma and Ba, 2015].
A.1 VAE Hyperparameters
For all VAE models except the β-VAE (shown only in Figure 4), we use a standard VAE loss, i.e. with KL term coefficient β = 1. For FactorVAE we also use the γ value from Kim and Mnih [2017].
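As a concrete sketch of this objective (a minimal numpy illustration, not our training code; the toy image and the default σ are placeholders):

```python
import numpy as np

def bernoulli_nll(logits, x):
    """Reconstruction NLL of a Bernoulli decoder parameterized by logits
    (numerically stable sigmoid cross-entropy, summed over pixels)."""
    return float(np.sum(np.maximum(logits, 0) - logits * x
                        + np.log1p(np.exp(-np.abs(logits)))))

def gaussian_nll(mean, x, sigma=1.0):
    """Reconstruction NLL of a fixed-variance Gaussian decoder:
    equivalent to a scaled MSE plus a constant."""
    return float(np.sum(0.5 * ((x - mean) / sigma) ** 2
                        + 0.5 * np.log(2 * np.pi * sigma ** 2)))

def vae_loss(nll, mu, logvar, beta=1.0):
    """(beta-)VAE loss: reconstruction NLL plus beta times the KL of the
    diagonal-Gaussian posterior N(mu, exp(logvar)) from the N(0, I) prior.
    beta = 1 recovers the standard VAE loss used here."""
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return nll + beta * kl

x = np.array([[0.0, 1.0], [0.5, 0.25]])   # toy "image" with pixels in [0, 1]
mu, logvar = np.zeros(10), np.zeros(10)   # posterior equal to the prior, so KL = 0
loss = vae_loss(bernoulli_nll(np.zeros_like(x), x), mu, logvar)
print(loss)  # 4 * log(2) ≈ 2.77, since logits of 0 predict p = 0.5 everywhere
```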
For the VAE, β-VAE, CoordConv VAE and ablation study we used the network parameters in Table 1. We note that, while the Spatial Broadcast decoder uses fewer parameters than the DeConv decoder, it does require more memory to store the weights. However, for the 3D Object-in-Room dataset we included three additional convolutional layers in the Spatial Broadcast decoder (without these additional layers the decoder was not powerful enough to give good reconstructions on that dataset).
All of these models were trained with batch size 16. All convolutional and deconvolutional layers have "same" padding, i.e. the input is zero-padded so that the output shape equals the input shape divided by the stride in the case of convolution and multiplied by the stride in the case of deconvolution.
A.2 FactorVAE
For the FactorVAE model, we used the hyperparameters described in the FactorVAE paper [Kim and Mnih, 2017]. Those network parameters are reiterated in Table 2. Note that the Spatial Broadcast parameters are the same as for the other models in Table 1. For the optimization hyperparameters we also followed Kim and Mnih [2017], using separate learning rates for the VAE updates and the discriminator updates, and batch size 32. These parameters generally gave stable results.
However, when training the FactorVAE model on colored sprites we encountered instability during training. We subsequently ran a number of hyperparameter sweeps attempting to improve stability, but to no avail. Ultimately, we used the hyperparameters in Table 2, though even with limited training steps (see Appendix Section A.4) a fraction of seeds diverged before training completed for both the Spatial Broadcast and DeConv decoders.
A.3 Datasets
All datasets were rendered as images with pixel values normalized to [0, 1].
Colored Sprites:
For this dataset, we use the binary dSprites dataset open-sourced in [Matthey et al., 2017], multiplied by colors sampled uniformly from a box-shaped region of HSV space. Sans color, there are 737,280 images in this dataset. However, we sample the colors online from a continuous distribution, effectively making the dataset size infinite.
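A minimal sketch of the online color sampling (the HSV ranges below are illustrative placeholders, not the exact region used for the dataset):

```python
import colorsys
import random

def sample_sprite_color(rng, h_range=(0.0, 1.0), s_range=(0.5, 1.0), v_range=(0.5, 1.0)):
    """Sample an RGB color by drawing (h, s, v) uniformly from a box in HSV
    space and converting. The ranges here are illustrative placeholders."""
    h = rng.uniform(*h_range)
    s = rng.uniform(*s_range)
    v = rng.uniform(*v_range)
    return colorsys.hsv_to_rgb(h, s, v)

# Colors are sampled online, one per training image, so the effective
# dataset is never exhausted.
rng = random.Random(0)
r, g, b = sample_sprite_color(rng)
```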
Chairs:
This dataset is open-sourced in [Aubry et al., 2014]. Unlike all the other datasets we use, its images have only a single channel. It contains 86,366 images.
3D Object-in-Room:
This dataset was used extensively in the FactorVAE paper [Kim and Mnih, 2017]. It consists of an object in a room and has 6 factors of variation: camera angle, object size, object shape, object color, wall color, and floor color. The colors are sampled uniformly from a continuous range of hues. This dataset contains 480,000 images, procedurally generated as all combinations of 10 floor hues, 10 wall hues, 10 object hues, 8 object sizes, 4 object shapes, and 15 camera angles.
Circles Datasets:
To more thoroughly explore datasets with a variety of distributions, factors of variation, and held-out test sets, we wrote our own procedural image generator for circular objects in PyGame (rendered with an anti-aliasing factor of 5). We used this to generate the data for the results in Section 4.4. In these datasets we control subsets of the following factors of variation: X-position, Y-position, Size, and Color. We generated five datasets in this way, which we call XY, XH, RG, XYH Small, and XYH Tiny; they can be seen in Figures 13, 16, 19, 6, and 10 respectively.
Table 3 shows the values of these factors for each dataset. Note that for some datasets we define the color distribution in RGB space, and for others we define it in HSV space.
To create the datasets with dependent factors, we hold out one quarter of the dataset (the intersection of half of the ranges of each of the two factors), either centered within the data distribution or in the corner.
For each dataset we generate 500,000 randomly sampled training images.
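The held-out-quarter construction described above can be sketched as rejection sampling over two factors (the ranges and holdout placement here are illustrative):

```python
import random

def sample_factors(rng, x_range=(0.2, 0.8), y_range=(0.2, 0.8), holdout="center"):
    """Sample an (x, y) factor pair, rejecting the held-out quarter: the
    intersection of half of each factor's range, placed either at the center
    of the distribution or in a corner. Ranges and placement are illustrative."""
    x_lo, x_hi = x_range
    y_lo, y_hi = y_range
    half_x = (x_hi - x_lo) / 2.0
    half_y = (y_hi - y_lo) / 2.0
    if holdout == "center":
        hx = (x_lo + half_x / 2.0, x_hi - half_x / 2.0)
        hy = (y_lo + half_y / 2.0, y_hi - half_y / 2.0)
    else:  # "corner"
        hx = (x_hi - half_x, x_hi)
        hy = (y_hi - half_y, y_hi)
    while True:  # rejection sampling: resample until outside the held-out block
        x = rng.uniform(x_lo, x_hi)
        y = rng.uniform(y_lo, y_hi)
        if not (hx[0] <= x <= hx[1] and hy[0] <= y <= hy[1]):
            return x, y

rng = random.Random(0)
samples = [sample_factors(rng) for _ in range(1000)]
```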
Dataset  X  Y  Size  (H, S, V)  (R, G, B)
XY  [0.2, 0.8]  [0.2, 0.8]  0.2  N/A  (1.0, 1.0, 1.0)
XH  [0.2, 0.8]  0.5  0.3  ([0.2, 0.8], 1.0, 1.0)  N/A
RG  0.5  0.5  0.5  N/A  ([0.4, 0.8], [0.4, 0.8], 1.0)
XYH Small  [0.2, 0.8]  [0.2, 0.8]  0.1  ([0.2, 0.8], 1.0, 1.0)  N/A
XYH Tiny  [0.2, 0.8]  [0.2, 0.8]  0.075  ([0.2, 0.8], 1.0, 1.0)  N/A
A.4 Training Steps
The number of training steps for each model on each dataset can be found in Table 4. In general, for each dataset we used enough training steps that all models converged. Note that while the number of training iterations for FactorVAE differs from that of the other models on colored sprites (due to FactorVAE's instability), this has no bearing on our results because we do not compare across models. We only compare across decoder architectures, and we always used the same number of training steps for both the DeConv and Spatial Broadcast decoders within each model.
Table 4: Training steps for each model (VAE, FactorVAE) on each dataset (Colored Sprites, Chairs, 3D Objects, Circles). FactorVAE was not trained on the Chairs and 3D Objects datasets.
Appendix B Ablation Study
One aspect of the Spatial Broadcast decoder is the concatenation of constant coordinate channels to its tiled input latent vector. While our justification of its performance emphasizes the simplicity of computation it affords, it may seem possible that the coordinate channels are only used to provide positional information, and that the simplicity of this positional information (a linear meshgrid) is irrelevant. Here we perform an ablation study to demonstrate that this is not the case: the organization of the coordinate channels is important. For this experiment, we randomly permute the coordinate channels through space. Specifically, we take the coordinate channels and randomly permute their entries across spatial locations, keeping each (x, y) pair together to ensure that after the shuffling each location still has a unique pair of coordinates. Importantly, we only shuffle the coordinate channels once, then keep them constant throughout training.
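A sketch of this shuffling procedure (the spatial size and random seed are arbitrary):

```python
import numpy as np

def coord_channels(h, w):
    """Standard coordinate channels: an [h, w, 2] array holding linear
    meshgrids of x and y coordinates in [-1, 1]."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    return np.stack([xs, ys], axis=-1)

def shuffled_coord_channels(h, w, rng):
    """Ablation: permute the coordinate entries across spatial locations,
    keeping each (x, y) pair together so every location still has a unique
    coordinate pair. The permutation is sampled once and then held fixed."""
    coords = coord_channels(h, w).reshape(h * w, 2)
    perm = rng.permutation(h * w)
    return coords[perm].reshape(h, w, 2)

rng = np.random.default_rng(0)
shuffled = shuffled_coord_channels(8, 8, rng)
print(shuffled.shape)  # (8, 8, 2)
```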
Figure 9 shows reconstructions and traversals for two replicas (with different shuffled coordinate channels). Both disentangling and reconstruction accuracy are significantly reduced.
Appendix C Disentangling Tiny Objects
In the dataset of small colored circles shown in Figure 6, the circle diameter is 0.1 times the frame-width. We also generated a dataset with circles 0.075 times the frame-width, and Figure 10 shows similar results on this dataset (albeit more difficult for the eye to make out). We were surprised to see disentangling of such tiny objects and have not explored the lower object-size limit for disentangling with the Spatial Broadcast decoder.
Appendix D CoordConv VAE
CoordConv VAE [Liu et al., 2018] has been proposed as a decoder architecture to improve the continuity of VAE representations. CoordConv VAE appends coordinate channels to every feature layer of the standard deconvolutional decoder, yet does not spatially tile the latent vector, hence retains upsampling deconvolutions.
Figure 11 shows an analysis of this model on the colored sprites dataset. While the latent space does appear to be continuous with respect to object position, it is quite entangled (far more so than in a Spatial Broadcast VAE). This is not very surprising, since CoordConv VAE uses upscaling deconvolutions to go all the way from a small spatial shape to the full image resolution, while in Table 10 we see that introducing upscaling hurts disentangling in a Spatial Broadcast VAE.


Appendix E Extra traversals for datasets without positional variation
As we acknowledge in the main text, a single latent traversal plot only shows local disentangling at one point in latent space. Hence, to support our claim in Section 4.2 that the Spatial Broadcast VAE disentangled the Chairs and 3D Objects datasets, we show in Figure 12 traversals for a second seed on each dataset, for the same models as in Figure 5.
Appendix F Architecture Hyperparameters
In order to remain objective when selecting model hyperparameters for the Spatial Broadcast and DeConv decoders, we chose hyperparameters based on minimizing the ELBO loss, without considering any information about disentangling. After finding reasonable encoder hyperparameters, we performed large-scale sweeps (25 replicas each) over a few decoder hyperparameters for both the DeConv and Spatial Broadcast decoders on the colored sprites dataset. These sweeps are revealing of hyperparameter sensitivity, so we report the following quantities for them:

ELBO. This is the evidence lower bound (the total VAE loss). It is the sum of the negative log likelihood (NLL) and the KL divergence.

NLL. This is the negative log likelihood of an image with respect to the model’s reconstructed distribution of that image. It is a measure of reconstruction accuracy.

KL. This is the KL divergence of the VAE’s latent distribution with its Gaussian prior. It measures how much information is being encoded in the latent space.

Latents Used. This is the mean number of latent coordinates with standard deviation below a threshold. Typically, a VAE will have some unused latent coordinates (with standard deviation near 1, matching the prior) and some used latent coordinates. The threshold is arbitrary, but this quantity does provide a rough idea of how many factors of variation the model may be representing.

MIG. The Mutual Information Gap metric (Chen et al., 2018).

Factor VAE. This is the metric described in the FactorVAE paper [Kim and Mnih, 2017]. We found this metric to be less consistent than the MIG metric (and equally flawed with respect to rotated coordinates), but it qualitatively agrees with the MIG metric most of the time.
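The "Latents Used" quantity above can be computed directly from the posterior standard deviations; a sketch (the threshold and toy values below are illustrative):

```python
import numpy as np

def latents_used(posterior_stds, threshold=0.5):
    """Mean number of latent coordinates whose posterior standard deviation
    falls below a threshold. Unused (non-coding) coordinates revert to the
    prior, with standard deviation near 1; the threshold value is arbitrary."""
    return float(np.mean(np.sum(posterior_stds < threshold, axis=-1)))

# Toy batch of 3 images with 4 latents each: in every row, two coordinates
# are coding (small std) and two sit near the prior (std near 1).
stds = np.array([[0.05, 0.10, 0.98, 1.00],
                 [0.07, 0.20, 0.95, 1.02],
                 [0.04, 0.15, 1.01, 0.99]])
print(latents_used(stds))  # 2.0
```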
F.1 ConvNet Depth
Table 5 shows results of sweeping over ConvNet depth in the Spatial Broadcast decoder. This reveals a consistent trend: As the ConvNet deepens, the model moves towards lower rate/higher distortion. Consequently, latent space information and reconstruction accuracy drop. Traversals with deeper nets show the model dropping factors of variation (the dataset has 8 factors of variation).
Table 6 shows a noisier but similar trend when increasing DeConvNet depth in the DeConv decoder.
ConvNet  ELBO  NLL  KL  Latents Used  MIG  Factor VAE 

2-Layer  339 (2.3)  312 (2.7)  27.5 (0.54)  8.33 (0.37)  0.076 (0.038)  0.187 (0.027) 
3-Layer  329 (4.1)  305 (4.6)  24.4 (0.54)  7.22 (0.34)  0.147 (0.057)  0.208 (0.084) 
4-Layer  341 (6.8)  318 (7.7)  22.6 (0.97)  5.93 (0.51)  0.157 (0.045)  0.226 (0.046) 
5-Layer  340 (8.8)  317 (9.7)  22.7 (0.99)  5.70 (0.37)  0.173 (0.059)  0.218 (0.030) 
ConvNet  ELBO  NLL  KL  Latents Used  MIG  Factor VAE 

3-Layer  372 (8.6)  346 (8.9)  26.8 (0.40)  9.20 (0.04)  0.031 (0.018)  0.144 (0.031) 
4-Layer  349 (9.4)  322 (10.0)  27.1 (0.88)  8.90 (0.24)  0.025 (0.015)  0.139 (0.009) 
5-Layer  340 (9.8)  314 (10.4)  26.0 (1.00)  7.95 (0.64)  0.056 (0.32)  0.184 (0.053) 
6-Layer  349 (15.0)  326 (16.1)  23.3 (1.42)  6.36 (0.81)  0.056 (0.019)  0.199 (0.029) 
F.2 MLP Depth
The Spatial Broadcast decoder as presented in this work is fully convolutional; it contains no MLP. However, motivated by the need for more depth on the 3D Object-in-Room dataset, we did explore applying an MLP to the input vector prior to the broadcast operation. We found that including this MLP had a qualitatively similar effect to increasing the number of convolutional layers on the colored sprites dataset, decreasing latent capacity and giving poorer reconstructions. These results are shown in Table 7.
However, on the 3D Object-in-Room dataset adding the MLP did improve the model when using ConvNet depth 3 (the same as for colored sprites). Results of a sweep over the depth of a pre-broadcast MLP are shown in Table 9. As mentioned in Section 4.2, we were able to achieve the same effect by instead increasing the ConvNet depth to 6, but for those interested in computational efficiency a pre-broadcast MLP may be a better choice for datasets of this sort.
In the DeConv decoder, increasing the number of MLP layers again has a broadly similar effect to increasing the number of ConvNet layers, as shown in Table 8.
MLP  ELBO  NLL  KL  Latents Used  MIG  Factor VAE 

0-Layer  329 (4)  305 (5)  24.5 (0.54)  7.21 (0.34)  0.147 (0.057)  0.208 (0.084) 
1-Layer  330 (6)  307 (6)  23.9 (0.72)  6.68 (0.46)  0.164 (0.043)  0.200 (0.034) 
2-Layer  349 (15)  327 (17)  21.5 (2.13)  6.03 (0.66)  0.210 (0.048)  0.232 (0.045) 
3-Layer  392 (23)  399 (114)  15.7 (2.93)  4.17 (1.23)  0.160 (0.034)  0.275 (0.064) 
MLP  ELBO  NLL  KL  Latents Used  MIG  Factor VAE 

1-Layer  347 (13)  321 (14)  25.8 (1.3)  7.97 (0.72)  0.052 (0.020)  0.174 (0.043) 
2-Layer  352 (17)  328 (18)  23.7 (1.7)  6.68 (0.78)  0.051 (0.024)  0.196 (0.024) 
3-Layer  365 (19)  345 (21)  19.7 (2.4)  5.24 (0.73)  0.144 (0.062)  0.243 (0.043) 
MLP  ELBO  NLL  KL  Latents Used  MIG  Factor VAE 

0-Layer  4039 (3.4)  4010 (3.3)  29.3 (0.46)  8.73 (0.41)  0.541 (0.091)  0.931 (0.043) 
1-Layer  4022 (3.7)  4003 (3.7)  19.3 (0.35)  6.30 (0.49)  0.538 (0.105)  0.946 (0.043) 
2-Layer  4018 (3.3)  3999 (3.3)  18.5 (0.30)  5.94 (0.41)  0.574 (0.096)  0.978 (0.027) 
3-Layer  4020 (3.0)  4002 (2.9)  18.3 (0.38)  5.73 (0.31)  0.659 (0.123)  0.979 (0.037) 
F.3 Decoder Upscale Factor
We acknowledge that there is a continuum of models between the Spatial Broadcast decoder and the DeConv decoder. One could interpolate from one to the other by incrementally replacing the convolutional layers in the Spatial Broadcast decoder's network with stride-2 deconvolutional layers (while correspondingly decreasing the height and width of the tiling operation). Table 10 shows a few steps of such a progression, where (starting from the bottom) 1, 2, and all 3 of the convolutional layers in the Spatial Broadcast decoder are replaced by a deconvolutional layer with stride 2. We see that this hurts disentangling without affecting the other metrics, further evidence that upscaling deconvolutional layers are bad for representation learning.
Upscales  ELBO  NLL  KL  Latents Used  MIG  Factor VAE 

0 Upscales  329 (4.1)  305 (4.6)  24.4 (0.54)  7.22 (0.34)  0.147 (0.057)  0.208 (0.084) 
1 Upscale  327 (4.4)  302 (4.9)  24.4 (0.55)  7.29 (0.26)  0.149 (0.048)  0.194 (0.026) 
2 Upscales  329 (4.3)  304 (4.8)  24.2 (0.60)  7.14 (0.42)  0.122 (0.045)  0.235 (0.070) 
3 Upscales  330 (2.4)  305 (2.7)  24.2 (0.24)  7.39 (0.08)  0.110 (0.032)  0.182 (0.028) 
Appendix G Latent Space Geometry Analysis for Circle Datasets
We showed visualization of latent space geometry on the circles datasets in Figures 7 (with independent factors of variation) and 8 (with dependent factors of variation). These figures showcased the improvement that the Spatial Broadcast decoder lends. However, we also conducted the same style experiments on many more datasets and on FactorVAE models. In this section we will present these additional results.
We consider three generative factor pairs: (X-Position, Y-Position), (X-Position, Hue), and (Redness, Greenness). Broadly, the following figures show that the Spatial Broadcast decoder nearly always helps disentangling. It helps most dramatically when there is the most positional variation (X-Position, Y-Position) and least significantly when there is no positional variation (Redness, Greenness).
Note, however, that even with no position variation, the Spatial Broadcast decoder does seem to improve latent space geometry in the generalization experiments (Figures 20 and 21). We believe this may be due in part to the fact that the Spatial Broadcast decoder is shallower than the DeConv decoder.
Finally, we explore one completely different dataset with dependent factors: a dataset in which half the images have no object (are entirely black). We do this to simulate conditions like those in a multi-entity VAE such as [Nash et al., 2017] when the dataset has a variable number of entities. These conditions pose a challenge for disentangling, because the VAE objective will wish to allocate a large (low-KL) region of latent space to representing a blank image when there is a large proportion of blank images in the dataset. Nonetheless, we see a stark improvement from using the Spatial Broadcast decoder in this case.
[Figures: for each circles dataset we show the dataset distribution, ground truth factors, dataset samples, and MIG metric values, together with latent space geometry for the worst and best replicas of the DeConv VAE, Spatial Broadcast VAE, DeConv FactorVAE, and Spatial Broadcast FactorVAE.]
2 Spatial Broadcast Decoder
When modeling the distribution of images in a dataset with a variational autoencoder (Kingma and Welling, 2014; Rezende et al., 2014), standard architectures use an encoder consisting of a downsampling convolutional network followed by an MLP and a decoder consisting of an MLP followed by an upsampling deconvolutional network. The convolutional and deconvolutional networks share features across space, improving training efficiency for VAEs much like they do for all models that use image data.
However, while convolution surely lends some useful spatial inductive bias to the representations, a standard VAE learns highly entangled representations as it tries to keep its latent posterior as close as possible to its Gaussian prior (e.g. see Figure 7).
A number of new variations of the VAE objective have been developed to alleviate this problem, though all of them introduce additional hyperparameters (Higgins et al., 2017a; Burgess et al., 2018; Kim and Mnih, 2017; Chen et al., 2018). Furthermore, a recent study found them to be extremely sensitive to these hyperparameters (Locatello et al., 2018).
Meanwhile, upsampling deconvolutional networks (like the one in the standard VAE’s decoder) have been found to pose optimization challenges, such as producing checkerboard artifacts (Odena et al., 2016) and spatial discontinuities (Liu et al., 2018), effects that seem likely to raise problems for representationlearning in the VAE’s latent space.
Intuitively, asking a deconvolutional network to render an object at a particular position is a tall order — the network’s filters have no explicit spatial information, in other words they don’t “know where they are.” Hence the network must learn to propagate spatial asymmetries down from its highest layers and in from the spatial boundaries of the layers. This requires learning a complicated function, so optimization is difficult. To remedy this, in the Spatial Broadcast decoder we remove all upsampling deconvolutions from the network, instead tiling (broadcasting) the latent vector across space, appending fixed coordinate channels, then applying an unstrided fully convolutional network. This operation is depicted and described in Figure 1. With this architecture, rendering an object at a position becomes a very simple function (essentially just a thresholding operation in addition to the local convolutional features), though the network still has capacity to represent some more complex datasets (e.g. see Figure 5). Such simplicity of computation yields ease of optimization, and indeed we find that the Spatial Broadcast decoder greatly improves performance in a variety of VAEbased models.
Note that the Spatial Broadcast decoder does not provide any additional supervision to the generative model. The model must still learn to encode spatial information in its latent space in order to reconstruct accurately. The Spatial Broadcast decoder only allows the model to use the encoded spatial information in its latent space very efficiently.
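A minimal numpy sketch of the broadcast operation itself (the latent size and output resolution are arbitrary; the stride-1 ConvNet that would consume this tensor is omitted):

```python
import numpy as np

def spatial_broadcast(z, h, w):
    """Broadcast step of the Spatial Broadcast decoder: tile the latent
    vector z across an h x w grid and append two fixed coordinate channels.
    The resulting [h, w, k + 2] tensor would then be passed through an
    unstrided (stride-1) ConvNet, omitted in this sketch."""
    k = z.shape[0]
    tiled = np.tile(z.reshape(1, 1, k), (h, w, 1))              # [h, w, k]
    ys, xs = np.meshgrid(np.linspace(-1, 1, h),
                         np.linspace(-1, 1, w), indexing="ij")
    coords = np.stack([xs, ys], axis=-1)                        # [h, w, 2]
    return np.concatenate([tiled, coords], axis=-1)             # [h, w, k + 2]

z = np.random.default_rng(0).normal(size=10)  # a sampled latent vector
out = spatial_broadcast(z, 64, 64)
print(out.shape)  # (64, 64, 12)
```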
In addition to better disentangling, the Spatial Broadcast VAE can on some datasets yield better reconstructions (see Figure 4), all with shallower networks and fewer parameters than a standard deconvolutional architecture. However it is worth noting that if the data does not benefit from having access to an absolute coordinate system, the Spatial Broadcast decoder could hurt. A standard DeConv decoder may in some cases more easily place patterns relative to each other or capture more extended spatial correlations. As we show in Section 4.2, we did not find this to be the case in the datasets we explored, but it is still a possible limitation of our model.
While the Spatial Broadcast decoder can be applied to any generative model that renders images from a vector representation, here we only explore its application to VAE models.
3 Related Work
Some of the ideas behind the Spatial Broadcast decoder have been explored extensively in many other bodies of work.
The idea of appending coordinate channels to convolutional layers has recently been highlighted (and named CoordConv) in the context of improving positional generalization (Liu et al., 2018). However, the CoordConv technique had been used beforehand (Zhao et al., 2015; Liang et al., 2015; Watters et al., 2017; Wojna et al., 2017; Perez et al., 2017; Nash et al., 2017; Ulyanov et al., 2017) and its origin is unclear. The CoordConv VAE (Liu et al., 2018) incorporates CoordConv layers into an upsampling deconvolutional network in a VAE (which, as we show in Appendix D, does not yield representationlearning benefits comparable to those of the Spatial Broadcast decoder). To our knowledge no prior work has combined CoordConv with spatially tiling a generative model’s representation as we do here.
Another line of work that has used coordinate channels extensively is language modeling. In this context, many models use fixed or learned position embeddings, combined in various ways with the input sequences, to help compute translations that depend on the sequence context in a differentiable manner (Gu et al., 2016; Vaswani et al., 2017; Gehring et al., 2017; Devlin et al., 2018).
We can draw parallels between the Spatial Broadcast decoder and generative models that learn "where" to write to an image (Jaderberg et al., 2015; Gregor et al., 2015; Reed et al., 2016); in comparison, these models render local image patches, whereas we spatially tile a learned latent embedding and render an entire image. Work by Dorta et al. (2017) explored using a Laplacian Pyramid arrangement for VAEs, where some parts of the latent distributions have global effects, whereas others modify fine-scale details of the reconstruction. Similar constraints on the extent of the spatial neighbourhood affected by the latent distribution have also been explored by Parmar et al. (2018). Other types of generative models already use convolutional latent distributions (Finn et al., 2015; Levine et al., 2015; Eslami et al., 2018), unlike the flat vectors we consider here. In that case the tiling operation is not necessary, but adding coordinate channels might still help.
4 Results
We present here a select subset of our assessment of the Spatial Broadcast decoder. Interested readers can refer to Appendices A-G for a more thorough set of observations and comparisons.
4.1 Performance on colored sprites



[Figure 2 caption fragment: only the lowest-variance latent coordinates are shown in the traversals; the remainder are non-coding coordinates.]
The Spatial Broadcast decoder was designed with object-feature representations in mind, hence to initially showcase its performance we use a dataset of simple objects: colored 2-dimensional sprites. This dataset is described in the literature (Burgess et al., 2018) as a colored version of the dSprites dataset (Matthey et al., 2017). One advantage of the colored sprites dataset is that it has 8 known factors of variation: X-position, Y-position, Size, Shape, Angle, and three-dimensional Color. Thus we can evaluate disentangling performance with metrics that rely on ground truth factors of variation (Chen et al., 2018; Kim and Mnih, 2017).
To quantitatively evaluate disentangling, we focus primarily on the Mutual Information Gap (MIG) metric (Chen et al., 2018). The MIG metric is defined by first computing the mutual information matrix between the latent distribution means and ground truth factors, then computing the difference between the highest and secondhighest elements for each ground truth factor (i.e. the gap), then finally averaging these values over all factors.
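A sketch of this computation, given a precomputed mutual information matrix (note the published metric also normalizes each gap by the factor's entropy, which we omit here for simplicity):

```python
import numpy as np

def mig(mi_matrix):
    """Mutual Information Gap as described above: for each ground truth
    factor (column), take the gap between the highest and second-highest
    mutual information across latent coordinates (rows), then average the
    gaps over all factors."""
    sorted_mi = np.sort(mi_matrix, axis=0)        # ascending within each column
    gaps = sorted_mi[-1, :] - sorted_mi[-2, :]
    return float(gaps.mean())

# Two latents, two factors: a perfectly disentangled MI matrix has one
# dominant latent per factor; a fully entangled one has no gap at all.
disentangled = np.array([[1.0, 0.0],
                         [0.0, 1.0]])
entangled = np.array([[0.5, 0.5],
                      [0.5, 0.5]])
print(mig(disentangled), mig(entangled))  # 1.0 0.0
```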
Despite its significant shortcomings (see Section 4.4), we found that the MIG metric can overall be a helpful tool for evaluating representational quality, though it should by no means be trusted blindly. We also used the FactorVAE metric (Kim and Mnih, 2017), yet found it to be less consistent than MIG (we report these results in Appendix F).
In Figure 2 we compare a standard DeConv VAE (a VAE with an MLP + deconvolutional network decoder) to a Spatial Broadcast VAE (a VAE with the Spatial Broadcast decoder). We see that the Spatial Broadcast VAE outperforms the DeConv VAE both in terms of the MIG metric and traversal visualizations. The Spatial Broadcast VAE traversals clearly show all 8 factors represented in separate latent factors, seemingly on par with more complicated stateoftheart methods on this dataset (Burgess et al., 2018). The hyperparameters for the models in Figure 2 were chosen over a large sweep to minimize the model’s error, not chosen for any disentangling properties explicitly. See Appendix F for more details about hyperparameter choices and sensitivity.
The Spatial Broadcast decoder is complementary to existing disentangling VAE techniques, hence improves not only a vanilla VAE but SOTA models as well. To demonstrate this, we consider two recently developed models, FactorVAE (Kim and Mnih, 2017) and β-VAE (Higgins et al., 2017a).



Figure 3 shows the Spatial Broadcast decoder improving the disentangling of FactorVAE in terms of both the MIG metric and traversal visualizations. See Appendix G for further results to this effect.


[Figure 4 caption fragment: results over a range of β values; the shaded region shows the hull of one standard deviation, and white dots mark a reference β value. (a) Reconstruction error (Negative Log-Likelihood, NLL) vs KL: low β yields low NLL and high KL (bottom-right of figure), whereas high β yields high NLL and low KL (top-left of figure); see Alemi et al. (2017) for details. The Spatial Broadcast VAE shows a better rate-distortion curve than the DeConv VAE. (b) Reconstruction vs MIG metric: low β values correspond to low NLL and low MIG (bottom-left of figure), and high β values correspond to high NLL and high MIG (towards the top-right of figure). The Spatial Broadcast VAE is better disentangled (higher MIG scores) than the DeConv VAE.]
Figure 4 shows the performance of a β-VAE with and without the Spatial Broadcast decoder for a range of values of β. Not only does the Spatial Broadcast decoder improve disentangling, it also yields a lower rate-distortion curve, hence learns to represent the data more efficiently than the same model with a DeConv decoder.
4.2 Datasets without positional variation


The colored sprites dataset discussed in Section 4.1 seems particularly well-suited to the Spatial Broadcast decoder because X- and Y-position are factors of variation. However, we also evaluate the architecture on datasets that have no positional factors of variation: the Chairs and 3D Object-in-Room datasets (Aubry et al., 2014; Kim and Mnih, 2017). In the latter, the factors of variation are highly non-local, affecting multiple regions spanning nearly the entire image. We find that on both datasets a Spatial Broadcast VAE learns representations that look very well-disentangled, seemingly as well as SOTA methods on these datasets, and without any modification of the standard VAE objective (Kim and Mnih, 2017; Higgins et al., 2017a). See Figure 5 for results (and supplementary Figure 12 for additional traversals).
While this shows that the Spatial Broadcast decoder does not hurt when used on datasets without positional variation, it may seem unlikely that it would help in this context. However, we show in Appendix G that it actually can help to some extent on such datasets. We attribute this to its using a shallower network and no upsampling deconvolutions (which have been observed to cause optimization difficulties in a variety of settings (Liu et al., 2018; Odena et al., 2016)).
4.3 Datasets with small objects
In exploring datasets with objects varying in position, we often find that a (standard) DeConv VAE learns a representation that is discontinuous with respect to object position. This effect is amplified as the size of the object decreases. This makes sense: the pressure for a VAE to represent position continuously comes from the fact that an object and a position-perturbed version of itself overlap in pixel space (hence it is economical for the VAE to map noise in its latent samples to local translations of an object). However, as an object's size decreases, this pixel overlap decreases, hence the pressure for the VAE to represent position continuously weakens.
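The argument above is easy to check numerically: the fraction of a disc's pixels that survive a one-pixel shift shrinks as the disc gets smaller (a toy calculation; the radii and image size are arbitrary):

```python
import numpy as np

def disc_mask(size, center, radius):
    """Binary mask of a disc in a size x size image."""
    ys, xs = np.mgrid[0:size, 0:size]
    return (xs - center[0]) ** 2 + (ys - center[1]) ** 2 <= radius ** 2

def overlap_fraction(radius, shift=1, size=64):
    """Fraction of a disc's pixels shared with a copy of itself shifted
    horizontally by `shift` pixels."""
    a = disc_mask(size, (size // 2, size // 2), radius)
    b = disc_mask(size, (size // 2 + shift, size // 2), radius)
    return (a & b).sum() / a.sum()

# A one-pixel shift preserves proportionally less overlap for a small disc,
# so the continuity pressure on the latent space is weaker for small objects.
small, large = overlap_fraction(radius=3), overlap_fraction(radius=12)
print(small < large)  # True
```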
4.4 Latent space geometry visualization



Evaluating the quality of a representation can be challenging and time-consuming. While a number of metrics have been proposed to quantify disentangling, many of them have serious shortcomings, and there is as yet no consensus in the literature on which to use (Locatello et al., 2018). We believe it is impossible to quantify how good a representation is with a single scalar, because there is a fundamental trade-off between how much information a representation contains and how well-structured the representation is. This has been noted by others in the disentangling literature (Ridgeway and Mozer, 2018; Eastwood and Williams, 2018). This disentangling-distortion trade-off is a recapitulation of the rate-distortion trade-off (Alemi et al., 2017) and can be seen firsthand in Figure 4. We would like representations that both reconstruct well and disentangle well, but exactly how to balance these two objectives is a matter of subjective preference (and surely depends on the dataset). Any scalar disentangling metric will implicitly favor some arbitrary point on the disentangling-reconstruction trade-off.
In addition to this unavoidable limitation of disentangling metrics, we found that the MIG metric (while perhaps more accurate than other existing metrics) does not capture the intuitive notion of disentangling because:

It depends on a choice of basis for the ground truth factors, and heavily penalizes rotation of the representation with respect to this basis. Yet it is often unclear what the correct basis for the ground truth factors is (e.g. RGB vs HSV vs HSL). For example, see the bottom row of Figure 7.

It is invariant to a folding of the representation space, as long as the folds align with the axes of variation. See the middle row of Figure 7 for an example of a double-fold in the latent space which isn't penalized by the MIG metric.
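To illustrate the first limitation concretely, the following toy computation (our own sketch, using a simple histogram-based mutual information estimate rather than the exact estimator of Chen et al. [2018]) scores an axis-aligned encoding of two uniform factors against a lossless 45-degree-rotated encoding of the same factors:

```python
import numpy as np

def discrete_mi(a, b, bins=20):
    # Mutual information between two scalar variables via a 2-D histogram.
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    joint /= joint.sum()
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])).sum())

def toy_mig(factors, latents, bins=20):
    # Mean (over factors) of the normalized gap between the two latent
    # dimensions most informative about each factor.
    gaps = []
    for k in range(factors.shape[1]):
        v = factors[:, k]
        mis = sorted(discrete_mi(latents[:, j], v, bins)
                     for j in range(latents.shape[1]))
        entropy = discrete_mi(v, v, bins)  # discretized entropy of the factor
        gaps.append((mis[-1] - mis[-2]) / entropy)
    return float(np.mean(gaps))

rng = np.random.default_rng(0)
factors = rng.uniform(size=(10000, 2))  # two ground-truth factors
aligned = factors.copy()                # perfectly axis-aligned latents
theta = np.pi / 4                       # 45-degree rotation of latent space
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta), np.cos(theta)]])
rotated = factors @ rot.T               # same information, rotated basis

print(toy_mig(factors, aligned))  # high score
print(toy_mig(factors, rotated))  # near zero, despite a lossless encoding
```

The rotated representation contains exactly the same information, yet its score collapses, which is the behavior we observe with the real MIG metric.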
Due to the subjective nature of disentangling and the difficulty of defining appropriate metrics, we put heavy emphasis on latent space visualization as a means of representational analysis. Latent space traversals have been used extensively in the literature and can be quite revealing (Higgins et al., 2017a,b). However, in our experience, traversals suffer from two shortcomings:

Some latent space entanglement can be difficult for the eye to perceive in traversals. For example, a slight change in brightness in a latent traversal that represents changing position can easily go unnoticed.

Traversals only represent the latent space geometry around one point in space, and cross-referencing corresponding traversals between multiple points is quite time-consuming.
Consequently, we caution the reader against relying too heavily on traversals when evaluating latent space geometry. We propose an additional method for analyzing latent spaces, which we found very useful in our research. This is possible when there are known generative factors in the dataset and aims to directly view the embedding of generative factor space in the model's latent space: We plot in latent space the locations corresponding to a grid of points in generative factor space. While this can only visualize the latent embedding of a 2-dimensional subspace of generative factor space, it can be very revealing of the latent space geometry.
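A minimal sketch of this analysis, assuming access to a 2-dimensional grid over generative-factor space and a trained encoder. Here the encoder-plus-renderer is replaced by a toy Euclidean transform (an idealized stand-in for the kind of mapping a well-disentangled model learns):

```python
import numpy as np

def latent_grid(encode, n=16):
    # Embed a uniform grid over a 2-D generative-factor space (e.g. X- and
    # Y-position in [0, 1]) into latent space. `encode` maps a batch of
    # factor coordinates to latent means; in practice it would render an
    # image from each factor pair and pass it through the trained encoder.
    xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    factors = np.stack([xs.ravel(), ys.ravel()], axis=1)  # (n*n, 2)
    return factors, encode(factors).reshape(n, n, -1)     # (n, n, latent_dim)

# Toy stand-in encoder: a rotation plus an offset, i.e. a Euclidean
# transform of factor space.
theta = np.pi / 6
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta), np.cos(theta)]])
encode = lambda f: f @ rot.T + 0.5

factors, grid = latent_grid(encode)
# Plotting the rows and columns of `grid` (e.g. with matplotlib) then shows
# at a glance whether the embedding is a clean grid or a folded/warped one.
print(grid.shape)  # (16, 16, 2)
```

For this idealized encoder the plotted grid is an undistorted rotation of the factor grid; folds and warps of the kind in Figure 7 are immediately visible as crossings or uneven spacing of the grid lines.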
We showcase this analysis method in Figure 7 on a dataset of circles varying in X- and Y-position. (See Appendix G for similar analyses on many more datasets.) We compare the representations learned by a DeConv VAE and a Spatial Broadcast VAE. This reveals a stark contrast: The Spatial Broadcast latent geometry is a near-perfect Euclidean transformation, while the DeConv decoder model's representations are very entangled.
The MIG metric does not capture this difference well: It gives a score near zero to a Spatial Broadcast model with a latent space rotated with respect to the generative factors, and a greater score to a highly entangled DeConv model. In practice we care primarily about compositionality of (and hence disentangling of) subspaces of generative factor space, not about axis-alignment within these subspaces. For example, rotation within the X-Y position subspace is acceptable, whereas rotation in position-color space is not. This naturally poses a great challenge for both designing disentangling metrics and formally defining disentangling (cf. Higgins et al., 2018). We believe that additional structure over the factors of variation can be inferred from temporal correlations, the structure of a reinforcement learning agent's action space, and other effects not necessarily captured by a static-image dataset. Nonetheless, inductive biases like the Spatial Broadcast decoder can be explored in the context of static images, as they will likely also help representation learning in more complete contexts.
4.5 Datasets with dependent factors
[Figure 8: dataset distribution, ground-truth factors, dataset samples, MIG metric values, and latent-space visualizations for the worst and best replicas.]
Our primary motivation for unsupervised representation learning is the prospect of improved generalization and transfer, as discussed in Section 1. Consequently, we would like a model to learn compositional representations that can generalize to new feature combinations. We explore this in a controlled setting by holding out a region of generative factor space from the training dataset, then evaluating the representation in this held-out region. Figure 8 shows results to this effect, comparing the Spatial Broadcast VAE to a DeConv VAE on a dataset of circles with a held-out region in the middle of X-Y position space (so the model sees no circles in the middle of the image, indicated by the black dots in the second column). The Spatial Broadcast VAE generalizes almost perfectly in this case, appearing unaffected by the fact that the data generative factors are no longer independent and extrapolating nearly linearly throughout the held-out region. In contrast, the DeConv VAE's latent space is highly entangled, even more so than in the case with independent factors of Figure 7. See Appendix G for more analyses of generalization on other datasets.
From the perspective of density modeling, disentangling of the Spatial Broadcast VAE may seem undesirable in this case because it allocates high probability in latent space to a region of low (here, zero) probability in the dataset. However, from the perspective of representation learning and compositional features, such generalization is a highly desirable property.
5 Conclusion
Here we present and analyze the Spatial Broadcast decoder in the context of Variational Autoencoders. We demonstrate that it improves learned latent representations, most dramatically for datasets with objects varying in position. It also improves generalization in latent space and can be incorporated into SOTA models to boost their performance in terms of both disentangling and reconstruction accuracy. We rigorously analyze the contribution of the Spatial Broadcast decoder on a wide variety of datasets and models using a range of visualizations and metrics. We hope that this analysis has provided the reader with an intuitive understanding of how and why it improves learned representations.
We believe that learning compositional representations is an important ingredient for flexibility and generalization in many contexts, from supervised learning to reinforcement learning, and the Spatial Broadcast decoder is one step towards robust compositional visual representation learning.
Acknowledgments
We thank Irina Higgins, Danilo Rezende, Matt Botvinick, and Yotam Doron for helpful discussions and insights.
References
 Alemi et al. [2017] Alexander A. Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, and Kevin Murphy. An information-theoretic analysis of deep latent-variable models. CoRR, abs/1711.00464, 2017. URL http://arxiv.org/abs/1711.00464.
 Aubry et al. [2014] M. Aubry, D. Maturana, A. Efros, B. Russell, and J. Sivic. Seeing 3d chairs: exemplar part-based 2d-3d alignment using a large dataset of cad models. In CVPR, 2014.
 Bengio et al. [2013] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013.
 Burgess et al. [2018] Christopher P. Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in β-VAE. arXiv, 2018.
 Chen et al. [2018] Tian Qi Chen, Xuechen Li, Roger B. Grosse, and David K. Duvenaud. Isolating sources of disentanglement in variational autoencoders. CoRR, abs/1802.04942, 2018. URL http://arxiv.org/abs/1802.04942.
 Devlin et al. [2018] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805.
 Dorta et al. [2017] Garoe Dorta, Sara Vicente, Lourdes Agapito, Neill D.F. Campbell, Simon Prince, and Ivor Simpson. Laplacian pyramid of conditional variational autoencoders. In Proceedings of the 14th European Conference on Visual Media Production (CVMP 2017), CVMP 2017, pages 7:1–7:9, New York, NY, USA, 2017. ACM. ISBN 9781450353298. doi: 10.1145/3150165.3150172. URL http://doi.acm.org/10.1145/3150165.3150172.
 Eastwood and Williams [2018] Cian Eastwood and Christopher K. I. Williams. A framework for the quantitative evaluation of disentangled representations. ICLR, 2018.

 Eslami et al. [2018] S. M. Ali Eslami, Danilo Jimenez Rezende, Frederic Besse, Fabio Viola, Ari S. Morcos, Marta Garnelo, Avraham Ruderman, Andrei A. Rusu, Ivo Danihelka, Karol Gregor, David P. Reichert, Lars Buesing, Theophane Weber, Oriol Vinyals, Dan Rosenbaum, Neil Rabinowitz, Helen King, Chloe Hillier, Matt Botvinick, Daan Wierstra, Koray Kavukcuoglu, and Demis Hassabis. Neural scene representation and rendering. Science, 360(6394):1204–1210, 2018. ISSN 0036-8075. doi: 10.1126/science.aar6170. URL http://science.sciencemag.org/content/360/6394/1204.
 Finn et al. [2015] Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Learning visual feature spaces for robotic manipulation with deep spatial autoencoders. CoRR, abs/1509.06113, 2015. URL http://arxiv.org/abs/1509.06113.
 Gehring et al. [2017] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional sequence to sequence learning. CoRR, abs/1705.03122, 2017. URL http://arxiv.org/abs/1705.03122.
 Gregor et al. [2015] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. CoRR, abs/1502.04623, 2015. URL http://arxiv.org/abs/1502.04623.
 Gu et al. [2016] Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. Incorporating copying mechanism in sequence-to-sequence learning. CoRR, abs/1603.06393, 2016. URL http://arxiv.org/abs/1603.06393.
 Higgins et al. [2017a] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. β-VAE: Learning basic visual concepts with a constrained variational framework. ICLR, 2017a.
 Higgins et al. [2017b] Irina Higgins, Nicolas Sonnerat, Loic Matthey, Arka Pal, Christopher P Burgess, Matthew Botvinick, Demis Hassabis, and Alexander Lerchner. SCAN: Learning abstract hierarchical compositional visual concepts. arXiv preprint arXiv:1707.03389, 2017b.
 Higgins et al. [2018] Irina Higgins, David Amos, David Pfau, Sébastien Racanière, Loïc Matthey, Danilo J. Rezende, and Alexander Lerchner. Towards a definition of disentangled representations. CoRR, abs/1812.02230, 2018. URL http://arxiv.org/abs/1812.02230.
 Ioffe and Szegedy [2015] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015. URL http://arxiv.org/abs/1502.03167.
 Jaderberg et al. [2015] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. CoRR, abs/1506.02025, 2015. URL http://arxiv.org/abs/1506.02025.
 Kim and Mnih [2017] Hyunjik Kim and Andriy Mnih. Disentangling by factorising. arXiv, 2017.
 Kingma and Ba [2015] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.
 Kingma and Welling [2014] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. ICLR, 2014.
 Lake et al. [2016] Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. Behavioral and Brain Sciences, pages 1–101, 2016.
 Levine et al. [2015] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. CoRR, abs/1504.00702, 2015. URL http://arxiv.org/abs/1504.00702.
 Liang et al. [2015] Xiaodan Liang, Yunchao Wei, Xiaohui Shen, Jianchao Yang, Liang Lin, and Shuicheng Yan. Proposal-free network for instance-level object segmentation. CoRR, abs/1509.02636, 2015. URL http://arxiv.org/abs/1509.02636.
 Liu et al. [2018] Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, and Jason Yosinski. An intriguing failing of convolutional neural networks and the coordconv solution. CoRR, abs/1807.03247, 2018. URL http://arxiv.org/abs/1807.03247.
 Locatello et al. [2018] Francesco Locatello, Stefan Bauer, Mario Lucic, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. arXiv preprint arXiv:1811.12359, 2018.
 Marcus [2018] Gary Marcus. Deep learning: A critical appraisal. CoRR, abs/1801.00631, 2018. URL http://arxiv.org/abs/1801.00631.
 Matthey et al. [2017] Loic Matthey, Irina Higgins, Demis Hassabis, and Alexander Lerchner. dsprites: Disentanglement testing sprites dataset, 2017. URL https://github.com/deepmind/dsprites-dataset/.
 Nash et al. [2017] Charlie Nash, Ali Eslami, Chris Burgess, Irina Higgins, Daniel Zoran, Theophane Weber, and Peter Battaglia. The multi-entity variational autoencoder. NIPS Workshops, 2017.
 Odena et al. [2016] Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts. Distill, 2016. doi: 10.23915/distill.00003. URL http://distill.pub/2016/deconv-checkerboard.
 Parmar et al. [2018] Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, and Alexander Ku. Image transformer. CoRR, abs/1802.05751, 2018. URL http://arxiv.org/abs/1802.05751.
 Perez et al. [2017] Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. Film: Visual reasoning with a general conditioning layer. CoRR, abs/1709.07871, 2017. URL http://arxiv.org/abs/1709.07871.
 Reed et al. [2016] Scott E. Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learning what and where to draw. CoRR, abs/1610.02454, 2016. URL http://arxiv.org/abs/1610.02454.

 Rezende et al. [2014] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. ICML, 32(2):1278–1286, 2014.
 Ridgeway and Mozer [2018] Karl Ridgeway and Michael C Mozer. Learning deep disentangled embeddings with the f-statistic loss. NIPS, 2018.
 Ulyanov et al. [2017] Dmitry Ulyanov, Andrea Vedaldi, and Victor S. Lempitsky. Deep image prior. CoRR, abs/1711.10925, 2017. URL http://arxiv.org/abs/1711.10925.
 Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017. URL http://arxiv.org/abs/1706.03762.
 Watters et al. [2017] Nicholas Watters, Daniel Zoran, Theophane Weber, Peter Battaglia, Razvan Pascanu, and Andrea Tacchetti. Visual interaction networks: Learning a physics simulator from video. In Advances in Neural Information Processing Systems 30, pages 4539–4547. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7040-visual-interaction-networks-learning-a-physics-simulator-from-video.pdf.
 Wojna et al. [2017] Zbigniew Wojna, Alexander N. Gorban, Dar-Shyang Lee, Kevin Murphy, Qian Yu, Yeqing Li, and Julian Ibarz. Attention-based extraction of structured information from street view imagery. CoRR, abs/1704.03549, 2017. URL http://arxiv.org/abs/1704.03549.
 Zhao et al. [2015] Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann LeCun. Stacked what-where autoencoders. arXiv, abs/1506.02351, 2015. URL https://arxiv.org/abs/1506.02351.
Appendix A Experiment Details
For all VAE models we used a Bernoulli decoder distribution, parametrized by its logits. It is with respect to this distribution that the reconstruction error (negative log likelihood) was computed. This could accommodate our datasets, since they were normalized to have pixel values in [0, 1]. We also explored using a Gaussian distribution with fixed variance (for which the NLL is equivalent to scaled MSE), and found that this produces qualitatively similar results and in fact improves stability. Hence, while a Bernoulli distribution usually works, we suggest that a reader wishing to experiment with these models start with a Gaussian decoder distribution whose mean is parameterized by the decoder network output and whose variance is held constant.
In all networks we used ReLU activations, weights initialized by a truncated normal (see [Ioffe and Szegedy, 2015]), and biases initialized to zero. We used no other neural network tricks (no BatchNorm or dropout), and all models were trained with the Adam optimizer [Kingma and Ba, 2015]. See below for learning rate details.
A.1 VAE Hyperparameters
For all VAE models except the β-VAE (shown only in Figure 4), we use a standard VAE loss, namely with a KL term coefficient of 1. For FactorVAE we also use the same γ as in Kim and Mnih [2017].
For the VAE, β-VAE, CoordConv VAE, and ablation study we used the network parameters in Table 1. We note that, while the Spatial Broadcast decoder uses fewer parameters than the DeConv decoder, it does require somewhat more memory to store the weights. However, for the 3D Object-in-Room dataset we included three additional deconv layers in the Spatial Broadcast decoder (without these additional layers the decoder was not powerful enough to give good reconstructions on that dataset).
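For reference, the core broadcast operation feeding the decoder's convolutional stack is simple. The sketch below shows it in numpy; the latent size, image shape, and [-1, 1] coordinate range are illustrative:

```python
import numpy as np

def spatial_broadcast(z, height, width):
    # Tile a latent vector z of shape (k,) across an image-sized grid and
    # append fixed x/y coordinate channels (a linear meshgrid), producing
    # the (height, width, k + 2) input to the decoder's convolutions.
    broadcast = np.tile(z, (height, width, 1))
    ys, xs = np.meshgrid(np.linspace(-1, 1, height),
                         np.linspace(-1, 1, width), indexing="ij")
    return np.concatenate([broadcast, xs[..., None], ys[..., None]], axis=-1)

inp = spatial_broadcast(np.zeros(10), 64, 64)
print(inp.shape)  # (64, 64, 12)
```

Since the latent vector is identical at every location, all positional information available to the convolutions comes from the two appended coordinate channels.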
All of these models were trained with the same learning rate, using batch size 16. All convolutional and deconvolutional layers have "same" padding, i.e. the input is zero-padded so that the output spatial shape is the input shape divided by the stride in the case of convolution, and multiplied by the stride in the case of deconvolution.
A.2 FactorVAE
For the FactorVAE model, we used the hyperparameters described in the FactorVAE paper [Kim and Mnih, 2017]. Those network parameters are reiterated in Table 2. Note that the Spatial Broadcast parameters are the same as for the other models in Table 1. For the optimization hyperparameters we used the adversarial coefficient γ from that paper, separate learning rates for the VAE and discriminator updates, and batch size 32. These parameters generally gave stable results.
However, when training the FactorVAE model on colored sprites we encountered instability during training. We subsequently ran a number of hyperparameter sweeps attempting to improve stability, but to no avail. Ultimately, we used the hyperparameters in Table 2, though even with limited training steps (see Appendix Section A.4) a fraction of seeds diverged before training completed for both the Spatial Broadcast and DeConv decoders.
A.3 Datasets
All datasets were rendered as images and normalized to have pixel values in [0, 1].
Colored Sprites:
For this dataset, we use the binary dsprites dataset open-sourced in [Matthey et al., 2017], multiplied by colors sampled uniformly from a region of HSV space. Sans color, there are 737,280 images in this dataset. However, we sample the colors online from a continuous distribution, effectively making the dataset size infinite.
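Sampling colors online in HSV space and applying them to a binary sprite can be sketched as follows; the H/S/V bounds here are placeholders, not the exact region used:

```python
import colorsys
import numpy as np

def sample_color(rng, h_range=(0.0, 1.0), s_range=(0.5, 1.0), v_range=(0.5, 1.0)):
    # Draw one color uniformly from a box in HSV space, then convert to RGB.
    # These bounds are illustrative, not the region used for the dataset.
    h, s, v = (rng.uniform(*r) for r in (h_range, s_range, v_range))
    return np.array(colorsys.hsv_to_rgb(h, s, v))

rng = np.random.default_rng(0)
binary_sprite = np.ones((64, 64))  # stand-in for one binary dsprites image
colored = sample_color(rng)[None, None, :] * binary_sprite[..., None]
```

Because the color is drawn fresh for every example, each of the 737,280 binary sprites corresponds to a continuum of colored images.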
Chairs:
This dataset is open-sourced in [Aubry et al., 2014]. Unlike all the other datasets we use, it has only a single channel in its images. It contains 86,366 images.
3D ObjectinRoom:
This dataset was used extensively in the FactorVAE paper [Kim and Mnih, 2017]. It consists of an object in a room and has 6 factors of variation: camera angle, object size, object shape, object color, wall color, and floor color. The colors are sampled uniformly from a continuous range of hues. This dataset contains 480,000 images, procedurally generated as all combinations of 10 floor hues, 10 wall hues, 10 object hues, 8 object sizes, 4 object shapes, and 15 camera angles.
Circles Datasets:
To more thoroughly explore datasets with a variety of distributions, factors of variation, and held-out test sets, we wrote our own procedural image generator for circular objects in PyGame (rendered with an anti-aliasing factor of 5). We used this to generate the data for the results in Section 4.4. In these datasets we control subsets of the following factors of variation: X-position, Y-position, Size, and Color. We generated five datasets in this way, which we call XY, XH, RG, XYH Small, and XYH Tiny; they can be seen in Figures 13, 16, 19, 6, and 10, respectively.
Table 3 shows the values of these factors for each dataset. Note that for some datasets we define the color distribution in RGB space, and for others we define it in HSV space.
To create the datasets with dependent factors, we hold out one quarter of the dataset (the intersection of half of the ranges of each of the two factors), either centered within the data distribution or in the corner.
For each dataset we generate 500,000 randomly sampled training images.
Dataset  X  Y  Size  (H, S, V)  (R, G, B)
XY  [0.2, 0.8]  [0.2, 0.8]  0.2  N/A  (1.0, 1.0, 1.0)
XH  [0.2, 0.8]  0.5  0.3  ([0.2, 0.8], 1.0, 1.0)  N/A
RG  0.5  0.5  0.5  N/A  ([0.4, 0.8], [0.4, 0.8], 1.0)
XYH Small  [0.2, 0.8]  [0.2, 0.8]  0.1  ([0.2, 0.8], 1.0, 1.0)  N/A
XYH Tiny  [0.2, 0.8]  [0.2, 0.8]  0.075  ([0.2, 0.8], 1.0, 1.0)  N/A
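The datasets with dependent factors described above can be sketched as rejection sampling over generative-factor space. The helper below is illustrative (not our actual PyGame generator); it holds out the central quarter of XY space, i.e. the middle half of each factor's [0.2, 0.8] range:

```python
import numpy as np

def sample_factors(rng, n, lo=0.2, hi=0.8, hold_out_center=True):
    # Rejection-sample (x, y) positions in [lo, hi]^2, excluding the
    # intersection of the middle half of each factor's range (a quarter
    # of the factor space), so no training circle lands in the middle.
    mid_lo = lo + 0.25 * (hi - lo)
    mid_hi = lo + 0.75 * (hi - lo)
    samples = []
    while len(samples) < n:
        x, y = rng.uniform(lo, hi, size=2)
        in_held_out = mid_lo <= x <= mid_hi and mid_lo <= y <= mid_hi
        if not (hold_out_center and in_held_out):
            samples.append((x, y))
    return np.array(samples)

factors = sample_factors(np.random.default_rng(0), 1000)
```

Each accepted (x, y) pair would then be rendered as a circle at that position; the corner-held-out variant works the same way with the excluded box moved to a corner of the range.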
A.4 Training Steps
The number of training steps for each model on each dataset can be found in Table 4. In general, for each dataset we used enough training steps so that all models converged. Note that while the training iterations is different for FactorVAE than for the other models on colored sprites (due to instability of FactorVAE), this has no bearing on our results because we do not compare across models. We only compare across decoder architectures, and we always used the same training steps for both DeConv and Spatial Broadcast decoders within each model.
Dataset  VAE  FactorVAE
Colored Sprites
Chairs  N/A
3D Objects  N/A
Circles
Appendix B Ablation Study
One aspect of the Spatial Broadcast decoder is the concatenation of constant coordinate channels to its tiled input latent vector. While our justification of its performance emphasizes the simplicity of computation it affords, it may seem possible that the coordinate channels are only used to provide positional information, and that the simplicity of this positional information (a linear meshgrid) is irrelevant. Here we perform an ablation study to demonstrate that this is not the case: the organization of the coordinate channels is important. For this experiment, we randomly permute the coordinate channels through space. Specifically, we take the coordinate channels and randomly permute their entries, keeping each (x, y) pair together to ensure that after the shuffling each location still has a unique pair of coordinates in the coordinate channels. Importantly, we only shuffle the coordinate channels once, then keep them constant throughout training.
Figure 9 shows reconstructions and traversals for two replicas (with different shuffled coordinate channels). Both disentangling and reconstruction accuracy are significantly reduced.
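The shuffling procedure used in this ablation can be sketched as follows (a hypothetical helper mirroring the description above; the coordinate range is illustrative):

```python
import numpy as np

def shuffled_coordinate_channels(height, width, rng):
    # Build the usual linear coordinate channels, then apply ONE fixed
    # random spatial permutation to the (x, y) PAIRS. Every location still
    # carries a unique coordinate pair, but the linear layout is destroyed.
    ys, xs = np.meshgrid(np.linspace(-1, 1, height),
                         np.linspace(-1, 1, width), indexing="ij")
    coords = np.stack([xs, ys], axis=-1).reshape(-1, 2)
    shuffled = coords[rng.permutation(len(coords))]
    return shuffled.reshape(height, width, 2)

# The permutation is sampled once (fixed rng) and held constant throughout
# training, as in the ablation.
channels = shuffled_coordinate_channels(8, 8, np.random.default_rng(0))
```

Since the set of coordinate pairs is unchanged, any degradation must come from the loss of the linear spatial layout rather than from missing positional information.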
Appendix C Disentangling Tiny Objects
In the dataset of small colored circles shown in Figure 6 the circle diameter is 0.1 times the frame width. We also generated a dataset with circles 0.075 times the frame width, and Figure 10 shows similar results on this dataset (albeit more difficult for the eye to make out). We were surprised to see disentangling of such tiny objects and have not explored the lower object size limit for disentangling with the Spatial Broadcast decoder.
Appendix D CoordConv VAE
CoordConv VAE [Liu et al., 2018] has been proposed as a decoder architecture to improve the continuity of VAE representations. CoordConv VAE appends coordinate channels to every feature layer of the standard deconvolutional decoder, yet does not spatially tile the latent vector, hence retains upsampling deconvolutions.
Figure 11 shows analysis of this model on the colored sprites dataset. While the latent space does appear to be continuous with respect to object position, it is quite entangled (far more so than a Spatial Broadcast VAE). This is not very surprising, since CoordConv VAE uses upscale deconvolutions to go all the way from a small spatial shape to the full image shape, while in Table 10 we see that introducing upscaling hurts disentangling in a Spatial Broadcast VAE.


Appendix E Extra traversals for datasets without positional variation
As we acknowledge in the main text, a single latent traversal plot only shows local disentangling at one point in latent space. Hence to support our claim in Section 4.2 that the Spatial Broadcast VAE disentangled the Chairs and 3D objects datasets, we show in Figure 12 traversals about a second seed in each dataset for the same models as in Figure 5.
Appendix F Architecture Hyperparameters
In order to remain objective when selecting model hyperparameters for the Spatial Broadcast and DeConv decoders, we chose hyperparameters based on minimizing the ELBO loss, not considering any information about disentangling. After finding reasonable encoder hyperparameters, we performed large-scale (25 replicas each) sweeps over a few decoder hyperparameters for both the DeConv and Spatial Broadcast decoders on the colored sprites dataset. These sweeps are revealing of hyperparameter sensitivity, so we report the following quantities for them:

ELBO. This is the evidence lower bound (total VAE loss). It is the sum of the negative log likelihood (NLL) and the KL divergence.

NLL. This is the negative log likelihood of an image with respect to the model’s reconstructed distribution of that image. It is a measure of reconstruction accuracy.

KL. This is the KL divergence of the VAE’s latent distribution with its Gaussian prior. It measures how much information is being encoded in the latent space.

Latents Used. This is the mean number of latent coordinates with standard deviation below a threshold. Typically, a VAE will have some unused latent coordinates (with standard deviation near 1, i.e. reverting to the prior) and some used latent coordinates. The threshold is arbitrary, but this quantity does provide a rough idea of how many factors of variation the model may be representing.

MIG. The MIG metric.

Factor VAE. This is the metric described in the FactorVAE paper [Kim and Mnih, 2017]. We found this metric to be less consistent than the MIG metric (and equally flawed with respect to rotated coordinates), but it qualitatively agrees with the MIG metric most of the time.
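As one concrete example from the list above, the "Latents Used" quantity can be computed as follows (the 0.5 threshold here is illustrative; as noted, any fixed cutoff is arbitrary):

```python
import numpy as np

def latents_used(posterior_stds, threshold=0.5):
    # posterior_stds: (num_examples, num_latents) posterior standard
    # deviations. A latent is counted as "used" if its mean std falls
    # below the threshold; unused latents revert to the prior, std near 1.
    return int((posterior_stds.mean(axis=0) < threshold).sum())

stds = np.concatenate([
    np.full((100, 4), 0.1),   # four informative latents
    np.full((100, 6), 0.99),  # six latents collapsed to the prior
], axis=1)
print(latents_used(stds))  # 4
```

This gives a quick, if coarse, estimate of how many factors of variation the model is encoding.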
F.1 ConvNet Depth
Table 5 shows results of sweeping over ConvNet depth in the Spatial Broadcast decoder. This reveals a consistent trend: As the ConvNet deepens, the model moves towards lower rate/higher distortion. Consequently, latent space information and reconstruction accuracy drop. Traversals with deeper nets show the model dropping factors of variation (the dataset has 8 factors of variation).
Table 6 shows a noisier but similar trend when increasing DeConvNet depth in the DeConv decoder.
ConvNet  ELBO  NLL  KL  Latents Used  MIG  Factor VAE 

2Layer  339 (2.3)  312 (2.7)  27.5 (0.54)  8.33 (0.37)  0.076 (0.038)  0.187 (0.027) 
3Layer  329 (4.1)  305 (4.6)  24.4 (0.54)  7.22 (0.34)  0.147 (0.057)  0.208 (0.084) 
4Layer  341 (6.8)  318 (7.7)  22.6 (0.97)  5.93 (0.51)  0.157 (0.045)  0.226 (0.046) 
5Layer  340 (8.8)  317 (9.7)  22.7 (0.99)  5.70 (0.37)  0.173 (0.059)  0.218 (0.030) 
ConvNet  ELBO  NLL  KL  Latents Used  MIG  Factor VAE 

3Layer  372 (8.6)  346 (8.9)  26.8 (0.40)  9.20 (0.04)  0.031 (0.018)  0.144 (0.031) 
4Layer  349 (9.4)  322 (10.0)  27.1 (0.88)  8.90 (0.24)  0.025 (0.015)  0.139 (0.009) 
5Layer  340 (9.8)  314 (10.4)  26.0 (1.00)  7.95 (0.64)  0.056 (0.32)  0.184 (0.053) 
6Layer  349 (15.0)  326 (16.1)  23.3 (1.42)  6.36 (0.81)  0.056 (0.019)  0.199 (0.029) 
F.2 MLP Depth
The Spatial Broadcast decoder as presented in this work is fully convolutional; it contains no MLP. However, motivated by the need for more depth on the 3D Object-in-Room dataset, we did explore applying an MLP to the input vector prior to the broadcast operation. We found that including this MLP had a qualitatively similar effect to increasing the number of convolutional layers on the colored sprites dataset, decreasing latent capacity and giving poorer reconstructions. These results are shown in Table 7.
However, on the 3D Object-in-Room dataset adding the MLP did improve the model when using ConvNet depth 3 (the same as for colored sprites). Results of a sweep over the depth of a pre-broadcast MLP are shown in Table 9. As mentioned in Section 4.2, we were able to achieve the same effect by instead increasing the ConvNet depth to 6, but for those interested in computational efficiency a pre-broadcast MLP may be a better choice for datasets of this sort.
In the DeConv decoder, increasing the MLP layers again has a broadly similar effect as increasing the ConvNet layers, as shown in Table 8.
MLP  ELBO  NLL  KL  Latents Used  MIG  Factor VAE 

0Layer  329 (4)  305 (5)  24.5 (0.54)  7.21 (0.34)  0.147 (0.057)  0.208 (0.084) 
1Layer  330 (6)  307 (6)  23.9 (0.72)  6.68 (0.46)  0.164 (0.043)  0.200 (0.034) 
2Layer  349 (15)  327 (17)  21.5 (2.13)  6.03 (0.66)  0.210 (0.048)  0.232 (0.045) 
3Layer  392 (23)  399 (114)  15.7 (2.93)  4.17 (1.23)  0.160 (0.034)  0.275 (0.064) 
MLP  ELBO  NLL  KL  Latents Used  MIG  Factor VAE 

1Layer  347 (13)  321 (14)  25.8 (1.3)  7.97 (0.72)  0.052 (0.020)  0.174 (0.043) 
2Layer  352 (17)  328 (18)  23.7 (1.7)  6.68 (0.78)  0.051 (0.024)  0.196 (0.024) 
3Layer  365 (19)  345 (21)  19.7 (2.4)  5.24 (0.73)  0.144 (0.062)  0.243 (0.043) 
MLP  ELBO  NLL  KL  Latents Used  MIG  Factor VAE 

0Layer  4039 (3.4)  4010 (3.3)  29.3 (0.46)  8.73 (0.41)  0.541 (0.091)  0.931 (0.043) 
1Layer  4022 (3.7)  4003 (3.7)  19.3 (0.35)  6.30 (0.49)  0.538 (0.105)  0.946 (0.043) 
2Layer  4018 (3.3)  3999 (3.3)  18.5 (0.30)  5.94 (0.41)  0.574 (0.096)  0.978 (0.027) 
3Layer  4020 (3.0)  4002 (2.9)  18.3 (0.38)  5.73 (0.31)  0.659 (0.123)  0.979 (0.037) 
F.3 Decoder Upscale Factor
We acknowledge that there is a continuum of models between the Spatial Broadcast decoder and the Deconv decoder. One could interpolate from one to the other by incrementally replacing the convolutional layers in the Spatial Broadcast decoder’s network by deconvolutional layers with stride 2 (and simultaneously decreasing the height and width of the tiling operation). Table 10 shows a few steps of such a progression, where (starting from the bottom) 1, 2, and all 3 of the convolutional layers in the Spatial Broadcast decoder are replaced by a deconvolutional layer with stride 2. We see that this hurts disentangling without affecting the other metrics, further evidence that upscaling deconvolutional layers are bad for representation learning.
Upscales  ELBO  NLL  KL  Latents Used  MIG  Factor VAE 

0 Upscales  329 (4.1)  305 (4.6)  24.4 (0.54)  7.22 (0.34)  0.147 (0.057)  0.208 (0.084) 
1 Upscale  327 (4.4)  302 (4.9)  24.4 (0.55)  7.29 (0.26)  0.149 (0.048)  0.194 (0.026) 
2 Upscales  329 (4.3)  304 (4.8)  24.2 (0.60)  7.14 (0.42)  0.122 (0.045)  0.235 (0.070) 
3 Upscales  330 (2.4)  305 (2.7)  24.2 (0.24)  7.39 (0.08)  0.110 (0.032)  0.182 (0.028) 
Appendix G Latent Space Geometry Analysis for Circle Datasets
We showed visualizations of latent space geometry on the circles datasets in Figures 7 (with independent factors of variation) and 8 (with dependent factors of variation). These figures showcased the improvement that the Spatial Broadcast decoder lends. However, we also conducted the same style of experiments on many more datasets and on FactorVAE models. In this section we present these additional results.
We consider three generative-factor pairs: (X-Position, Y-Position), (X-Position, Hue), and (Redness, Greenness). Broadly, the following figures show that the Spatial Broadcast decoder nearly always helps disentangling. It helps most dramatically when there is the most positional variation (X-Position, Y-Position) and least significantly when there is no positional variation (Redness, Greenness).
Note, however, that even with no position variation, the Spatial Broadcast decoder does seem to improve latent space geometry in the generalization experiments (Figures 20 and 21). We believe this may be due in part to the fact that the Spatial Broadcast decoder is shallower than the DeConv decoder.
Finally, we explore one completely different dataset with dependent factors: a dataset in which half the images contain no object (are entirely black). We do this to simulate conditions like those in a multi-entity VAE such as [Nash et al., 2017] when the dataset has a variable number of entities. These conditions pose a challenge for disentangling, because the VAE objective will wish to allocate a large (low-KL) region of latent space to representing a blank image when a large proportion of the dataset is blank. However, we do see a stark improvement by using the Spatial Broadcast decoder in this case.
[Figures: for each dataset, the dataset distribution, ground truth factors, dataset samples, MIG metric values, and latent traversals of the worst and best replicas for VAE and FactorVAE models with DeConv and Spatial Broadcast decoders.]
3 Related Work
Some of the ideas behind the Spatial Broadcast decoder have been explored extensively in many other bodies of work.
The idea of appending coordinate channels to convolutional layers has recently been highlighted (and named CoordConv) in the context of improving positional generalization (Liu et al., 2018). However, the CoordConv technique had been used before (Zhao et al., 2015; Liang et al., 2015; Watters et al., 2017; Wojna et al., 2017; Perez et al., 2017; Nash et al., 2017; Ulyanov et al., 2017) and its origin is unclear. The CoordConv VAE (Liu et al., 2018) incorporates CoordConv layers into an upsampling deconvolutional network in a VAE (which, as we show in Appendix D, does not yield representation-learning benefits comparable to those of the Spatial Broadcast decoder). To our knowledge no prior work has combined CoordConv with spatial tiling of a generative model's representation as we do here.
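The broadcast-and-append operation itself is simple. Below is a minimal NumPy sketch (the function name and the [-1, 1] coordinate range are our own choices); the actual decoder then applies a small convolutional network to the resulting tensor:

```python
import numpy as np

def spatial_broadcast(z, height, width):
    """Tile a latent vector z (shape [k]) across space and append fixed
    x/y coordinate channels, yielding shape [height, width, k + 2]."""
    k = z.shape[0]
    # Broadcast the latent vector to every spatial location.
    tiled = np.tile(z.reshape(1, 1, k), (height, width, 1))
    # Fixed linear coordinate channels in [-1, 1]: ys varies over rows,
    # xs varies over columns.
    ys, xs = np.meshgrid(np.linspace(-1, 1, height),
                         np.linspace(-1, 1, width), indexing="ij")
    return np.concatenate([tiled, xs[..., None], ys[..., None]], axis=-1)
```

Because the latent vector is identical at every location, only the coordinate channels break spatial symmetry, which is what makes rendering position a simple function of the appended channels.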
Another line of work that has used coordinate channels extensively is language modeling. In this context, many models use fixed or learned position embeddings, combined in various ways with the input sequences, to support position-dependent computation in a differentiable manner (Gu et al., 2016; Vaswani et al., 2017; Gehring et al., 2017; Devlin et al., 2018).
We can draw parallels between the Spatial Broadcast decoder and generative models that learn “where” to write to an image (Jaderberg et al., 2015; Gregor et al., 2015; Reed et al., 2016), but in comparison these models render local image patches, whereas we spatially tile a learned latent embedding and render an entire image. Work by Dorta et al. (2017) explored a Laplacian pyramid arrangement for VAEs, in which some parts of the latent distribution have global effects while others modify fine-scale details of the reconstruction. Similar constraints on the extent of the spatial neighbourhood affected by the latent distribution have also been explored by Parmar et al. (2018). Other generative models already use convolutional latent distributions (Finn et al., 2015; Levine et al., 2015; Eslami et al., 2018), unlike the flat latent vectors we consider here. In that case the tiling operation is not necessary, but adding coordinate channels might still help.
4 Results
We present here a select subset of our assessment of the Spatial Broadcast decoder. Interested readers can refer to Appendices A-G for a more thorough set of observations and comparisons.
4.1 Performance on colored sprites



[Figure: MIG metric values and latent traversals; only the lowest-variance latent coordinates are shown in the traversals (the remainder are non-coding coordinates).]
The Spatial Broadcast decoder was designed with object-feature representations in mind, hence to initially showcase its performance we use a dataset of simple objects: colored 2-dimensional sprites. This dataset is described in the literature (Burgess et al., 2018) as a colored version of the dSprites dataset (Matthey et al., 2017). One advantage of the colored sprites dataset is that it has known factors of variation, of which there are 8: X-position, Y-position, Size, Shape, Angle, and three-dimensional Color. Thus we can evaluate disentangling performance with metrics that rely on ground truth factors of variation (Chen et al., 2018; Kim and Mnih, 2017).
To quantitatively evaluate disentangling, we focus primarily on the Mutual Information Gap (MIG) metric (Chen et al., 2018). The MIG metric is defined by first computing the mutual information matrix between the latent distribution means and ground truth factors, then computing the difference between the highest and second-highest elements for each ground truth factor (i.e. the gap), and finally averaging these values over all factors.
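The gap computation can be sketched in a few lines. `mig` and its `factor_entropies` argument are our own names; note that Chen et al. additionally normalize each gap by the corresponding factor's entropy, which the optional argument reproduces:

```python
import numpy as np

def mig(mutual_info, factor_entropies=None):
    """Mutual Information Gap from a [num_latents, num_factors] matrix of
    mutual information between latent means and ground-truth factors.

    For each factor: the gap between the highest and second-highest MI
    across latents, optionally normalized by that factor's entropy
    (as in Chen et al., 2018), averaged over factors."""
    mi = np.asarray(mutual_info, dtype=float)
    sorted_mi = np.sort(mi, axis=0)[::-1]  # descending over latents
    gaps = sorted_mi[0] - sorted_mi[1]     # per-factor gap
    if factor_entropies is not None:
        gaps = gaps / np.asarray(factor_entropies, dtype=float)
    return gaps.mean()
```

A representation in which each factor is captured by exactly one latent scores high; a rotated or shared encoding drives the gap (and the score) toward zero.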
Despite significant shortcomings (see Section 4.4), we found that the MIG metric can overall be a helpful tool for evaluating representational quality, though it should by no means be trusted blindly. We also used the FactorVAE metric (Kim and Mnih, 2017), but found it to be less consistent than MIG (we report these results in Appendix F).
In Figure 2 we compare a standard DeConv VAE (a VAE with an MLP + deconvolutional network decoder) to a Spatial Broadcast VAE (a VAE with the Spatial Broadcast decoder). We see that the Spatial Broadcast VAE outperforms the DeConv VAE both in terms of the MIG metric and traversal visualizations. The Spatial Broadcast VAE traversals clearly show all 8 factors represented in separate latent coordinates, seemingly on par with more complicated state-of-the-art methods on this dataset (Burgess et al., 2018). The hyperparameters for the models in Figure 2 were chosen over a large sweep to minimize the model's error, not chosen explicitly for any disentangling properties. See Appendix F for more details about hyperparameter choices and sensitivity.
The Spatial Broadcast decoder is complementary to existing disentangling VAE techniques, and hence improves not only a vanilla VAE but SOTA models as well. To demonstrate this, we consider two recently developed models, FactorVAE (Kim and Mnih, 2017) and β-VAE (Higgins et al., 2017a).



Figure 3 shows the Spatial Broadcast decoder improving the disentangling of FactorVAE in terms of both the MIG metric and traversal visualizations. See Appendix G for further results to this effect.


[Figure 4: The shaded region shows the hull of one standard deviation. (a) Reconstruction (Negative Log-Likelihood, NLL) vs KL. Low β yields low NLL and high KL (bottom-right of figure), whereas high β yields high NLL and low KL (top-left of figure). See Alemi et al. (2017) for details. The Spatial Broadcast VAE shows a better rate-distortion curve than the DeConv VAE. (b) Reconstruction vs MIG metric. Low β values correspond to low NLL and low MIG (bottom-left of figure), and high β values correspond to high NLL and high MIG scores (towards the top-right of figure). The Spatial Broadcast VAE is better disentangled (higher MIG scores) than the DeConv VAE.]

Figure 4 shows the performance of a β-VAE with and without the Spatial Broadcast decoder for a range of values of β. Not only does the Spatial Broadcast decoder improve disentangling, it also yields a lower rate-distortion curve, hence learns to represent the data more efficiently than the same model with a DeConv decoder.
4.2 Datasets without positional variation


The colored sprites dataset discussed in Section 4.1 seems particularly well-suited to the Spatial Broadcast decoder because X- and Y-position are factors of variation. However, we also evaluate the architecture on datasets that have no positional factors of variation: the Chairs and 3D Object-in-Room datasets (Aubry et al., 2014; Kim and Mnih, 2017). In the latter, the factors of variation are highly non-local, affecting multiple regions spanning nearly the entire image. We find that on both datasets a Spatial Broadcast VAE learns representations that look very well-disentangled, seemingly as well as SOTA methods on these datasets and without any modification of the standard VAE objective (Kim and Mnih, 2017; Higgins et al., 2017a). See Figure 5 for results (and supplementary Figure 12 for additional traversals).
While this shows that the Spatial Broadcast decoder does not hurt when used on datasets without positional variation, it may seem unlikely that it would help in this context. However, we show in Appendix G that it actually can help to some extent on such datasets. We attribute this to its using a shallower network and no upsampling deconvolutions (which have been observed to cause optimization difficulties in a variety of settings (Liu et al., 2018; Odena et al., 2016)).
4.3 Datasets with small objects
In exploring datasets with objects varying in position, we often find that a (standard) DeConv VAE learns a representation that is discontinuous with respect to object position. This effect is amplified as the size of the object decreases. This makes sense, because the pressure for a VAE to represent position continuously comes from the fact that an object and a position-perturbed version of itself overlap in pixel space (hence it is economical for the VAE to map noise in its latent samples to local translations of an object). However, as an object's size decreases, this pixel overlap decreases, hence the pressure for a VAE to represent position continuously weakens.
4.4 Latent space geometry visualization
[Figure: dataset distribution, ground truth factors, dataset samples, MIG metric values, and latent traversals of the worst and best replicas.]



Evaluating the quality of a representation can be challenging and time-consuming. While a number of metrics have been proposed to quantify disentangling, many of them have serious shortcomings and there is as yet no consensus in the literature about which to use (Locatello et al., 2018). We believe it is impossible to quantify how good a representation is with a single scalar, because there is a fundamental trade-off between how much information a representation contains and how well-structured the representation is. This has been noted by others in the disentangling literature (Ridgeway and Mozer, 2018; Eastwood and Williams, 2018). This disentangling-distortion trade-off is a recapitulation of the rate-distortion trade-off (Alemi et al., 2017) and can be seen firsthand in Figure 4. We would like representations that both reconstruct well and disentangle well, but exactly how to balance these two desiderata is a matter of subjective preference (and surely depends on the dataset). Any scalar disentangling metric will implicitly favor some arbitrary point on this disentangling-reconstruction trade-off.
In addition to this unavoidable limitation of disentangling metrics, we found that the MIG metric (while perhaps more accurate than other existing metrics) does not capture the intuitive notion of disentangling because:

It depends on a choice of basis for the ground truth factors, and heavily penalizes rotation of the representation with respect to this basis. Yet it is often unclear what the correct basis for the ground truth factors is (e.g. RGB vs HSV vs HSL for color). For example, see the bottom row of Figure 7.

It is invariant to a folding of the representation space, as long as the folds align with the axes of variation. See the middle row of Figure 7 for an example of a double-fold in the latent space that is not penalized by the MIG metric.
Due to the subjective nature of disentangling and the difficulty of defining appropriate metrics, we put heavy emphasis on latent space visualization as a means of representational analysis. Latent space traversals have been extensively used in the literature and can be quite revealing (Higgins et al., 2017a, b). However, in our experience, traversals suffer from two shortcomings:

Some latent space entanglement can be difficult for the eye to perceive in traversals. For example, a slight change in brightness in a latent traversal that represents changing position can easily go unnoticed.

Traversals only represent the latent space geometry around one point in space, and crossreferencing corresponding traversals between multiple points is quite timeconsuming.
Consequently, we caution the reader against relying too heavily on traversals when evaluating latent space geometry. We propose an additional method for analyzing latent spaces, which we found very useful in our research. It is applicable when there are known generative factors in the dataset and aims to directly view the embedding of generative factor space in the model's latent space: We plot in latent space the locations corresponding to a grid of points in generative factor space. While this can only visualize the latent embedding of a 2-dimensional subspace of generative factor space, it can be very revealing of the latent space geometry.
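This grid-embedding analysis can be sketched as follows. Here `encode` is a hypothetical helper that renders an image from a factor pair and returns the encoder's latent mean vector; plotting the returned grid with lines connecting neighbours reveals folds, rotations, and other distortions of the embedding:

```python
import numpy as np

def latent_geometry_grid(encode, factor_grid, dims=(0, 1)):
    """Embed a 2-D grid of generative factors into a chosen pair of
    latent dimensions.

    encode: maps one (factor_a, factor_b) pair to the encoder's latent
        mean vector for the image rendered from those factors.
    factor_grid: [n, n, 2] array of generative factor values.
    Returns an [n, n, 2] array of latent coordinates."""
    n = factor_grid.shape[0]
    d0, d1 = dims
    out = np.empty((n, n, 2))
    for i in range(n):
        for j in range(n):
            z = np.asarray(encode(tuple(factor_grid[i, j])))
            out[i, j] = (z[d0], z[d1])
    return out
```

A Euclidean transformation of the factor grid (as in the Spatial Broadcast results) appears as an undistorted lattice; folds and entanglement appear as crossings and bends.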
We showcase this analysis method in Figure 7 on a dataset of circles varying in X- and Y-position. (See Appendix G for similar analyses on many more datasets.) We compare the representations learned by a DeConv VAE and a Spatial Broadcast VAE. This reveals a stark contrast: The Spatial Broadcast latent geometry is a near-perfect Euclidean transformation of the generative factors, while the DeConv decoder model's representation is highly entangled.
The MIG metric does not capture this difference well: It gives a score near zero to a Spatial Broadcast model whose latent space is rotated with respect to the generative factors, and a greater score to a highly entangled DeConv model. In practice we care primarily about compositionality of (and hence disentangling of) subspaces of generative factor space, not about axis-alignment within these subspaces. For example, rotation within the X-Y position subspace is acceptable, whereas rotation in position-color space is not. This naturally poses a great challenge for both designing disentangling metrics and formally defining disentangling (cf. Higgins et al., 2018). We believe that additional structure over the factors of variation can be inferred from temporal correlations, the structure of a reinforcement learning agent's action space, and other effects not necessarily captured by a static-image dataset. Nonetheless, inductive biases like the Spatial Broadcast decoder can be explored in the context of static images, as they will likely also help representation learning in these more complete contexts.
4.5 Datasets with dependent factors
[Figure: dataset distribution, ground truth factors, dataset samples, MIG metric values, and latent traversals of the worst and best replicas.]



Our primary motivation for unsupervised representation learning is the prospect of improved generalization and transfer, as discussed in Section 1. Consequently, we would like a model to learn compositional representations that can generalize to new feature combinations. We explore this in a controlled setting by holding out a region of generative factor space from the training dataset, then evaluating the representation in this held-out region. Figure 8 shows results to this effect, comparing the Spatial Broadcast VAE to a DeConv VAE on a dataset of circles with a held-out region in the middle of X-Y position space (so the model sees no circles in the middle of the image, indicated by the black dots in the second column). The Spatial Broadcast VAE generalizes almost perfectly in this case, appearing unaffected by the fact that the data generative factors are no longer independent and extrapolating nearly linearly throughout the held-out region. In contrast, the DeConv VAE's latent space is highly entangled, even more so than in the case with independent factors in Figure 7. See Appendix G for more analyses of generalization on other datasets.
From the perspective of density modeling, disentangling of the Spatial Broadcast VAE may seem undesirable in this case because it allocates high probability in latent space to a region of low (here, zero) probability in the dataset. However, from the perspective of representation learning and compositional features, such generalization is a highly desirable property.
5 Conclusion
Here we present and analyze the Spatial Broadcast decoder in the context of Variational Autoencoders. We demonstrate that it improves learned latent representations, most dramatically for datasets with objects varying in position. It also improves generalization in latent space and can be incorporated into SOTA models to boost their performance in terms of both disentangling and reconstruction accuracy. We rigorously analyze the contribution of the Spatial Broadcast decoder on a wide variety of datasets and models using a range of visualizations and metrics. We hope that this analysis has provided the reader with an intuitive understanding of how and why it improves learned representations.
We believe that learning compositional representations is an important ingredient for flexibility and generalization in many contexts, from supervised learning to reinforcement learning, and the Spatial Broadcast decoder is one step towards robust compositional visual representation learning.
Acknowledgments
We thank Irina Higgins, Danilo Rezende, Matt Botvinick, and Yotam Doron for helpful discussions and insights.
References
 Alemi et al. [2017] Alexander A. Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, and Kevin Murphy. An informationtheoretic analysis of deep latentvariable models. CoRR, abs/1711.00464, 2017. URL http://arxiv.org/abs/1711.00464.
 Aubry et al. [2014] M. Aubry, D. Maturana, A. Efros, B. Russell, and J. Sivic. Seeing 3d chairs: exemplar partbased 2d3d alignment using a large dataset of cad models. In CVPR, 2014.
 Bengio et al. [2013] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013.
 Burgess et al. [2018] Christopher P. Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in β-VAE. arXiv, 2018.
 Chen et al. [2018] Tian Qi Chen, Xuechen Li, Roger B. Grosse, and David K. Duvenaud. Isolating sources of disentanglement in variational autoencoders. CoRR, abs/1802.04942, 2018. URL http://arxiv.org/abs/1802.04942.
 Devlin et al. [2018] Jacob Devlin, MingWei Chang, Kenton Lee, and Kristina Toutanova. BERT: pretraining of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805.
 Dorta et al. [2017] Garoe Dorta, Sara Vicente, Lourdes Agapito, Neill D.F. Campbell, Simon Prince, and Ivor Simpson. Laplacian pyramid of conditional variational autoencoders. In Proceedings of the 14th European Conference on Visual Media Production (CVMP 2017), CVMP 2017, pages 7:1–7:9, New York, NY, USA, 2017. ACM. ISBN 9781450353298. doi: 10.1145/3150165.3150172. URL http://doi.acm.org/10.1145/3150165.3150172.
 Eastwood and Williams [2018] Cian Eastwood and Christopher K. I. Williams. A framework for the quantitative evaluation of disentangled representations. ICLR, 2018.

 Eslami et al. [2018] S. M. Ali Eslami, Danilo Jimenez Rezende, Frederic Besse, Fabio Viola, Ari S. Morcos, Marta Garnelo, Avraham Ruderman, Andrei A. Rusu, Ivo Danihelka, Karol Gregor, David P. Reichert, Lars Buesing, Theophane Weber, Oriol Vinyals, Dan Rosenbaum, Neil Rabinowitz, Helen King, Chloe Hillier, Matt Botvinick, Daan Wierstra, Koray Kavukcuoglu, and Demis Hassabis. Neural scene representation and rendering. Science, 360(6394):1204–1210, 2018. ISSN 0036-8075. doi: 10.1126/science.aar6170. URL http://science.sciencemag.org/content/360/6394/1204.
 Finn et al. [2015] Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Learning visual feature spaces for robotic manipulation with deep spatial autoencoders. CoRR, abs/1509.06113, 2015. URL http://arxiv.org/abs/1509.06113.
 Gehring et al. [2017] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional sequence to sequence learning. CoRR, abs/1705.03122, 2017. URL http://arxiv.org/abs/1705.03122.
 Gregor et al. [2015] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. CoRR, abs/1502.04623, 2015. URL http://arxiv.org/abs/1502.04623.
 Gu et al. [2016] Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. Incorporating copying mechanism in sequencetosequence learning. CoRR, abs/1603.06393, 2016. URL http://arxiv.org/abs/1603.06393.
 Higgins et al. [2017a] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. β-VAE: Learning basic visual concepts with a constrained variational framework. ICLR, 2017a.
 Higgins et al. [2017b] Irina Higgins, Nicolas Sonnerat, Loic Matthey, Arka Pal, Christopher P Burgess, Matthew Botvinick, Demis Hassabis, and Alexander Lerchner. Scan: learning abstract hierarchical compositional visual concepts. arXiv preprint arXiv:1707.03389, 2017b.
 Higgins et al. [2018] Irina Higgins, David Amos, David Pfau, Sébastien Racanière, Loïc Matthey, Danilo J. Rezende, and Alexander Lerchner. Towards a definition of disentangled representations. CoRR, abs/1812.02230, 2018. URL http://arxiv.org/abs/1812.02230.
 Ioffe and Szegedy [2015] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015. URL http://arxiv.org/abs/1502.03167.
 Jaderberg et al. [2015] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. CoRR, abs/1506.02025, 2015. URL http://arxiv.org/abs/1506.02025.
 Kim and Mnih [2017] Hyunjik Kim and Andriy Mnih. Disentangling by factorising. arxiv, 2017.
 Kingma and Ba [2015] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.
 Kingma and Welling [2014] Diederik P. Kingma and Max Welling. Autoencoding variational bayes. ICLR, 2014.
 Lake et al. [2016] Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. Behavioral and Brain Sciences, pages 1–101, 2016.
 Levine et al. [2015] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. Endtoend training of deep visuomotor policies. CoRR, abs/1504.00702, 2015. URL http://arxiv.org/abs/1504.00702.
 Liang et al. [2015] Xiaodan Liang, Yunchao Wei, Xiaohui Shen, Jianchao Yang, Liang Lin, and Shuicheng Yan. Proposalfree network for instancelevel object segmentation. CoRR, abs/1509.02636, 2015. URL http://arxiv.org/abs/1509.02636.
 Liu et al. [2018] Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, and Jason Yosinski. An intriguing failing of convolutional neural networks and the coordconv solution. CoRR, abs/1807.03247, 2018. URL http://arxiv.org/abs/1807.03247.
 Locatello et al. [2018] Francesco Locatello, Stefan Bauer, Mario Lucic, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. arXiv preprint arXiv:1811.12359, 2018.
 Marcus [2018] Gary Marcus. Deep learning: A critical appraisal. CoRR, abs/1801.00631, 2018. URL http://arxiv.org/abs/1801.00631.
 Matthey et al. [2017] Loic Matthey, Irina Higgins, Demis Hassabis, and Alexander Lerchner. dsprites: Disentanglement testing sprites dataset, 2017. URL https://github.com/deepmind/dspritesdataset/.
 Nash et al. [2017] Charlie Nash, Ali Eslami, Chris Burgess, Irina Higgins, Daniel Zoran, Theophane Weber, and Peter Battaglia. The multientity variational autoencoder. NIPS Workshops, 2017.
 Odena et al. [2016] Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts. Distill, 2016. doi: 10.23915/distill.00003. URL http://distill.pub/2016/deconvcheckerboard.
 Parmar et al. [2018] Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, and Alexander Ku. Image transformer. CoRR, abs/1802.05751, 2018. URL http://arxiv.org/abs/1802.05751.
 Perez et al. [2017] Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. Film: Visual reasoning with a general conditioning layer. CoRR, abs/1709.07871, 2017. URL http://arxiv.org/abs/1709.07871.
 Reed et al. [2016] Scott E. Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learning what and where to draw. CoRR, abs/1610.02454, 2016. URL http://arxiv.org/abs/1610.02454.

 Rezende et al. [2014] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. ICML, 32(2):1278–1286, 2014.
 Ridgeway and Mozer [2018] Karl Ridgeway and Michael C Mozer. Learning deep disentangled embeddings with the f-statistic loss. NIPS, 2018.
 Ulyanov et al. [2017] Dmitry Ulyanov, Andrea Vedaldi, and Victor S. Lempitsky. Deep image prior. CoRR, abs/1711.10925, 2017. URL http://arxiv.org/abs/1711.10925.
 Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017. URL http://arxiv.org/abs/1706.03762.
 Watters et al. [2017] Nicholas Watters, Daniel Zoran, Theophane Weber, Peter Battaglia, Razvan Pascanu, and Andrea Tacchetti. Visual interaction networks: Learning a physics simulator from video. In Advances in Neural Information Processing Systems 30, pages 4539–4547. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7040visualinteractionnetworkslearningaphysicssimulatorfromvideo.pdf.
 Wojna et al. [2017] Zbigniew Wojna, Alexander N. Gorban, DarShyang Lee, Kevin Murphy, Qian Yu, Yeqing Li, and Julian Ibarz. Attentionbased extraction of structured information from street view imagery. CoRR, abs/1704.03549, 2017. URL http://arxiv.org/abs/1704.03549.
 Zhao et al. [2015] Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann LeCun. Stacked whatwhere autoencoders. arXiv, abs/1506.02351, 2015. URL https://arxiv.org/abs/1506.02351.
Appendix A Experiment Details
For all VAE models we used a Bernoulli decoder distribution, parametrized by its logits. It is with respect to this distribution that the reconstruction error (negative log likelihood) was computed. This could accommodate our datasets, since they were normalized to have pixel values in [0, 1]. We also explored using a Gaussian distribution with fixed variance (for which the NLL is equivalent to scaled MSE), and found that this produces qualitatively similar results and in fact improves stability. Hence, while a Bernoulli distribution usually works, we suggest that a reader wishing to experiment with these models start with a Gaussian decoder distribution with mean parameterized by the decoder network output and a constant variance.
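The equivalence between a fixed-variance Gaussian decoder NLL and scaled MSE can be made explicit with a short sketch; `gaussian_nll` is our own helper and `sigma` is left as a free hyperparameter (the paper does not state a value here):

```python
import numpy as np

def gaussian_nll(x, mu, sigma):
    """Negative log-likelihood of pixels x under a Gaussian decoder with
    mean mu and fixed standard deviation sigma. Up to an additive constant
    this equals MSE scaled by 1 / (2 * sigma**2), which is why a
    fixed-variance Gaussian decoder trains like a scaled-MSE autoencoder."""
    return (0.5 * ((x - mu) / sigma) ** 2
            + np.log(sigma) + 0.5 * np.log(2 * np.pi)).sum()
```

Note that sigma then acts like an inverse weight on the reconstruction term relative to the KL term, so it interacts with disentangling pressure much like β does.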
In all networks we used ReLU activations, weights initialized by a truncated normal (see [Ioffe and Szegedy, 2015]), and biases initialized to zero. We used no other neural network tricks (no BatchNorm or dropout), and all models were trained with the Adam optimizer [Kingma and Ba, 2015]. See below for learning rate details.

A.1 VAE Hyperparameters
For all VAE models except the β-VAE (shown only in Figure 4), we use a standard VAE loss, i.e. with KL term coefficient β = 1. For FactorVAE we use the γ value from Kim and Mnih [2017].
For the VAE, β-VAE, CoordConv VAE and ablation study we used the network parameters in Table 1. We note that, while the Spatial Broadcast decoder uses fewer parameters than the DeConv decoder, it does require somewhat more memory to store the weights. However, for the 3D Object-in-Room dataset we included three additional deconv layers in the Spatial Broadcast decoder (without these additional layers the decoder was not powerful enough to give good reconstructions on that dataset).
All of these models were trained using the same learning rate, with batch size 16. All convolutional and deconvolutional layers have “same” padding, i.e. the input is zero-padded so that the output shape equals the input shape in the case of convolution, and the input shape times the stride in the case of deconvolution.

A.2 FactorVAE
For the FactorVAE model, we used the hyperparameters described in the FactorVAE paper [Kim and Mnih, 2017]. Those network parameters are reiterated in Table 2. Note that the Spatial Broadcast parameters are the same as for the other models in Table 1. For the optimization hyperparameters we used the γ, VAE learning rate, and discriminator learning rate from that paper, with batch size 32. These parameters generally gave stable results.
However, when training the FactorVAE model on colored sprites we encountered instability during training. We subsequently ran a number of hyperparameter sweeps attempting to improve stability, but to no avail. Ultimately, we used the hyperparameters in Table 2, though even with limited training steps (see Appendix Section A.4) a substantial fraction of seeds diverged before training completed, for both the Spatial Broadcast and DeConv decoders.
a.3 Datasets
All datasets were rendered as images normalized to pixel values in [0, 1].
Colored Sprites:
For this dataset, we use the binary dSprites dataset open-sourced in [Matthey et al., 2017], multiplied by colors sampled uniformly within a constrained region of HSV space. Without color, there are 737,280 images in this dataset. However, we sample the colors online from a continuous distribution, effectively making the dataset size infinite.
Chairs:
This dataset is opensourced in [Aubry et al., 2014]. This dataset, unlike all others we use, has only a single channel in its images. It contains 86,366 images.
3D ObjectinRoom:
This dataset was used extensively in the FactorVAE paper [Kim and Mnih, 2017]. It consists of an object in a room and has 6 factors of variation: Camera angle, object size, object shape, object color, wall color, and floor color. The colors are sampled uniformly from a continuous range of hues. This dataset contains 480,000 images, procedurally generated as all combinations of 10 floor hues, 10 wall hues, 10 object hues, 8 object sizes, 4 object shapes, and 15 camera angles.
Circles Datasets:
To more thoroughly explore datasets with a variety of distributions, factors of variation, and held-out test sets, we wrote our own procedural image generator for circular objects in PyGame (rendered with an antialiasing factor of 5). We used this to generate the data for the results in Section 4.4. In these datasets we control subsets of the following factors of variation: X-position, Y-position, Size, Color. We generated five datasets in this way, which we call XY, XH, RG, XYH Small, and XYH Tiny (shown in Figures 13, 16, 19, 6, and 10 respectively).
Table 3 shows the values of these factors for each dataset. Note that for some datasets we define the color distribution in RGB space, and for others we define it in HSV space.
To create the datasets with dependent factors, we hold out one quarter of the dataset (the intersection of half of the ranges of each of the two factors), either centered within the data distribution or in the corner.
For each dataset we generate 500,000 randomly sampled training images.
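The held-out-quarter construction described above can be sketched as a membership test over factor values; `in_heldout_region` is our own helper, with the [0.2, 0.8] range matching the X/Y factor ranges in Table 3, and training images are those samples for which the test is false:

```python
def in_heldout_region(f1, f2, lo=0.2, hi=0.8, centered=True):
    """True if a pair of factor values falls in the held-out quarter:
    the intersection of half of each factor's range, placed either in the
    center of the data distribution or in a corner. Half of each of two
    ranges gives 1/2 * 1/2 = 1/4 of factor space."""
    half = (hi - lo) / 2
    if centered:
        lo1 = lo + half / 2          # centered sub-range of width `half`
        return (lo1 <= f1 <= lo1 + half) and (lo1 <= f2 <= lo1 + half)
    # Corner variant: the top quarter of both ranges.
    return (f1 >= hi - half) and (f2 >= hi - half)
```

During evaluation the encoder is then probed inside this region, where the model has never seen data, to test compositional generalization.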
Dataset | X | Y | Size | (H, S, V) | (R, G, B)
XY | [0.2, 0.8] | [0.2, 0.8] | 0.2 | N/A | (1.0, 1.0, 1.0)
XH | [0.2, 0.8] | 0.5 | 0.3 | ([0.2, 0.8], 1.0, 1.0) | N/A
RG | 0.5 | 0.5 | 0.5 | N/A | ([0.4, 0.8], [0.4, 0.8], 1.0)
XYH Small | [0.2, 0.8] | [0.2, 0.8] | 0.1 | ([0.2, 0.8], 1.0, 1.0) | N/A
XYH Tiny | [0.2, 0.8] | [0.2, 0.8] | 0.075 | ([0.2, 0.8], 1.0, 1.0) | N/A
A.4 Training Steps
The number of training steps for each model on each dataset can be found in Table 4. In general, for each dataset we used enough training steps that all models converged. Note that while the number of training iterations for FactorVAE differs from that of the other models on colored sprites (due to instability of FactorVAE), this has no bearing on our results because we do not compare across models. We only compare across decoder architectures, and we always used the same number of training steps for both the DeConv and Spatial Broadcast decoders within each model.
[Table 4: number of training steps for VAE and FactorVAE on each dataset (Colored Sprites, Chairs, 3D Objects, Circles); the numeric entries did not survive extraction. FactorVAE entries are N/A for Chairs and 3D Objects.]
Appendix B Ablation Study
One aspect of the Spatial Broadcast decoder is the concatenation of constant coordinate channels to its tiled input latent vector. While our justification of its performance emphasizes the simplicity of computation it affords, one might suspect that the coordinate channels serve only to provide positional information, and that the simple form of that information (a linear meshgrid) is irrelevant. Here we perform an ablation study demonstrating that this is not the case: the organization of the coordinate channels is important. For this experiment, we randomly permute the coordinate channels through space. Specifically, we take the coordinate channels and randomly permute their entries, keeping each pair together to ensure that after the shuffling each location still has a unique pair of coordinates. Importantly, we shuffle the coordinate channels only once, then keep them constant throughout training.
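The shuffling procedure can be sketched as follows; this is a minimal numpy version with function names of our choosing.

```python
import numpy as np


def make_coord_channels(height, width):
    """Linear meshgrid coordinate channels in [-1, 1], shape (H, W, 2)."""
    ys, xs = np.meshgrid(
        np.linspace(-1.0, 1.0, height),
        np.linspace(-1.0, 1.0, width),
        indexing="ij",
    )
    return np.stack([xs, ys], axis=-1)


def shuffled_coord_channels(height, width, rng):
    """Permute coordinate entries through space, keeping (x, y) pairs intact.

    This is done once before training and the shuffled channels are then
    held fixed, so each location still has a unique (but arbitrary) pair.
    """
    coords = make_coord_channels(height, width).reshape(-1, 2)
    rng.shuffle(coords)  # shuffles rows, i.e. whole (x, y) pairs
    return coords.reshape(height, width, 2)
```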
Figure 9 shows reconstructions and traversals for two replicas (with different shuffled coordinate channels). Both disentangling and reconstruction accuracy are significantly reduced.
Appendix C Disentangling Tiny Objects
In the dataset of small colored circles shown in Figure 6 (XYH Small), the circle diameter is 0.1 times the frame width. We also generated a dataset with circles 0.075 times the frame width (XYH Tiny), and Figure 10 shows similar results on this dataset (albeit more difficult for the eye to make out). We were surprised to see disentangling of such tiny objects and have not explored the lower object-size limit for disentangling with the Spatial Broadcast decoder.
Appendix D CoordConv VAE
CoordConv VAE [Liu et al., 2018] has been proposed as a decoder architecture to improve the continuity of VAE representations. CoordConv VAE appends coordinate channels to every feature layer of the standard deconvolutional decoder, but does not spatially tile the latent vector, hence retains upsampling deconvolutions.
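The key architectural ingredient, per-layer coordinate concatenation, can be sketched as below. This is a minimal numpy illustration of the concatenation step only, not the paper's implementation; the function name is ours.

```python
import numpy as np


def append_coord_channels(features):
    """Append constant x/y coordinate channels to an (H, W, C) feature map.

    CoordConv applies this before each (de)convolutional layer; the
    Spatial Broadcast decoder instead concatenates coordinates once,
    to a latent vector tiled across the full output resolution.
    """
    height, width, _ = features.shape
    ys, xs = np.meshgrid(
        np.linspace(-1.0, 1.0, height),
        np.linspace(-1.0, 1.0, width),
        indexing="ij",
    )
    return np.concatenate([features, xs[..., None], ys[..., None]], axis=-1)
```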
Figure 11 shows analysis of this model on the colored sprites dataset. While the latent space does appear to be continuous with respect to object position, it is quite entangled (far more so than a Spatial Broadcast VAE). This is not very surprising, since CoordConv VAE uses upscale deconvolutions to go all the way from a small spatial shape up to the full image resolution, while in Table 10 we see that introducing upscaling hurts disentangling in a Spatial Broadcast VAE.


Appendix E Extra traversals for datasets without positional variation
As we acknowledge in the main text, a single latent traversal plot only shows local disentangling at one point in latent space. Hence to support our claim in Section 4.2 that the Spatial Broadcast VAE disentangled the Chairs and 3D objects datasets, we show in Figure 12 traversals about a second seed in each dataset for the same models as in Figure 5.
Appendix F Architecture Hyperparameters
In order to remain objective when selecting model hyperparameters for the Spatial Broadcast and DeConv decoders, we chose hyperparameters based on minimizing the ELBO loss, without considering any information about disentangling. After finding reasonable encoder hyperparameters, we performed large-scale (25 replicas each) sweeps over a few decoder hyperparameters for both the DeConv and Spatial Broadcast decoders on the colored sprites dataset. These sweeps are revealing of hyperparameter sensitivity, so we report the following quantities for them:

ELBO. This is the evidence lower bound (total VAE loss), the sum of the negative log-likelihood (NLL) and the KL divergence.

NLL. This is the negative log-likelihood of an image with respect to the model's reconstructed distribution of that image. It is a measure of reconstruction accuracy.

KL. This is the KL divergence of the VAE’s latent distribution with its Gaussian prior. It measures how much information is being encoded in the latent space.

Latents Used. This is the mean number of latent coordinates with standard deviation below a fixed threshold. Typically, a VAE will have some unused latent coordinates (with standard deviation near that of the prior) and some used latent coordinates. The threshold is arbitrary, but this quantity does provide a rough idea of how many factors of variation the model may be representing.

MIG. The Mutual Information Gap metric [Chen et al., 2018].

FactorVAE. This is the metric described in the FactorVAE paper [Kim and Mnih, 2017]. We found this metric to be less consistent than the MIG metric (and equally flawed with respect to rotated coordinates), but it qualitatively agrees with the MIG metric most of the time.
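The "Latents Used" quantity can be computed as below. This is a sketch under stated assumptions: the function name is ours and the threshold value is a placeholder, since the exact cutoff is not given here.

```python
import numpy as np


def latents_used(posterior_stddevs, threshold=0.5):
    """Mean number of latent coordinates with posterior stddev below threshold.

    posterior_stddevs: array of shape (batch, num_latents). Unused
    coordinates revert to the unit-Gaussian prior (stddev near 1), so
    counting coordinates below a cutoff roughly counts the informative
    latents. The threshold value here is an assumption.
    """
    return float(np.mean(np.sum(posterior_stddevs < threshold, axis=1)))
```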
F.1 ConvNet Depth
Table 5 shows results of sweeping over ConvNet depth in the Spatial Broadcast decoder. This reveals a consistent trend: As the ConvNet deepens, the model moves towards lower rate/higher distortion. Consequently, latent space information and reconstruction accuracy drop. Traversals with deeper nets show the model dropping factors of variation (the dataset has 8 factors of variation).
Table 6 shows a noisier but similar trend when increasing DeConvNet depth in the DeConv decoder.
Table 5: ConvNet depth sweep for the Spatial Broadcast decoder.
ConvNet | ELBO | NLL | KL | Latents Used | MIG | FactorVAE
2-Layer | 339 (2.3) | 312 (2.7) | 27.5 (0.54) | 8.33 (0.37) | 0.076 (0.038) | 0.187 (0.027)
3-Layer | 329 (4.1) | 305 (4.6) | 24.4 (0.54) | 7.22 (0.34) | 0.147 (0.057) | 0.208 (0.084)
4-Layer | 341 (6.8) | 318 (7.7) | 22.6 (0.97) | 5.93 (0.51) | 0.157 (0.045) | 0.226 (0.046)
5-Layer | 340 (8.8) | 317 (9.7) | 22.7 (0.99) | 5.70 (0.37) | 0.173 (0.059) | 0.218 (0.030)
Table 6: DeConvNet depth sweep for the DeConv decoder.
DeConvNet | ELBO | NLL | KL | Latents Used | MIG | FactorVAE
3-Layer | 372 (8.6) | 346 (8.9) | 26.8 (0.40) | 9.20 (0.04) | 0.031 (0.018) | 0.144 (0.031)
4-Layer | 349 (9.4) | 322 (10.0) | 27.1 (0.88) | 8.90 (0.24) | 0.025 (0.015) | 0.139 (0.009)
5-Layer | 340 (9.8) | 314 (10.4) | 26.0 (1.00) | 7.95 (0.64) | 0.056 (0.032) | 0.184 (0.053)
6-Layer | 349 (15.0) | 326 (16.1) | 23.3 (1.42) | 6.36 (0.81) | 0.056 (0.019) | 0.199 (0.029)
F.2 MLP Depth
The Spatial Broadcast decoder as presented in this work is fully convolutional: it contains no MLP. However, motivated by the need for more depth on the 3D Object-in-Room dataset, we did explore applying an MLP to the input vector prior to the broadcast operation. We found that including this MLP had a qualitatively similar effect to increasing the number of convolutional layers on the colored sprites dataset, decreasing latent capacity and giving poorer reconstructions. These results are shown in Table 7.
However, on the 3D Object-in-Room dataset adding the MLP did improve the model when using ConvNet depth 3 (the same depth as for colored sprites). Results of a sweep over the depth of a pre-broadcast MLP are shown in Table 9. As mentioned in Section 4.2, we were able to achieve the same effect by instead increasing the ConvNet depth to 6, but for those interested in computational efficiency, using a pre-broadcast MLP may be a better choice for datasets of this sort.
In the DeConv decoder, increasing the MLP layers again has a broadly similar effect as increasing the ConvNet layers, as shown in Table 8.
Table 7: Pre-broadcast MLP depth sweep for the Spatial Broadcast decoder.
MLP | ELBO | NLL | KL | Latents Used | MIG | FactorVAE
0-Layer | 329 (4) | 305 (5) | 24.5 (0.54) | 7.21 (0.34) | 0.147 (0.057) | 0.208 (0.084)
1-Layer | 330 (6) | 307 (6) | 23.9 (0.72) | 6.68 (0.46) | 0.164 (0.043) | 0.200 (0.034)
2-Layer | 349 (15) | 327 (17) | 21.5 (2.13) | 6.03 (0.66) | 0.210 (0.048) | 0.232 (0.045)
3-Layer | 392 (23) | 399 (114) | 15.7 (2.93) | 4.17 (1.23) | 0.160 (0.034) | 0.275 (0.064)
Table 8: MLP depth sweep for the DeConv decoder.
MLP | ELBO | NLL | KL | Latents Used | MIG | FactorVAE
1-Layer | 347 (13) | 321 (14) | 25.8 (1.3) | 7.97 (0.72) | 0.052 (0.020) | 0.174 (0.043)
2-Layer | 352 (17) | 328 (18) | 23.7 (1.7) | 6.68 (0.78) | 0.051 (0.024) | 0.196 (0.024)
3-Layer | 365 (19) | 345 (21) | 19.7 (2.4) | 5.24 (0.73) | 0.144 (0.062) | 0.243 (0.043)