
Anysize GAN: A solution to the image-warping problem

by   Connah Kendrick, et al.

We propose a new type of Generative Adversarial Network (GAN) to resolve a common issue in deep learning. We develop a novel architecture that can be applied to existing latent-vector-based GAN structures, allowing them to generate images of any size on the fly. Existing GANs for image generation require uniform images of matching dimensions. However, publicly available datasets, such as ImageNet, contain thousands of different sizes. Resizing images causes deformations and changes the image data, whereas our network does not require this preprocessing step. We make significant changes to the standard data-loading techniques to enable images of any size to be loaded for training. We also modify the network in two ways: by adding multiple inputs and a novel dynamic resizing layer. Finally, we adjust the discriminator to work on multiple resolutions. Together, these changes allow datasets of multiple resolutions to be trained on without any resizing, if memory allows. We validate our results on the ISIC 2019 skin lesion dataset. We demonstrate that our method can successfully generate realistic images at different sizes, preserving spatial relationships while maintaining feature relationships. We will release the source code upon paper acceptance.



1 Introduction

Generative Adversarial Networks [6] are neural networks that work in tandem: a generator that produces fake data, and a discriminator that takes both real data and the generated fake data and attempts to discern real from fake. GANs function by having the networks compete against each other, with the discriminator pushing the generator to synthesise more realistic images. In many research areas this has drastically increased the performance of deep learning and paved the way for better results, such as in skin lesion diagnosis [21], image inpainting [16], and object classification [12].

The state-of-the-art GANs require standard-size input images and generate fixed-size output images. These constraints limit the capability of the networks. The resizing process changes the properties of the images and, in most cases, removes or edits critical information, for example when resizing a standard high-definition (16:9) image to a square (1:1) image, as shown in Figure 1. Furthermore, resizing causes significant distortion, where critical objects in the scene become warped, as shown in Figure 1. With the diversity of images in a large dataset such as ImageNet [18], the problem is pressing: in its raw format (as shown in Table 1), ImageNet has 773,565 images spanning 71,990 different image sizes. Table 1 highlights the size variation a single dataset can have: ImageNet has a mean of roughly 10 images per resolution but a standard deviation of 750, meaning many images are unique in their resolution, whereas other datasets, such as Morph, contain only two image resolutions. ImageNet also contains both wider and taller images, as shown in Figure 1; resizing these causes the lizard and dog to change appearance drastically, while square images would survive traditional resizing better. Restricting training to a single resolution would shrink the dataset significantly, making resizing necessary despite the changes it causes to the data. This directly affects the applications of these networks, such as image inpainting for restoration, where many old paintings have arbitrary sizes: to restore them using current inpainting GANs, we would first have to reshape and deform them. Similarly, in object recognition and medical imaging, the variety of capture devices produces images of varying sizes and quality.
If our solutions are to be genuinely integrated and effective, we must adapt to work with all the available data, in its native resolutions.

Dataset          Total Images   Resolutions   Mean       SD          Wider    Higher   Square
Morph [9]        55,134         2             27,567     12,762      0        2        0
CelebA [14]      202,599        62,091        3.2629     34.2297     14,574   47,035   482
ImageNet [18]    773,565        71,990        10.7455    750.0500    43,612   27,796   582
ISIC [20]        25,330         29            873.4828   2956.3837   2        26       1
Gwern Faces [4]  302,623        648           467.0108   3223.2380   362      271      15

Table 1: A breakdown of the datasets and image resolutions. Mean and SD give the mean and standard deviation of the number of images per resolution. Wider is the number of resolutions whose width exceeds their height; Higher the number whose height exceeds their width; Square the number with equal width and height.

We propose a novel architecture that adds layers to an existing generator so that it can produce images at different resolutions, based upon those in the training set. This is achieved through progressive resizing, while the underlying GAN structure remains almost identical, for example with ResNet [8] as a backbone. We demonstrate basic and progressive resizing against state-of-the-art GANs, which can only produce images of a single resolution. The novelty lies in passing the generator additional information alongside the latent vector to determine the image resolution on the fly, preserving aspect ratios and keeping full images in memory, together with adjustments to the image-loading sequence to support the jagged nature of differently sized images. We also analyse the resolution diversity of current image datasets to highlight why this is required.

Figure 1: An Illustration of the standard normalisation procedure which removes, duplicates and changes spatial data within an image.

2 Related work

This section describes existing GANs and provides insight into how the state-of-the-art GANs are structured. Some GANs can use differently sized images because they are fully convolutional: an image is input into the GAN, modified by the generator, and fed into the discriminator (which does not support a dynamic range of image sizes). This technique is usually used in image inpainting [16]. However, using differently sized images remains unexplored, and resizing is still a crucial step in GANs. To work with images of different sizes, major changes to the workflow must be made; we present our solution in Section 3. In this paper, we focus on latent-vector-based GANs.

Existing GANs use latent vectors, which are 1D vectors of random values. The generator takes the vector and usually feeds it into a Multi-Layered Perceptron (MLP), which expands the input vector. After the MLP, the layer is reshaped so that convolutional processing can occur and images can be produced. The use of dense layers and a fixed convolutional structure means these types of GANs are limited to generating images of one size.
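The fixed-size limitation can be seen in a minimal NumPy sketch of this latent-vector pipeline (the latent dimension, feature-map size, and channel count are illustrative, not taken from any particular GAN):

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 100

# The Dense (MLP) expansion is what ties the generator to one resolution:
# the weight matrix's output width fixes the flattened feature-map size,
# and the reshape that follows fixes the starting spatial dimensions.
W = rng.normal(size=(latent_dim, 8 * 8 * 64))   # illustrative sizes
z = rng.normal(size=(1, latent_dim))            # 1D latent vector
features = np.maximum(z @ W, 0.0)               # Dense layer + ReLU
feature_map = features.reshape(1, 8, 8, 64)     # fixed 8x8 spatial start
print(feature_map.shape)  # (1, 8, 8, 64)
```

Every subsequent convolution or upsampling step scales from this fixed 8x8 start, so the output size cannot vary at inference time.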

Early GAN works focused in abundance on small-scale images, particularly the MNIST [13] and Fashion-MNIST [22] datasets. Later works concentrate on a vast array of tasks and thus use different datasets. For this work, we focus on GAN architectures that use latent vectors.

For standard convolutional networks, the issue of feeding variably sized images into an MLP has been resolved. MLPs require a fixed-size input, and in previous works standard practice was to crop and/or warp images, which, as shown above, deforms the images, produces artefacts, and limits network performance. He et al. [7] created a method called Spatial Pyramid Pooling (SPP), in which multi-scale pooling reduces images of any size to a consistently sized vector. SPP splits input images into bins whose count is unaffected by the input size, and pooling then collects features from each bin. The results of each bin are concatenated to produce a single input vector of fixed length, regardless of input image shape. SPP increased classification accuracy by maintaining image aspect ratio. Another such technique is Global Average Pooling (GAP) [5], which averages the output of each convolutional filter. Because the number of filters remains the same, GAP always produces a vector of the same size.
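A minimal NumPy sketch of the SPP idea (the pyramid levels and the choice of max-pooling are illustrative, not the exact configuration of He et al.):

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Split the H x W map into n x n bins per pyramid level, max-pool
    each bin, and concatenate. The output length is
    C * sum(n*n for n in levels), regardless of the input H and W."""
    H, W, C = feature_map.shape
    pooled = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                bin_ = feature_map[i * H // n:(i + 1) * H // n,
                                   j * W // n:(j + 1) * W // n]
                pooled.append(bin_.max(axis=(0, 1)))
    return np.concatenate(pooled)

rng = np.random.default_rng(0)
v1 = spatial_pyramid_pool(rng.normal(size=(60, 90, 8)))
v2 = spatial_pyramid_pool(rng.normal(size=(128, 128, 8)))
print(v1.shape, v2.shape)  # both (168,) = 8 * (1 + 4 + 16)
```

Two differently shaped feature maps produce vectors of identical length, which is what lets a fixed MLP sit on top.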

The DCGAN [17] was a GAN designed to experiment with deeper neural networks in adversarial learning, with a focus on producing larger images. The work showed that GANs are capable of unsupervised learning on a wide range of images, such as faces, bedrooms, and multi-class datasets such as CIFAR-10.

A significant breakthrough with GANs was the Progressive Growing of GANs by Karras et al. [10]. They created a network with a unique stackable nature: in its initial form it generates a small image, and the GAN can then “stack” additional blocks that up-sample the image. By stacking blocks, the image doubles in size each time, allowing a choice of image sizes. This technique produces high-quality images with some control over the size, but once trained the network still only produces images of a single size. Furthermore, the network up-samples uniformly, so in general the image stays square. Progressive Growing of GANs is one of the most potent GANs for latent-vector generation of facial images and is widely considered state-of-the-art. StyleGAN [11] follows a similar procedure but adds a style input and some noise at each upsampling layer; with this style information and noise, StyleGAN produces high-quality, realistic facial images.

3 Methodology

Figure 2: An illustration of the Anysize GAN network structure. This illustration serves as an example of the flow; note that our trained network uses 5 resize layers and ResNet blocks. The discriminator highlights the use of GAP.

To resolve the issue of using a GAN to generate variable sized images, we approach the problem, by changing a standard GAN in four ways:

Firstly, we adjust the discriminator of the GAN. This is a crucial stage, as the discriminator uses a fully connected layer to classify whether an image is real or fake. Typically, GANs flatten the convolutional output into a single vector; however, because we use images of varying size, the number of connections is not fixed, so an MLP cannot be used directly. To resolve this, we implement Global Average Pooling (GAP) [5], which, similar to SPP, reduces a variably sized output to a fixed-size layer. This modification allows a single discriminator to adapt to all image resolutions, with the MLP built upon the number of output channels from the last convolution.
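As a sketch of this modification (the single weight vector stands in for the discriminator's final MLP, and the channel count is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discriminator head: GAP collapses the last convolution's
# variably sized output (H, W, C) to a length-C vector, so one fixed
# dense layer (here a plain weight vector) can score any resolution.
C = 64
w, b = rng.normal(size=C), 0.0          # stand-in for the final MLP

def discriminator_score(conv_output):
    pooled = conv_output.mean(axis=(0, 1))   # Global Average Pooling
    return float(pooled @ w + b)             # real/fake logit

score_big = discriminator_score(rng.normal(size=(128, 128, C)))
score_small = discriminator_score(rng.normal(size=(33, 71, C)))
print(score_big, score_small)  # same head handles both resolutions
```

The same weights score a 128x128 map and a 33x71 map, which is exactly the property the discriminator needs here.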

Secondly, we must adjust the generator to generate images of differing resolutions. To do this, we convert the generator into a multiple-input network, as shown in Figure 2: the first input is the latent vector of random values, and the second is an array of two values representing the width and height of the desired image. This allows external control over the GAN to dictate the size of the generated images, and is one of the significant changes of Anysize GAN compared to traditional GANs.


Thirdly, we edit the existing convolutional structure of a (ResNet-based) GAN to remove all size-altering operations, such as upsampling, pooling, and strided convolutions. In addition, we pad convolutions so that the input shape equals the output shape. We then implement bilinear interpolation (Equation 1) image resizing within the neural network, using the additional inputs to guide the resizing. The resizing is performed progressively throughout the network, with a resize layer between ResNet blocks; we experimented with a single resize but found it less effective. Similarly, using nearest-neighbour interpolation [15] (Equation 2) for resizing performed worse. This technique is not entirely “new”: as the TensorFlow documentation shows, traditional upsampling layers can use both nearest-neighbour and bilinear interpolation to resize convolutional blocks. The novelty of our layer is that, unlike upsampling, it takes the exact image size we want rather than an integer factor by which to increase the size, and it naturally allows downsampling as well. To ensure the resizing is progressive, we divide the desired size by the number of resizing layers remaining: with 5 resize layers, the fifth (closest to the start of the generator) divides by 5, and the first (closest to the end of the generator) divides by 1, giving the desired size.
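The progressive size schedule described above can be sketched as a small helper; the paper does not state the exact rounding rule, so standard rounding is assumed here, and the function name is our own:

```python
def progressive_sizes(target_h, target_w, n_resize_layers=5):
    """Intermediate (height, width) targets for each resize layer.

    The layer furthest from the output divides the desired size by
    n_resize_layers; the final layer divides by 1, giving the desired
    size. Sizes are rounded, since resizing needs whole pixels.
    """
    sizes = []
    for layers_left in range(n_resize_layers, 0, -1):
        sizes.append((max(1, round(target_h / layers_left)),
                      max(1, round(target_w / layers_left))))
    return sizes

print(progressive_sizes(128, 85))
# a gradual ramp ending exactly at the requested (128, 85)
```

Each resize layer in the generator would then interpolate its feature map to the corresponding entry of this schedule.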

Finally, we must adjust the image loading and training loop. Traditionally, images are loaded into a Numpy matrix and then converted into tensors. This is only possible because all the images are uniform and so fit a standard matrix, which both Numpy and tensors can work with. The issue becomes apparent with differently sized images, as these matrices cannot be jagged: a 1,128,128,3 matrix can be stacked with another to make a 2,128,128,3 matrix, but a 1,128,128,3 matrix cannot be stacked with a 1,64,64,3 matrix, as the sizes no longer match. This means we must group images by resolution, training each batch on identically sized images. To do this, we build two lists:

  • A list of all available image resolutions

  • For each resolution, a list of the images at that resolution

These lists allow us to track which sizes are available and which images remain to be trained on before the epoch is completed. This required adjusting the training loop to cycle through resolutions and images. As a pre-processing step, any resolution without enough images to fill a single batch was removed from training. During training, to avoid network bias, we take two pre-emptive measures:

  • We force the generator to generate images of the same size as the real images fed to the discriminator for that batch, to ensure fairness.

  • We cycle the resolutions after every batch, where available, so the network does not see all the images of one resolution before the next; for example, the first batch could be 128*128 and the next 33*71. This helps the network generalise better than training on all of one size and then another.
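The two lists and the batch-cycling rule above can be sketched as follows (the function name and batching details are our own illustration, not the paper's released code):

```python
import random
from collections import defaultdict

def make_resolution_batches(image_shapes, batch_size, seed=0):
    """Group image indices by resolution and yield same-size batches,
    cycling between resolutions so the network never sees one size for
    a long run. Resolutions with fewer than batch_size images are
    dropped, mirroring the pre-processing step above."""
    rng = random.Random(seed)
    buckets = defaultdict(list)              # (h, w) -> image indices
    for idx, shape in enumerate(image_shapes):
        buckets[shape].append(idx)
    queues = {res: imgs for res, imgs in buckets.items()
              if len(imgs) >= batch_size}
    for imgs in queues.values():
        rng.shuffle(imgs)
    while queues:
        for res in list(queues):             # cycle resolutions per batch
            batch, queues[res] = queues[res][:batch_size], queues[res][batch_size:]
            yield res, batch
            if len(queues[res]) < batch_size:
                del queues[res]              # not enough left for a full batch

shapes = [(128, 128)] * 5 + [(33, 71)] * 4 + [(96, 128)] * 1
for res, batch in make_resolution_batches(shapes, batch_size=2):
    print(res, batch)
```

With these toy shapes, batches alternate between the two viable resolutions, and the lone 96x128 image is dropped because it cannot fill a batch.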

As shown in Table 1, there are many image resolutions to work with, and some images are too large even for a batch size of one. However, some of these resolutions share an aspect ratio, so we employ a ratio-maintaining resizing system: we work out how much to reduce the largest side of the image down to a maximum value, then reduce the other side by the equivalent factor, as shown in Equation 3.

NewSmall = round(Small × MaxValue / Large)   (3)

NewLarge = MaxValue   (4)

where:

  • Large is the length of the biggest side of the image, width or height.

  • Small is the length of the smallest side of the image, width or height.

  • MaxValue is the maximum length of one side of the image. We used 128, due to memory limitations.

  • NewSmall is the new length of the smallest side; because resizing requires whole numbers, we round to an integer.

  • NewLarge is the new length of the largest side of the image.

By using Equations 3 and 4, we get manageably sized images that maintain the aspect ratio and, thus, some spatial information. If, in Equation 3, Large is smaller than MaxValue, we do not resize the smaller side of the image.
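Equations 3 and 4 can be sketched as a small helper (the function name is ours, and rounding to the nearest integer is assumed, per the definition of NewSmall above):

```python
def aspect_preserving_size(width, height, max_value=128):
    """New (width, height) per Equations 3 and 4: scale the largest
    side down to max_value and shrink the other side by the same
    factor, rounded to a whole pixel. Images whose largest side is
    already within max_value are left unchanged."""
    large, small = max(width, height), min(width, height)
    if large <= max_value:
        return width, height                 # no resize needed
    new_small = round(small * max_value / large)   # Equation 3
    new_large = max_value                          # Equation 4
    if width >= height:
        return new_large, new_small
    return new_small, new_large

print(aspect_preserving_size(1024, 768))  # (128, 96)
print(aspect_preserving_size(100, 60))    # unchanged: (100, 60)
```

A 1024x768 image keeps its 4:3 ratio at 128x96, so the spatial relationships within the image survive the size reduction.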

By combining the four steps into a single cohesive network, a latent-vector-based GAN loses the limitation of a fixed generation size. We demonstrate the effectiveness on the ISIC 2019 dataset [20], a skin lesion dataset containing images of eight types of skin lesion. The key reason for demonstrating on this dataset is a limitation common to medical imaging: a diversity of equipment means the collected images are of different sizes.

4 Experimental Setup

The networks were trained using Tensorflow [1] with Keras [3] as the foreground API. Due to memory limitations, we employed image resizing but preserved the image aspect ratio: we reduce the largest side of the image down to 128 pixels and the other side by an identical factor, while images whose largest side is already at most 128 pixels remain unchanged. A side effect is that many images come to share identical aspect ratios, even though rounding means the original aspect ratio is not maintained exactly. For training the networks, we used a machine with an RTX 2080 Ti (11GB) GPU, 128GB RAM, and an Intel i7-7820x CPU on Windows 10. We used Python 3.6 with Tensorflow 1.13.1 and Keras 2.2.4 to design and run the models. We will make the source code available once the paper has been accepted.

5 Results

We demonstrate the results of Anysize GAN on the ISIC 2019 dataset [20], showing that a single network can generate realistic images at different sizes. To validate the images generated by Anysize GAN, we compare against the DCGAN [17], as this network has been validated on the ISIC dataset [2]. To ensure a fair comparison, we train the DCGAN with 128*128 images (the maximum resolution of our Anysize GAN), while still training our network with different resolutions up to 128*128. To compare the results qualitatively and quantitatively, we dictate that Anysize GAN generate 128*128 images.

Anysize DCGAN
Table 2: Our Anysize GAN (left) vs the DCGAN (right), both trained for 180 epochs.

As shown in Table 2, both networks produce visually similar results. On closer inspection, however, the DCGAN output has a squared, “patchy” appearance, whereas Anysize GAN creates a smoother surface around the edges of the camera lens and reduces the patch effect in the generated image. These improvements come from the spatial awareness gained by the Anysize network, coupled with the network no longer working with images deformed by resizing. The Inception Score (IS) [19] is used to measure performance quantitatively. As shown in Table 3, the IS of the images generated by Anysize GAN is closer to that of the original ISIC dataset, implying that the images generated by Anysize GAN more closely resemble the images in ISIC.

Images          IS (mean ± SD)
ISIC original   4.3768 ± 0.2835
Anysize GAN     3.6063 ± 0.07010
DCGAN           2.6734 ± 0.04267

Table 3: Results of the Anysize GAN and DCGAN on 10,000 generated images, with the original ISIC 2019 dataset for reference.

To demonstrate how the network has achieved our goals, we evaluate it in two scenarios. Firstly, as shown in Table 4, the network generates images from both a random latent vector and a random size drawn from the sizes available in the training set. Table 4 shows that the network can successfully generate diverse images of different sizes on command. Secondly, we want to show that the size does not affect the features of the images, as this is key for conditional-style GANs and for controlling generation. To do this, we generate a latent vector and feed it to the network while asking for different sizes of the same image. As shown in Table 5, our network maintains features while still creating realistic images. This offers the potential for natural augmentation of key generated images, as the image is viewed at multiple sizes, in areas such as character and facial generation.

Table 4: Images from the Anysize network, with random latent vectors. These images demonstrate that our system can generate not only realistic and diverse images, but images at different sizes as well.
Size     Training samples   Example 1   Example 2   Example 3
84*128 153
85*128 1631
89*128 24
95*128 85
96*128 10,899
111*128 70
128*128 12,414
Table 5: Images from the network, with constant latent vector, but with different size input

6 Conclusion

In this paper, we have highlighted the current restrictions of latent-vector GANs as follows:

  • Loss of spatial information due to resizing

  • Lack of diversity due to fixed-size images

  • Lack of general image understanding due to the warping of images

  • Removal of natural image qualities from training

To resolve these, we have made drastic changes to existing GAN structures by:

  • Implementing a dynamic resizing layer

  • Adding multiple input layers for size dictation

  • Using GAP to allow the discriminator to process images of different sizes

  • Overhauling the standard image loading sequence

We have shown results on ISIC, demonstrating that the GAN can generate realistic images of different sizes. This is key for medical fields, where accuracy and data preservation are essential for detecting the most subtle problems, and where devices vary significantly in resolution and quality between hospitals. Anysize GAN is capable of handling such data with its required aspect ratios, and provides realistic data that could benefit all such systems equally because of its ability to match the style of the data. This methodology shows that the spatial information of image data can be preserved within a network, allowing datasets with large image diversity to be processed naturally. However, this work is currently a proof of concept demonstrating feasibility; improving the quality of the generated images is still possible and opens up new avenues of research.

7 Future works

Further improvements to Anysize GAN are required. New CNN designs, such as capsule networks, which have a better understanding of spatial data, could drastically improve image generation. Furthermore, our training regime cycles through the image sizes, training one batch of one size and then a batch of another; whether this is the best approach should be examined in a full qualitative and quantitative study.

The most significant improvement would be to adjust the network structure to resemble other GANs that provide more stable and realistic results; we have been limited in both memory and training time, as this is a proof of concept.

Other avenues of research, such as StyleGAN [11], require high-end GPU machines to process the data in a realistic time frame. This could yield more realistic images in the long term, but demands significant processing resources.


  • [1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng (2016) TensorFlow: A System for Large-Scale Machine Learning. OSDI. External Links: 1605.08695, ISBN 9781931971331 Cited by: §4.
  • [2] C. Baur, S. Albarqouni, and N. Navab (2018) Generating highly realistic images of skin lesions with GANs. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 11041 LNCS, pp. 260–267. External Links: Document, 1809.01410, ISBN 9783030012007, ISSN 16113349 Cited by: §5.
  • [3] F. Chollet (2016) Keras. GitHub, Mountain View, California. External Links: Link Cited by: §4.
  • [4] The Danbooru community, Anonymous, G. Branwen, and A. Gokaslan. Danbooru2017: A large-scale crowdsourced and tagged anime illustration dataset. Cited by: Table 1.
  • [5] Y. Cui, F. Zhou, J. Wang, X. Liu, Y. Lin, and S. Belongie (2017) Kernel pooling for convolutional neural networks. Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, pp. 3049–3058. External Links: Document, ISBN 9781538604571 Cited by: §2, §3.
  • [6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative Adversarial Nets. Advances in Neural Information Processing Systems 27, pp. 2672–2680. External Links: Document, arXiv:1406.2661v1, ISBN 1406.2661, ISSN 10495258, Link Cited by: §1.
  • [7] K. He, X. Zhang, S. Ren, and J. Sun (2014) Spatial pyramid pooling in deep convolutional networks for visual recognition. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 8691 LNCS (PART 3), pp. 346–361. External Links: 1406.4729, ISBN 9783319105772, ISSN 16113349 Cited by: §2.
  • [8] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2016-Decem, pp. 770–778. External Links: 1512.03385, ISBN 9781467388504, ISSN 10636919 Cited by: §1.
  • [9] R. J. Karl and T. Tamirat (2006) MORPH: A Longitudinal Image Database of Normal Adult Age-Progression. IEEE 7th International Conference on Automatic Face and Gesture Recognition, Southampton, UK, pp. 341–345. External Links: ISBN 0769525032 Cited by: Table 1.
  • [10] T. Karras, T. Aila, S. Laine, and J. Lehtinen (2017) Progressive Growing of GANs for Improved Quality, Stability, and Variation. pp. 1–26. External Links: 1710.10196, Link Cited by: §2.
  • [11] T. Karras and T. Aila Analyzing and Improving the Image Quality of StyleGAN. External Links: arXiv:1912.04958v1 Cited by: §2, §7.
  • [12] J. D. Katsilometes (2005) How good is my GAN?. Improving and Optimizing Operations: Things That Actually Work - Plant Operators’ Forum 2004 (5302), pp. 27–37. External Links: ISBN 0873352351 Cited by: §1.
  • [13] Y. LeCun and C. Cortes (2010) {MNIST} handwritten digit database. Note: External Links: Link Cited by: §2.
  • [14] Z. Liu, P. Luo, X. Wang, and X. Tang (2015) Deep learning face attributes in the wild. Proceedings of the IEEE International Conference on Computer Vision 2015 Inter, pp. 3730–3738. External Links: Document, 1411.7766, ISBN 9781467383912, ISSN 15505499 Cited by: Table 1.
  • [15] R. Olivier and C. Hanqiang (2012) Nearest Neighbor Value Interpolation. International Journal of Advanced Computer Science and Applications 3 (4), pp. 1–6. External Links: Document, ISSN 2158107X Cited by: §3.
  • [16] D. Pathak, J. Donahue, and A. A. Efros (2018) Context Encoders: Feature Learning by Inpainting. pp. 1–9. Cited by: §1, §2.
  • [17] A. Radford, L. Metz, and S. Chintala (2016) Unsupervised representation learning with deep convolutional generative adversarial networks. 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings, pp. 1–16. External Links: 1511.06434 Cited by: §2, §5.
  • [18] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei (2015) ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision 115 (3), pp. 211–252. External Links: Document, 1409.0575, ISSN 15731405, Link Cited by: Table 1, §1.
  • [19] T. Salimans, I. Goodfellow, V. Cheung, A. Radford, and X. Chen Improved Techniques for Training GANs. pp. 1–10. External Links: arXiv:1606.03498v1 Cited by: §5.
  • [20] P. Tschandl, C. Rosendahl, and H. Kittler (2018) Data descriptor: The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific Data 5, pp. 1–9. External Links: Document, 1803.10417, ISSN 20524463 Cited by: Table 1, §3, §5.
  • [21] J. Wu, E. Z. Chen, R. Rong, X. Li, D. Xu, and H. Jiang (2019) Skin Lesion Segmentation with C-UNet. 72, pp. 2785–2788. External Links: Document Cited by: §1.
  • [22] H. Xiao, K. Rasul, and R. Vollgraf (2017) Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. pp. 1–6. External Links: 1708.07747, Link Cited by: §2.