
Modeling Gestalt Visual Reasoning on the Raven's Progressive Matrices Intelligence Test Using Generative Image Inpainting Techniques

by   Tianyu Hua, et al.
Vanderbilt University

Psychologists recognize Raven's Progressive Matrices as a very effective test of general human intelligence. While many computational models have been developed by the AI community to investigate different forms of top-down, deliberative reasoning on the test, there has been less research on bottom-up perceptual processes, like Gestalt image completion, that are also critical in human test performance. In this work, we investigate how Gestalt visual reasoning on the Raven's test can be modeled using generative image inpainting techniques from computer vision. We demonstrate that a self-supervised inpainting model trained only on photorealistic images of objects achieves a score of 27/36 on the Colored Progressive Matrices, which corresponds to average performance for nine-year-old children. We also show that models trained on other datasets (faces, places, and textures) do not perform as well. Our results illustrate how learning visual regularities in real-world images can translate into successful reasoning about artificial test stimuli. On the flip side, our results also highlight the limitations of such transfer, which may explain why intelligence tests like the Raven's are often sensitive to people's individual sociocultural backgrounds.






Consider the matrix reasoning problem in Figure 1; the goal is to select the answer choice from the bottom that best fits in the blank portion on top. Such problems are found on many different human intelligence tests [24, 33], including on the Raven’s Progressive Matrices tests, which are considered to be the most effective single measure of general intelligence across all psychometric tests [27].

As you may have guessed, the solution to this problem is answer choice #2. While this problem may seem quite simple, what is interesting about it is that there are multiple ways to solve it. For example, one might take a top-down, deliberative approach by first deciding that the top two elements are reflected across the horizontal axis, and then reflecting the bottom element to predict an answer–often called an Analytic approach [20, 22]. Alternatively, one might just “see” the answer emerge in the empty space, in a more bottom-up, automatic fashion–often called a Gestalt or figural approach.

Figure 1: Example problem like those on the Raven’s Progressive Matrices tests [17].

While many computational models explore variations of the Analytic approach, less attention has been paid to the Gestalt approach, though both are critical in human intelligence. In human cognition, Gestalt principles refer to a diverse set of capabilities for detecting and predicting perceptual regularities such as symmetry, closure, similarity, etc. [32]. Here, we investigate how Gestalt reasoning on the Raven’s test can be modeled with generative image inpainting techniques from computer vision:


  • We describe a concrete framework for solving Raven’s problems through Gestalt visual reasoning, using a generic image inpainting model as a component.

  • We demonstrate that our framework, using an inpainting model trained on photorealistic object images from ImageNet, achieves a score of 27/36 on the Raven’s Colored Progressive Matrices test.

  • We show that test performance is sensitive to the inpainting model’s training data. Models trained on faces, places, and textures get scores of 11, 17, and 18, respectively, and we offer some potential reasons for these differences.

Background: Gestalt Reasoning

Figure 2: Images eliciting Gestalt “completion” phenomena.

In humans, Gestalt phenomena have to do with how we integrate low-level perceptual elements into coherent, higher-level wholes [32]. For example, the left side of Figure 2 contains only scattered line segments, but we inescapably see a circle and rectangle. The right side of Figure 2 contains one whole key and one broken key, but we see two whole keys with occlusion.

In psychology, studies of Gestalt phenomena have enumerated a list of principles (or laws, perceptual/reasoning processes, etc.) that cover the kinds of things that human perceptual systems do [34, 15]. Likewise, work in image processing and computer vision has attempted to define these principles mathematically or computationally [8].

In more recent models, Gestalt principles are seen as emergent properties that reflect, rather than determine, perceptions of structure in an agent’s visual environment. For example, early approaches to image inpainting—i.e., reconstructing a missing/degraded part of an image—used rule-like principles to determine the structure of missing content, while later, machine-learning-based approaches attempt to learn structural regularities from data and apply them to new images [26]. This seems reasonable as a model of Gestalt phenomena in human cognition; after years of experience with the world around us, we see Figure 2 (left) as partially occluded/degraded views of whole objects.

Background: Image Inpainting

Machine-learning-based inpainting techniques typically either borrow information from within the occluded image itself [4, 2, 30] or from a prior learned from other images [12, 35, 37]. The first type of approach often uses patch similarities to propagate low-level features, such as the texture of grass, from known background regions to unknown patches. Of course, such approaches suffer on images with low self-similarity or when the missing part involves semantic-level cognition, e.g., a part of a face.
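As a toy illustration of this patch-borrowing idea (a drastically simplified sketch, not PatchMatch [2] itself; `patch_fill` and its single-pixel "patches" are hypothetical stand-ins for illustration), one can fill each masked pixel by borrowing the known pixel whose value best matches the hole's local context:

```python
import numpy as np

def patch_fill(img, mask):
    """Fill masked pixels by borrowing from the most similar known pixel.
    Uses single-pixel 'patches' for brevity; real methods compare whole
    patch neighborhoods and propagate textures, not just values."""
    out = img.copy()
    known = np.argwhere(~mask)            # coordinates of known pixels
    for (i, j) in np.argwhere(mask):
        # context: mean of the hole's known 4-neighbors
        neigh = [out[a, b] for a, b in ((i - 1, j), (i + 1, j),
                                        (i, j - 1), (i, j + 1))
                 if 0 <= a < img.shape[0] and 0 <= b < img.shape[1]
                 and not mask[a, b]]
        ctx = np.mean(neigh) if neigh else 0.0
        # borrow the known pixel whose value best matches the context
        k = min(known, key=lambda p: abs(out[p[0], p[1]] - ctx))
        out[i, j] = out[k[0], k[1]]
    return out
```

On a self-similar texture this works well; on an image with low self-similarity (or a semantic hole, like part of a face), there is no good patch to borrow, which is exactly the failure mode noted above.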

The second approach aims to generalize regularities in visual content and structure across different images, and several impressive results have recently been achieved with the rise of deep-learning-based generative models. For example, Li and colleagues (2017) use an encoder-decoder neural network, regulated by an adversarial loss function, to recover partly occluded face images. More recently, Yu and colleagues [35] designed an architecture that not only can synthesize missing image parts but also explicitly utilizes surrounding image features as context to make the inpainting more precise. In general, most recent neural-network-based image inpainting algorithms represent some combination of variational autoencoders (VAE) and generative adversarial networks (GAN) and typically contain an encoder, a decoder, and an adversarial discriminator.

Generative Adversarial Networks (GAN)

Generative adversarial networks combine generative and discriminative models to learn very robust image priors [10]. In a typical formulation, the generator is a transposed convolutional neural network while the discriminator is a regular convolutional neural network. During training, the generator is fed random noise and outputs a generated image. The generated image is sent alongside a real image to the discriminator, which outputs a score to evaluate how real or fake the inputs are. The error between the output score and ground truth score is back-propagated to adjust the weights.

This training scheme forces the generator to produce images that will fool the discriminator into believing they are real images. In the end, training converges at an equilibrium where the generator cannot make the synthesized image more real, while the discriminator fails to tell whether an image is real or generated. Essentially, the training process of GANs forces the generated images to lie within the same distribution (in some latent space) as real images.
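The alternating scheme above can be sketched with a toy one-dimensional GAN (a purely illustrative sketch: a hypothetical linear generator and logistic discriminator, not the convolutional architecture described in the text). The discriminator is pushed toward labeling real samples 1 and fakes 0; the generator is then updated to push its fakes toward label 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data: samples from N(3, 0.5)
    return rng.normal(3.0, 0.5, size=n)

g = {"a": 1.0, "b": 0.0}   # generator: x = a*z + b
d = {"w": 0.0, "c": 0.0}   # discriminator: sigmoid(w*x + c)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(lr=0.05, n=64):
    z = rng.normal(size=n)
    fake = g["a"] * z + g["b"]
    real = real_batch(n)

    # Discriminator step: push D(real) -> 1, D(fake) -> 0
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(d["w"] * x + d["c"])
        grad = p - label               # d(BCE)/d(logit)
        d["w"] -= lr * np.mean(grad * x)
        d["c"] -= lr * np.mean(grad)

    # Generator step: push D(fake) -> 1, i.e., fool the discriminator
    fake = g["a"] * z + g["b"]
    p = sigmoid(d["w"] * fake + d["c"])
    grad = (p - 1.0) * d["w"]          # chain rule through D
    g["a"] -= lr * np.mean(grad * z)
    g["b"] -= lr * np.mean(grad)

for _ in range(2000):
    train_step()
# The generated distribution's mean (g["b"]) drifts toward the real mean.
```

At the equilibrium the text describes, the discriminator can no longer separate the two distributions, so its gradients vanish and the generated samples sit in (roughly) the real data distribution.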

Variational autoencoders (VAE)

Autoencoders are deep neural networks, with a narrow bottleneck layer in the middle, that can reconstruct high-dimensional data from the original inputs. The bottleneck captures a compressed latent encoding that can then be used for tasks other than reconstruction. Variational autoencoders use a similar encoder-decoder structure but also encourage continuous sampling within the bottleneck layer, so that the decoder, once trained, functions as a generator [16].
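The bottleneck and sampling machinery can be sketched in a few lines (an untrained toy model with hypothetical linear encoder/decoder weights, purely illustrative of the structure, not the paper's architecture): the encoder predicts a mean and log-variance, a latent code is drawn via the reparameterization trick, and the loss combines reconstruction error with a KL term that keeps the latent space continuous.

```python
import numpy as np

rng = np.random.default_rng(1)
D, H = 16, 4                                   # input dim, bottleneck dim

# Hypothetical (untrained) linear encoder/decoder weights
W_mu = rng.normal(size=(H, D)) * 0.1
W_logvar = rng.normal(size=(H, D)) * 0.1
W_dec = rng.normal(size=(D, H)) * 0.1

def vae_forward(x):
    mu, logvar = W_mu @ x, W_logvar @ x        # encoder -> q(z|x)
    eps = rng.normal(size=H)
    z = mu + np.exp(0.5 * logvar) * eps        # reparameterization trick
    x_hat = W_dec @ z                          # decoder reconstructs input
    recon = np.sum((x - x_hat) ** 2)           # reconstruction loss
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))  # KL(q || N(0,I))
    return x_hat, recon + kl

x = rng.normal(size=D)
x_hat, loss = vae_forward(x)
```

The KL term is what "encourages continuous sampling": it pulls every encoding toward a standard normal, so nearby latent codes decode to similar outputs and the decoder can later be driven by fresh samples from the prior.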



Figure 3: Architecture of VAE-GAN

While a GAN’s generated image outputs are often sharp and clear, a major disadvantage is that the training process can be unstable and prone to failure modes [10, 21]. Even if these training problems can be mitigated, e.g., [1], GANs still lack encoders that map real images to latent variables. Compared with GANs, VAE-generated images are often a bit blurrier, but the model structure in general is more mathematically elegant and more easily trainable. To get the best of both worlds, Larsen and colleagues (2015) proposed an architecture that attaches an adversarial loss to a variational autoencoder, as shown in Figure 3.
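The combination can be summarized as a single generator objective (a minimal sketch with scalar stand-ins for the batch losses; the name `vae_gan_loss` and the weighting `adv_weight` are illustrative assumptions, not the paper's values):

```python
import numpy as np

def vae_gan_loss(recon, kl, d_score_fake, adv_weight=0.1):
    """recon: VAE reconstruction error; kl: KL divergence of the encoder
    posterior; d_score_fake: discriminator's probability, in (0, 1), that
    the reconstruction is real. The adversarial term rewards fooling D."""
    adv = -np.log(d_score_fake)
    return recon + kl + adv_weight * adv
```

The adversarial term sharpens the otherwise blurry VAE reconstructions, while the VAE terms keep training stable and provide the encoder that a plain GAN lacks.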

Figure 4: Reasoning framework for solving Raven’s test problems using Gestalt image completion, using any pre-trained encoder-decoder-based image inpainting model. Elements $x_1$, $x_2$, and $x_3$ from the problem matrix form the initial input, combined into a single image $X$, along with a mask $m$ that indicates the missing portion. These are passed through the encoder $E_\phi$, and the resulting image features $z$ in latent variable space are passed into the decoder $G_\theta$. This creates a new complete matrix image $\hat{X}$; the portion corresponding to the masked location is the predicted answer to the problem. This predicted answer $\hat{a}$, along with all of the answer choices $a_1, \dots, a_n$, are again passed through the encoder to obtain feature representations in latent space, and the answer choice most similar to $\hat{a}$ is selected as the final solution.

Our Gestalt Reasoning Framework

Figure 5: Examples of inpainting produced by the same VAE-GAN model [35] trained on four different datasets. Left to right: ImageNet (objects), CelebA (faces), Places (scenes), and DTD (textures).

In this section, we present a general framework for modeling Gestalt visual reasoning on the Raven’s test or similar types of problems. Our framework is intended to be agnostic to any type of encoder-decoder-based inpainting model. For our experiments, we adopt a recent VAE-GAN inpainting model [35]; as we use the identical architecture and training configuration, we refer readers to the original paper for more details about the inpainting model itself.

Our framework makes use of a pre-trained encoder $E_\phi$ and corresponding decoder $G_\theta$ (where $\phi$ and $\theta$ indicate the encoder’s and decoder’s learned parameters, respectively). The partially visible image to be inpainted, in our case, is a Raven’s problem matrix with the fourth cell missing, accompanied by a mask $m$; this is passed as input into the encoder $E_\phi$. Then $E_\phi$ outputs an embedded feature representation $z$, which is sent as input to the generator $G_\theta$. Note that the learned feature representation $z$ could be of any form: a vector, matrix, tensor, or any other encoding, as long as it represents the latent features of the input images.

The generator then outputs a completed image, and we cut out the generated portion as the predicted answer. Finally, we choose the most similar candidate answer choice by computing distances between the feature representations of the prediction and of each answer choice, obtained by passing the images through the trained encoder again.

This process is illustrated in Figure 4. More concisely, let $x_1$, $x_2$, $x_3$ be the three elements of the original problem matrix, $m$ be the image mask, and $X$ be the input comprised of these four images. Then, the process of solving the problem to determine the chosen answer $a^*$ can be written as:

$$\hat{X} = G_\theta(E_\phi(X)), \qquad \hat{a} = \hat{X}_{[h/2:h,\; w/2:w]}, \qquad a^* = \operatorname{argmin}_{a \in \mathcal{A}} \big\lVert E_\phi(\hat{a}) - E_\phi(a) \big\rVert,$$

where $h$ and $w$ are the height and width of the reconstructed image $\hat{X}$, and $\mathcal{A}$ is the answer choice space.
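The whole pipeline can be sketched as follows (with trivial identity stand-ins for the trained encoder and decoder, so the sketch is runnable; a real run would plug in the VAE-GAN inpainting model [35] for `encode` and `decode`, which are hypothetical helper names):

```python
import numpy as np

def encode(img):
    # Stand-in for the trained encoder: flatten to a "latent" vector
    return np.asarray(img, dtype=float).reshape(-1)

def decode(z, h, w):
    # Stand-in for the trained decoder/generator: reshape back to an image
    return z.reshape(h, w)

def solve(matrix_with_mask, answer_choices, h=2, w=2):
    """Solve a Raven's problem: inpaint the masked matrix, crop the
    predicted answer, and pick the closest choice in latent space."""
    z = encode(matrix_with_mask)                 # encode masked matrix
    completed = decode(z, h, w)                  # inpaint (reconstruct)
    predicted = completed[h // 2:, w // 2:]      # crop the masked quadrant
    dists = [np.linalg.norm(encode(predicted) - encode(a))
             for a in answer_choices]
    return int(np.argmin(dists))                 # index of chosen answer
```

With the identity stand-ins, the "predicted answer" is just the bottom-right quadrant of the input, but the control flow (encode, decode, crop, compare in latent space) mirrors the framework exactly.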

Inpainting Models

For our experiments, we used the same image inpainting model [35] trained on four different datasets. The first model, which we call Model-Objects, we trained from scratch so that we could evaluate Raven’s test performance at multiple checkpoints during training. The latter three models, which we call Model-Faces, Model-Scenes, and Model-Textures, we obtained as pre-trained models [35]. Details about each dataset are given below.

Note: The reader may wonder why we did not train an inpainting model on Raven’s-like images, i.e., black and white illustrations of 2D shapes. Our rationale follows the spirit of human intelligence testing: people are not meant to practice taking Raven’s-like problems. If they do, the test is no longer a valid measure of their intelligence [11]. Here, our goal was to explore how “test-naive” Gestalt image completion processes would fare. (There are many more nuances to these ideas, of course, which we discuss further in Related Work.)

Model-Objects. The first model, Model-Objects, was trained on the ImageNet dataset [25]. We trained this model from scratch. We began with the full ImageNet dataset containing 14M images non-uniformly spanning 20,000 categories such as “windows,” “balloons,” and “giraffes.” The model converged prior to one full training epoch on the randomized dataset; we halted training around 300,000 iterations, with a batch size of 36 images per iteration. The best Raven’s performance was found at around 80,000 iterations, which means that the final model we used saw only about 3M images in total during training.

Model-Faces. Our second model, Model-Faces, was trained on the Large-scale CelebFaces Attributes (CelebA) dataset [18], which contains around 200,000 images of celebrity faces, covering around 10,000 individuals.

Model-Scenes. Our third model, Model-Scenes, was trained on the Places dataset [38], which contains around 10M images spanning 434 categories, grouped into three macro-categories: indoor, nature, and urban.

Model-Textures. Our fourth model, Model-Textures, was trained on the Describable Textures Dataset (DTD) [7], which contains 5640 images, divided into 47 categories, of textures taken from real objects, such as knitting patterns, spiderwebs, or an animal’s skin.



Results

  • Table of Raven’s scores for the four networks (ImageNet, CelebA/faces, Places, DTD): one row per network; one column per set (A, AB, B, C, D, E), each cell giving the number correct out of 12; plus one column for the CPM total (sets A, AB, B) and one for the SPM total (sets A, B, C, D, E). For the ImageNet model, pick the best checkpoint and explain the procedure; for the others, note that pre-trained versions are used.

  • ImageNet training graphs: CPM accuracy as a function of training iteration, alongside the loss curve.

  • Results from all four networks on the five example problems; if no differences appear, construct example problems that showcase differences.

  • Results of each network run on photographs of a face, a place, and an object.



Discussion

Whether neural networks can learn abstract reasoning, or whether they merely rely on superficial statistics, is a topic of recent debate…

Initially, the randomly initialized architecture can already correctly predict up to 10 of the 36 problems (8 on average), whereas random guessing would yield 1/6 (6 of 36); this should likely be attributed to the structural prior of the CNN architecture itself [30].

The ultimate way to use this test as a test of machine intelligence, most similar to the human situation, would be to invite the agent to sit down for the test and use verbal instructions to explain how the problems work, and so on.

Table 1: Computational models of various aspects of problem-solving on the Raven’s Progressive Matrices test or similar. (Columns: Reference, Inputs, Type, Approach, Evaluation; example input types include 3D objects on a turntable, 3D objects, and the Intel egocentric dataset.)

Related Work on the Raven’s Test

Over the decades, there have been many exciting efforts in AI to computationally model various aspects of problem solving for matrix reasoning and similar geometric analogy problems, beginning with Evans’ classic ANALOGY program [9]. In this section, we review some major themes that seem to have emerged across these efforts, situate our current work within this broader context, and point out important gaps that remain unfilled.

Note that we do not attempt to list the “test scores” achieved by various models for two reasons. First, these models have collectively explored so many problem variants, problem contents, pre-processing methods, model constraints, etc., that it is exceedingly difficult to make apples-to-apples comparisons among them.

Second, and more importantly, we feel that better scientific knowledge has come from the systematic, within-model experiments presented by many of these studies than from the absolute levels of performance they achieve. Raven’s is not now (and probably never will be) a task that is of practical utility for AI systems in the world to be solving well, and so treating it as a black-box benchmark is of limited value. However, the test continues to be enormously profitable as a research tool for generating insights into the organization of intelligence, both in humans and in artificial systems.

Knowledge-based versus data-driven. Early models took a knowledge-based approach, meaning that they contained explicit, structured representations of certain key elements of domain knowledge. For example, Carpenter and colleagues [6] built a system that matched relationships among problem elements according to one of five predefined rules. Knowledge-based models tend to focus on what an agent does with its knowledge during reasoning; where this knowledge might come from remains an open question.

On the flip side, a recently emerging crop of data-driven models extract domain knowledge from a training set containing example problems that are similar to the test problems the model will eventually solve, e.g., [14]. Data-driven models tend to focus on interactions between training data, learning architectures, and learning outcomes; how knowledge might be represented in a task-general manner and used flexibly during reasoning and decision-making remain open questions.

One of the first algorithmic descriptions to address the Raven’s test specifically appeared in 1974, when Hunt described two candidate strategies, one of which followed a deliberative, Analytic approach.

Previous computational models [6, 23, 17, 19, 29] span both of these approaches; notably, however, none of them have modeled human reasoning using Gestalt perceptual principles.

1. The IQ of Neural Networks [14] This work generated a Raven’s-like geometric shape dataset in which six images are treated as input to a neural net, two being the question cells and four being the answer choices. Of the four candidate answer choices, only one correct answer image completes the progression of the two question cells.

If we put the question cells and an answer choice in a row, we can observe progression in degree of rotation, size, reflection, number of objects, shade of color, or an addition relation between the three cells. A neural model trained with these data either predicts probabilities over the different answer cells or generates the third cell directly. The model, trained and tested on this dataset, achieved, as the authors put it, the top 5% of human performance; the score on the actual Raven’s matrices is not reported.

2. Measuring abstract reasoning in neural networks [3] This work examines the ability of different neural architectures, such as CNN-MLP, ResNet, LSTM, and WReN, to generalize on a Raven’s-like dataset. Experiments show that WReN (Wild Relation Network) has the strongest inductive bias toward relational reasoning tasks.

3. Learning to make analogies by contrasting abstract relational structure [13] This paper points out that a different training sequence can facilitate the learning of abstract relational structure. For example, by contrasting a row of cells showing progression in object quantity against a row of cells showing progression in object darkness, a dataset arranged in this manner greatly increases the model’s ability to generalize from trivial, specific aspects of the input images to a more general conceptual common ground shared by both rows.

4. Improving Generalization for Abstract Reasoning Tasks Using Disentangled Feature Representations [28] This paper demonstrates a two-stage training paradigm: first, learn a feature extractor that encodes a disentangled feature representation in an unsupervised manner; then, deploy a relational reasoning module, with the correct answer as the supervision signal, in the latent disentangled feature space. Compared with training the model end to end without disentanglement, the new paradigm exhibits a reasonable degree of superiority; the work demonstrates preliminary results for this two-stage paradigm.

5. Are Disentangled Representations Helpful for Abstract Visual Reasoning? [31]

In this paper, they use an RPM-like 3-by-3 visual reasoning matrix generated from the dSprites dataset to test extensively whether disentangled representations truly facilitate downstream abstract reasoning tasks, compared with training both the encoder and the relational reasoning module end to end using WReN (Wild Relation Network). The paper shows that, for modeling reasoning, the two-stage paradigm leads to quicker learning with fewer examples.

6. RAVEN: A dataset for relational and analogical visual reasoning [36] This work created Raven’s-like data with structural annotations for augmentation purposes, along with human performance for comparison. By utilizing the annotations for augmentation, the models tested in [3] all experience a boost in test accuracy.

All previous efforts assume that as long as a model achieves a perfect score on Raven’s or Raven’s-like tests, regardless of what the training inputs are, the model’s intelligence is thereby reflected. However, in the human case, it has been shown that the validity of the Raven’s test is highly dependent on the test taker’s unfamiliarity with the test style [11]; the test is no longer valid for anyone who has been trained on millions of questions close to Raven’s. Though these positive results do exhibit the great potential of neural networks for abstract reasoning, the fact that these models take tens of thousands of images to generalize reveals a great disparity when compared with the human mind.




In conclusion, …


  • [1] M. Arjovsky, S. Chintala, and L. Bottou (2017) Wasserstein gan. arXiv preprint arXiv:1701.07875. Cited by: VAE-GAN.
  • [2] C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman (2009) PatchMatch: a randomized correspondence algorithm for structural image editing. In ACM Transactions on Graphics (ToG), pp. 24. Cited by: Background: Image Inpainting.
  • [3] D. G. Barrett, F. Hill, A. Santoro, A. S. Morcos, and T. Lillicrap (2018) Measuring abstract reasoning in neural networks. arXiv preprint arXiv:1807.04225. Cited by: Related Work on the Raven’s Test, Related Work on the Raven’s Test.
  • [4] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester (2000) Image inpainting. In 27th annual conference on Computer graphics and interactive techniques, pp. 417–424. Cited by: Background: Image Inpainting.
  • [5] D. A. Bors and F. Vigneau (2003) The effect of practice on raven’s advanced progressive matrices. Learning and Individual Differences 13 (4), pp. 291–312. Cited by: Related Work on the Raven’s Test.
  • [6] P. A. Carpenter, M. A. Just, and P. Shell (1990) What one intelligence test measures: a theoretical account of the processing in the raven progressive matrices test.. Psychological review 97 (3), pp. 404. Cited by: Table 1, Related Work on the Raven’s Test.
  • [7] M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi (2014) Describing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3606–3613. Cited by: Inpainting Models.
  • [8] A. Desolneux, L. Moisan, and J. Morel (2007) From gestalt theory to image analysis: a probabilistic approach. Vol. 34, Springer Science & Business Media. Cited by: Background: Gestalt Reasoning.
  • [9] T. G. Evans (1968) A program for the solution of a class of geometric-analogy intelligence-test questions. Semantic Information Processing, pp. 271–353. Cited by: Related Work on the Raven’s Test.
  • [10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: Generative Adversarial Networks (GAN), VAE-GAN.
  • [11] T. R. Hayes, A. A. Petrov, and P. B. Sederberg (2015) Do we really become smarter when our fluid-intelligence test scores improve?. Intelligence 48, pp. 1–14. Cited by: Inpainting Models, Related Work on the Raven’s Test.
  • [12] J. Hays and A. A. Efros (2008) Scene completion using millions of photographs. Communications of the ACM 51 (10), pp. 87–94. Cited by: Background: Image Inpainting.
  • [13] F. Hill, A. Santoro, D. G. Barrett, A. S. Morcos, and T. Lillicrap (2019) Learning to make analogies by contrasting abstract relational structure. arXiv preprint arXiv:1902.00120. Cited by: Related Work on the Raven’s Test.
  • [14] D. Hoshen and M. Werman (2017) Iq of neural networks. arXiv preprint arXiv:1710.01692. Cited by: Related Work on the Raven’s Test, Related Work on the Raven’s Test.
  • [15] G. Kanizsa (1979) Organization in vision: essays on gestalt perception. Praeger Publishers. Cited by: Background: Gestalt Reasoning.
  • [16] D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: Variational autoencoders (VAE).
  • [17] M. Kunda, K. McGreggor, and A. K. Goel (2013) A computational model for solving problems from the raven’s progressive matrices intelligence test using iconic visual representations. Cognitive Systems Research 22, pp. 47–66. Cited by: Figure 1, Table 1, Related Work on the Raven’s Test.
  • [18] Z. Liu, P. Luo, X. Wang, and X. Tang (2015) Deep learning face attributes in the wild. In Proceedings of the IEEE international conference on computer vision, pp. 3730–3738. Cited by: Inpainting Models.
  • [19] A. Lovett and K. Forbus (2017) Modeling visual problem solving as analogical reasoning.. Psychological review 124 (1), pp. 60. Cited by: Table 1, Related Work on the Raven’s Test.
  • [20] R. Lynn, J. Allik, and P. Irwing (2004) Sex differences on three factors identified in raven’s standard progressive matrices. Intelligence 32 (4), pp. 411–424. Cited by: Introduction.
  • [21] X. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley (2016) Least squares generative adversarial networks. 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2813–2821. Cited by: VAE-GAN.
  • [22] V. Prabhakaran, J. A. Smith, J. E. Desmond, G. H. Glover, and J. D. Gabrieli (1997) Neural substrates of fluid reasoning: an fmri study of neocortical activation during performance of the raven’s progressive matrices test. Cognitive psychology 33 (1), pp. 43–63. Cited by: Introduction.
  • [23] D. Rasmussen and C. Eliasmith (2011) A neural model of rule generation in inductive reasoning. Topics in Cognitive Science 3 (1), pp. 140–153. Cited by: Related Work on the Raven’s Test.
  • [24] G. H. Roid and L. J. Miller (1997) Leiter international performance scale-revised (leiter-r). Wood Dale, IL: Stoelting. Cited by: Introduction.
  • [25] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. (2015) Imagenet large scale visual recognition challenge. Int. journal of computer vision 115 (3), pp. 211–252. Cited by: Inpainting Models.
  • [26] C. Schönlieb (2015) Partial differential equation methods for image inpainting. Cambridge University Press. Cited by: Background: Gestalt Reasoning.
  • [27] R. E. Snow, P. C. Kyllonen, and B. Marshalek (1984) The topography of ability and learning correlations. Advances in the psychology of human intelligence 2 (S 47), pp. 103. Cited by: Introduction.
  • [28] X. Steenbrugge, S. Leroux, T. Verbelen, and B. Dhoedt (2018) Improving generalization for abstract reasoning tasks using disentangled feature representations. arXiv preprint arXiv:1811.04784. Cited by: Related Work on the Raven’s Test.
  • [29] C. Strannegård, S. Cirillo, and V. Ström (2013) An anthropomorphic method for progressive matrix problems. Cognitive Systems Research 22, pp. 35–46. Cited by: Table 1, Related Work on the Raven’s Test.
  • [30] D. Ulyanov, A. Vedaldi, and V. Lempitsky (2018) Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9446–9454. Cited by: Background: Image Inpainting, Discussion.
  • [31] S. van Steenkiste, F. Locatello, J. Schmidhuber, and O. Bachem (2019) Are disentangled representations helpful for abstract visual reasoning?. arXiv preprint arXiv:1905.12506. Cited by: Related Work on the Raven’s Test.
  • [32] J. Wagemans, J. H. Elder, M. Kubovy, S. E. Palmer, M. A. Peterson, M. Singh, and R. von der Heydt (2012) A century of gestalt psychology in visual perception: i. perceptual grouping and figure–ground organization.. Psychological bulletin 138 (6), pp. 1172. Cited by: Background: Gestalt Reasoning, Introduction.
  • [33] D. Wechsler (2008) Wechsler adult intelligence scale–fourth edition (wais–iv). San Antonio, TX: NCS Pearson 22, pp. 498. Cited by: Introduction.
  • [34] M. Wertheimer (1923) Untersuchungen zur lehre von der gestalt. ii. Psychological Research 4 (1), pp. 301–350. Cited by: Background: Gestalt Reasoning.
  • [35] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang (2018) Generative image inpainting with contextual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5505–5514. Cited by: Background: Image Inpainting, Figure 5, Inpainting Models, Our Gestalt Reasoning Framework.
  • [36] C. Zhang, F. Gao, B. Jia, Y. Zhu, and S. Zhu (2019) Raven: a dataset for relational and analogical visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5317–5327. Cited by: Related Work on the Raven’s Test.
  • [37] C. Zheng, T. Cham, and J. Cai (2019) Pluralistic image completion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1438–1447. Cited by: Background: Image Inpainting.
  • [38] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba (2017) Places: a 10 million image database for scene recognition. IEEE transactions on pattern analysis and machine intelligence 40 (6), pp. 1452–1464. Cited by: Inpainting Models.