RSVQA: Visual Question Answering for Remote Sensing Data

This paper introduces the task of visual question answering for remote sensing data (RSVQA). Remote sensing images contain a wealth of information which can be useful for a wide range of tasks including land cover classification, object counting or detection. However, most of the available methodologies are task-specific, thus inhibiting generic and easy access to the information contained in remote sensing data. As a consequence, accurate remote sensing product generation still requires expert knowledge. With RSVQA, we propose a system to extract information from remote sensing data that is accessible to every user: we use questions formulated in natural language and use them to interact with the images. With the system, images can be queried to obtain high level information specific to the image content or relational dependencies between objects visible in the images. Using an automatic method introduced in this article, we built two datasets (using low and high resolution data) of image/question/answer triplets. The information required to build the questions and answers is queried from OpenStreetMap (OSM). The datasets can be used to train (when using supervised methods) and evaluate models to solve the RSVQA task. We report the results obtained by applying a model based on Convolutional Neural Networks (CNNs) for the visual part and on a Recurrent Neural Network (RNN) for the natural language part to this task. The model is trained on the two datasets, yielding promising results in both cases.







I Introduction

Remote sensing data is widely used as an indirect source of information. From land cover/land use to crowd estimation, environmental or urban area monitoring, remote sensing images are used in a wide range of tasks of high societal relevance. For instance, remote sensing data can be used as a source of information for 6 of the 17 sustainable development goals as defined by the United Nations [1]. Due to the critical nature of the problems that can be addressed using remote sensing data, significant effort has been made to increase its availability in the last decade. For instance, Sentinel-2 satellites provide multispectral data with a relatively short revisiting time, in open access. However, while substantial effort has been dedicated to improving the means of direct information extraction from Sentinel-2 data in the framework of a given task (e.g. classification [11, 22]), the ability to use remote sensing data as a direct source of information is currently limited to experts within the remote sensing and computer vision communities. This constraint, imposed by the technical nature of the task, reduces both the scale and variety of the problems that could be addressed with such information, as well as the number of potential end-users. This is particularly true when targeting specific applications (detecting particular objects, e.g. thatched roofs or buildings in a developing country [30]), which today would call for important research efforts. The targeted tasks are often multiple, and changing the scope of a project calls for strong expert knowledge, limiting the information which can be extracted from remote sensing data. To address these constraints, we introduce the problem of visual question answering (VQA) for remote sensing data.

Fig. 1: Example of tasks achievable by a visual question answering model for remote sensing data.

VQA is a new task in computer vision, introduced in its current form by [3]. The objective of VQA is to answer a free-form and open-ended question about a given image. As the questions can be unconstrained, a VQA model applied to remote sensing data could serve as a generic solution to classical problems involving remote sensing data (e.g. "Is there a thatched roof in this image?" for thatched roof detection), but also to very specific tasks involving relations between objects of different nature (e.g. "Is there a thatched roof on the right of the river?"). Examples of potential questions are shown in Figure 1.

To the best of our knowledge, this is the first time (after the first exploration in [23]) that VQA has been applied to extract information from remote sensing data. It builds on the task of generating descriptions of images, combining image and natural language processing to provide the user with easily accessible, high-level semantic information. These descriptions are then used for image retrieval and intelligence generation [26]. As seen in this introduction, VQA systems rely on the recent advances in deep learning.

Deep learning based methods, thanks to their ability to extract high-level features, have been successfully developed for remote sensing data, as reviewed in [40]. Nowadays, this family of methods is used to tackle a variety of tasks. For scene classification, an early work evaluated the possibility to adapt networks pre-trained on large natural image databases (such as ImageNet) to classify hyperspectral remote sensing images. More recently, [33] used an intermediate high level representation based on recurrent attention maps to classify images. Object detection is also often approached using deep learning methods. To this effect, [35] introduced an object detection dataset and evaluated classical deep learning approaches. Methods taking into account the specificity of remote sensing data have been developed, such as [21], which proposed to modify the classical approach by generating rotatable region proposals, which are particularly relevant for top-view imagery. Deep learning methods have also been developed for semantic segmentation. In [31], the authors evaluated different strategies for segmenting remote sensing data. More recently, a contest organized on the building segmentation dataset created by [25] has motivated the development of a number of new methods to improve results on this task [16]. Similarly, [7] introduced a contest including three tasks: road extraction, building detection and land cover classification. The best results for each challenge were obtained using deep neural networks: [39], [13], [28].

Natural language processing has also been used in remote sensing. For instance, [36] used a convolutional neural network (CNN) to generate classification probabilities for a given image, and used a recurrent neural network (RNN) to generate its description. In a similar fashion, [26] used a CNN to obtain a multi semantic level representation of an image (object, land class, landscape) and generated a description using a simple static model. More recently, [37] used an encoder/decoder architecture where a CNN encodes the image and an RNN decodes it to a textual representation, while [32] projected the textual representation and the image to a common space. While these works are use cases of natural language processing, they do not enable interactions with the user as we propose with VQA.

A VQA model is generally made of four distinct components: 1) a visual feature extractor, 2) a language feature extractor, 3) a fusion step between the two modalities and 4) a prediction component. Since VQA is a relatively new task, an important number of methodological developments have been published in both the computer vision and natural language processing communities during the past 5 years, reviewed in [34]. VQA models are able to benefit from advances in these two communities for the feature extraction components. However, the multi-modal fusion has been less explored and, therefore, an important amount of work has been dedicated to this step. The first VQA models relied on a non-spatial fusion method, a point-wise multiplication between the visual and language feature vectors [3]. While straightforward, this method does not allow every component from both feature vectors to interact with each other. This interaction would ideally be achieved by multiplying the first feature vector by the transpose of the other, but this operation would be computationally intractable in practice. Instead, [9] proposed a fusion method which first selects relevant visual features based on the textual feature (attention step) and then combines them with the textual feature. In [4], the authors used a Tucker decomposition to achieve a similar purpose in one step. While these attention mechanisms are interesting for finding visual elements aligned with the words within the question, they require the image to be divided into a regular grid for the computation of the attention, which is not suitable for objects of varying size. A solution is presented in [2], which learns an object detector to select relevant parts of the image. In this research, we use a non-spatial fusion step to keep the model relatively simple. Most traditional VQA works are designed for a specific dataset, either composed of natural images (with questions covering an unconstrained range of topics) or synthetic images. While interesting for the methodological developments that they have facilitated, these datasets limit the potential applications of such systems to other problems. Indeed, it has been shown in [27] that VQA models trained on a specific dataset do not generalize well to other datasets. This generalization gap raises questions concerning the applicability of such models to specific tasks.

A notable use-case of VQA is helping visually impaired people through natural language interactions [5]. Images acquired by visually impaired people represent an important domain shift, and as such a challenge for the applicability of VQA models. In [12], the authors confirm that networks trained on generic datasets do not generalize to their specific one. However, they manage to obtain much better results by fine-tuning or training models from scratch on their task-specific dataset.

In this study, we propose a new application for VQA, specifically for the interaction with remote sensing images. To this effect, we propose the first remote sensing-oriented VQA datasets, and evaluate the applicability of this task on remote sensing images. We propose a method to automatically generate remote sensing-oriented VQA datasets from already available human annotations in section II and generate two datasets. We then use this newly-generated data to train our proposed RSVQA model with a non-spatial fusion step described in section III. Finally, the results are evaluated and discussed in section IV.

Our contributions are the following:

  • a method to generate remote sensing-oriented VQA datasets;

  • two datasets;

  • the proposed RSVQA model.

This work extends the preliminary study of [23] by considering and disclosing a second, larger dataset consisting of very high resolution images. This second dataset helps test the spatial generalization capability of VQA, and we provide an extensive discussion highlighting the remaining challenges. The method to generate the dataset, the RSVQA model and the two datasets are available on

II Datasets

II-A Method

(a) Question construction procedure. Dashed lines represent optional paths.
(b) Construction path for sample questions.
Fig. 2: Illustration of the question construction procedure.

As seen in the introduction, a main limiting factor for VQA is the availability of task-specific datasets. As such, we aim at providing a collection of remote sensing images with questions and answers associated with them. To do so, we took inspiration from [17], in which the authors build a dataset of question/answer pairs about synthetic images following an automated procedure. However, in this study we are interested in real data (discussed in subsection II-B). Therefore, we use the openly accessible OpenStreetMap (OSM) data, containing geo-localized information provided by volunteers. By leveraging this data, we can automatically extract the information required to obtain question/answer pairs relevant to real remotely sensed data and create a dataset made of (image, question, answer) triplets.

The first step of the database construction is to create the questions. The second step is to compute the answers to the questions, using the OSM features belonging to the image footprint. Note that multiple question/answer pairs are extracted for each image.

II-A1 Question construction

Our method to construct the questions is illustrated in Figure 2. It consists of four main components:

  1. choice of an element category (highlighted in red in Figure 2(a));

  2. application of attributes to the element (highlighted in green in Figure 2(a));

  3. selection based on the relative location to another element (highlighted in green in Figure 2(a));

  4. construction of the question (highlighted in blue in Figure 2(a)).

Examples of question constructions are shown in Figure 2(b). These four components are detailed in the following.

Element category selection: First, an element category is randomly selected from the element catalog. This catalog is built by extracting the elements from one of the following OSM layers: road, water area, building and land use. While roads and water areas are directly treated as elements, buildings and land use related objects are defined based on their ”type” field, as defined in the OSM data specification. Examples of land use objects include residential areas, construction areas and religious places. Buildings are divided into two categories: commercial (e.g. retail, supermarket) and residential (e.g. house, apartments).

Attributes application: The second (optional) step is to refine the previously selected element category. To do so, we randomly select from one of the two possible attribute categories:

  • Shape: each element can be either square, rectangular or circular. Whether an element belongs to one of these shape types is decided based on basic geometrical properties (i.e. hard thresholds on area-to-perimeter ratio and area-to-circumscribed circle area ratio).

  • Size: using hard thresholds on the surface area, elements can be considered ”small”, ”medium” or ”large”. As we are interested in information at different scales in the two datasets, we use different threshold values, which are described in Table I.

Scale            Small       Medium             Large
Low resolution   < 3000 m²   3000 to 10000 m²   > 10000 m²
High resolution  < 100 m²    100 to 500 m²      > 500 m²
TABLE I: Surface-area thresholds for size attributes according to the dataset scale. When dealing with low resolution data, visible objects of interest are larger. To deal with this disparity, we adapt the size thresholds to the resolution of the images.
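As a rough sketch of the size attribute (in Python), assuming the Table I values are surface-area thresholds in square metres and that the bin edges are placed as written below — the exact boundary placement is not specified in the text:

```python
# Table I thresholds (m²); the strict/inclusive boundary choice is an assumption.
SIZE_THRESHOLDS_M2 = {
    "low_resolution": (3000, 10000),
    "high_resolution": (100, 500),
}

def size_attribute(area_m2, scale="low_resolution"):
    """Classify an element as 'small', 'medium' or 'large' by its surface area."""
    small_max, large_min = SIZE_THRESHOLDS_M2[scale]
    if area_m2 < small_max:
        return "small"
    if area_m2 < large_min:
        return "medium"
    return "large"
```

The shape attribute would work analogously, with hard thresholds on the area-to-perimeter and area-to-circumscribed-circle ratios.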

Relative position: Another possibility to refine the element is to look at its relative position compared to another element. We define 5 relations: ”left of”, ”top of”, ”right of”, ”bottom of”, ”next to”. Note that these relative positions are understood in the image space (i.e. geographically). The special case of ”next to” is handled as a hard threshold on the relative distance between the two objects (less than 1000m). When looking at relative positions, we select the second element following the procedure previously defined.
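A minimal sketch of this relative-position logic, assuming centroid coordinates in metres, an image convention where y grows downward, and a dominant-axis rule to pick between horizontal and vertical relations — only the 1000m ”next to” threshold comes from the text, the rest are illustrative assumptions:

```python
import math

NEXT_TO_MAX_DIST_M = 1000  # hard threshold on relative distance, from the text

def relative_position(cx1, cy1, cx2, cy2):
    """Relation of element 1 with respect to element 2, in image space."""
    dx, dy = cx1 - cx2, cy1 - cy2
    if math.hypot(dx, dy) < NEXT_TO_MAX_DIST_M:
        return "next to"
    # Dominant-axis tie-break (assumption): the larger offset decides the relation.
    if abs(dx) >= abs(dy):
        return "right of" if dx > 0 else "left of"
    return "bottom of" if dy > 0 else "top of"
```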

Question construction: At this point of the procedure, we have an element (e.g. road), with an optional attribute (e.g. small road) and an optional relative position (e.g. small road on the left of a water area). The final step is to generate a ”base question” about this element. We define 5 types of questions of interest (”Question catalog” in Figure 2(a)), from which a specific type is randomly selected to obtain a base question. For instance, in the case of comparison questions, we randomly choose among ”less than”, ”equals to” and ”more than” and construct a second element.

This base question is then turned into a natural language question using pre-defined templates for each question type and object. For some question types (e.g. count), more than one template is defined (e.g. ’How many       are there?’, ’What is the number of      ?’ or ’What is the amount of      ?’). In this case, the template to be used is randomly selected. This stochastic process ensures diversity, both in the question types and in the question templates used.
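The template step can be sketched as follows; the list reproduces the three ’count’ templates quoted above, while the helper name and the use of Python’s random module are illustrative (other question types would carry their own template lists):

```python
import random

# The three 'count' templates quoted in the text.
COUNT_TEMPLATES = [
    "How many {} are there?",
    "What is the number of {}?",
    "What is the amount of {}?",
]

def build_count_question(element, rng=None):
    """Turn a base question ('count', element) into natural language by
    randomly selecting one of the pre-defined templates."""
    rng = rng or random.Random()
    return rng.choice(COUNT_TEMPLATES).format(element)
```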

II-A2 Answer construction

To obtain the answer to the constructed question, we extract the objects from the OSM database corresponding to the image footprint. The objects corresponding to the element category and its attributes are then selected and used depending on the question type:

  • Count: In the case of counting, the answer is simply the number of objects.

  • Presence: A presence question is answered by comparing the number of objects to 0.

  • Area: The answer to a question about the area is the sum of the areas of the objects.

  • Comparison: Comparison is a specific case for which a second element (and possibly a relative position statement) is needed. The question is then answered by comparing the number of objects of the first element to that of the second element.

  • Rural/Urban: The case of rural/urban questions is handled in a specific way. In this case, we do not create a specific element, but rather count the number of buildings (both commercial and residential). This count is then compared to a predefined threshold, depending on the resolution of the input data (to obtain a density), to answer the question. Note that we are using a generic definition of rural and urban areas, but this can easily be adapted using the precise definition of each country.
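The answer rules above can be condensed into a small sketch; `build_answer`, its argument names and the `urban_threshold` value are hypothetical choices for illustration (the paper does not give its resolution-dependent building-density threshold). `objects` holds the areas (in m²) of the OSM objects matching the element:

```python
def build_answer(question_type, objects, second_objects=None,
                 comparison="more than", urban_threshold=100):
    """Answer a constructed question from the matching OSM objects."""
    if question_type == "count":
        return len(objects)
    if question_type == "presence":
        return "yes" if len(objects) > 0 else "no"
    if question_type == "area":
        return sum(objects)
    if question_type == "comparison":
        checks = {
            "less than": len(objects) < len(second_objects),
            "equals to": len(objects) == len(second_objects),
            "more than": len(objects) > len(second_objects),
        }
        return "yes" if checks[comparison] else "no"
    if question_type == "rural/urban":
        # Here 'objects' is the set of all buildings (commercial or residential);
        # urban_threshold is an illustrative density threshold.
        return "urban" if len(objects) >= urban_threshold else "rural"
    raise ValueError(f"unknown question type: {question_type}")
```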

Ii-B Data

Following the method presented in subsection II-A, we construct two datasets with different characteristics.  
Low resolution (LR):

Fig. 3: Images selected for the LR dataset over the Netherlands. Each point represents one Sentinel-2 image which was later split into tiles. Red points represent training samples, the green pentagon represents the validation image, and the blue triangle the test image. Note that one training image is not visible (as it overlaps with the left-most image).

this dataset is based on Sentinel-2 images acquired over the Netherlands. Sentinel-2 satellites provide 10m resolution (for the visible bands used in this dataset) images with frequent updates (around 5 days) at a global scale. These images are openly available through ESA’s Copernicus Open Access Hub.

To generate the dataset, we selected 9 Sentinel-2 tiles covering the Netherlands with a low cloud cover (the selected tiles are shown in Figure 3). These tiles were divided into images of size (covering ) retaining the RGB bands. From these, we constructed questions and answers following the methodology presented in subsection II-A. We split the data into a training set ( of the original tiles), a validation set () and a test set () at the tile level (the spatial split is shown in Figure 3). This allows us to limit the spatial correlation between the different splits.

High resolution (HR):

Fig. 4: Extent of the HR dataset with a zoom on the Portland, Manhattan (New York City) and Philadelphia areas. Each point represents one image (generally of size ) which was later split into tiles. The images cover the New York City/Long Island region, Philadelphia and Portland. Red points represent training samples, green pentagons represent validation samples, and blue indicators are for the test sets (blue triangles for test set 1, blue stars for test set 2).

this dataset uses 15cm resolution aerial RGB images extracted from the High Resolution Orthoimagery (HRO) data collection of the USGS. This collection covers most urban areas of the USA, along with a few areas of interest (e.g. national parks). For most areas covered by the dataset, only one tile is available, with acquisition dates ranging from 2000 to 2016 and various sensors. The tiles are openly accessible through USGS’ EarthExplorer tool.

From this collection, we extracted tiles belonging to the North-East coast of the USA (see Figure 4) that were split into images of size (each covering ). We constructed questions and answers following the methodology presented in subsection II-A. We split the data into a training set ( of the tiles), a validation set (), and two test sets ( for test set 1, for test set 2). As can be seen in Figure 4, test set 1 covers regions similar to the training and validation sets, while test set 2 covers the city of Philadelphia, which is not seen during training. Note that this second test set also uses another sensor (marked as unknown in the USGS data catalog), not seen during training.

(a) Distribution of answers for the LR dataset
(b) Distribution of answers for the HR dataset (numerical answers are ordered, and 0 is the most frequent)
Fig. 5: Distributions of answers in the Low resolution (LR) and High resolution (HR) datasets.

Differences between the two datasets:
Due to their characteristics, the two datasets represent two different possible use cases of VQA:

  • The LR dataset allows for large spatial and temporal coverage thanks to the frequent acquisitions made by Sentinel-2. This characteristic could be of interest for future applications of VQA, such as large scale queries (e.g. rural/urban questions) or temporal analyses (out of the scope of this study). However, due to the relatively low resolution (10m), some objects cannot be seen in such images (such as small houses, roads or trees). This severely limits the questions to which the model could give an accurate answer.

  • Thanks to the much finer resolution of the HR dataset, much of the information needed to answer typical questions is present. Therefore, in contrast to the LR dataset, questions concerning objects’ coverage, or counting relatively small objects, can possibly be answered from such data. However, data of this resolution is generally less frequently updated and more expensive to acquire.

Based on these differences, we constructed different types of questions for the two datasets. Questions concerning the area of objects are only asked in the HR dataset. On the other hand, questions about urban/rural area classification are only asked in the LR dataset, as the level of zoom of images from the HR dataset would prevent a meaningful answer from being provided.

Fig. 6: Frequencies of exact counting answers in the LR dataset. Only the left part of the histogram is shown (up to 200 objects), the largest (single) count being 17139. 50% of the answers are less than 7 objects in the tile.

To account for the data distributions and error margins, we also quantize answers in both datasets:

  • Counting in LR: as the coverage of one image is relatively large (6.55 km²), the number of small objects contained in one tile can be high, giving a heavy-tailed distribution for the numerical answers, as shown in Figure 6. More precisely, while 26.7% of the numerical answers are ’0’ and 50% of the answers are less than ’7’, the highest numerical answer goes up to ’17139’. In addition to making the problem complex, we can argue that allowing such a range of numerical answers does not make sense for data of this resolution. Indeed, it would in most cases be impossible to distinguish 17139 objects in an image of 65536 pixels. Therefore, numerical answers are quantized into the following categories:

    • ’0’;

    • ’between 1 and 10’;

    • ’between 11 and 100’;

    • ’between 101 and 1000’;

    • ’more than 1000’.

  • In a similar manner, we quantize answers regarding the area in the HR dataset. A great majority (60.9%) of the answers of this type are ’0m²’, while the distribution also presents a heavy tail. Therefore, we use the same quantization as the one proposed for counts in the LR dataset. Note that we do not quantize purely numerical answers (i.e. answers to questions of type ’count’) as the maximum number of objects is 89 in our dataset. Counting answers therefore correspond to 89 classes in the model in this case (see section III).
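A minimal sketch of this quantization, with bin edges placed to match the five listed categories (the same function would serve LR counts and HR areas):

```python
def quantize_numerical_answer(value):
    """Bucket a numerical answer into the five quantized classes."""
    if value == 0:
        return "0"
    if value <= 10:
        return "between 1 and 10"
    if value <= 100:
        return "between 11 and 100"
    if value <= 1000:
        return "between 101 and 1000"
    return "more than 1000"
```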

II-C Discussion

Questions/Answers distributions:
We show the final distribution of answers per question type for both datasets in Figure 5. We can see that most question types (with the exception of ’rural/urban’ questions in the LR dataset, asked only once per image) are close to evenly distributed by construction. The answer ’no’ is dominating the answers’ distribution for the HR dataset with a frequency of 37.7%. In the LR dataset, the answer ’yes’ occurs 34.9% of the time while the ’no’ frequency is 34.3%. The strongest imbalance occurs for the answer ’0’ in the HR dataset (with a frequency of 60.9% for the numerical answer). This imbalance is greatly reduced by the quantization process described in the previous paragraph.

Limitations of the proposed method:
While the proposed method for generating image/question/answer triplets has the advantage of being automatic and easily scalable while using data annotated by humans, a few limitations have been observed. First, it can happen that some annotations are missing or badly registered [30]. Furthermore, it was not possible to match the acquisition date of the imagery to that of OSM: it is impossible to know whether a newly added element appeared at the same time in reality, or whether it was just entered for the first time in OSM. As OSM is the main source of data for our process, errors in OSM will negatively impact the accuracy of our datasets.

Furthermore, due to the templates used to automatically construct questions and provide answers, the set of questions and answers is more limited than it is in traditional VQA datasets (9 possible answers for the LR dataset, 98 for the HR dataset).

III VQA Model

Fig. 7: Framework of the proposed Visual Question Answering model.

We investigate the difficulty of the VQA task for remote sensing using a basic VQA model based on deep learning. An illustration of the proposed network is shown in Figure 7. In their simple form, VQA models are composed of three parts [34]:

  • feature extraction;

  • fusion of these features to obtain a single feature vector representing both the visual information and the question;

  • prediction based on this vector.

As the model shown in Figure 7 is learned end-to-end, the vector obtained after the fusion (in green in Figure 7) can be seen as a joint embedding of both the image and the question which is used as an input for the prediction step. We detail each of these 3 parts in the following.

III-A Feature extraction

The first component of our VQA model is the feature extraction. Its purpose is to obtain a low-dimensional representation of the information contained in the image and the question.

III-A1 Visual part

To extract information from a 2D image, a common choice is to use a Convolutional Neural Network (CNN). Specifically, we use a ResNet-152 model [14] pre-trained on ImageNet [8]. The principal motivation for this choice is that this architecture manages to avoid the undesirable degradation problem (decreasing performance with deeper networks) by using residual mappings of the layers’ inputs, which are easier to learn than the common choice of direct mappings. This architecture has been successfully used in a wide range of works in the remote sensing community (e.g. [40, 7, 24]). The last average pooling layer and fully connected layer are replaced by a 2D convolution which outputs a total of 2048 features, which are vectorized. A final fully connected layer is learned to obtain a 1200-dimensional vector.

III-A2 Language part

The language feature vector is obtained using the skip-thoughts model [19] trained on the BookCorpus dataset [41]. This model is a recurrent neural network which aims at producing a vector representing a sequence of words (in our case, a question). To make this vector informative, the model is trained in the following way: it encodes a sentence from a book into a latent space, and tries to decode it to obtain the two adjacent sentences in the book. By doing so, it ensures that the latent space embeds semantic information. Note that this semantic information is not remote sensing specific, due to the BookCorpus dataset it has been trained on. However, several works, including [20], have successfully applied non-domain-specific NLP models to remote sensing. In our model, we use the encoder, followed by a fully-connected layer (from 2400 to 1200 elements).

III-B Fusion

At this step, we have two feature vectors (one representing the image, one representing the question) of the same size. To merge them into a single vector, we use a simple strategy: a point-wise multiplication after applying the hyperbolic tangent function to the vectors’ elements. While this is a fixed (i.e. not learnt) operation, the end-to-end training of our model encourages both feature vectors to be comparable with respect to this operation.
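This fusion step is simple enough to sketch in a few lines of plain Python (in the actual model it is applied to the two 1200-dimensional learned feature vectors):

```python
import math

def fuse(visual, textual):
    """Non-spatial fusion: element-wise product of the two equally sized
    feature vectors after a tanh squashing of each element."""
    if len(visual) != len(textual):
        raise ValueError("feature vectors must have the same size")
    return [math.tanh(v) * math.tanh(t) for v, t in zip(visual, textual)]
```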

III-C Prediction

Finally, we project this 1200-dimensional vector to the answer space using a multi-layer perceptron (MLP) with one hidden layer of 256 elements. We formulate the problem as a classification task, in which each possible answer is a class. Therefore, the size of the output vector depends on the number of possible answers.

III-D Training procedure

We train the model using the Adam optimizer [18] with a learning rate of until convergence (150 epochs in the case of the LR dataset, and 35 epochs in the case of the HR dataset). We use a dropout of 0.5 for every fully connected layer. Due to the difference in input size between the two datasets (HR images are 4 times larger), we use batches of 70 instances for the HR dataset and 280 for the LR dataset. Furthermore, when a question does not contain a positional component relative to the image space (i.e. ”left of”, ”top of”, ”right of” or ”bottom of”, see subsection II-A), we augment the image space by randomly applying vertical and/or horizontal flipping.

IV Results and discussion

We report the results obtained by our model on the test sets of the LR and HR datasets. In both cases, 3 model runs have been trained and we report both the average and the standard deviation of our results to limit the variability coming from the stochastic nature of the optimization.

The numerical evaluation is achieved using the accuracy, defined in our case as the ratio of correct answers. We report the accuracy per question type (see subsection II-A), the average of these accuracies (AA) and the overall accuracy (OA).
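These metrics can be sketched as follows; `accuracy_report` is a hypothetical helper operating on (question type, correctness) records, returning the per-type accuracies together with their average (AA) and the overall ratio of correct answers (OA):

```python
def accuracy_report(records):
    """records: iterable of (question_type, is_correct) pairs.
    Returns (per-type accuracies, average accuracy AA, overall accuracy OA)."""
    totals = {}
    for qtype, is_correct in records:
        n, c = totals.get(qtype, (0, 0))
        totals[qtype] = (n + 1, c + int(is_correct))
    per_type = {t: c / n for t, (n, c) in totals.items()}
    aa = sum(per_type.values()) / len(per_type)          # unweighted mean over types
    oa = (sum(c for _, c in totals.values())
          / sum(n for n, _ in totals.values()))          # ratio of correct answers
    return per_type, aa, oa
```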

We show some predictions of the model on the different test sets in Figure 8 and Figure 9 to qualitatively assess the results. The numerical performance of the proposed model on the LR dataset is reported in Table II and the confusion matrix is shown in Figure 10. The performance on both test sets of the HR dataset is reported in Table III and the confusion matrices are shown in Figure 11.

Type          Accuracy
Count         67.01% (0.59%)
Presence      87.46% (0.06%)
Comparison    81.50% (0.03%)
Rural/Urban   90.00% (1.41%)
AA            81.49% (0.49%)
OA            79.08% (0.20%)
TABLE II: Results on the test set of the low resolution dataset. The standard deviation is reported in brackets.
Type          Accuracy           Accuracy
              Test set 1         Test set 2
Count         68.63% (0.11%)     61.47% (0.08%)
Presence      90.43% (0.04%)     86.26% (0.47%)
Comparison    88.19% (0.08%)     85.94% (0.12%)
Area          85.24% (0.05%)     76.33% (0.50%)
AA            83.12% (0.03%)     77.50% (0.29%)
OA            83.23% (0.02%)     78.23% (0.25%)
TABLE III: Results on both test sets of the high resolution dataset. The standard deviation is reported in brackets.
Fig. 8: Samples from the high resolution test sets: (a)-(f) are from the first set of the HR dataset, (g)-(i) are from the second set of the HR dataset.
Fig. 9: Samples from the low resolution test set.

General accuracy assessment:
The proposed model achieves an overall accuracy of 79% on the low resolution dataset (see Table II) and of 83% on the first test set of the high resolution dataset (Table III), indicating that automatically answering questions based on remote sensing images is feasible. When looking at the accuracies per question type (Tables II and III), it can be noted that the model performs inconsistently depending on the task the question is tackling: while questions about the presence of an object are generally well answered (87.46% on the LR dataset, 90.43% on the first test set of the HR dataset), counting questions yield poorer performance (67.01% and 68.63%, respectively). This can be explained by the fact that presence questions can be seen as simplified counting questions for which the answers are restricted to two options: ”0” or ”1 or more”. Classical VQA models are known to struggle with the counting task [38]. An issue which partly explains these performances on the counting task is the separation of connected instances. This problem has been raised for the case of buildings in [24] and is illustrated in Figure 8(f), where the ground truth indicates three buildings, which could arguably be a single one. We found another illustration of this phenomenon in the second test set in Figure 8(i). This issue mostly arises when counting roads or buildings.

Thanks to the quantization of the answers, questions regarding the areas of objects are generally well answered, with an accuracy of 85.24% on the first test set of the HR dataset. This is illustrated in Figures 8(a,b), where the presence of buildings (measured through the covered area) is well detected.

However, we found that our model performs poorly on questions regarding the relative positions of objects, such as those illustrated in Figures 8(c-e). While the answer in Figure 8(c) is correct despite the difficulty of the question, Figure 8(d) shows a small mistake by the model and Figure 8(e) is completely incorrect. These problems can be explained by the fact that such questions require a high semantic level and are therefore difficult for a model relying on a simple fusion scheme, such as the one presented in section III.

Regarding the low resolution dataset, rural/urban questions are generally well answered (90% accuracy), as shown in Figures 9(a,b). Note that the ground truth for this type of question is defined by a hard threshold on the number of buildings, which causes an area such as the one shown in Figure 9(b) to be labeled as urban.

However, the low resolution of Sentinel-2 images can be problematic when answering questions about relatively small objects. For instance, in Figures 9(c,d), we cannot see any water area nor determine the type of buildings, which makes the model’s answers unreliable.

Generalization to unseen areas:
The performance on the second test set of the HR dataset shows that generalization to new geographic areas is problematic for the model, with an accuracy drop of approximately 5%. This new domain has a stronger impact on the most difficult tasks (counting and area estimation). This can be explained by looking at Figures 8(g-i): the domain shift in the image space is important, as a different sensor was used for the acquisition. Furthermore, the urban organization of Philadelphia differs from that of the city of New York. This causes the buildings to go undetected by the model in Figure 8(h), while the parking lots can still be detected in Figure 8(g), possibly thanks to the cars. This decrease in performance could be reduced by using domain adaptation techniques, either on the image space only (a review of domain adaptation for remote sensing is given in [29]) or at the question/image level (see [6], which presents a method for domain adaptation in the context of VQA).

Answer’s categories:
The confusion matrices indicate that the models generally provide logical answers, even when making mistakes (e.g. a model might answer ”yes” instead of ”no” to a question about the presence of an object, but not a number). Rare exceptions are observed on the first test set of the HR dataset (see Figure 11(a)), for which the model gives 23 illogical answers (out of the 316941 questions of this test set).

Fig. 10: Confusion matrix for the low resolution dataset (logarithm scale) on the test set. Red lines group answers by type (”Yes/No”, ”Rural/Urban”, numbers).
(a) Test set 1
(b) Test set 2
Fig. 11: Subsets of the confusion matrices for the high resolution dataset (counts are at logarithm scale) on both test sets. Red lines group answers by type (”Yes/No”, areas, numbers).

Language biases:
A common issue with VQA models, raised in [10], is that they capture strong language biases: the answer provided by the model then depends mostly on the question, rather than on the image. To assess this, we evaluated the proposed models after randomly selecting an image from the test set for each question. We obtained an overall accuracy of 73.78% on the LR test set, 73.78% on the first test set of the HR dataset and 72.51% on the second test set. This small drop in accuracy indicates that the models indeed rely more on the questions than on the images to provide an answer. Furthermore, the strongest drop in accuracy is seen on the HR dataset, indicating that the proposed model extracts more information from the high resolution data.
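This bias check amounts to re-pairing each question with a random image before evaluation, as sketched below (`model` is a hypothetical callable returning an answer string; this is not the authors' evaluation code):

```python
import random

def language_bias_accuracy(model, questions, images, answers, seed=0):
    """Evaluate a VQA model after pairing each question with a random image.

    If accuracy barely drops compared to the true pairing, the model is
    answering mostly from the question text, i.e. it exhibits a language bias.
    """
    rng = random.Random(seed)
    shuffled = images[:]
    rng.shuffle(shuffled)  # break the question/image correspondence
    correct = sum(model(img, q) == a
                  for img, q, a in zip(shuffled, questions, answers))
    return correct / len(questions)
```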

Restricted set of questions:
While it does not appear in the numerical evaluation, an important issue with our results is the relative lack of diversity in the dataset. Indeed, due to the source of our data (OSM), the questions only concern a specific set of static objects (e.g. buildings, roads, …). Other objects of interest for applications of a VQA system to remote sensing would include different static objects (e.g. the thatched roofs mentioned in section I), moving objects (e.g. cars), or seasonal aspects (e.g. for crop monitoring). Including these objects would require another source of data, or the manual construction of question/answer pairs.

Another limitation comes from the dataset construction method described in subsection II-A. We defined five types of questions (count, comparison, presence, area, rural/urban classification). However, these only start to cover the range of questions that would be of interest. For instance, questions about the distance between two points (defined by textual descriptions), segmentation questions (e.g. ”where are the buildings in this image?”) or questions at a higher semantic level (e.g. ”does this area feel safe?”) could be added.

While the first limitation (due to the data source) could be tackled using other databases (e.g. from national institutes) and the second (due to the proposed method) could be addressed by adding other question construction functions, it would also be beneficial to involve human annotators, following a procedure similar to [3], to diversify the samples.

V Conclusion

We introduce the task of Visual Question Answering from remote sensing images as a generic and accessible way of extracting information from remotely sensed data. We present a method for building datasets for VQA, which can be extended and adapted to different data sources, and we propose two datasets targeting different applications. The first dataset uses Sentinel-2 images, while the second uses very high resolution (30cm) aerial orthophotos from USGS.

We analyze these datasets using a deep learning model combining convolutional and recurrent neural networks to process the images and the associated questions. The most probable answer from a predefined set is then selected.

This first analysis shows promising results, suggesting the potential for future applications of such systems. These results also outline the research directions needed to overcome language biases and difficult tasks such as counting. The former can be tackled using an attention mechanism [34], while the latter could be addressed by dedicated components for counting questions [24] in a modular approach.

Issues regarding the current database raised in section IV also need to be addressed to obtain a system capable of answering a more realistic range of questions. This can be done by making the proposed dataset construction method more complex or by using human annotators.


Acknowledgment

The authors would like to thank CNES for funding this study under the R&T project ”Application des techniques de Visual Question Answering à des données d’imagerie satellitaire”.


  • [1] K. Anderson, B. Ryan, W. Sonntag, A. Kavvada, and L. Friedl (2017) Earth observation in service of the 2030 agenda for sustainable development. Geo-spatial Information Science 20 (2), pp. 77–96. Cited by: §I.
  • [2] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang (2018) Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6077–6086. Cited by: §I.
  • [3] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh (2015) VQA: Visual Question Answering. In International Conference on Computer Vision, Cited by: §I, §I, §IV.
  • [4] H. Ben-Younes, R. Cadene, M. Cord, and N. Thome (2017) MUTAN: Multimodal tucker fusion for Visual Question Answering. In Proceedings of the IEEE international conference on computer vision, pp. 2612–2620. Cited by: §I.
  • [5] J. P. Bigham, C. Jayant, H. Ji, G. Little, A. Miller, R. C. Miller, R. Miller, A. Tatarowicz, B. White, S. White, et al. (2010) VizWiz: nearly real-time answers to visual questions. In ACM symposium on User interface software and technology, pp. 333–342. Cited by: §I.
  • [6] W. Chao, H. Hu, and F. Sha (2018) Cross-dataset adaptation for visual question answering. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 5716–5725. Cited by: §IV.
  • [7] I. Demir, K. Koperski, D. Lindenbaum, G. Pang, J. Huang, S. Basu, F. Hughes, D. Tuia, and R. Raskar (2018) DeepGlobe 2018: a challenge to parse the earth through satellite images. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Cited by: §I, §III-A1.
  • [8] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei (2009) ImageNet: A Large-Scale Hierarchical Image Database. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §I, §III-A1.
  • [9] A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell, and M. Rohrbach (2016) Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847. Cited by: §I.
  • [10] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh (2017) Making the V in VQA matter: elevating the role of image understanding in Visual Question Answering. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §IV.
  • [11] Y. Gu, J. Chanussot, X. Jia, and J. A. Benediktsson (2017-11) Multiple kernel learning for hyperspectral image classification: a review. IEEE Transactions on Geoscience and Remote Sensing 55 (11), pp. 6547–6565. External Links: Document, ISSN 0196-2892 Cited by: §I.
  • [12] D. Gurari, Q. Li, A.J. Stangl, A. Guo, C. Lin, K. Grauman, J. Luo, and J.P. Bigham (2018) VizWiz Grand Challenge: answering visual questions from blind people. IEEE Conference on Computer Vision and Pattern Recognition. Cited by: §I.
  • [13] R. Hamaguchi and S. Hikosaka (2018) Building detection from satellite imagery using ensemble of size-specific detectors. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 223–2234. Cited by: §I.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition. Cited by: §III-A1.
  • [15] F. Hu, G. Xia, J. Hu, and L. Zhang (2015) Transferring deep convolutional neural networks for the scene classification of high-resolution remote sensing imagery. Remote Sensing 7 (11), pp. 14680–14707. Cited by: §I.
  • [16] B. Huang, K. Lu, N. Audebert, A. Khalel, Y. Tarabalka, J. Malof, A. Boulch, B. Le Saux, L. Collins, K. Bradbury, et al. (2018) Large-scale semantic classification: outcome of the first year of INRIA aerial image labeling benchmark. In IEEE International Geoscience and Remote Sensing Symposium, pp. 6947–6950. Cited by: §I.
  • [17] J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C.L. Zitnick, and R.B. Girshick (2017) CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. IEEE Conference on Computer Vision and Pattern Recognition. Cited by: §II-A.
  • [18] D.P. Kingma and J. Ba (2015) Adam: A method for stochastic optimization. International Conference on Learning Representations. Cited by: §III-D.
  • [19] R. Kiros, Y. Zhu, R. Salakhutdinov, R.S. Zemel, A. Torralba, R. Urtasun, and S. Fidler (2015) Skip-thought vectors. Neural Information Processing Systems. Cited by: §III-A2.
  • [20] A. Li, Z. Lu, L. Wang, T. Xiang, and J. Wen (2017) Zero-shot scene classification for high spatial resolution remote sensing images. IEEE Transactions on Geoscience and Remote Sensing 55 (7), pp. 4157–4167. Cited by: §III-A2.
  • [21] Q. Li, L. Mou, Q. Xu, Y. Zhang, and X. X. Zhu (2019) R-Net: a deep network for multioriented vehicle detection in aerial images and videos. IEEE Transactions on Geoscience and Remote Sensing (), pp. 1–15. External Links: Document, ISSN 0196-2892 Cited by: §I.
  • [22] S. Li, W. Song, L. Fang, Y. Chen, P. Ghamisi, and J. A. Benediktsson (2019) Deep learning for hyperspectral image classification: an overview. IEEE Transactions on Geoscience and Remote Sensing (), pp. 1–20. External Links: Document, ISSN 0196-2892 Cited by: §I.
  • [23] S. Lobry, J. Murray, D. Marcos, and D. Tuia (2019-07) Visual question answering from remote sensing images. In IEEE International Geoscience and Remote Sensing Symposium, Cited by: §I, §I.
  • [24] S. Lobry and D. Tuia (2019) Deep Learning Models to Count Buildings in High-Resolution Overhead Images. In Joint Urban Remote Sensing Event, Cited by: §III-A1, §IV, §V.
  • [25] E. Maggiori, Y. Tarabalka, G. Charpiat, and P. Alliez (2017) Can semantic labeling methods generalize to any city? the INRIA aerial image labeling benchmark. In IEEE International Geoscience and Remote Sensing Symposium, pp. 3226–3229. Cited by: §I.
  • [26] Z. Shi and Z. Zou (2017-06) Can a machine generate humanlike language descriptions for a remote sensing image?. IEEE Transactions on Geoscience and Remote Sensing 55 (6), pp. 3623–3634. External Links: Document, ISSN 0196-2892 Cited by: §I, §I.
  • [27] R. Shrestha, K. Kafle, and C. Kanan (2019) Answer them all! toward universal visual question answering models. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 10472–10481. Cited by: §I.
  • [28] C. Tian, C. Li, and J. Shi (2018) Dense fusion classmate network for land cover classification.. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 192–196. Cited by: §I.
  • [29] D. Tuia, C. Persello, and L. Bruzzone (2016-06) Domain adaptation for the classification of remote sensing data: an overview of recent advances. IEEE Geoscience and Remote Sensing Magazine 4 (2), pp. 41–57. External Links: Document, ISSN 2168-6831 Cited by: §IV.
  • [30] J. E. Vargas-Muñoz, S. Lobry, A. X. Falcão, and D. Tuia (2019) Correcting rural building annotations in openstreetmap using convolutional neural networks. ISPRS Journal of Photogrammetry and Remote Sensing 147, pp. 283–293. Cited by: §I, §II-C.
  • [31] M. Volpi and D. Tuia (2016) Dense semantic labeling of subdecimeter resolution images with convolutional neural networks. IEEE Transactions on Geoscience and Remote Sensing 55 (2), pp. 881–893. Cited by: §I.
  • [32] B. Wang, X. Lu, X. Zheng, and X. Li (2019) Semantic descriptions of high-resolution remote sensing images. IEEE Geoscience and Remote Sensing Letters. Cited by: §I.
  • [33] Q. Wang, S. Liu, J. Chanussot, and X. Li (2019-02) Scene classification with recurrent attention of VHR remote sensing images. IEEE Transactions on Geoscience and Remote Sensing 57 (2), pp. 1155–1167. External Links: Document, ISSN 0196-2892 Cited by: §I.
  • [34] Q. Wu, D. Teney, P. Wang, C. Shen, A.R. Dick, and A. v.d. Hengel (2017) Visual question answering: A survey of methods and datasets. Computer Vision and Image Understanding. Cited by: §I, §III, §V.
  • [35] G. Xia, X. Bai, J. Ding, Z. Zhu, S. Belongie, J. Luo, M. Datcu, M. Pelillo, and L. Zhang (2018) DOTA: a large-scale dataset for object detection in aerial images. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 3974–3983. Cited by: §I.
  • [36] X. Zhang, X. Li, J. An, L. Gao, B. Hou, and C. Li (2017-07) Natural language description of remote sensing images based on deep learning. In IEEE International Geoscience and Remote Sensing Symposium, Vol. , pp. 4798–4801. External Links: Document, ISSN 2153-7003 Cited by: §I.
  • [37] X. Zhang, X. Wang, X. Tang, H. Zhou, and C. Li (2019) Description generation for remote sensing images using attribute attention mechanism. Remote Sensing 11 (6), pp. 612. Cited by: §I.
  • [38] Y. Zhang, J. Hare, and A. Prügel-Bennett (2018) Learning to count objects in natural images for visual question answering. In International Conference on Learning Representations, Cited by: §IV.
  • [39] L. Zhou, C. Zhang, and M. Wu (2018) D-linknet: linknet with pretrained encoder and dilated convolution for high resolution satellite imagery road extraction.. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 182–186. Cited by: §I.
  • [40] X. X. Zhu, D. Tuia, L. Mou, G. Xia, L. Zhang, F. Xu, and F. Fraundorfer (2017-12) Deep learning in remote sensing: a comprehensive review and list of resources. IEEE Geoscience and Remote Sensing Magazine 5 (4), pp. 8–36. External Links: Document, ISSN 2168-6831 Cited by: §I, §III-A1.
  • [41] Y. Zhu, R. Kiros, R. Zemel, R. Salakhutdinov, R. Urtasun, A. Torralba, and S. Fidler (2015) Aligning books and movies: towards story-like visual explanations by watching movies and reading books. In IEEE international conference on computer vision, pp. 19–27. Cited by: §III-A2.