Deep Learning Techniques for Geospatial Data Analysis

08/30/2020, by Arvind W. Kiwelekar, et al.

Consumer electronic devices such as mobile handsets, goods tagged with RFID labels, and location and position sensors continuously generate a vast amount of location-enriched data called geospatial data. Conventionally, such geospatial data was used for military applications. In recent times, many useful civilian applications have been designed and deployed around such data, for example, recommendation systems that suggest restaurants or places of attraction to a tourist visiting a particular locality. At the same time, civic bodies are harnessing geospatial data generated through remote sensing devices to provide better services to citizens, such as traffic monitoring, pothole identification, and weather reporting. Typically, such applications are built upon non-hierarchical machine learning techniques such as Naive Bayes classifiers, Support Vector Machines, and decision trees. Recent advances in the field of deep learning have shown that neural network-based techniques outperform conventional techniques and provide effective solutions for many geospatial data analysis tasks such as object recognition, image classification, and scene understanding. The chapter presents a survey on the current state of the applications of deep learning techniques for analyzing geospatial data. The chapter is organized as follows: (i) a brief overview of deep learning algorithms; (ii) geospatial analysis: a data science perspective; (iii) deep learning techniques for remote sensing data analytics tasks; (iv) deep learning techniques for GPS data analytics; (v) deep learning techniques for RFID data analytics.


1 Introduction

Deep learning has emerged as a preferred technique to build intelligent products and services in various application domains. The resurgence of deep learning in recent times is attributed to three key factors. The first is the availability of high-performance GPUs necessary to execute computation-intensive deep learning algorithms. The second is the availability of such hardware at an affordable price. The third and most important factor responsible for the success of deep learning algorithms is the availability of open data sets in various application domains, such as ImageNet [22], required to train deep learning algorithms extensively [17].

The conventional application domains in which deep learning techniques have been applied effectively are speech recognition, image processing, language modelling and understanding, natural language processing, and information retrieval [10]. All these application domains involve processing and retrieving useful information from raw multimedia data.

The success of deep learning techniques in these fields triggered their application in other fields such as Biomedicine [3], Drug Discovery [6], and Geographical Information Systems [54].

The chapter presents a state-of-the-art review of the applications of deep learning techniques for geospatial data analysis, one of the fields that is increasingly applying deep learning techniques to understand our planet Earth.

2 Deep Learning: A Brief Overview

The field of Deep Learning is a sub-field of Machine Learning which studies the techniques for establishing a relationship between input feature variables and one or more output variables. Many tasks, such as classification and prediction, can be represented as a mapping between input feature variables and output variable(s).

For example, the price of a house in a city is a function of input feature variables such as the number of rooms, built-up area, the locality in a city, and other such parameters. The goal of the machine learning algorithms is to learn a mapping function from the given input data set, which is referred to as a training data set.

For example, the input feature variables x1, x2, …, xn and the output variable y can be related as:

y = f(x1, x2, …, xn)

The input variables are also referred to as independent variables, features, or predictors. The function f is referred to as a model or hypothesis function.

The training data set may or may not include the values of y, i.e., the output variable. When a machine learning algorithm learns the function f from both the features and the output variable, it is referred to as a supervised algorithm. An unsupervised learning algorithm learns the function from the input features alone, without knowing the values of the output variable. Both kinds of algorithms are widely used to develop intelligent products and services.

There exist many machine learning algorithms (e.g., Linear Regression, Logistic Regression, Support Vector Machines [12]) which are practical for learning simple tasks such as predicting house prices from input features (e.g., the number of bedrooms, built-up area, locality). The goal of these learning algorithms is to reduce the error in predicting the value of the output variable. This minimization goal is captured by a function called the cost function. Stochastic gradient descent is one of the optimization techniques commonly used to achieve it.
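As a concrete illustration, the cost-minimization loop described above can be sketched in a few lines of NumPy. This is a minimal, hypothetical example: the synthetic "house" features, learning rate, and epoch count are chosen for illustration, not taken from the chapter.

```python
import numpy as np

# Stochastic gradient descent for a linear model y = w.x + b.
# The "house" data below is synthetic and purely illustrative.
rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(200, 2))       # e.g., [number of rooms, built-up area]
true_w, true_b = np.array([3.0, 2.0]), 1.0
y = X @ true_w + true_b + rng.normal(0, 0.01, size=200)

w, b, lr = np.zeros(2), 0.0, 0.01
for epoch in range(200):
    for i in rng.permutation(len(X)):      # one sample per update: "stochastic"
        err = (X[i] @ w + b) - y[i]        # prediction error on sample i
        w -= lr * err * X[i]               # gradient of the squared-error cost
        b -= lr * err

print(np.round(w, 2), round(b, 2))         # recovers roughly [3, 2] and 1.0
```

Because each update uses a single randomly chosen sample, the descent is "stochastic"; batch gradient descent would instead average the gradient over the whole training set before each update.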

The conventional machine learning algorithms have been found useful and effective in learning simple tasks such as predicting house prices or classifying a tumour as malignant or benign. In such situations, the relationship between the input features and the output variable is a simple linear function involving a few predefined features.

However, they fail to perform effectively in situations where it is difficult to identify the features required for prediction, for example, recognizing plants, rivers, or animals in a given image. In such situations, the number of input features required for accurate prediction is large, and the relationship between the input features and the output variable is complex and non-linear.

Deep learning algorithms outperform conventional machine learning methods in such situations. This section briefly describes the deep learning techniques that have been found useful for geospatial data analysis. For a more detailed and elaborate discussion of these techniques, one can refer to [17].

Figure 1: Deep Neural Network.

2.1 Deep Learning Architectures

Deep learning techniques attempt to reproduce or emulate the working of the human brain in an artificial context. A network of neurons is the structural and functional unit of the human brain; likewise, the Artificial Neural Network (ANN) is the fundamental element underlying most deep learning techniques.

2.2 Deep Neural Networks

A simple ANN consists of three essential elements: (i) an input layer, (ii) a hidden layer, and (iii) an output layer. The input layer holds the values of the input features, and the output layer holds the values of the output variables. A hidden layer is called hidden because the values held by its neurons are not visible during information processing.

A layer consists of one or more information processing nodes called neurons. An artificial neuron is an information processing node taking n inputs and producing one output. It is essentially a mapping function whose output is defined in two steps. The first step is the weighted sum of all of its inputs. Mathematically, it can be represented as:

z_j = Σ_i w_ij * x_i

where z_j is the output of node j, w_ij is the weight of input i on node j, and x_i is the value of input i. This operation implements a matrix multiplication, which is a linear function. In the second step, the output of the first step is fed to a non-linear function called an activation function. A neural network may use any one of the following activation functions: (i) the sigmoid function, (ii) the hyperbolic tangent function, or (iii) the Rectified Linear Unit (ReLU).

Modern deep neural networks consist of more than one hidden layer, as shown in Figure 1, and use ReLU as the activation function.
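The two-step computation of a single neuron can be sketched as follows; the weight and input values are arbitrary illustrative numbers, not taken from the chapter.

```python
import numpy as np

# Illustrative sketch of one artificial neuron: a weighted sum of its inputs
# (a linear operation) followed by a non-linear activation function.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

def neuron(x, w, activation=relu):
    z = np.dot(w, x)            # step 1: weighted sum z = sum_i(w_i * x_i)
    return activation(z)        # step 2: non-linear activation

x = np.array([1.0, 2.0, -1.0])
w = np.array([0.5, -0.25, 2.0])
print(neuron(x, w))             # ReLU(-2.0) = 0.0
print(neuron(x, w, sigmoid))    # sigmoid(-2.0) ≈ 0.119
```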

Training of a DNN is an iterative process which usually implements a stochastic gradient descent algorithm to find the parameters, or weights, of the input features. The weights are randomly initialized or taken from a pre-trained model. An error in the prediction is calculated at the output layer. At each hidden layer, the gradient of the error, i.e., the partial derivative of the error with respect to the current weights, is fed back to update the weights of the earlier layers in the next iteration. This process is known as back-propagation of gradients.

This seemingly simple strategy of learning the parameters or weights of a model works effectively to detect the features required for classification and prediction in many image processing and speech recognition tasks, provided that the hardware required for matrix multiplication and a large data set are available. It thus eliminates the need for the manual feature engineering required by non-hierarchical classification and prediction methods.
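The training loop described above can be sketched for a network with one hidden layer using plain matrix operations. The toy data set, layer sizes, and learning rate are illustrative assumptions; a real implementation would rely on a deep learning framework.

```python
import numpy as np

# One-hidden-layer network trained by back-propagation of gradients.
# Data, layer sizes, and learning rate are illustrative toy choices.
rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))                  # 64 samples, 3 input features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # toy binary target

W1 = rng.normal(scale=0.5, size=(3, 8))       # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))       # hidden -> output weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

losses = []
for step in range(500):
    h = np.maximum(0, X @ W1)                 # forward pass: hidden layer (ReLU)
    p = sigmoid(h @ W2)                       # forward pass: output layer (sigmoid)
    losses.append(float(np.mean((p - y) ** 2)))
    dp = 2 * (p - y) * p * (1 - p) / len(X)   # error signal at the output layer
    dW2 = h.T @ dp                            # gradient for W2
    dh = (dp @ W2.T) * (h > 0)                # error propagated back through ReLU
    dW1 = X.T @ dh                            # gradient for W1
    W2 -= 0.5 * dW2                           # gradient-descent weight updates
    W1 -= 0.5 * dW1

print(losses[0], losses[-1])                  # the prediction error decreases
```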

The deep neural networks have been successfully used to detect objects such as handwritten digits, and pedestrians [25]. In general, DNNs have been found efficient in handling 2-dimensional data that can be represented through a simple matrix.

Figure 2: Convolutional Neural Network.

2.3 Convolutional Neural Network (CNN)

As seen in the previous section, the input of a hidden layer of a DNN is connected to all the outputs of the previous layer, which makes the computations and the number of connections unmanageable for high-dimensional matrices. Such situations arise when images are of high resolution (e.g., 1024 × 1024). Also, DNNs perform well when the data set is 2-dimensional, but the majority of multi-modal data sets, for example coloured images, videos, speech, and text, are 3-dimensional in nature.

Convolutional Neural Networks (CNN) are employed when the data set is 3-dimensional in nature and the matrix size is very large. For example, a high-resolution coloured image has three channels of pixels (i.e., Red, Green, and Blue) of size 1024 × 1024.

The architecture of a CNN, as shown in Figure 2, can be divided into multiple stages. These stages perform pre-processing activities, such as identifying low-level features and reducing the number of features, followed by the main task of classification or prediction, as done by a fully connected neural network.

The pre-processing activities are done by convolution layers and pooling layers.

The convolution layer performs a step-wise convolution operation on the given input and a filter bank to create a feature map. Here, a filter is a matrix of learnable weights representing a pattern or a motif. The purpose of this step is to identify low-level features, for example, edges in an image. The underlying assumption of this layer is that low-level features correlate with particular pixel configurations [25].

The pooling layer implements an aggregation operation, such as sum, maximum, or average, with the intention of sharing weights among multiple connections, thus reducing the number of features required for classification and/or prediction.

These pre-processing stages drastically reduce the number of features in a fully connected part of CNN.

CNNs have been found useful in many image processing and speech recognition tasks because their working is based on the assumption that many high-level features are compositions of low-level features. For example, a group of edges constitutes a pattern or motif, a group of motifs constitutes a part of an image, and a group of parts constitutes an object [25].
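The convolution and pooling stages can be illustrated with plain NumPy on a tiny grayscale "image". The 1 × 2 edge filter below is a hypothetical hand-written motif detector, not a learned filter bank.

```python
import numpy as np

# Illustrative sketch of one convolution step and one max-pooling step.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):                      # slide the filter over the image
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    oh, ow = fmap.shape[0] // size, fmap.shape[1] // size
    out = np.zeros((oh, ow))
    for i in range(oh):                      # keep the strongest response per block
        for j in range(ow):
            out[i, j] = fmap[i*size:(i+1)*size, j*size:(j+1)*size].max()
    return out

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge = np.array([[-1.0, 1.0]])               # a motif that detects vertical edges
fmap = conv2d(image, edge)                   # strong response at the 0 -> 1 boundary
print(fmap.shape, max_pool(fmap).shape)
```

The feature map responds only where the dark-to-bright boundary lies, and max pooling shrinks each spatial dimension (flooring odd sizes) while keeping the strongest responses.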

Figure 3: Recurrent Neural Network.

2.4 Recurrent Neural Networks (RNN)

The third type of neural network is the Recurrent Neural Network (RNN). It has found applications in many fields such as speech recognition and machine translation.

The RNN learns the dependency relationships within a sequence of input data. Unlike the DNN and CNN, which learn relationships between feature variables and output variables, the RNN learns the relationships between data items fed to the network at different instants. To do this, the RNN maintains a state vector, or memory, implemented by connecting the hidden layer for the current input to the hidden layer for the previous input, as shown in Figure 3. Hence, the output of an RNN is a function not only of the current input but also of all the input data observed so far. As a result, an RNN may give two different outputs for the same input at two separate instants.
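The recurrence can be sketched as a single state-update function; the weight shapes and random values below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: a recurrent layer keeps a state vector h that mixes the
# current input with the previous state, so its output depends on the whole
# sequence observed so far.
rng = np.random.default_rng(2)
W_xh = rng.normal(scale=0.5, size=(4, 3))    # input -> hidden weights
W_hh = rng.normal(scale=0.5, size=(3, 3))    # hidden -> hidden (the recurrence)

def rnn_step(x, h):
    return np.tanh(x @ W_xh + h @ W_hh)      # new state from input and old state

h = np.zeros(3)
outputs = []
for x in rng.normal(size=(5, 4)):            # a sequence of five input vectors
    h = rnn_step(x, h)
    outputs.append(h)

# The same input yields different outputs at different instants,
# because the hidden state differs:
x_same = np.ones(4)
h1 = rnn_step(x_same, np.zeros(3))
h2 = rnn_step(x_same, h1)
print(np.allclose(h1, h2))                   # False: the hidden state differs
```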

A variant of the RNN called the LSTM (Long Short-Term Memory) network uses a memory unit to remember long-term dependencies between data items in a sequence. Such networks have been found useful in question answering systems and in symbolic reasoning, which draws conclusions from a series of premises.

Figure 4: Auto-Encoders

2.5 Auto-Encoders (AE)

The autoencoder is an unsupervised deep neural network architecture. Unlike supervised deep neural networks (e.g., DNN, CNN, RNN), which establish a mapping between input and output variables, autoencoders identify patterns in the input data with the intention of transforming it. Reducing the dimensions of the input vector is one example of such a transformation: the autoencoder transforms an n-dimensional input vector into a k-dimensional representation such that k < n. These networks learn two different hypothesis functions called encode and decode. The purpose of encode is to transform the input vector x into another representation h, defined as:

h = encode(x)

where x and h are vectors with different representations. Similarly, the purpose of decode is to restore the vector to its original form:

x' = decode(h)

The functions encode and decode may be simple linear functions or complex non-linear functions. When the data is highly non-linear, an architecture called a deep autoencoder, with multiple hidden layers, is employed.

Autoencoders are typically employed for dimensionality reduction and for pre-training a deep neural network. It has been observed that the performance of a CNN or RNN improves when it is pre-trained with autoencoders [24].
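A linear autoencoder makes the encode/decode pair concrete. The sketch below, with illustrative dimensions and plain gradient descent, compresses 4-dimensional inputs into a 2-dimensional code (k < n) and learns to reconstruct them.

```python
import numpy as np

# Illustrative linear autoencoder: encode maps R^4 -> R^2, decode maps back.
rng = np.random.default_rng(3)
basis = rng.normal(size=(2, 4))
X = rng.normal(size=(256, 2)) @ basis        # inputs that truly lie in 2 dims

W_enc = rng.normal(scale=0.1, size=(4, 2))   # encode weights (k=2 < n=4)
W_dec = rng.normal(scale=0.1, size=(2, 4))   # decode weights

def loss():
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

loss_before = loss()
for step in range(3000):
    H = X @ W_enc                            # h = encode(x)
    err = (H @ W_dec - X) / len(X)           # reconstruction error, decode(h) - x
    g_dec = H.T @ err                        # gradients of the squared error
    g_enc = X.T @ (err @ W_dec.T)
    W_dec -= 0.05 * g_dec
    W_enc -= 0.05 * g_enc

print(loss_before, loss())                   # the reconstruction error shrinks
```

Because the toy data lies exactly in a 2-dimensional subspace, the 2-dimensional code is enough to reconstruct it almost perfectly; real data would incur some reconstruction loss.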

3 Geospatial Analysis: A Data Science Perspective

The purpose of geospatial data analysis is to understand the planet Earth by collecting, interpreting, and visualizing data about objects, properties, and events happening on, above, and below the surface of the Earth. The tools we use to manage this information influence our knowledge of the Earth, so this section briefly reviews the technologies that enable geospatial data analysis.

3.1 Enabling Technologies for Geospatial Data Collection

The tools that are being used to collect location-specific information include (i) Remote Sensing (ii) Unmanned Aerial Vehicles (UAV) (iii) Global Positioning System (GPS) and (iv) Radio Frequency Identifiers (RFID). This section briefly explains these techniques.

  1. Remote Sensing

    Remote sensing is a widely used technique to observe and acquire information about the Earth without any physical contact. The principle of electromagnetic radiation is used to detect and observe objects on Earth. With the Sun as the primary source of electromagnetic radiation, sensors on remote sensing satellites detect and record the energy levels of radiation reflected from objects on Earth.

    Remote sensing can be either active or passive. In passive remote sensing, the reflected energy level is used to detect objects on Earth, while in active remote sensing the time delay between the emission of a signal and the return of its echo is used to determine the location, speed, and direction of an object.

    The images collected by a remote sensor are characterized primarily by attributes such as spectral resolution and spatial resolution. Spatial resolution describes the amount of physical area on Earth corresponding to a pixel in a raster image, typically 1 to 1000 meters per pixel. Spectral resolution is the number of electromagnetic bands recorded in a pixel, typically ranging from three bands (e.g., Red, Green, and Blue) to the seven visible colours. In hyper-spectral imaging produced by some remote sensing mechanisms, 100 to 1000 bands correspond to a pixel.

    Data acquired through remote sensing has been useful in many geospatial analysis activities, such as precision agriculture [40], hydrology [20], and soil moisture monitoring [46], to name a few.

  2. Drones and Unmanned Aerial Vehicles (UAV)

    Drones and UAVs are emerging as a cost-effective alternative to the conventional satellite-based method of remote sensing for Earth surface imaging [43]. Like satellites, UAVs are equipped with multi-spectral cameras, infrared cameras, and thermal cameras. A spatial accuracy of 0.5 to 2.5 meters has been reported when UAVs are used for remote sensing [37]. Remote sensing with UAVs has found applications in precision agriculture [37] and civilian security [9].

  3. Global Positioning Systems (GPS)

    Devices equipped with Global Positioning Systems (GPS) receive signals from GPS satellites and calculate their accurate positions. Smartphones, cars, and dedicated handheld devices are examples of GPS-enabled devices. These devices are used for navigation, showing directions to a destination on maps, and for monitoring and tracking the movements of objects of interest. Such devices can identify their locations with an accuracy of 5 meters to 30 centimeters. The location data collected from GPS devices has been used to identify driving styles [14], to predict traffic conditions [35], and for transportation management.

  4. Radio Frequency Identification (RFID)

    RFID is a low-cost technological alternative to GPS used for asset management, typically preferred when the assets to be managed move around a shorter range. Unlike GPS devices, RFID tags are transmitters of radio waves, which are received by an RFID tracker to identify the location of the tagged device. Recently, data generated by RFID devices has been found useful to recognise human activities [26] and to predict order completion times [45].

3.2 Geospatial Data Models

The geospatial data models represent information about the Earth's surface and the locations of objects of reference. The data model used depends on the mode of data collection: the raster and vector data models are used when data is collected through remote sensing or UAVs, while GPS and RFID data models are used when data is collected through GPS and RFID devices, respectively.

The three data models prevalent in the field of remote sensing and UAVs are raster, vector, and Triangular Irregular Networks (TIN). In the raster data model, the Earth's surface is represented through cell matrices that store numeric values. In the vector representation, the Earth's surface is represented through points, lines, and polygons. In the TIN data model, the Earth's surface is represented as non-overlapping contiguous triangles.

The GPS data contains the location information of a GPS-enabled device in terms of longitude, latitude, and timestamps, along with the satellites used to locate the device.

The RFID data contains the identification number of the RFID tag, its location, and a timestamp.
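As an illustration, the two record layouts just described might be modelled as follows; the field names and values are hypothetical, not a standard format.

```python
from dataclasses import dataclass

# Hypothetical record layouts for the two data models (illustrative only):
@dataclass
class GPSFix:
    longitude: float      # degrees
    latitude: float       # degrees
    timestamp: float      # Unix epoch seconds
    satellites: int       # satellites used to compute the fix

@dataclass
class RFIDRead:
    tag_id: str           # identification number of the RFID tag
    location: str         # tracker or zone that recorded the read
    timestamp: float      # Unix epoch seconds

fix = GPSFix(longitude=73.33, latitude=18.17, timestamp=1598745600.0, satellites=7)
read = RFIDRead(tag_id="E200-3412", location="dock-3", timestamp=1598745601.0)
print(fix.satellites, read.tag_id)
```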

3.3 Geospatial Data Management

Geographic Information System (GIS) integrates various aspects of geospatial analysis into one unified tool. It combines multiple geospatial analysis activities such as capturing of data, storage of data, querying data, presenting and visualizing data. It provides a user interface for users to perform these activities.

GIS integrates various data capturing technologies such as satellite-based remote sensing, UAV-based remote sensing, GPS devices, and scanning of paper-based maps.

GIS uses multiple kinds of data models, such as the raster and vector data models, to represent the Earth's surface as a base map. On top of a base map, layers are used to prepare thematic maps showing roads, land cover, and population distribution. A few examples of GIS tools are ArcGIS [21], Google Earth [18], and QGIS [42].

4 Deep learning for Remotely Sensed Data Analytics

Images constitute the major chunk of data acquired through remote sensing. These images vary in terms of information representation and graphical resolution. Typically, the images obtained through remote sensing represent information in either vector or raster form. Depending on their resolution, images can be of Low Resolution (LR), High Resolution (HR), or Very High Resolution (VHR) type.

Deep learning techniques have been found to be a promising method to process images of all kinds, especially for image segmentation [1], image enhancement [29], image classification [5], and identifying objects in images [49].

Applications of deep learning techniques for remotely sensed data analytics leverage these advancements in the field of image processing to develop novel applications for geospatial data analysis. This section briefly reviews some of these recent applications. A detailed survey appears in [30, 56].

4.1 Data Pre-processing

The images obtained through remote sensing are often of poor quality due to atmospheric conditions, and their quality needs to be enhanced before useful information can be extracted from them.

A set of activities which includes denoising, deblurring, super-resolution, pan-sharpening, and image fusion is typically performed on these images. Recently, geoscientists have started applying deep learning techniques for this purpose. Table 1 shows the application of deep learning techniques to image pre-processing activities.

Pre-processing activity: Denoising. Purpose: restoring the original clean image from a low-quality image or an image with irrelevant details. Techniques used: a combination of sparse coding with denoising autoencoders [50]; denoising CNN [53].

Pre-processing activity: Deblurring. Purpose: restoring the original sharp image from a blurred image; in remote sensing, blurring occurs due to atmospheric turbulence. Techniques used: CNN [33]; a combination of autoencoders and Generative Adversarial Networks (GAN) [34].

Pre-processing activity: Pan-sharpening, image fusion, and super-resolution. Purpose: many geospatial applications require images with both high spatial and high spectral resolution; pan-sharpening combines an LR multi-spectral image with an HR panchromatic image, while super-resolution combines an LR hyper-spectral (HS) image with an HR multi-spectral (MS) image. Techniques used: stacked autoencoders [28]; CNN [32].

Table 1: Applications of deep learning to image pre-processing

From the table, it can be observed that both supervised (e.g., CNN) and unsupervised (e.g., autoencoder) techniques are used for image pre-processing. Automatic feature extraction and performance comparable to conventional methods are some of the motivating factors behind adopting DL techniques for image pre-processing.

4.2 Feature Engineering

In the context of deep learning, feature engineering is simpler than with conventional machine learning techniques because many deep learning techniques, such as CNNs, extract features automatically. This is one reason to prefer deep learning techniques for the analysis of remote sensing imagery. However, there exist other feature engineering steps, described below, which need to be carried out for better performance and for reusing knowledge learnt on similar applications.

  1. Feature Selection: Feature selection is one of the crucial steps in feature engineering, aimed at identifying the most relevant features that contribute to establishing the relationship between the input and output variables. Earlier studies observe that three key factors affect the performance of machine learning techniques [52]: (i) the choice of data set, (ii) the machine learning algorithm, and (iii) the features used for the classification or prediction task. In conventional non-hierarchical machine learning methods, regression analysis is usually performed to select the most relevant features.

    Recently, a two-stage DL-based technique for feature selection was proposed in [13]. It is based on a combination of supervised deep networks and autoencoders. Deep networks in the first stage learn complicated low-level representations. In the second stage, unsupervised autoencoders learn simplified representations. A few DL techniques specific to geospatial applications have also been developed to extract spatial and spectral features [55, 57] from remotely sensed imagery.

  2. Feature Fusion: Feature fusion is the process of identifying discriminating features so that an optimal set of features can be used to classify or predict an output variable. For example, feature fusion is performed to recover a high-resolution image from a low-resolution image [47]. A remote-sensing-specific DL technique, designed to classify hyper-spectral images, is proposed in [27]. In the first stage, an optimal set of multi-scale features for a CNN is extracted from the input data. In the second stage, a collection of discriminative features is identified by fusing the multi-scale features.

  3. Transfer Learning or Domain Adaptation: The problem of domain adaptation is formulated with respect to the classification of remotely sensed images in [44]. It is defined as adapting a classification model to spectral properties or spatial regions different from those of the images used for training. In such circumstances, a trained model usually fails to classify images accurately because of differing acquisition methods or atmospheric conditions; such models are said to be sensitive to data shifts. Domain adaptation is a specialized technique to address the transfer learning problem. Transfer learning deals with the more general situation in which the features of a trained classifier are adapted to classify or predict data that is not stationary over time or space. For example, images captured through a remote sensing device vary between day and night.

    A strategy called transduction, which adopts DL techniques (e.g., autoencoders), has been proposed in [2] to address the transfer learning problem. The strategy pre-trains a DL architecture with unsupervised algorithms (e.g., autoencoders) on a test data set that is entirely different from the training data set, in order to identify discriminating features. The strategy works because it learns abstract representations: it identifies a set of discriminating features responsible for generalizing inferences from the observed data.

4.3 Geospatial Object Detection

The task of detecting or identifying an object of interest, such as a road, an airplane, or a building, from aerial images acquired either through remote sensing or UAVs is referred to as geospatial object detection.

It is a special case of the general problem of object recognition in images. However, numerous challenges, such as the small size of the objects to be detected, the large number of objects in the imagery, and complex environments [8, 54], make the task of object recognition more difficult here. Hence, DL methods have been found useful in such situations, especially to extract low-level features and learn abstract representations.

When DL-based methods are applied, models such as CNNs are first trained with geospatial images labelled to separate out the objects of interest. The images used for training are optimized to detect a specific object. These steps can be repeated until the image analysis model accurately detects multiple types of objects in the given images [15].

4.4 Classification Tasks in Geospatial Analysis

As in other fields such as Computer Vision and Image Processing, the field of Geospatial Analysis primarily adopts DL techniques to classify remotely sensed imagery in various ways to extract useful information and patterns. Some of these use cases are discussed below.

These classification tasks are either pixel-based or object-based [48]. In pixel-based classification, pixels are grouped based on spectral properties (e.g., resolution). In object-based classification, groups of pixels are classified into various geometric shapes and patterns using both spectral and spatial properties. Either supervised (e.g., CNN) or unsupervised (e.g., autoencoder) algorithms are applied for the purpose of classification. In both scenarios, DL-based techniques have outperformed conventional machine learning techniques in terms of classification accuracy [54].

Some of the classification activities include:

  1. Land Cover Classification:

    The observed bio-physical cover over the surface of the Earth is referred to as land cover. A set of 17 different categories has been identified by the International Geosphere-Biosphere Programme (IGBP) to classify the Earth's surface. Some of these broad categories are Water, Forest, Shrubland, Savannas, Grassland, Wetland, Cropland, Urban, Snow, and Barren [11].

    There are two primary methods of obtaining information about land cover: field surveys and remotely sensed imagery. DL techniques are applied when the information is acquired through remote sensing. In [51, 23, 39], pre-trained CNN models are used to classify different land cover types. The CNN is the most preferred technique for classifying the Earth's surface according to land cover; however, other techniques such as RNNs have also been applied in a few cases [41].

  2. Land Use Classification:

    In this mode of classification, the Earth's surface is classified according to its social, economic, and political uses. For example, a piece of green land may be used as a tennis court or a cricket pitch, and an urban parcel may be used for industrial or residential purposes. Hence there is no fixed set of categories or a single classification system to differentiate land uses.

    As in the case of land cover classification, a CNN [19] is typically used to classify land uses and to detect changes in land cover according to its uses [4].

  3. Scene Classification:

    Scenes in remotely sensed images are high-level semantic entities such as a densely populated area, a river, a freeway, a golf-course, an airport etc. Supervised and unsupervised deep learning techniques have been found effective for scene understanding and scene classification in remotely sensed imagery [7].

Further, DL-techniques have been applied in specific cases of land use and land cover classification such as fish species classification [38], crop type classification [23] and mangrove classification [16].

5 Deep learning for GPS Data Analytics

GPS-enabled devices, particularly vehicles, are generating data in the range of gigabytes per second. Such data includes the exact coordinates, i.e., the longitude and latitude of the device, and the direction and speed with which the device is moving. Such information throws light on many behavioural patterns useful for targeted marketing.

One way to utilize this data is to identify the driving style of a human driver. The various ways of accelerating, braking, and turning under different road and environmental conditions determine the driving style of a driver. This information is useful for multiple business purposes: insurance companies may use it to mitigate driving risks, and it can help to design better vehicles, to train autonomous cars, and to identify an anonymous driver.

However, identifying features that precisely determine a driving style is a challenging task, and deep learning techniques are useful to extract such features. W. Dong et al. [14] have applied DL techniques to GPS data to characterize driving styles, motivated by the applications of DL methods in speech recognition. They interpreted GPS data as a time series and applied CNN and RNN to identify driving styles. The raw data collected from the GPS devices is transformed to extract statistical features (e.g., mean and standard deviation) and then fed to a CNN and an RNN for analysis. The method was validated by identifying drivers' identities.
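A sketch of this kind of transformation is shown below. It is an illustrative reconstruction of the idea (windowed statistics over a simulated trace), not the exact pipeline of [14]; the window size and simulated speed/heading values are assumptions.

```python
import numpy as np

# Turning a raw GPS trace into windowed statistical features (mean, standard
# deviation) of speed and heading change: the kind of fixed-size feature
# tensor a CNN or RNN could then consume.
def trace_features(speed, heading, window=64):
    feats = []
    for start in range(0, len(speed) - window + 1, window):
        s = speed[start:start + window]
        dh = np.diff(heading[start:start + window])   # turning behaviour
        feats.append([s.mean(), s.std(), dh.mean(), dh.std()])
    return np.array(feats)                            # shape: (windows, 4)

rng = np.random.default_rng(4)
speed = np.abs(rng.normal(15, 5, size=640))           # m/s, simulated trace
heading = np.cumsum(rng.normal(0, 2, size=640))       # degrees, random walk
features = trace_features(speed, heading)
print(features.shape)                                 # (10, 4)
```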

In another interesting application, a combination of an RNN and a Restricted Boltzmann Machine [31] is adopted to analyze GPS data and predict the evolution of congestion in a transportation network. The technique effectively predicts how congestion at one location ripples out to other sites.
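Framed abstractly, such congestion forecasting consumes a sequence of per-link congestion levels and emits a prediction for the next time step. The sketch below shows only the forward pass of a plain RNN cell with a linear readout, as a simplified stand-in for the RNN/RBM architecture of [31]; the network size, weight scales, and 24-step history are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def rnn_forecast(seq, W_h, W_x, W_o):
    """One-step-ahead forecast of per-link congestion levels using a
    plain RNN cell (tanh recurrence) with a linear readout."""
    h = np.zeros(W_h.shape[0])
    for x_t in seq:                      # x_t: congestion level per link
        h = np.tanh(W_h @ h + W_x @ x_t)
    return W_o @ h                       # predicted next congestion vector

n_links, hidden = 6, 12                  # hypothetical road-network size
W_h = rng.normal(size=(hidden, hidden)) * 0.1
W_x = rng.normal(size=(hidden, n_links)) * 0.1
W_o = rng.normal(size=(n_links, hidden)) * 0.1

history = rng.uniform(0.0, 1.0, size=(24, n_links))  # last 24 time steps
forecast = rnn_forecast(history, W_h, W_x, W_o)
```

The recurrence is what lets congestion observed at one time step influence predictions at later steps, which is the ripple effect described above.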

GPS data has also been analyzed using DL techniques to manage resources efficiently. For example, in the construction industry, the usage of GPS-enabled equipment is monitored and analyzed [36].

6 Deep learning for RFID Data Analytics

RFID and GPS are both emerging as technologies for ascertaining the locations of devices or assets. Compared with GPS, however, RFID supports location-sensitive applications only over shorter geographical ranges. Whereas GPS-enabled devices provide absolute location in terms of longitude and latitude, RFID is a low-cost alternative that measures the location of assets relative to an RFID reader.

RFID's ability to accurately measure the location of devices relative to an RFID reader is increasingly being exploited in automated manufacturing plants. Using the location data transmitted by RFID-tagged devices, various intelligent applications have been designed by applying DL techniques. These applications are intended to identify patterns and recognize activities in a manufacturing or business process.

DL techniques have been found especially effective when activities unfold across spatio-temporal dimensions, for example, detecting activities during trauma resuscitation in the field of medicine [26].

In such situations, activity recognition is represented as a multi-class classification problem. In the case of trauma resuscitation, the activities include oxygen preparation, blood pressure measurement, temperature measurement, and cardiac lead placement. To detect these activities, a hardware setup of RFID-tagged devices along with an RFID reader is used to collect the data. The collected data is then analyzed using a CNN to extract relevant features and recognize the activity being performed.
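The multi-class setup above can be made concrete with a minimal forward pass: windows of RFID signal-strength (RSSI) readings go through a 1-D convolution, global average pooling, and a softmax over the activity classes. This is a deliberately simplified sketch, not the architecture of [26]; the number of tags, window length, filter counts, and random weights are all illustrative assumptions.

```python
import numpy as np

ACTIVITIES = ["oxygen preparation", "blood pressure measurement",
              "temperature measurement", "cardiac lead placement"]

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D convolution with ReLU.
    x: (channels, length); kernels: (filters, channels, width)."""
    f, c, k = kernels.shape
    out_len = x.shape[1] - k + 1
    out = np.zeros((f, out_len))
    for i in range(out_len):
        out[:, i] = np.tensordot(kernels, x[:, i:i + k],
                                 axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)

def classify(rssi_window, kernels, w_out):
    """Forward pass: conv -> global average pool -> softmax."""
    h = conv1d(rssi_window, kernels)          # (filters, L')
    pooled = h.mean(axis=1)                   # (filters,)
    logits = w_out @ pooled                   # (num_classes,)
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical shapes: 8 RFID tags sampled over a 100-step window
kernels = rng.normal(size=(16, 8, 5)) * 0.1   # 16 filters, width 5
w_out = rng.normal(size=(len(ACTIVITIES), 16)) * 0.1
window = rng.normal(size=(8, 100))            # one RSSI window
probs = classify(window, kernels, w_out)
predicted = ACTIVITIES[int(np.argmax(probs))]
```

In a trained system the weights would of course be learned from labelled resuscitation recordings rather than drawn at random.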

In another application, this time from manufacturing, deep learning techniques have been used to accurately predict job completion time. Conventional prediction methods rely on historical data alone, and their predictions often deviate greatly from actual completion times. The method proposed in [45] uses RFID tags and readers to collect real-time data, which is then combined with historical data through a CNN to predict job completion time.
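The essential structure of this task is a regression from features derived from real-time shop-floor data to a completion time. The sketch below uses a linear model trained by gradient descent on synthetic data as a deliberately simplified stand-in for the deep network of [45]; the feature names, data, and coefficients are all fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical features extracted from real-time RFID job-shop data:
# [operations remaining, current queue length, mean processing time so far]
X = rng.uniform(0.0, 1.0, size=(200, 3))
true_w = np.array([40.0, 15.0, 25.0])         # synthetic ground truth
y = X @ true_w + rng.normal(0.0, 0.5, 200)    # completion time in minutes

# Train by full-batch gradient descent on the mean-squared error.
w = np.zeros(3)
lr = 0.1
for _ in range(2000):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= lr * grad

pred = X @ w
mae = np.abs(pred - y).mean()                 # mean absolute error, minutes
```

A CNN as in [45] replaces the linear map with learned feature extraction, but the supervised regression objective is the same.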

7 Conclusion

The scope of geospatial data analysis is vast. Data collection methods range from manual surveys to satellite-based remote sensing. Numerous data models have been developed to represent various aspects of the Earth's surface and the objects on it. These models capture absolute and relative location-specific information, reveal static and dynamic aspects, and differ in spectral and spatial resolution. Various GIS tools are used to integrate and manage geospatial information.

So far, harnessing this collected information for useful purposes, such as extracting hidden information in the form of patterns, behaviours, and predictions, has been limited by the absence of powerful analysis methods. The emergence of deep learning-based data analysis techniques has opened up new areas for geospatial applications.

This chapter presents some of the emerging applications designed around DL-techniques in the field of geospatial data analysis. These applications are categorized based on the mode of data collection adopted.

Deep-learning architectures such as CNNs and autoencoders are increasingly used when the method of data collection is remote sensing or UAV imaging. Applications of DL techniques realize high-level classification tasks such as land-use and land-cover classification.

When GPS is the primary method of data collection, the collected data needs to be interpreted as a sequence or time series. In such situations, RNNs, along with CNNs, are increasingly used to identify hidden patterns and behaviours in the traffic and mobility data collected from GPS-enabled devices.

CNNs are primarily used to process the location-specific information gathered through RFID devices over shorter geographic areas. These analyses recognize spatio-temporal activities in a manufacturing or business process and predict the time required to complete those activities from real-time data, rather than relying solely on historical data for predictive analytics.

These novel applications of DL-techniques in the field of geospatial analysis are improving our knowledge of planet Earth, creating new insights into our behaviour while travelling, assisting us in making decisions (e.g., route selection), and making us more environmentally conscious citizens (e.g., predictions about depleting forest land and the protection of mangroves).

Acknowledgement:

The authors acknowledge the funding provided by the Ministry of Human Resource Development (MHRD), Government of India, under the Pandit Madan Mohan Malaviya National Mission on Teachers and Teaching (PMMMNMTT). The work presented in this chapter is based on course material developed to train engineering teachers on the topics of Geospatial Analysis and Product Design Engineering.

References

  • [1] V. Badrinarayanan, A. Kendall, and R. Cipolla (2017) Segnet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE transactions on pattern analysis and machine intelligence 39 (12), pp. 2481–2495. Cited by: §4.
  • [2] Y. Bengio (2012) Deep learning of representations for unsupervised and transfer learning. In Proceedings of ICML workshop on unsupervised and transfer learning, pp. 17–36. Cited by: item 3.
  • [3] C. Cao, F. Liu, H. Tan, D. Song, W. Shu, W. Li, Y. Zhou, X. Bo, and Z. Xie (2018) Deep learning and its applications in biomedicine. Genomics, proteomics & bioinformatics 16 (1), pp. 17–32. Cited by: §1.
  • [4] C. Cao, S. Dragićević, and S. Li (2019) Land-use change detection with convolutional neural network methods. Environments 6 (2), pp. 25. Cited by: item 2.
  • [5] T. Chan, K. Jia, S. Gao, J. Lu, Z. Zeng, and Y. Ma (2015) PCANet: a simple deep learning baseline for image classification?. IEEE transactions on image processing 24 (12), pp. 5017–5032. Cited by: §4.
  • [6] H. Chen, O. Engkvist, Y. Wang, M. Olivecrona, and T. Blaschke (2018) The rise of deep learning in drug discovery. Drug discovery today 23 (6), pp. 1241–1250. Cited by: §1.
  • [7] G. Cheng, J. Han, and X. Lu (2017) Remote sensing image scene classification: benchmark and state of the art. Proceedings of the IEEE 105 (10), pp. 1865–1883. Cited by: item 3.
  • [8] C. Cira, R. Alcarria, M. Manso-Callejo, and F. Serradilla (2019) A deep convolutional neural network to detect the existence of geospatial elements in high-resolution aerial imagery. In Multidisciplinary Digital Publishing Institute Proceedings, Vol. 19, pp. 17. Cited by: §4.3.
  • [9] K. Daniel and C. Wietfeld (2011) Using public network infrastructures for uav remote sensing in civilian security operations. Technical report DORTMUND UNIV (GERMANY FR). Cited by: item 2.
  • [10] L. Deng (2014) A tutorial survey of architectures, algorithms, and applications for deep learning. APSIPA Transactions on Signal and Information Processing 3. Cited by: §1.
  • [11] A. Di Gregorio (2005) Land cover classification system: classification concepts and user manual: lccs. Vol. 2, Food & Agriculture Org.. Cited by: item 1.
  • [12] P. Domingos (2012-10) A few useful things to know about machine learning. Commun. ACM 55 (10), pp. 78–87. External Links: ISSN 0001-0782, Link, Document Cited by: §2.
  • [13] N. T. Dong, L. Winkler, and M. Khosla (2019) Revisiting feature selection with data complexity for biomedicine. bioRxiv, pp. 754630. Cited by: item 1.
  • [14] W. Dong, J. Li, R. Yao, C. Li, T. Yuan, and L. Wang (2016) Characterizing driving styles with deep learning. arXiv preprint arXiv:1607.03611. Cited by: item 3, §5.
  • [15] A. Estrada, A. Jenkins, B. Brock, and C. Mangold (2018-July 3) Broad area geospatial object detection using autogenerated deep learning models. Google Patents. Note: US Patent App. 10/013,774 Cited by: §4.3.
  • [16] S. Faza, E. Nababan, S. Efendi, M. Basyuni, and R. Rahmat (2018) An initial study of deep learning for mangrove classification. In IOP Conference Series: Materials Science and Engineering, Vol. 420, pp. 012093. Cited by: §4.4.
  • [17] I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep learning. MIT press. Cited by: §1, §2.
  • [18] N. Gorelick, M. Hancher, M. Dixon, S. Ilyushchenko, D. Thau, and R. Moore (2017) Google earth engine: planetary-scale geospatial analysis for everyone. Remote Sensing of Environment 202, pp. 18–27. Cited by: §3.3.
  • [19] P. Helber, B. Bischke, A. Dengel, and D. Borth (2019) Eurosat: a novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 12 (7), pp. 2217–2226. Cited by: item 2.
  • [20] T. J. Jackson, J. Schmugge, and E. Engman (1996) Remote sensing applications to hydrology: soil moisture. Hydrological Sciences Journal 41 (4), pp. 517–530. Cited by: item 1.
  • [21] K. Johnston, J. M. Ver Hoef, K. Krivoruchko, and N. Lucas (2001) Using arcgis geostatistical analyst. Vol. 380, Esri Redlands. Cited by: §3.3.
  • [22] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: §1.
  • [23] N. Kussul, M. Lavreniuk, S. Skakun, and A. Shelestov (2017) Deep learning classification of land cover and crop types using remote sensing data. IEEE Geoscience and Remote Sensing Letters 14 (5), pp. 778–782. Cited by: item 1, §4.4.
  • [24] Q. V. Le et al. (2015) A tutorial on deep learning part 2: autoencoders, convolutional neural networks and recurrent neural networks. Google Brain, pp. 1–20. Cited by: §2.5.
  • [25] Y. LeCun, Y. Bengio, and G. Hinton (2015) Deep learning. nature 521 (7553), pp. 436. Cited by: §2.2, §2.3, §2.3.
  • [26] X. Li, Y. Zhang, I. Marsic, A. Sarcevic, and R. S. Burd (2016) Deep learning for rfid-based activity recognition. In Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems CD-ROM, pp. 164–175. Cited by: item 4, §6.
  • [27] Z. Li, L. Huang, and J. He (2019) A multiscale deep middle-level feature fusion network for hyperspectral classification. Remote Sensing 11 (6), pp. 695. Cited by: item 2.
  • [28] Y. Liu, X. Chen, Z. Wang, Z. J. Wang, R. K. Ward, and X. Wang (2018) Deep learning for pixel-level image fusion: recent advances and future prospects. Information Fusion 42, pp. 158–173. Cited by: Table 1.
  • [29] K. G. Lore, A. Akintayo, and S. Sarkar (2017) LLNet: a deep autoencoder approach to natural low-light image enhancement. Pattern Recognition 61, pp. 650–662. Cited by: §4.
  • [30] L. Ma, Y. Liu, X. Zhang, Y. Ye, G. Yin, and B. A. Johnson (2019) Deep learning in remote sensing applications: a meta-analysis and review. ISPRS journal of photogrammetry and remote sensing 152, pp. 166–177. Cited by: §4.
  • [31] X. Ma, H. Yu, Y. Wang, and Y. Wang (2015) Large-scale transportation network congestion evolution prediction using deep learning theory. PloS one 10 (3), pp. e0119044. Cited by: §5.
  • [32] G. Masi, D. Cozzolino, L. Verdoliva, and G. Scarpa (2016) Pansharpening by convolutional neural networks. Remote Sensing 8 (7), pp. 594. Cited by: Table 1.
  • [33] S. Nah, T. Hyun Kim, and K. Mu Lee (2017) Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3883–3891. Cited by: Table 1.
  • [34] T. M. Nimisha, A. Kumar Singh, and A. N. Rajagopalan (2017) Blur-invariant deep learning for blind-deblurring. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4752–4760. Cited by: Table 1.
  • [35] X. Niu, Y. Zhu, and X. Zhang (2014) DeepSense: a novel learning mechanism for traffic prediction with taxi gps traces. In 2014 IEEE global communications conference, pp. 2745–2750. Cited by: item 3.
  • [36] N. Pradhananga and J. Teizer (2013) Automatic spatio-temporal analysis of construction site equipment operations using gps data. Automation in Construction 29, pp. 107–122. Cited by: §5.
  • [37] C. A. Rokhmana (2015) The potential of uav-based remote sensing for supporting precision agriculture in indonesia. Procedia Environmental Sciences 24, pp. 245–253. Cited by: item 2.
  • [38] A. Salman, A. Jalal, F. Shafait, A. Mian, M. Shortis, J. Seager, and E. Harvey (2016) Fish species classification in unconstrained underwater environments based on deep learning. Limnology and Oceanography: Methods 14 (9), pp. 570–585. Cited by: §4.4.
  • [39] G. J. Scott, M. R. England, W. A. Starms, R. A. Marcum, and C. H. Davis (2017) Training deep convolutional neural networks for land–cover classification of high-resolution imagery. IEEE Geoscience and Remote Sensing Letters 14 (4), pp. 549–553. Cited by: item 1.
  • [40] S. K. Seelan, S. Laguette, G. M. Casady, and G. A. Seielstad (2003) Remote sensing applications for precision agriculture: a learning community approach. Remote Sensing of Environment 88 (1-2), pp. 157–169. Cited by: item 1.
  • [41] Z. Sun, L. Di, and H. Fang (2019) Using long short-term memory recurrent neural network in land cover classification on landsat and cropland data layer time series. International journal of remote sensing 40 (2), pp. 593–614. Cited by: item 1.
  • [42] Q. D. Team et al. (2009) QGIS geographic information system. Open Source Geospatial Foundation. Cited by: §3.3.
  • [43] K. Themistocleous (2014) The use of uav platforms for remote sensing applications: case studies in cyprus. In Second International Conference on Remote Sensing and Geoinformation of the Environment (RSCy2014), Vol. 9229, pp. 92290S. Cited by: item 2.
  • [44] D. Tuia, C. Persello, and L. Bruzzone (2016) Domain adaptation for the classification of remote sensing data: an overview of recent advances. IEEE geoscience and remote sensing magazine 4 (2), pp. 41–57. Cited by: item 3.
  • [45] C. Wang and P. Jiang (2019) Deep neural networks based order completion time prediction by using real-time job shop rfid data. Journal of Intelligent Manufacturing 30 (3), pp. 1303–1318. Cited by: item 4, §6.
  • [46] L. Wang and J. J. Qu (2009) Satellite remote sensing applications for surface soil moisture monitoring: a review. Frontiers of Earth Science in China 3 (2), pp. 237–247. Cited by: item 1.
  • [47] W. Wang, R. Guo, Y. Tian, and W. Yang (2019) CFSNet: toward a controllable feature space for image restoration. arXiv preprint arXiv:1904.00634. Cited by: item 2.
  • [48] R. C. Weih Jr and N. D. Riggan Jr Object-based classification vs. pixel-based classification: comparative importance of multi-resolution imagery. Cited by: §4.4.
  • [49] J. Wu, Y. Yu, C. Huang, and K. Yu (2015) Deep multiple instance learning for image classification and auto-annotation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3460–3469. Cited by: §4.
  • [50] J. Xie, L. Xu, and E. Chen (2012) Image denoising and inpainting with deep neural networks. In Advances in neural information processing systems, pp. 341–349. Cited by: Table 1.
  • [51] G. Xu, X. Zhu, D. Fu, J. Dong, and X. Xiao (2017) Automatic land cover classification of geo-tagged field photos by deep learning. Environmental modelling & software 91, pp. 127–134. Cited by: item 1.
  • [52] D. Yan, C. Li, N. Cong, L. Yu, and P. Gong (2019) A structured approach to the analysis of remote sensing images. International Journal of Remote Sensing, pp. 1–24. Cited by: item 1.
  • [53] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang (2017) Beyond a gaussian denoiser: residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing 26 (7), pp. 3142–3155. Cited by: Table 1.
  • [54] L. Zhang, L. Zhang, and B. Du (2016) Deep learning for remote sensing data: a technical tutorial on the state of the art. IEEE Geoscience and Remote Sensing Magazine 4 (2), pp. 22–40. Cited by: §1, §4.3, §4.4.
  • [55] W. Zhao and S. Du (2016) Spectral–spatial feature extraction for hyperspectral image classification: a dimension reduction and deep learning approach. IEEE Transactions on Geoscience and Remote Sensing 54 (8), pp. 4544–4554. Cited by: item 1.
  • [56] X. X. Zhu, D. Tuia, L. Mou, G. Xia, L. Zhang, F. Xu, and F. Fraundorfer (2017) Deep learning in remote sensing: a comprehensive review and list of resources. IEEE Geoscience and Remote Sensing Magazine 5 (4), pp. 8–36. Cited by: §4.
  • [57] Q. Zou, L. Ni, T. Zhang, and Q. Wang (2015) Deep learning based feature selection for remote sensing scene classification. IEEE Geoscience and Remote Sensing Letters 12 (11), pp. 2321–2325. Cited by: item 1.