Understanding Non-optical Remote-sensed Images: Needs, Challenges and Ways Forward

12/23/2016
by Amit Kumar Mishra, et al.
IEEE

Non-optical remote-sensed images are going to be used more often in managing disaster, crime and precision agriculture. With more small satellites and unmanned air vehicles planning to carry radar and hyperspectral image sensors, there is going to be an abundance of such data in the near future. Understanding these data in real time will be crucial to attaining some of the important sustainable development goals. Processing non-optical images is, in many ways, different from processing optical images, yet most of the recent advances in the domain of image understanding have been made using optical images. In this article we shall explain the need for image understanding in the non-optical domain and the typical challenges involved. Then we shall describe the existing approaches and how we can move from there to the desired goal of a reliable real-time image understanding system.


1 Introduction: Why do we need to understand non-optical images?

Remote sensing (RS) using non-optical sensors such as synthetic aperture radar (SAR) and radiometers has become increasingly popular as the cost of implementing such systems has fallen over the past decade. This is evident from the number of new SAR-based small-satellite projects started by space agencies across the globe. The situation becomes more interesting with the advent of low-cost drones and the development of light airborne SAR sensors. The driving motivation behind this trend is the extra information that a non-optical image delivers, which has proved very useful in precision agriculture. Secondly, non-optical imaging sensors work under all weather conditions and also at night. This makes such sensors particularly suitable for homeland security and for the control of illegal trafficking. Both precision agriculture and the control of illegal trafficking align closely with many of the sustainable development goals [1].

Some of the major challenges in extracting information from huge volumes of non-optical RS images are the lack of sufficient training data and the highly dynamic nature of the scenes. For example, in many parts of the world locust infestation is a fairly unlikely event that occurs once every five to ten years. Predicting such an event means dealing with hardly any training data. Secondly, given the aggressive rate at which locusts spread, warnings need to be delivered as quickly as possible. These are challenges that proper image understanding schemes can potentially solve. Hence, focused image understanding research in the domain of non-optical RS images is an emerging and crucial field.

There are some works in the open literature describing the application of emerging optical image processing tools to non-optical RS images; examples can be found in references [2, 3, 4, 5, 6, 7]. A recently published tutorial [8] surveys many recently proposed deep learning algorithms and investigates their use in RS image segmentation, classification and recognition. However, it is a state-of-the-art review and does not discuss why RS is unique or what more can be done to solve the typical challenges of RS image understanding.

2 Plan of the final paper

In addition to a detailed introduction describing the need for and challenges of image understanding for remote-sensed non-optical images, the final submission will elaborate on the following themes.

2.1 Clarifying the Goal

The major step towards solving any problem is to analyse the problem and the goal. This shall be done following a systems engineering approach [9]: I shall start from the “user requirements”, from which the functional requirements will be analysed, and finally the desired specifications shall be presented.

This will make clear the bottlenecks in front of us and how we might overcome them.
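To make this flow-down concrete, here is a minimal sketch in Python of how user requirements, functional requirements and specifications could be traced to one another. Every identifier and requirement text below is an illustrative assumption, not content of the final paper.

    # Minimal sketch of a requirements flow-down (user requirements ->
    # functional requirements -> specifications). All texts are invented
    # examples for illustration.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Requirement:
        ident: str
        text: str
        derived_from: List[str] = field(default_factory=list)

    ur1 = Requirement("UR-1", "Deliver infestation warnings within hours of image capture")
    fr1 = Requirement("FR-1", "Classify SAR scenes in near real time", ["UR-1"])
    fr2 = Requirement("FR-2", "Cope with very few labelled training samples", ["UR-1"])
    sp1 = Requirement("SP-1", "End-to-end latency under 10 minutes per scene", ["FR-1"])

    # Print the traceability chain from specifications back to user needs.
    for req in (ur1, fr1, fr2, sp1):
        print(req.ident, "<-", req.derived_from or ["user need"], ":", req.text)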

2.2 Existing Approaches

In this part some of the pertinent existing approaches shall be described. Only those algorithms that closely fit the “goals” shall be picked. In describing them, emphasis will be placed on each algorithm’s strengths and on how it can be fine-tuned to fit our requirements.

2.3 How to get from where we are to our goals

In this last theme I shall describe some of the broad streams of approaches that can be taken to solve the problem of image understanding for non-optical RS images. This will be described under three headings.

2.3.1 Phenomenology and DeepNN

A major difference between optical and non-optical images is that optical images mostly represent the scene the way we “see” it. This helps a lot in the design phase of an image understanding algorithm. In deep neural networks especially, the deeper layers represent canonical and (mostly) invariant sub-features of the image. Such features are easier to arrive at for optical images than for non-optical ones.

In non-optical images, arriving at canonical features requires a phenomenological study of the imaging system [10, 11, 12]. This point shall be described in detail in this part.
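As an illustration of what “deeper layers representing canonical sub-features” looks like in practice, below is a minimal sketch, assuming PyTorch and torchvision are available, that reads out an intermediate layer of a standard network via a forward hook. The choice of network, layer and input is an illustrative assumption, not a method prescribed in this paper.

    # Minimal sketch: inspect an intermediate layer of a CNN with a
    # forward hook. Network and layer choices are illustrative only.
    import torch
    import torchvision.models as models

    net = models.resnet18(weights=None)  # untrained; for illustration only
    net.eval()

    features = {}

    def hook(module, inputs, output):
        # Keep the activation of the hooked layer for inspection.
        features["layer3"] = output.detach()

    net.layer3.register_forward_hook(hook)

    # A single-channel SAR image would first have to be mapped to the
    # network's expected input; a random 3-channel tensor stands in here.
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        net(x)

    print(features["layer3"].shape)  # torch.Size([1, 256, 14, 14])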

2.3.2 Processing following the DIKW Pyramid

Understanding fundamentally forms part of the knowledge-acquisition phase in the classic data-information-knowledge-wisdom (DIKW) pyramid. Framing the whole problem of image understanding within the DIKW pyramid makes the processing intuitive and easy to track. This will be detailed in this part.

It can, however, be noted that this process might be equally helpful for optical image understanding.
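As a concrete illustration, the following minimal sketch stages a toy image-understanding pipeline along the DIKW pyramid. Every stage function, feature and threshold is an invented placeholder, not a method from this paper.

    # Minimal DIKW-staged pipeline sketch; all stages are placeholders.
    import numpy as np

    def data_stage(raw):
        """Data: calibrated pixel values, no interpretation yet."""
        return raw.astype(np.float32)

    def information_stage(img):
        """Information: named features extracted from the data."""
        return {"bright_fraction": float((img > img.mean()).mean())}

    def knowledge_stage(info):
        """Knowledge: a scene-level interpretation of the features."""
        return "built-up area" if info["bright_fraction"] > 0.4 else "open terrain"

    def wisdom_stage(label, history):
        """Wisdom: an actionable decision in the context of past scenes."""
        history.append(label)
        return "alert analyst" if history.count("built-up area") > len(history) / 2 else "archive"

    history = []
    scene = np.random.rand(128, 128)  # stand-in for a non-optical image
    label = knowledge_stage(information_stage(data_stage(scene)))
    print(wisdom_stage(label, history))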

2.3.3 Cognitive architecture based global loop

The human brain has been one of the major inspirations behind the development of many machine learning algorithms. The prefrontal cortex has been described as the seat of human cognition, and it has a distinct layered architecture [13, 14]. This might be one of the reasons why deep NNs perform so well. This kind of modelling has been well taken care of in cognitive architectures [15, 16], which deal with symbolic information processing.

We shall describe how a symbolic/non-symbolic hybrid architecture [17] inspired by the prefrontal cortex can be used to process images for better understanding and for discovering knowledge from new images. The proposed structure is shown in Figure 1.

Figure 1: A cognitive architecture can be used as a deliberative layer because of its high-level cognitive capabilities. The middle executive layer would control the flow of information between the deliberative and reactive layer. An ANN would make an appropriate reactive layer for perception and action. Further detail can be found in [17].
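To fix ideas, here is a minimal sketch of the three-layer loop of Figure 1: a symbolic deliberative layer, an executive layer controlling the flow of information, and a sub-symbolic reactive layer standing in for an ANN. The classes and rules below are illustrative assumptions, not the architecture of [17].

    # Minimal sketch of the deliberative/executive/reactive loop.
    # All rules and thresholds are invented for illustration.

    class ReactiveLayer:
        """Stands in for an ANN doing perception on image statistics."""
        def perceive(self, image_stats):
            return "anomaly" if image_stats.get("change", 0.0) > 0.5 else "nominal"

    class DeliberativeLayer:
        """Symbolic rule base standing in for a cognitive architecture."""
        rules = {"anomaly": "task new acquisition", "nominal": "continue survey"}
        def decide(self, percept):
            return self.rules[percept]

    class ExecutiveLayer:
        """Routes information between the reactive and deliberative layers."""
        def __init__(self):
            self.reactive = ReactiveLayer()
            self.deliberative = DeliberativeLayer()
        def step(self, image_stats):
            percept = self.reactive.perceive(image_stats)
            return self.deliberative.decide(percept)

    loop = ExecutiveLayer()
    print(loop.step({"change": 0.7}))  # -> "task new acquisition"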

3 Conclusion

Short Biography of the Author

Dr. Amit Kumar Mishra is an Associate Professor with the Radar Remote Sensing Group, University of Cape Town. He has more than 12 years of experience in radar system design and data analysis, and has worked extensively on synthetic aperture radar (SAR) image analysis and classification. He is a Senior Member of the IEEE and holds eight patents as either principal applicant or co-applicant. One of his current interests is the emerging domain of cognitive engineering and cognitive data processing.

References

  • [1] “Sustainable development goals,” sustainabledevelopment.un.org.
  • [2] J. Tang, C. Deng, G.-B. Huang, and B. Zhao, “Compressed-domain ship detection on spaceborne optical image using deep neural network and extreme learning machine,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 3, pp. 1174–1185, 2015.
  • [3] X. Huang and J. R. Jensen, “A machine-learning approach to automated knowledge-base building for remote sensing image analysis with gis data,” Photogrammetric engineering and remote sensing, vol. 63, no. 10, pp. 1185–1193, 1997.
  • [4] P. D. Heermann and N. Khazenie, “Classification of multispectral remote sensing data using a back-propagation neural network,” IEEE Transactions on Geoscience and Remote Sensing, vol. 30, no. 1, pp. 81–88, 1992.
  • [5] J. Han, D. Zhang, G. Cheng, L. Guo, and J. Ren, “Object detection in optical remote sensing images based on weakly supervised learning and high-level feature learning,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 6, pp. 3325–3337, 2015.
  • [6] M. Pal, A. E. Maxwell, and T. A. Warner, “Kernel-based extreme learning machine for remote-sensing image classification,” Remote Sensing Letters, vol. 4, no. 9, pp. 853–862, 2013.
  • [7] O. A. Penatti, K. Nogueira, and J. A. dos Santos, “Do deep features generalize from everyday objects to remote sensing and aerial scenes domains?” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2015, pp. 44–51.
  • [8] L. Zhang, L. Zhang, and B. Du, “Deep learning for remote sensing data: A technical tutorial on the state of the art,” IEEE Geoscience and Remote Sensing Magazine, vol. 4, no. 2, pp. 22–40, 2016.
  • [9] NASA, NASA Systems Engineering Handbook. NASA, 2007.
  • [10] J. Huynen, “Phenomenological theory of radar targets,” Ph.D. dissertation, TU Delft, 1970.
  • [11] G. Lellouch, A. Mishra, and M. Inggs, “OFDM phenomenology: recognition of canonical scatterers using flat spectra OFDM pulses,” IET Radar, Sonar and Navigation, pp. 647–654, 2016.
  • [12] G. Lellouch, A. Mishra, and M. Inggs, “OFDM phenomenology: radar technique combining genetic algorithm-based pulse design and energy detector for target recognition,” IET Radar, Sonar and Navigation, pp. 912–922, 2016.
  • [13] J. M. Fuster, “Upper processing stages of the perception–action cycle,” Trends in Cognitive Sciences, vol. 8, no. 4, pp. 143–145, April 2004.
  • [14] D. O. Hebb, The Organization of Behavior. John Wiley and Sons, 1949.
  • [15] J. E. Laird, A. Newell, and P. S. Rosenbloom, “Soar: An architecture for general intelligence,” Artificial intelligence, vol. 33, no. 1, pp. 1–64, 1987.
  • [16] P. Langley, J. E. Laird, and S. Rogers, “Cognitive architectures: Research issues and challenges,” Cognitive Systems Research, vol. 10, no. 2, pp. 141–160, 2009.
  • [17] J. Son and A. Mishra, “Understanding deep image representations by inverting them,” in 2016 IEEE Annual Symposium of Pattern Recognition Association of South Africa.   IEEE, 2016, arxiv.org/abs/1610.09882.