Non-parametric Image Registration of Airborne LiDAR, Hyperspectral and Photographic Imagery of Forests

07/28/2014 ∙ by Juheon Lee, et al.

There is much current interest in using multi-sensor airborne remote sensing to monitor the structure and biodiversity of forests. This paper addresses the application of non-parametric image registration techniques to precisely align images obtained from multimodal imaging, which is critical for the successful identification of individual trees using object recognition approaches. Non-parametric image registration, in particular the technique of optimizing one objective function containing data fidelity and regularization terms, provides flexible algorithms for image registration. Using a survey of woodlands in southern Spain as an example, we show that non-parametric image registration can be successful at fusing datasets when there is little prior knowledge about how the datasets are interrelated (i.e. in the absence of ground control points). The validity of non-parametric registration methods in airborne remote sensing is demonstrated by a series of experiments. Precise data fusion is a prerequisite to accurate recognition of objects within airborne imagery, so non-parametric image registration could make a valuable contribution to the analysis pipeline.




I Introduction

Airborne multimodal (multi-sensor) imaging is increasingly used to examine vegetation properties [8, 13]. The advantage of using multiple sensors is that each detects a different feature of the vegetation, so that collectively they provide a detailed understanding of the ecological processes [4, 22, 35]. Specifically, Light Detection And Ranging (LiDAR) devices produce detailed point clouds of where laser pulses have been backscattered from surfaces, giving information on vegetation structure [44, 48]; hyperspectral sensors measure reflectances within narrow wavebands, providing spectrally detailed information about the optical properties of targets [4, 5]; while aerial photographs provide high spatial-resolution imagery within three colour bands [36, 55]. Using a combination of these sensors, individual trees in tropical rain forests can be mapped, enabling invasive species to be monitored [5, 6], carbon storage to be assessed [3] and leaf physiological processes to be inferred [7, 4]. Accurate alignment of images is critical for the successful identification of individual trees using object recognition approaches [4, 6, 5]. However, images taken from different sensors or angles have relative rotation, translation or scale mismatches, and rugged terrain can cause complex displacement between images [66, 54]. As a result, aligning images is challenging.

Alignment of remotely sensed images (known as image registration) is currently conducted with feature-based methods [12, 54, 45, 64, 8, 50, 46, 28, 62, 26, 39, 28, 29, 63], intensity-based methods [66, 19, 18, 58, 56, 30, 42, 27, 34] or a combination of the two [54, 47, 65], but all these approaches have their drawbacks. Image registration involves transforming a template image T so that it aligns with a reference image R. Feature-based methods rely on identifying common features in T and R, for example ground control points (GCPs), patches or edges located in the images. These features are used to calculate transformation parameters, such that the locations of the features in the transformed image are identical to those in R. Feature information can be obtained using manual selection [12, 54], edge detection [45, 64, 8], scale-invariant feature transformation [50, 46, 28, 62, 65], random sample consensus [26, 39], feature segmentation [28, 29] or a phase congruency method [63]. Feature-based methods can be very effective, but their performance relies on image quality, and it can be difficult to locate corresponding features between images when datasets have different spatial resolutions or optical properties [19, 18]. Furthermore, in the case of multimodal imaging, some features in the reference image may not be present in the template image, or vice versa. Intensity-based methods involve maximising the similarity in intensity values between the transformed template image and R [66, 19, 18, 58, 56]. In theory this approach is fully automatic, but in practice it is often mathematically ill-posed, in the sense that the registration solution might not be unique and a small change within the data might result in a large variation in registration results [32]. In addition, image modality affects the similarity between images significantly, so the choice of similarity measure for intensity-based methods is very important [21, 18, 58, 56, 37, 16, 13, 47, 65]. In general, it is extremely difficult to co-register multimodal images if the images have not been preprocessed (i.e. orthorectified or georeferenced). In this case, current intensity-based algorithms are likely to fail, as they usually assume that displacement is neither complex nor large [37, 16, 13, 47, 65]. Therefore, preprocessing must be applied to align images precisely. But preprocessing requires GCPs (i.e. feature information) [49, 9, 60, 61] unless the flight navigation system is fully integrated with orientation (boresight) calibrated imaging sensors [43, 14, 59]. Although there are near-automatic ways to obtain GCPs, it is still difficult to extract and choose common features from multimodal images. Generally, GCPs or similarity measures are used to calculate the optimal parameters of an affine transformation [15] (which preserves points, straight lines and planes); affine transformations have been widely used in both feature-based and intensity-based registration methods [53, 57].
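As a concrete illustration of the affine model mentioned above, the following Python sketch applies a 2D affine transformation to pixel coordinates and checks that straight-line structure is preserved. The rotation, scale and translation values are made up for illustration; they are not parameters from this study.

```python
import numpy as np

def affine_transform(points, A, t):
    """Apply the 2D affine map x -> A x + t to an (N, 2) array of points."""
    return points @ A.T + t

# Hypothetical parameters: a 5 degree rotation, 2% scaling and a small shift.
theta = np.deg2rad(5.0)
A = 1.02 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
t = np.array([3.0, -1.5])

corners = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
warped = affine_transform(corners, A, t)

# Affine maps preserve straight lines: the midpoint of an edge maps to the
# midpoint of the transformed edge.
mid_before = (corners[0] + corners[1]) / 2
mid_after = affine_transform(mid_before[None, :], A, t)[0]
```

Because the map is linear plus a shift, collinearity and midpoints survive the transformation, which is exactly the rigidity that makes a global affine model too inflexible for terrain-induced, spatially varying displacements.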

This paper develops the use of a non-parametric registration method based on a variational formulation as an alternative to the well-established feature-based and intensity-based approaches [54]. Non-parametric registration methods are already well established in the mathematical analysis, medical imaging and computer vision communities [1, 51, 52, 66, 53, 17] but have yet to permeate far into the field of remote sensing. Unlike parametric image registration, which uses a small set of parameters to transform the template (examples include affine transformations calculated by intensity-based Normalised Cross Correlation [27, 34], Mutual Information [21, 18, 58, 56] and Normalised Gradient Fields [30, 42]), non-parametric registration methods are based on a variational formulation within which a cost function is minimised. They have been developed to overcome the ill-posedness of established methods by considering not only the similarity between images but also the regularity of the transformation in the calculated cost function, so that they can deal with non-linearity effectively [11, 1, 51, 66, 52, 53]. To the best of our knowledge, these methods have never been applied to the registration of remote sensing imagery.

We will demonstrate how non-parametric registration can be used to register three types of airborne remote sensing data sampled over forests (i.e. LiDAR, hyperspectral and photographic imagery). The benefits of the non-parametric registration method are illustrated, focussing particularly on its strong performance regardless of modality or degree of preprocessing. The datasets used to exemplify the approach are introduced in Section II. Then in Section III, the mathematical concepts of the non-parametric image registration algorithm are introduced. The demonstration of the effectiveness of the approach is given in Sections IV and V. Finally, Section VI gives recommendations for future work.

II Data

This section briefly addresses the methodologies and properties of the datasets used for registration in this paper. Acquisition of the remote sensing datasets was conducted in three areas of the Los Alcornocales Natural Park, Spain (lat 36°19′ N, long 5°37′ W) on 10 April 2011, by the Airborne Research and Survey Facility of the UK’s Natural Environment Research Council (NERC-ARSF), and the data were preprocessed by their Data Analysis Node. The airplane flew at a nominal height above ground of approximately 3000 m and was equipped with LiDAR and hyperspectral imagers, as well as a digital camera. The LiDAR (Leica ALS 50-II) emits pulses of monochromatic laser light (1064 nm) to scan the topographical and geometrical structure of the surface, creating three-dimensional point clouds representing the points at which pulses are backscattered off surfaces and returned to the aircraft. Each point has an associated intensity value, which correlates with the proportion of a pulse’s energy that is returned to the sensor. However, the radiometric properties of LiDAR intensity are not completely known: LiDAR pulse intensity values are controlled by an automatic gain control (AGC) system during the acquisition process, so the intensity of the return is a function of unknown varying pulse energy as well as the backscattering properties of the ground surface [38, 40, 41]. NERC-ARSF preprocessed these LiDAR data and georeferenced them to the Universal Transverse Mercator (UTM) projection with the WGS-84 datum. The average LiDAR point density over the study site is 2 points per square metre. In order to compare the LiDAR imagery with the other datasets, the LiDAR point clouds were projected onto a two-dimensional image plane by ignoring the height information for each LiDAR point. LiDAR intensity was calculated in 1 m pixels as the average of the all-return pulse intensities.
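The projection of the point cloud onto a 2D intensity image can be sketched as follows. This is a minimal Python illustration with synthetic points, not the NERC-ARSF processing chain; the 1 m cell size and the mean-of-all-returns rule follow the text, everything else is made up.

```python
import numpy as np

def rasterise_intensity(x, y, intensity, cell=1.0):
    """Project LiDAR returns onto a 2D grid: mean all-return intensity per
    cell. Coordinates are in metres; height is ignored, as in the text."""
    col = ((x - x.min()) // cell).astype(int)
    row = ((y - y.min()) // cell).astype(int)
    total = np.zeros((row.max() + 1, col.max() + 1))
    count = np.zeros_like(total)
    np.add.at(total, (row, col), intensity)   # accumulate per-cell sums
    np.add.at(count, (row, col), 1)           # and per-cell return counts
    with np.errstate(invalid="ignore"):
        return total / count                  # NaN where a cell is empty

# Synthetic example: ~2 returns per square metre over a 10 m x 10 m plot.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = rng.uniform(0, 10, 200)
img = rasterise_intensity(x, y, np.full(200, 50.0))
```

Cells that receive no returns come out as NaN rather than zero, so gaps in the point cloud are distinguishable from genuinely dark surfaces.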

Hyperspectral imaging spectrometers measure solar energy reflected off the earth’s surface within a swath of land. Hyperspectral data were gathered using the AISA Eagle and AISA Hawk sensors (Specim Ltd., Finland), with 255 and 256 spectral bands respectively, covering 400–2500 nm wavelengths across a 2300 m swath width with 3 m spatial resolution. The hyperspectral sensors record reflected energy in digital numbers, which were converted to spectral radiance and then provided to us. Before image registration, the hyperspectral imagery was atmospherically corrected using ATCOR-4 (ReSe Ltd., Switzerland), which converts radiance values to reflectances. An accurate navigation system integrated with boresight-calibrated hyperspectral sensors provides geocoordinates for each pixel in the hyperspectral imagery, which meant that the hyperspectral images could be orthorectified using digital elevation models (DEMs) from ASTER and LiDAR data and then georeferenced to the UTM projection with the WGS-84 datum. The estimated georeferencing error of the hyperspectral imagery is a few metres horizontally, and it deteriorates at the edge of the field of view of the hyperspectral sensors.

Aerial photographs were acquired during the flight using a Leica RCD-105 digital frame camera. Since the spatial resolution of the aerial photographs is much higher than that of the hyperspectral images, the photos can help to identify objects more accurately. However, the aerial photographs were not integrated with the aircraft navigation system, so they were not orthorectified or georeferenced during preprocessing. The metadata associated with each aerial photograph record the time, location and altitude of the aircraft when the photo was taken. We assumed that this location was the centre of each image and that the spatial resolution of each pixel equalled 0.3 m.

If the preprocessed data had been georeferenced to 1 m accuracy then registering the images would have been completely straightforward, but the georeferencing uncertainty of the hyperspectral imagery often exceeds its 3 m spatial resolution, so image registration techniques need to be applied in order to align the images precisely. Registration of aerial photos onto hyperspectral images or LiDAR imagery is even more challenging because the photos were neither orthorectified nor georeferenced when delivered. This paper provides a robust and accurate approach for registering all three datasets.

III Method

This section briefly describes the mathematical concept of image registration and the particular registration method that we use for the images in our dataset (see [25] for further details). Let R and T be the given reference and template images, respectively, modelled as functions defined on a finite two-dimensional grid Ω and mapping a point x on the grid to the real intensity values R(x) and T(x), respectively.

Remark 1.

Note that the resolutions of R and T do not necessarily have to be the same; that is, they can have different sizes in the vertical and horizontal directions. As such, the grid Ω refers to a spatial domain on which both R and T are defined rather than to the particular resolution of either image.

When registering the template with the reference image we find a suitable transformation y which maps the grid of T to the grid of R, such that the transformed version of T is similar to R. A generic variational method finds this transformation as a solution of

    min_y  J(y) = D(T(y), R) + α S(y),        (1)

where D is a similarity measure that quantifies the difference between the distorted template T(y) and the reference image R, S is a so-called regularisation term that imposes appropriate regularity on the transformation, and α > 0 is a parameter that balances the importance of the similarity measure against the regularisation term. Existence of solutions of (1) for the registration problem is discussed, for example, in [52, 53, 24] and the references therein. In the particular case of non-parametric registration considered in this paper, the transformation function can be expressed as the sum of the identity and a displacement u, that is

    y(x) = x + u(x).        (2)
A standard choice for D in (1) is the sum of squared differences,

    D^SSD(T(y), R) = (1/2) ∫_Ω ( T(y(x)) − R(x) )² dx,        (3)

which has the disadvantage of not being contrast-invariant [53]. This can be corrected by using gradient information rather than intensity information to measure similarity [53]. In this paper we use the normalised gradient field (NGF) similarity measure [30, 53], in which the normalised gradient of an image is used to measure the similarity between T and R. More precisely, the NGF of an image I is defined as

    n_η(I, x) = ∇I(x) / √( ‖∇I(x)‖² + η² ),

where η is the edge parameter (in the discrete setting the gradient is assembled columnwise, with vec(·) denoting the operator that generates a vector by stacking the columns of its input). The edge parameter η models the level of the noise present in I, such that image gradients below this level are ignored. The NGF distance measure is then defined as

    D^NGF(T(y), R) = ∫_Ω 1 − ( n_η(T(y), x) · n_η(R, x) )² dx,        (4)

which, if minimised, maximises the linear dependency (alignment) of the NGFs of T(y) and R. A number of other similarity measures have been suggested for different types of image analysis, cf. [53].
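A minimal numerical sketch of the NGF construction follows (Python with NumPy; the ramp test image and the value of the edge parameter are arbitrary choices for illustration, not values from the study):

```python
import numpy as np

def ngf(img, eta):
    """Normalised gradient field: grad(I) / sqrt(|grad(I)|^2 + eta^2)."""
    gy, gx = np.gradient(img.astype(float))   # gradients along rows, columns
    norm = np.sqrt(gx**2 + gy**2 + eta**2)
    return gx / norm, gy / norm

def ngf_distance(T, R, eta=1.0):
    """Sum over pixels of 1 - (n(T) . n(R))^2: small where gradients align."""
    tx, ty = ngf(T, eta)
    rx, ry = ngf(R, eta)
    dot = tx * rx + ty * ry
    return float(np.sum(1.0 - dot**2))

R = np.add.outer(np.arange(32.0), np.arange(32.0))   # smooth ramp test image
same = ngf_distance(R, R)                  # gradients perfectly aligned
brighter = ngf_distance(R + 100.0, R)      # brightness offset: same distance
shifted = ngf_distance(np.roll(R, 4, axis=1), R)     # misalignment costs more
```

The `brighter` case illustrates why NGF suits multimodal data: a uniform brightness offset leaves the gradient field, and hence the distance, unchanged, whereas an intensity-based SSD would be dominated by it.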

The regularisation term S encodes the regularity that should be imposed on the transformation to reduce the ill-posedness of the registration problem. For an overview of different regularisation terms and their effect on the registration, see [53, 52]. In what follows we use curvature regularisation [24, 30], that is

    S^curv(u) = (1/2) Σ_{l=1,2} ∫_Ω ( Δu_l(x) )² dx.        (5)

This regularisation results in the registration accuracy being dependent on the smoothness of the displacement between T and R [52]. In particular, curvature regularisation penalises oscillations in u, since (5) can be regarded as an approximation of the curvature of u [52]. One advantage of curvature regularisation is that it does not require an affine preregistration step. Other regularisation techniques, such as fluid registration [20, 10], are sensitive to affine linear displacements, so that preregistration with an affine linear transformation is required, see [52, 24, 25]. With the choice of D^NGF in (4) and S^curv in (5), the registration method amounts to the minimisation of the specific functional

    J(u) = D^NGF(T(x + u), R) + α S^curv(u).        (6)
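To see how the curvature regulariser penalises oscillations while leaving affine-linear displacements essentially uncharged, here is a small Python sketch using the 5-point discrete Laplacian on interior pixels (an illustrative discretisation, not FAIR's):

```python
import numpy as np

def curvature_energy(u):
    """Discrete curvature regulariser: 0.5 * sum over components and interior
    pixels of (Laplacian of u_l)^2, via the 5-point stencil."""
    energy = 0.0
    for comp in u:                              # u has shape (2, H, W)
        lap = (comp[:-2, 1:-1] + comp[2:, 1:-1] +
               comp[1:-1, :-2] + comp[1:-1, 2:] - 4.0 * comp[1:-1, 1:-1])
        energy += 0.5 * np.sum(lap ** 2)
    return energy

yy, xx = np.mgrid[0:16, 0:16].astype(float)
u_affine = np.stack([0.1 * xx + 0.2 * yy, -0.3 * xx])   # affine-linear field
u_wavy = np.stack([np.sin(xx), np.zeros_like(xx)])      # oscillatory field
smooth_cost = curvature_energy(u_affine)   # essentially zero
wavy_cost = curvature_energy(u_wavy)       # oscillations heavily penalised
```

An affine-linear displacement has zero Laplacian, which is why curvature registration can absorb global rotation, scale and shear without a separate affine preregistration step.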
For the numerical minimisation of (6) we use the Flexible Algorithms for Image Registration (FAIR) software package (MATLAB version). There, the minimiser of (6) is computed iteratively via a semi-implicit scheme for the so-called Euler–Lagrange equation of (6). The latter arises as the spatially discrete version of the Gâteaux derivative of the continuous functional J, and reads [24]

    f(x, u(x)) + α Δ²u(x) = 0,        (7)
where f is the discretisation of the derivative of the distance measure D^NGF. In order to solve equation (7), a semi-implicit iterative scheme is used which introduces an artificial time step τ and computes the fixed point of the equation [52, 24, 53]

    u^(k+1) = ( I + τα Δ² )⁻¹ ( u^(k) − τ f(x, u^(k)) ),        (8)

where u^(k) denotes the k-th iterate of the scheme. Further details regarding discretisation and numerical optimisation are provided in [53]. Since remote sensing datasets contain large-scale surface information, it is computationally expensive to conduct all image registration steps at the original resolution [53]. FAIR provides a multilevel image registration scheme, producing a series of images varying in resolution, such that registration results from a coarser image can be used to initialise the registration at finer resolutions. The multilevel scheme reduces the computational cost and the chance of being trapped in local minima during the iterative search, as images are much smoother at coarse resolution, cf. [31, 53].
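The multilevel idea can be sketched as a simple image pyramid (illustrative Python; FAIR's actual multilevel machinery is more elaborate, but the coarse-to-fine principle is the same):

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging; coarse levels are smoother."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pyramid(img, n_levels):
    """Build a coarse-to-fine pyramid used to initialise multilevel
    registration: solve on the coarsest level first, then refine."""
    out = [img]
    for _ in range(n_levels - 1):
        out.append(downsample(out[-1]))
    return out[::-1]   # coarsest first

levels = pyramid(np.random.default_rng(1).random((64, 64)), 4)
# registration would start on levels[0] (8x8) and end on levels[-1] (64x64)
```

Because each coarse level averages away fine detail, the objective landscape at the bottom of the pyramid has fewer local minima, and the coarse solution provides a good starting displacement for the next finer level.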

IV Application of the Registration Approach to the Airborne Remote Sensing Dataset

The first step of the process was to match the geographical boundaries of all datasets to each other, reducing the number of features present in either T or R but not both. Since both the hyperspectral and LiDAR intensity images contain geocoordinates, matching their geographical boundaries is straightforward. But the aerial photographs were neither georeferenced nor orthorectified, so matching the boundaries between the aerial photographs and the other datasets was challenging. For the photographs we used the geocoordinate at which each photo was taken as the centre of the image. The geographic boundary of each aerial photo was then roughly calculated from the approximate number of pixels in the photograph, adding a margin in the x and y directions to compensate for the errors caused by this rough approximation. Hence the footprint of each aerial photograph was assumed to be (0.3 N_x + margin) × (0.3 N_y + margin) m, where N_x and N_y are the numbers of pixels of the aerial photographs in the x and y directions.
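The rough footprint computation can be sketched as follows (Python; the pixel counts and the margin value in the example call are hypothetical placeholders, since the specific numbers are not reproduced here):

```python
def photo_bounds(centre_e, centre_n, nx, ny, res=0.3, margin=0.0):
    """Bounding box (easting/northing, metres) of an aerial photo whose
    capture location is assumed to be the image centre. `margin` pads the
    box to compensate for the roughness of the approximation; its value is
    a user choice, not one taken from the paper."""
    half_w = nx * res / 2 + margin
    half_h = ny * res / 2 + margin
    return (centre_e - half_w, centre_n - half_h,
            centre_e + half_w, centre_n + half_h)

# Hypothetical example: a 100 x 200 pixel photo at 0.3 m/pixel, 10 m margin.
bounds = photo_bounds(1000.0, 2000.0, 100, 200, res=0.3, margin=10.0)
```

The padded box only needs to be generous enough that the true footprint lies inside it; the registration step then removes the remaining misalignment.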

LiDAR is the most accurately georeferenced of the three airborne datasets (having the smallest horizontal error and about 0.2 m vertical error). It is therefore used as the reference image onto which the hyperspectral template image is aligned. LiDAR intensity data and the mean intensity of the RGB bands (640, 549 and 460 nm) of the hyperspectral images were used. Although it would seem natural to use the hyperspectral band at the LiDAR wavelength (1064 nm), this band suffers from a low signal-to-noise ratio.

Non-parametric image registration with a variational formulation finds the optimised location of each pixel, maximising the similarity between the two images. This is achieved by numerical optimisation, the choice of which can influence the performance of the registration. The FAIR toolbox provides three second-order optimisation schemes: Gauss–Newton, l-BFGS and trust-region, all of which were explored (see experimental results). Non-parametric registration yielded optimised spatial coordinates for each pixel, which were used for the transformation of the original hyperspectral images. During the transformation, the hyperspectral images were interpolated and resampled by nearest-neighbour interpolation. Interpolation estimates were chosen from existing values, thus minimising interpolation artefacts. This is important because hyperspectral imagery should preserve physically meaningful values.
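Nearest-neighbour resampling, which guarantees that every output pixel takes an existing (physically meaningful) value, can be sketched as follows (illustrative Python, not the FAIR implementation; the tiny band and coordinate arrays are made-up test data):

```python
import numpy as np

def warp_nearest(img, rows, cols):
    """Resample img at fractional coordinates with nearest-neighbour
    interpolation: every output value is an existing pixel value, so no
    new (blended) reflectance values are invented."""
    r = np.clip(np.rint(rows).astype(int), 0, img.shape[0] - 1)
    c = np.clip(np.rint(cols).astype(int), 0, img.shape[1] - 1)
    return img[r, c]

band = np.array([[0.10, 0.20],
                 [0.30, 0.40]])                 # toy reflectance band
rows = np.array([[0.1, 0.4], [0.9, 1.2]])       # optimised row coordinates
cols = np.array([[0.2, 1.3], [0.0, 0.8]])       # optimised column coordinates
warped = warp_nearest(band, rows, cols)
# every value in `warped` already occurs in `band`
```

Bilinear or spline resampling would blend neighbouring spectra into values that never occurred in the scene, which is exactly the artefact the nearest-neighbour choice avoids.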

Choosing optimal parameters in (6) is the most important step of the registration process, but they are difficult to find automatically (although see [2, 30] for examples of automatic edge-parameter selection once the noise level and image volume are known). We used a trial-and-error approach to find the edge parameter η and the smoothness parameter α, which was time-consuming. Fortunately, tuning the parameters for each registration of remote sensing images is not normally required: a single calibration for template and reference images taken by a given pair of sensors was enough to obtain reasonable results in most cases. For the registration of a hyperspectral image onto a LiDAR intensity image, fixed optimal values of η and α were found in this way.
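The trial-and-error tuning amounts to a small grid search over candidate parameter pairs. A sketch (Python; the candidate values and the quadratic stand-in objective are made up for illustration, and `register` is assumed to return the post-registration distance of a full registration run):

```python
import itertools

def grid_search(register, alphas, etas):
    """Trial-and-error tuning of the regularisation (alpha) and edge (eta)
    parameters: run the registration for each pair and keep the pair with
    the smallest post-registration distance."""
    return min(itertools.product(alphas, etas),
               key=lambda pair: register(alpha=pair[0], eta=pair[1]))

# Toy stand-in for a registration run; in practice this is one full
# (expensive) minimisation of the functional, which is why the search
# was time-consuming.
result = grid_search(lambda alpha, eta: (alpha - 10) ** 2 + (eta - 0.5) ** 2,
                     alphas=[1, 10, 100], etas=[0.1, 0.5, 2.0])
```

Because each trial is a complete registration, coarse logarithmically spaced candidates are the practical choice, refined once a promising region is found.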

The aerial photo was aligned first with the hyperspectral image and then from there registered onto the LiDAR. We found this circuitous route necessary because the swath width of the LiDAR is much smaller than the scene width of the aerial camera. The registration of the aerial photographs onto the hyperspectral images is challenging because the photographs are distorted by various effects, among them topography, lens distortion and the viewing angle. As we regarded the location where each aerial photo was taken as the centre of the image, a corresponding subimage of matching extent was extracted from the hyperspectral imagery and used as the reference image. Curvature registration with the NGF distance measure (6) was employed to register the aerial photographs onto the hyperspectral images, with fixed values of the regularisation parameter α and the edge parameter η. The RGB bands of the hyperspectral images and the RGB aerial photos were both converted to grey intensity images before registration, to increase the processing speed and the robustness of the registration. The results suggest that the method can handle both orthorectification and registration (see examples in Section V). After the registration of the aerial photographs onto the hyperspectral imagery, a mosaic of the aerial photos was created, which was then aligned with the LiDAR data in an additional registration step. This last step was aided by the fact that the hyperspectral and LiDAR imagery had already been aligned.

Numerical experiments were conducted to compare our non-parametric approach (NP) (6) with well-known parametric registration methods based on different distance measures (i.e. NCC [27, 34], MI [21, 18, 58, 56] and NGF [30, 42]).
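For reference, the NCC similarity used by one of the parametric baselines can be sketched as follows (illustrative Python; the random test image is arbitrary). Note its invariance to linear intensity changes, which explains why it performs well once the images are already roughly georeferenced:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two images: 1 when they are identical
    up to a linear intensity change (scaling plus offset), so NCC is robust
    to contrast differences but not to genuine modality changes."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

img = np.random.default_rng(2).random((16, 16))
same = ncc(img, 2.5 * img + 1.0)          # linear intensity change: NCC = 1
off = ncc(img, np.roll(img, 3, axis=0))   # misaligned copy: NCC drops
```

A parametric method maximises such a score over a handful of transformation parameters, whereas the non-parametric functional (6) optimises a full displacement field.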

V Experimental Results

l-BFGS was found to be the fastest and most accurate of the second-order optimisation approaches available in the FAIR toolbox, and was used in all numerical experiments.

The first case we consider is the registration of hyperspectral imagery onto LiDAR (Figure 1). As both datasets were georeferenced by the data provider, only small distortions of a few metres were present as a result of DEM or navigation inconsistencies [37, 16, 13, 47, 65]. Figure 1 (a) and (b) show the LiDAR intensity reference image (R) and the hyperspectral template image (T), respectively, where the colour intensity of T is the composite intensity of the RGB bands of the hyperspectral image. Panel (c) is the complement of the intensity difference map between the LiDAR intensity reference image and the hyperspectral image. Figure 1 (g) shows the registration result using the non-parametric approach and (k) shows the corresponding intensity difference map (the absolute values of the difference between the registered image and the reference image); white to black pixel values represent small to large differences. From Figure 1 (h)–(k), in particular the parts inside the circles marked on the figures and the average intensity of all pixels in each difference map, we see that the results of the NCC and NP methods are better than those of the MI and NGF methods. In this example, the NCC method performed as well as the NP method because both the hyperspectral and LiDAR images were approximately georeferenced before the registration was applied, so finding a local minimum was enough to obtain reasonable outcomes [13].

[Figure 1 panels: (a) LiDAR reference image; (b) hyperspectral template image; (c) complement of the intensity difference map (mean 73.7); (d)–(g) registration results for NCC, MI, NGF and NP (6); (h)–(k) intensity difference maps with means 66.9, 73.1, 72.6 and 65.7, respectively.]
Fig. 1: Image registration of a hyperspectral image onto a LiDAR intensity image of a Spanish woodland, surveyed from an aircraft. The first row shows (a) the LiDAR intensity reference image (R); (b) the hyperspectral template image (T); (c) a map highlighting the differences between these images (i.e. the complement of the intensity differences), which would be entirely white if the match were perfect. The second row shows the hyperspectral template image after registration using the parametric methods (d) NCC, (e) MI and (f) NGF, and (g) the NP approach. The final row of maps (h)–(k) highlights the absolute values of the differences between the registered hyperspectral images and the LiDAR reference image; the average intensity difference within each image is given in parentheses, indicating that the NCC and NP approaches are similarly good whilst the MI and NGF approaches are only slightly better than using the original template to calculate intensity differences. Yellow circles highlight regions of the images where differences among registration methods are seen.

[Figure 2 panels: (a) hyperspectral reference image; (b) aerial photograph template image; (c) complement of the intensity difference map (mean 52.9); (d)–(g) registration results for NCC, MI, NGF and NP (6); (h)–(k) intensity difference maps with means 49.0, 46.2, 50.7 and 45.6, respectively.]
Fig. 2: Image registration of an aerial photograph onto a hyperspectral image in the case of flat terrain. The first row shows (a) the hyperspectral reference image (R); (b) the aerial photograph template image (T); (c) a map highlighting the differences between these images (i.e. the complement of the intensity differences). The second row shows the aerial photograph template image after registration using the parametric methods (d) NCC, (e) MI and (f) NGF, and (g) the NP approach. The final row shows maps highlighting the absolute values of the differences between the registered aerial photographs and the hyperspectral reference image; the average intensity difference within each image is given in parentheses.

[Figure 3 panels: (a) hyperspectral reference image; (b) aerial photograph template image; (c) complement of the intensity difference map (mean 53.3); (d)–(g) registration results for NCC, MI, NGF and NP (6); (h)–(k) intensity difference maps with means 58.1, 49.9, 51.0 and 46.8, respectively.]
Fig. 3: Image registration of an aerial photograph onto a hyperspectral image in the case of rugged terrain. The first row shows (a) the hyperspectral reference image (R); (b) the aerial photograph template image (T); (c) a map highlighting the differences between these images (i.e. the complement of the intensity differences). The second row shows the aerial photograph template image after registration using the parametric methods (d) NCC, (e) MI and (f) NGF, and (g) the NP approach. The final row shows maps highlighting the absolute values of the differences between the registered aerial photographs and the hyperspectral reference image; the average intensity difference within each image is given in parentheses.

Image registration of aerial photographs onto hyperspectral or LiDAR images was more challenging because the aerial photos were not preprocessed and did not come with georeferencing information. Moreover, the swath width of the aerial camera is much larger than that of the LiDAR sensor, making it difficult to create a reference image onto which aerial photographs could be aligned. We present two image registration examples: one for flat terrain and one for rugged terrain (Figures 2 and 3, respectively). Where topographical variation is large, correct alignment of the images becomes more difficult [16, 13, 47, 65]. The non-parametric registration approach (6) worked well in the case of flat terrain (see Figure 2), while parametric registration with the three different distance measures (NCC, MI and NGF) poorly matched the detailed structures of the reference image; see Figure 2 (h)–(k), in particular the parts marked by circles. In the rugged terrain case, approach (6) again provided reasonable outcomes while the parametric registration methods (NCC, MI and NGF) made serious mistakes and, in particular, could not align detailed features (e.g. see the red circles in Figure 3 (h)–(k)). Figure 4 shows the results of aligning the aerial photographs onto the hyperspectral image for flat and rugged terrain in the form of a checkerboard: if the alignment is good then features such as roads and rivers should join across the checkerboard tiles. We can clearly see that approach (6) gives very accurate registration results.
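The checkerboard diagnostic used in Figure 4 can be sketched as follows (illustrative Python; the all-zero and all-one test images simply make the tile pattern visible):

```python
import numpy as np

def checkerboard(a, b, tiles=8):
    """Composite two equally sized grey images in a checkerboard pattern.
    Features such as roads should join across tile boundaries exactly when
    the two images are well aligned."""
    h, w = a.shape
    row_tile = np.arange(h)[:, None] * tiles // h
    col_tile = np.arange(w)[None, :] * tiles // w
    mask = (row_tile + col_tile) % 2 == 0
    return np.where(mask, a, b)

a = np.zeros((64, 64))   # stand-in for the registered aerial photograph
b = np.ones((64, 64))    # stand-in for the hyperspectral reference
board = checkerboard(a, b)
```

Unlike a difference map, the checkerboard preserves the original intensities of both images, so residual misalignment shows up as broken linear features rather than as diffuse grey.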

After registration of the aerial photographs onto the hyperspectral images, an additional registration was performed to remove minor mismatches between the aerial photos and the LiDAR (Figure 5). As the aerial photographs were registered individually onto the hyperspectral imagery, there may be mismatches at the edges of each aerial photograph (visible in Figures 2 and 3), which can produce noticeable discontinuities between the photographs. For example, in Figure 5 (c), the part marked by the red circle shows a discontinuity at the interface of two aerial photographs. These boundary artefacts are due to a non-optimal choice of the regularisation parameter for the registration of the aerial photographs to the hyperspectral images: we chose a fixed regularisation parameter α in (6), which might not be optimal for every aerial photo in the dataset, and this caused errors at the boundaries. Tuning the parameters individually for each aerial photograph where a discontinuity deteriorates the quality of the registration can improve the result significantly. In the case of the mismatch inside the circle in Figure 5 (c), retuning the regularisation parameter α significantly improved the registration and removed the discontinuity between the two aerial photos (Figure 5 (d)).

(a) Raw image overlay in flat terrain case (b) Registered image overlay in flat terrain case
(c) Raw image overlay in rugged terrain case (d) Registered image overlay in rugged terrain case
Fig. 4: Checkerboard overlays between aerial photograph template images (T) and hyperspectral reference images (R), before and after registration with the NP approach. (a)–(b) Flat terrain case; (c)–(d) rugged terrain case.

(a) LiDAR intensity image

(b) RGB bands of registered hyperspectral image

(c) Mosaic image of registered aerial photographs (with fixed global regularisation parameter α in (6))

(d) Mosaic image of registered aerial photographs (with locally tuned regularisation parameter α in (6))
Fig. 5: Fully registered LiDAR, hyperspectral and aerial photograph imagery. (a) LiDAR intensity image; (b) RGB bands of the registered hyperspectral imagery; (c) mosaic of the aerial photographs registered with the NP approach using a fixed global regularisation parameter α; (d) mosaic of the aerial photographs registered with the NP approach using a locally tuned regularisation parameter α.

VI Conclusions and Outlook

The experiments illustrated in Figures 2, 3 and 4 indicate that non-parametric image registration techniques can effectively co-align remote sensing images, working as well as established methods when registration is straightforward and out-performing those approaches when dealing with non-georeferenced photos. Remote sensing images are usually preprocessed before being sent to users, but the orthorectification and georeferencing procedures are not accurate enough to identify individual trees. Techniques based on feature extraction are well established in the field and are capable of accurate data assimilation if features are used in sufficient numbers, but this approach is at most semi-automatic. Intensity-based parametric methods, such as NCC, MI and NGF, can perform fully automatic registration but presume that the data are preprocessed and that the displacement between template and reference images is small. Non-parametric image registration provides flexible algorithms, thus allowing registration without any prior knowledge of the dataset. The validity of this method in reducing processing time and improving analysis results was demonstrated by various experiments on multimodal remote sensing datasets, i.e. the LiDAR, hyperspectral and aerial photograph datasets.

From the experiments shown in Section V, we see that non-parametric registration with a variational formulation successfully aligned the remote sensing images. Since it can be applied to non-orthorectified images, it performs automatic orthorectification and georeferencing as well. Non-parametric registration does, of course, depend on the quality of the reference image. We used LiDAR intensity images in order to register the hyperspectral images onto the LiDAR data. We believe the quality of the LiDAR reference can be further improved by a better understanding of the radiometric properties of LiDAR intensity. The automatic gain control (AGC) system adjusts the pulse energy during LiDAR acquisition (i.e. the pulse energy is increased when the returned energy is low). An AGC value within the range is recorded for each pulse in the LAS file, and a few studies have attempted to normalise LiDAR intensity using these values [38, 40, 41]. Although none of those methods was able to correct the LiDAR datasets we used, we believe that a successful radiometric calibration could indeed improve the registration accuracy. Another difficulty is that hyperspectral and aerial photographs are strongly influenced by shading effects, because they record backscattered solar energy. Shaded pixels create strong gradients on one side of trees, so the registration process is intrinsically biased to some extent. Hence, combining image registration with shade removal [23] could improve the quality of image registration.
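The kind of linear AGC normalisation attempted in [40] can be sketched as follows. This is an illustrative NumPy implementation, not the authors' pipeline: it assumes the received intensity depends linearly on the recorded AGC value, with coefficients `a` and `b` that would in practice be fitted empirically (e.g. over strip overlaps or targets of assumed constant reflectance).

```python
import numpy as np

def agc_normalise(intensity, agc, a, b):
    """Divide out an assumed linear AGC gain: I_norm = I / (a*AGC + b)."""
    gain = a * np.asarray(agc, dtype=float) + b
    return np.asarray(intensity, dtype=float) / gain

def fit_agc_gain(intensity, agc):
    """Least-squares fit of the linear gain model, using pulses that are
    assumed to have hit a surface of constant reflectance."""
    A = np.column_stack([np.asarray(agc, float), np.ones(len(agc))])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(intensity, float), rcond=None)
    return coeffs  # (a, b)
```

As the text notes, such simple models failed on the datasets used here, so this should be read as a starting point for radiometric calibration rather than a working correction.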

Similarity measures, such as the Sum of Squared Differences (SSD), NCC or MI, play a key role in image registration. SSD measures the intensity difference between images, while NCC maximises the correlation between two images [53]. Although these conventional measures can deal with images from the same sensor, their performance on multimodal images is poor. The MI method has been widely used as a similarity measure in remote sensing applications because it can be applied to multimodal imaging. MI, which originates in information theory and measures the joint probability of image intensities, can be viewed as a generalised similarity measure [53]. However, the MI method has noticeable disadvantages. MI is highly non-convex, making it difficult to optimise and increasing the non-linearity of the registration [30]. Because MI is based on the joint density of intensity values, it may also suffer from interpolation-induced artefacts [18]. The NGF method is designed to measure similarity between images taken by different sensors. It compares the gradients of two images, so it is computationally fast and handles multimodality well.

Regularisation is a key part of this paper. Although a number of studies have used intensity-based similarity measures [66, 19, 18, 58, 56], the ill-posedness of these measures prevents their flexible application in remote sensing: successful registration is conditional upon the data being preprocessed and the displacement between images being small. In theory, adding a regularisation term to the similarity term makes the problem close to, or exactly, well-posed, so that the registration problem has a much more meaningful solution, although in practice it is difficult to remove all local minima and obtain the exact solution. Several regularisation methods have been suggested to guarantee well-posedness during the registration process [17]. As mentioned before, many regularisation techniques are sensitive to affine linear displacement, so that pre-registration with an affine linear transformation is required [52, 24, 25]. In contrast, the current research uses curvature regularisation, which does not require an affine pre-registration step. However, pre-registration at the coarsest level is still recommended in general applications, as non-parametric registration still penalises affine transformations through its boundary conditions (i.e. it is influenced by the initial position of the two images to some extent, see [33, 53]).
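In the curvature-registration notation of Fischer and Modersitzki [24, 25], such an objective for a two-dimensional displacement field $u = (u_1, u_2)$ can be written as (a sketch in their notation, not the paper's own equation (6)):

```latex
\mathcal{J}[u] \;=\; \mathcal{D}\bigl(T(\mathrm{id}+u),\, R\bigr)
              \;+\; \alpha\, \mathcal{S}^{\mathrm{curv}}[u],
\qquad
\mathcal{S}^{\mathrm{curv}}[u] \;=\; \frac{1}{2}\sum_{l=1}^{2}\int_{\Omega}\bigl(\Delta u_l\bigr)^{2}\,dx,
```

where $T$ is the template, $R$ the reference, $\mathcal{D}$ a distance measure such as NGF, and $\alpha$ the regularisation parameter. Affine-linear displacements are harmonic, so they lie in the kernel of $\mathcal{S}^{\mathrm{curv}}$ and are not penalised by the regulariser itself; this is why curvature regularisation can absorb affine misalignment without a separate pre-registration, up to the boundary-condition effects noted in the text.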

Acknowledgements: The authors would like to thank NERC-ARSF for collecting and pre-processing the data used in this research project [EU11/03/100], and gratefully acknowledge grants from King Abdullah University of Science and Technology and the Wellcome Trust. We thank Ben Taylor of Plymouth Marine Laboratory and Will Simonson for their valuable comments on the manuscript.


  • [1] Y. Amit. A nonlinear variational problem for image matching. SIAM Journal on Scientific Computing, 15(1):207–224, 1994.
  • [2] U. Ascher, E. Haber, and H. Huang. On effective methods for implicit piecewise smooth surface recovery. SIAM Journal on Scientific Computing, 28(1):339–358, 2006.
  • [3] G. Asner. Tropical forest carbon assessment: integrating satellite and airborne mapping approaches. Environmental Research Letters, 4(3):034009, 2009.
  • [4] G. Asner, J. Boardman, C. Field, D. Knapp, T. Kennedy-Bowdoin, M. Jones, and R. Martin. Carnegie airborne observatory: in-flight fusion of hyperspectral imaging and waveform light detection and ranging for three-dimensional studies of ecosystems. Journal of Applied Remote Sensing, 1(1):013536–013536, 2007.
  • [5] G. Asner, R. Hughes, P. Vitousek, D. Knapp, T. Kennedy-Bowdoin, J. Boardman, R. Martin, M. Eastwood, and R. Green. Invasive plants transform the three-dimensional structure of rain forests. Proceedings of the National Academy of Sciences, 105(11):4519–4523, 2008.
  • [6] G. Asner, D. Knapp, T. Kennedy-Bowdoin, M. Jones, R. Martin, J. Boardman, and R. Hughes. Invasive species detection in hawaiian rainforests using airborne imaging spectroscopy and lidar. Remote Sensing of Environment, 112(5):1942–1955, 2008.
  • [7] G. Asner and R. Martin. Canopy phylogenetic, chemical and spectral assembly in a lowland amazonian forest. New Phytologist, 189(4):999–1012, 2011.
  • [8] Y. Bentoutou, N. Taleb, K. Kpalma, and J. Ronsin. An automatic image registration for applications in remote sensing. Geoscience and Remote Sensing, IEEE Transactions on, 43(9):2127–2137, 2005.
  • [9] J. Berni, P. Zarco-Tejada, L. Suárez, and E. Fereres. Thermal and narrowband multispectral remote sensing for vegetation monitoring from an unmanned aerial vehicle. Geoscience and Remote Sensing, IEEE Transactions on, 47(3):722–738, 2009.
  • [10] M. Bro-Nielsen and C. Gramkow. Fast fluid registration of medical images. In Visualization in Biomedical Computing, pages 265–276. Springer, 1996.
  • [11] C. Broit. Optimal registration of deformed images. PhD thesis, 1981.
  • [12] L. Brown. A survey of image registration techniques. ACM computing surveys (CSUR), 24(4):325–376, 1992.
  • [13] D. Brunner, G. Lemoine, and L. Bruzzone. Earthquake damage assessment of buildings using vhr optical and sar imagery. Geoscience and Remote Sensing, IEEE Transactions on, 48(5):2403–2420, 2010.
  • [14] M. Bryson, A. Reid, F. Ramos, and S. Sukkarieh. Airborne vision-based mapping and classification of large farmland environments. Journal of Field Robotics, 27(5):632–655, 2010.
  • [15] M. Berger. Geometry I. Berlin: Springer, 1987.
  • [16] P. Bunting, R. Lucas, and F. Labrosse. An area based technique for image-to-image registration of multi-modal remote sensing data. In Geoscience and Remote Sensing Symposium, 2008. IGARSS 2008. IEEE International, volume 5, pages V–212. IEEE, 2008.
  • [17] M. Burger, J. Modersitzki, and L. Ruthotto. A hyperelastic regularization energy for image registration. SIAM Journal on Scientific Computing, 35(1):B132–B148, 2013.
  • [18] H. Chen, P. Varshney, and M. Arora. Performance of mutual information similarity measure for registration of multitemporal remote sensing images. Geoscience and Remote Sensing, IEEE Transactions on, 41(11):2445–2454, 2003.
  • [19] H-M Chen, Manoj K Arora, and Pramod K Varshney. Mutual information-based image registration for remote sensing data. International Journal of Remote Sensing, 24(18):3701–3706, 2003.
  • [20] G. Christensen. Deformable shape models for anatomy. PhD thesis, Washington University Saint Louis, Mississippi, 1994.
  • [21] A. Cole-Rhodes, K. Johnson, J. Le Moigne, and I. Zavorin. Multiresolution registration of remote sensing imagery by optimization of mutual information using a stochastic gradient. Image Processing, IEEE Transactions on, 12(12):1495–1511, 2003.
  • [22] M. Dalponte, L. Bruzzone, and D. Gianelle. Fusion of hyperspectral and lidar remote sensing data for classification of complex forest areas. Geoscience and Remote Sensing, IEEE Transactions on, 46(5):1416–1427, 2008.
  • [23] G. Finlayson, M. Drew, and C. Lu. Entropy minimization for shadow removal. International Journal of Computer Vision, 85(1):35–57, 2009.
  • [24] B. Fischer and J. Modersitzki. Curvature based image registration. Journal of Mathematical Imaging and Vision, 18(1):81–85, 2003.
  • [25] B. Fischer and J. Modersitzki. A unified approach to fast image registration and a new curvature based registration technique. Linear Algebra and its applications, 380:107–124, 2004.
  • [26] M. Fischler and R. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.
  • [27] L. Fonseca and BS. Manjunath. Registration techniques for multisensor remotely sensed imagery. PE & RS- Photogrammetric Engineering & Remote Sensing, 62(9):1049–1056, 1996.
  • [28] H. Goncalves, L. Corte-Real, and J. Goncalves. Automatic image registration through image segmentation and sift. Geoscience and Remote Sensing, IEEE Transactions on, 49(7):2589–2600, 2011.
  • [29] H. Gonçalves, J. Gonçalves, and L. Corte-Real. Hairis: A method for automatic image registration through histogram-based image segmentation. Image Processing, IEEE Transactions on, 20(3):776–789, 2011.
  • [30] E. Haber and J. Modersitzki. Intensity gradient based registration and fusion of multi-modal images. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2006, pages 726–733. Springer, 2006.
  • [31] E. Haber and J. Modersitzki. A multilevel method for image registration. SIAM Journal on Scientific Computing, 27(5):1594–1607, 2006.
  • [32] J. Hadamard. Sur les problèmes aux dérivées partielles et leur signification physique. Princeton university bulletin, 13(49-52):28, 1902.
  • [33] S. Henn. A full curvature based algorithm for image registration. Journal of Mathematical Imaging and Vision, 24(2):195–208, 2006.
  • [34] G. Hong and Y. Zhang. Wavelet-based image registration technique for high-resolution remote sensing images. Computers & Geosciences, 34(12):1708–1720, 2008.
  • [35] S. Huang, R. Crabtree, C. Potter, and P. Gross. Estimating the quantity and quality of coarse woody debris in yellowstone post-fire forest ecosystem from fusion of sar and optical data. Remote Sensing of Environment, 113(9):1926–1938, 2009.
  • [36] A. Hudak and C. Wessman. Textural analysis of historical aerial photography to characterize woody plant encroachment in south african savanna. Remote Sensing of Environment, 66(3):317–330, 1998.
  • [37] J. Inglada and A. Giros. On the possibility of automatic multisensor image registration. Geoscience and Remote Sensing, IEEE Transactions on, 42(10):2104–2120, 2004.
  • [38] S. Kaasalainen, H. Hyyppa, A. Kukko, P. Litkey, E. Ahokas, J. Hyyppa, H. Lehner, A. Jaakkola, J. Suomalainen, A. Akujarvi, M. Kaasalainen, and U. Pyysalo. Radiometric calibration of lidar intensity with commercially available reference targets. Geoscience and Remote Sensing, IEEE Transactions on, 47(2):588–598, 2009.
  • [39] T. Kim and Y. Im. Automatic satellite image registration by combination of matching and random sample consensus. Geoscience and Remote Sensing, IEEE Transactions on, 41(5):1111–1117, 2003.
  • [40] I. Korpela, H. Ørka, J. Hyyppä, V. Heikkinen, and T. Tokola. Range and agc normalization in airborne discrete-return lidar intensity data for forest canopies. ISPRS Journal of Photogrammetry and Remote Sensing, 65(4):369–379, 2010.
  • [41] I. Korpela, H. Ørka, M. Maltamo, T. Tokola, J. Hyyppä, et al. Tree species classification using airborne lidar–effects of stand and tree parameters, downsizing of training set, intensity normalization, and sensor type. Silva Fennica, 44(2):319–339, 2010.
  • [42] D. Kroon and C. Slump. Mri modalitiy transformation in demon registration. In Biomedical Imaging: From Nano to Macro, 2009. ISBI’09. IEEE International Symposium on, pages 963–966. IEEE, 2009.
  • [43] A. Laliberte, J. Herrick, A. Rango, and C. Winters. Acquisition, orthorectification, and object-based classification of unmanned aerial vehicle (uav) imagery for rangeland monitoring. Photogrammetric Engineering and Remote Sensing, 76(6):661–672, 2010.
  • [44] M. Lefsky, W. Cohen, G. Parker, and D. Harding. Lidar remote sensing for ecosystem studies lidar, an emerging remote sensing technology that directly measures the three-dimensional distribution of plant canopies, can accurately estimate vegetation structural attributes and should be of particular interest to forest, landscape, and global ecologists. BioScience, 52(1):19–30, 2002.
  • [45] H. Li, B. Manjunath, and S. Mitra. A contour-based approach to multisensor image registration. Image Processing, IEEE Transactions on, 4(3):320–334, 1995.
  • [46] Q. Li, G. Wang, J. Liu, and S. Chen. Robust scale-invariant feature matching for remote sensing image registration. Geoscience and Remote Sensing Letters, IEEE, 6(2):287–291, 2009.
  • [47] J. Liang, X. Liu, K. Huang, X. Li, D. Wang, and X. Wang. Automatic registration of multisensor images using an integrated spatial and mutual information (smi) metric. Geoscience and Remote Sensing, IEEE Transactions on, 51(1):603–615, 2014.
  • [48] K. Lim, P. Treitz, M. Wulder, B. St-Onge, and M. Flood. Lidar remote sensing of forest structure. Progress in Physical Geography, 27(1):88–106, 2003.
  • [49] Y. Lin and G. Medioni. Map-enhanced uav image sequence registration and synchronization of multiple image sequences. In Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference on, pages 1–7. IEEE, 2007.
  • [50] D. Lowe. Distinctive image features from scale-invariant keypoints. International journal of computer vision, 60(2):91–110, 2004.
  • [51] J. Maintz and M. Viergever. A survey of medical image registration. Medical image analysis, 2(1):1–36, 1998.
  • [52] J. Modersitzki. Numerical methods for image registration. OUP Oxford, 2003.
  • [53] J. Modersitzki. FAIR: flexible algorithms for image registration, volume 6. SIAM, 2009.
  • [54] J. Le Moigne, N. Netanyahu, and R. Eastman. Image registration for remote sensing. Cambridge University Press, 2011.
  • [55] T. Nakashizuka, T. Katsuki, and H. Tanaka. Forest canopy structure analyzed by using aerial photographs. Ecological Research, 10(1):13–18, 1995.
  • [56] E. Parmehr, C. Fraser, C. Zhang, and J. Leach. Automatic registration of optical imagery with 3d lidar data using statistical similarity. ISPRS Journal of Photogrammetry and Remote Sensing, 88:28–40, 2014.
  • [57] Zhili Song, Shuigeng Zhou, and Jihong Guan. A novel image registration algorithm for remote sensing under affine transformation. 2014.
  • [58] S. Suri and P. Reinartz. Mutual-information-based registration of terrasar-x and ikonos imagery in urban areas. Geoscience and Remote Sensing, IEEE Transactions on, 48(2):939–949, 2010.
  • [59] D. Turner, A. Lucieer, and L. Wallace. Direct georeferencing of ultrahigh-resolution uav imagery. 52(5):2738–2745, 2014.
  • [60] D. Turner, A. Lucieer, and C. Watson. An automated technique for generating georectified mosaics from ultra-high resolution unmanned aerial vehicle (uav) imagery, based on structure from motion (sfm) point clouds. Remote Sensing, 4(5):1392–1410, 2012.
  • [61] G. Verhoeven, M. Doneus, C. Briese, and F. Vermeulen. Mapping by matching: a computer vision-based approach to fast and accurate georeferencing of archaeological aerial photographs. Journal of Archaeological Science, 39(7):2060–2070, 2012.
  • [62] M. Wahed, G. El-tawel, and A. El-karim. Automatic image registration technique of remote sensing images. International Journal of Advanced Computer Science & Applications, 4(2), 2013.
  • [63] A. Wong and D. Clausi. Arrsi: automatic registration of remote-sensing images. Geoscience and Remote Sensing, IEEE Transactions on, 45(5):1483–1493, 2007.
  • [64] Y. Yang and X. Gao. Remote sensing image registration via active contour model. AEU-international journal of electronics and communications, 63(4):227–234, 2009.
  • [65] Y. Ye and J. Shan. A local descriptor based registration method for multispectral remote sensing images with non-linear intensity differences. ISPRS Journal of Photogrammetry and Remote Sensing, 90:83–95, 2014.
  • [66] B. Zitova and J. Flusser. Image registration methods: a survey. Image and vision computing, 21(11):977–1000, 2003.