The usability of satellite images for interpretation, object detection, or reconstruction purposes depends highly on image quality, which can be characterized by a large number of measures, e.g. contrast, brightness, noise variance, radiometric resolution, and sharpness. Among those measures, image sharpness is one of the most important, as it evaluates image blur, which limits the visibility of details. Image blur is introduced both by the optical system and by potential motion during the acquisition time.
Assuming a stationary blur kernel, or Point Spread Function (PSF), that combines the optical and motion blur, we can formulate the image formation model as

$$v = k * u + n, \tag{1}$$

where $v$ is the blurry image, $u$ is the latent sharp image, $k$ is the blur kernel, and $n$ is the acquisition noise. Sharpness can then be objectively measured by estimating the point spread function or its amplitude spectrum, the Modulation Transfer Function (MTF).
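As an illustration, the stationary convolution model can be simulated directly. Below is a minimal numpy sketch using our own notation (v for the blurry image, u for the latent sharp image, k for the kernel); the function names are ours, not from the paper:

```python
import numpy as np

def conv2_same(u, k):
    """2-D convolution with 'same' output size and zero padding."""
    kf = k[::-1, ::-1]                       # flip: convolution, not correlation
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    up = np.pad(u, ((ph, ph), (pw, pw)))
    out = np.zeros_like(u, dtype=float)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += kf[i, j] * up[i:i + u.shape[0], j:j + u.shape[1]]
    return out

def simulate_blur(u, k, sigma=0.0, rng=None):
    """Stationary blur model: v = k * u + n, with Gaussian noise of std sigma."""
    rng = np.random.default_rng() if rng is None else rng
    return conv2_same(u, k) + sigma * rng.standard_normal(u.shape)

# A normalized 5x5 box PSF smears a vertical step edge over several pixels.
u = np.zeros((64, 64)); u[:, 32:] = 1.0
k = np.full((5, 5), 1.0 / 25.0)
v = simulate_blur(u, k)
```

A real acquisition adds noise (nonzero sigma); here the noiseless case makes the edge smearing directly visible.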
Many existing methods focus on the in-flight characterization of the camera system. These approaches usually rely on the presence of on-ground targets such as edges, lines, or point reflectors. Some methods estimate the blur kernel by assuming a parametric model, for example a Gaussian, and fitting its parameters. Another group of methods estimates cross sections of the MTF, usually by applying some variant of the slanted-edge method over calibration sites. These methods allow a periodic assessment of the sharpness of the system; however, they cannot account for the motion blur of a particular acquisition, an artifact that has become more common within modern fleets of micro-satellites.
Other methods [5, 6, 7] seek to estimate the MTF by relying on the detection of straight edges naturally present in the scene. This makes it possible to estimate sharpness (mostly on urban scenes) without the need for a calibration target. Note, however, that the MTF is only useful for characterizing sharpness: while it allows a simple frequency enhancement, it cannot be used to restore the image, as the phase of the kernel is not estimated.
Estimating the blur kernel from a single image is an active field of research, especially for natural images, since it is a necessary step of most blind deblurring methods [8, 9, 10]. These methods rely only on the presence of contrasted edges (not necessarily straight lines), which makes them applicable to virtually any scene. In addition to estimating the kernel for quality assessment, one may want to restore the sharp image. Indeed, if two images have different blur kernels, they might be difficult to compare and analyze, both visually and automatically by advanced image processing techniques.
We focus our experiments on PlanetScope [11] images. The PlanetScope constellation is made of approximately 130 small satellites (form factor of cm) imaging the entire Earth’s landmass every day. The satellites carry 3-band or 4-band frame cameras and fly at 475 km on sun-synchronous orbits whose constant local solar time is between 9:30 and 11:30 am. The Ground Sample Distance (GSD) at nadir is between 3.5 m and 4 m. The images are available as individual basic scenes, ortho scenes, or ortho tiles. We focus here on the basic and ortho scenes, which are respectively non-orthorectified and orthorectified images. Exploiting both products allows us to infer the influence of the orthorectification on image sharpness.
Contributions. We propose a criterion to assess the sharpness of satellite images through the estimation of their blur kernels. This criterion makes it possible to sort images by quality, providing an absolute threshold to discard low-quality images and enabling a deblurring step to increase the quality of blurry ones. Our method is fully blind and is designed for consumers of PlanetScope images: it does not require the precise specifications of the satellites and could also be used for images from other sources. We validate our methodology through a study of the PlanetScope constellation. In particular, we show the effect of the orthorectification on sharpness and we study the per-satellite sharpness.
2 Sharpness assessment and deblurring
In this section we detail our methodology: we first explain how to blindly estimate the blur kernel and compute a sharpness score from it, then describe the deblurring step.
2.1 Blur kernel estimation
In this work, we use the kernel estimation method of Pan et al. [10], originally developed to deblur text images. This method is based on an $\ell_0$-regularized intensity and gradient prior which restores the main structures of the image, including dominant edges. Once the edges are restored, the blur kernel can be estimated. The method alternates between the estimation of the sharp image using the current kernel and the re-estimation of the blur kernel. We use the efficient implementation of Anger et al. [12].
Even if the gradient prior kernel estimation method was designed for text and natural images, we argue that it is applicable to satellite images without any adaptation. Indeed, the assumption behind this prior is that non-blurry images contain contrasted edges, which holds for satellite images. Furthermore, satellite images are more likely to respect the stationary convolution model than natural images: since the scene is far from the camera, there is less parallax, the motion is mostly translational, and the optical distortion is low.
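To make the alternating scheme concrete, here is a numpy sketch of the kernel re-estimation step only, under simplifying assumptions (circular convolutions and a plain quadratic regularizer on the kernel). This is not Pan et al.'s exact solver, and all names are ours:

```python
import numpy as np

def solve_kernel(u, v, gamma=1e-3):
    """Kernel-update step of an alternating blind-deblurring scheme (simplified
    sketch in the spirit of Pan et al., not their exact solver). Given the
    current latent estimate u and the blurry image v, solve
        min_k ||grad(u) * k - grad(v)||^2 + gamma ||k||^2
    in closed form in the Fourier domain; convolutions are circular."""
    def grads(x):
        gx = np.roll(x, -1, axis=1) - x   # periodic forward differences
        gy = np.roll(x, -1, axis=0) - x
        return gx, gy
    gxu, gyu = grads(np.asarray(u, float))
    gxv, gyv = grads(np.asarray(v, float))
    Gxu, Gyu = np.fft.fft2(gxu), np.fft.fft2(gyu)
    Gxv, Gyv = np.fft.fft2(gxv), np.fft.fft2(gyv)
    num = np.conj(Gxu) * Gxv + np.conj(Gyu) * Gyv
    den = np.abs(Gxu) ** 2 + np.abs(Gyu) ** 2 + gamma
    k = np.real(np.fft.ifft2(num / den))
    k = np.clip(k, 0.0, None)             # a PSF is non-negative...
    return k / k.sum()                    # ...and normalized to sum to one
```

In a full blind scheme this step would alternate with a latent-image update under the sparse-gradient prior; working on gradients rather than intensities is what makes the least-squares fit well conditioned.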
2.2 Measure of sharpness
Existing quality assessment metrics for satellite images include measures on the PSF or on the MTF; please refer to Blanc et al. [2] for a comprehensive study. We design our sharpness score so that the maximal score of 1 is achieved for a perfectly sharp image (delta kernel) and it decreases for blurrier images (spread-out kernels). Let us note that the kernel is assumed to be normalized so that $\sum_i k_i = 1$. The simplest measure satisfying these criteria is the $\ell^2$ norm

$$s(k) = \|k\|_2 = \Big(\textstyle\sum_i k_i^2\Big)^{1/2}.$$
An advantage of using the blur kernel to assess sharpness is that it is independent of the image content, which is not the case for measures computed from properties of the image itself. The resulting score is thus absolute and can be compared across scenes and/or satellites. While very simple, this measure is sufficient to characterize the quality of satellite images for many applications.
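Reading the measure as the $\ell^2$ norm of the sum-to-one normalized kernel, the score can be computed in a few lines (a minimal sketch; the function name is ours):

```python
import numpy as np

def sharpness_score(k):
    """Sharpness score of a blur kernel: the l2 norm of the kernel after
    normalizing it to sum to one. Equals 1 for a delta kernel and decreases
    as the kernel spreads out."""
    k = np.asarray(k, dtype=float)
    k = k / k.sum()
    return float(np.sqrt((k ** 2).sum()))

delta = np.zeros((15, 15)); delta[7, 7] = 1.0   # perfectly sharp PSF
box = np.full((5, 5), 1.0 / 25.0)               # spread-out box PSF
```

For the examples above, the delta kernel scores 1 while the 5x5 box scores 1/5, matching the intuition that energy spread over n pixels lowers the score.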
Figure 1 shows five crops of PlanetScope orthorectified images with their associated kernels and scores. We observe that ordering the images by their estimated sharpness correlates well with our perception of the blur introduced by the respective kernels. Furthermore, we observe that when the scene contains a majority of clouds, the kernel estimation tends to produce a very spread-out kernel. This phenomenon occurs because clouds do not have sparse gradients and thus hinder the estimation. Fortunately, the predicted sharpness of cloudy scenes is most of the time very low, which allows such images to be classified as low quality and discarded.
2.3 Satellite image deblurring
Having estimated a blur kernel, it is possible to invert problem (1) using non-blind deconvolution methods [13, 14]. Since satellite images usually contain low noise in nominal conditions, a simple prior is enough to recover a high-quality image. In this work, we use the total variation (TV) prior [13], already well studied for satellite image deblurring [15], leading to the following optimization problem

$$\hat{u} = \arg\min_u \frac{\lambda}{2}\,\|k * u - v\|_2^2 + \|\nabla u\|_1. \tag{5}$$
We solve problem (5) using the fast method of Krishnan et al. [14]. Figure 2 shows a deconvolution result on an orthorectified image. Notice that the input image contains an anisotropic blur, which is successfully removed by the deblurring.
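For intuition, here is a compact half-quadratic-splitting sketch of TV deconvolution, in the spirit of Krishnan and Fergus' fast method (their solver targets hyper-Laplacian priors; the exponent 1 used here reduces the shrinkage step to a soft-threshold). Boundary handling, names, and parameter values are ours, not the paper's implementation:

```python
import numpy as np

def deconv_tv_hqs(v, k, lam=2000.0, n_outer=8):
    """Non-blind TV deconvolution by half-quadratic splitting.
    Convolutions are circular; the small kernel k is implicitly placed at
    the origin. lam weights the data term; beta is the splitting penalty."""
    h, w = v.shape
    K = np.fft.fft2(k, s=(h, w))
    Dx = np.fft.fft2(np.array([[1.0, -1.0]]), s=(h, w))   # periodic forward diffs
    Dy = np.fft.fft2(np.array([[1.0], [-1.0]]), s=(h, w))
    V = np.fft.fft2(v)
    den_k = np.abs(K) ** 2
    den_d = np.abs(Dx) ** 2 + np.abs(Dy) ** 2
    u, beta = v.copy(), 1.0
    for _ in range(n_outer):
        U = np.fft.fft2(u)
        gx = np.real(np.fft.ifft2(Dx * U))
        gy = np.real(np.fft.ifft2(Dy * U))
        # w-subproblem: soft-threshold the gradients (the TV shrinkage)
        wx = np.sign(gx) * np.maximum(np.abs(gx) - 1.0 / beta, 0.0)
        wy = np.sign(gy) * np.maximum(np.abs(gy) - 1.0 / beta, 0.0)
        # u-subproblem: quadratic, solved exactly in the Fourier domain
        num = (lam / beta) * np.conj(K) * V \
            + np.conj(Dx) * np.fft.fft2(wx) + np.conj(Dy) * np.fft.fft2(wy)
        u = np.real(np.fft.ifft2(num / ((lam / beta) * den_k + den_d)))
        beta *= 2.0                                       # continuation on beta
    return u
```

Doubling beta at each outer iteration progressively tightens the split between the gradient auxiliary variables and the latent image, which is what makes these solvers fast: every subproblem is closed form.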
3 Study of the PlanetScope constellation
In this section we apply our sharpness measure to PlanetScope images to show that it captures the variability present in the images of this constellation. As a dataset, we collected basic and ortho PlanetScope images at 29 different locations for a total of images.
Sharpness distribution. Figure 3 shows two histograms representing the distribution of sharpness for basic and ortho images. We note that the sharpness of ortho images is on average lower and less variable than that of basic images. Indeed, the sharpness after orthorectification decreases significantly, with an average of versus for basic images. This indicates that orthorectified images are indeed less sharp, and that our measure quantifies the amount of sharpness lost due to the resampling.
We also observe that both distributions have a second mode near . This mode corresponds to invalid kernels, which can occur for example on very cloudy scenes (as shown in Figure 1(a)) or when the signal-to-noise ratio (SNR) is low due to poor atmospheric conditions.
Quality thresholds. From these histograms and our observations of the data, we found that, for orthorectified images, an upper threshold identifies high-sharpness images, whereas a lower threshold corresponds to highly blurred images leaving little hope for a high-quality restoration (in red in Figure 3). Images between the two can be sharpened using the previously described deblurring algorithm in order to increase their quality before visualization or processing. Similar thresholds apply to basic images, shifted to account for their higher sharpness compared to ortho images.
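The resulting decision rule is a simple three-way classification on the score. The threshold values are elided above, so the constants below are hypothetical placeholders, not the paper's calibrated numbers:

```python
def classify_quality(score, t_sharp=0.8, t_discard=0.3):
    """Three-way decision from a sharpness score. Both thresholds are
    hypothetical placeholders (the calibrated values differ for basic
    and ortho images)."""
    if score >= t_sharp:
        return "sharp"     # usable as-is
    if score <= t_discard:
        return "discard"   # too blurred for a high-quality restoration
    return "deblur"        # restore with the non-blind deconvolution step
```

In practice one pair of thresholds would be calibrated per product type (basic vs. ortho), since the two score distributions are shifted.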
Presence of motion blur. Sharpness metrics based on the MTF are usually calibrated using on-ground targets. While this allows very precise estimations, it cannot account for all sources of blur. In particular, our methodology takes into account resampling as well as blur due to motion during the integration time. Figure 4 shows two non-orthorectified images taken on two consecutive days by different satellites. The right image shows an example of motion blur. The sharpness score makes it possible to automatically filter out such poor-quality images using a simple threshold.
Per-satellite sharpness. As previously explained, the PlanetScope constellation is composed of over a hundred satellites. Here, we study the correlation between the sharpness of the images and the satellite that acquired them. Our dataset represents 153 distinct satellites. In order to have large enough sample sizes, we kept only the satellites for which there are at least 50 images. Then, we computed the score of each image and removed from the dataset all invalid images, that is, those with a sharpness score below the respective thresholds for basic and ortho images. Sharpness averages and standard deviations for each satellite are reported in Figure 5. We first notice that the average sharpness is not uniform across the constellation, which indicates that each satellite produces images with a slightly different blur than the others. Indeed, an analysis of variance (one-way ANOVA test) rejects the hypothesis of equal averages and indicates a statistically significant difference in the per-satellite sharpness averages. Moreover, the clear correlation between the sharpness of basic and ortho images per satellite confirms that our measure is reliable. Finally, it is important to note that the standard deviation is large: the satellite is not the only factor responsible for the variation of sharpness, and other factors, such as motion during acquisition, also introduce variance in the effective sharpness of the images.
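The one-way ANOVA test above reduces to comparing between-group and within-group variance. A numpy-only sketch of the F statistic (scipy.stats.f_oneway computes the same statistic together with its p-value):

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic of a one-way ANOVA, as used to compare per-satellite
    sharpness averages: ratio of the between-group mean square to the
    within-group mean square. `groups` is a list of 1-D score arrays,
    one per satellite."""
    x = np.concatenate(groups)
    grand = x.mean()
    k, n = len(groups), x.size
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (relative to the F-distribution with k-1 and n-k degrees of freedom) rejects the hypothesis that all satellites share the same average sharpness.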
4 Conclusion

In this study, we quantified the variability of blur in PlanetScope images by using an efficient blur kernel estimation method and by automatically assigning a sharpness measure to each estimated kernel. The method is blind and does not require the specifications of the optical system. We also demonstrated that it is possible to apply blind deblurring methods to satellite images in order to equalize quality across time or improve visualization.
Our study of the constellation revealed variation across images. We observed that images can contain significant motion blur. Furthermore, we showed that the orthorectification provided by Planet does decrease the average sharpness of the images. We also showed a correlation between a given satellite and its average sharpness. Finally, we proposed simple thresholds that make it possible to discard unsatisfactory images.
However, our method has a few limitations that we would like to overcome in future work. First, as explained in Section 2.2, the method is affected by clouds. One way to solve this issue would be to apply a cloud detector to the images and mask out the detected regions during the kernel estimation. The second limitation is noise, which can be present in some images due to atmospheric conditions and degrades the performance of both the kernel estimation and the non-blind deconvolution. We would like to tackle this problem and provide an additional measure indicating the noise level and the amount of detail we can expect from the restoration. Finally, saturated regions in the image tend to mislead the kernel estimation towards a delta, and further work is required to handle this degradation.
-  T. Choi and D. L. Helder, “Generic sensor modeling for modulation transfer function (mtf) estimation,” Pecora 16, pp. 23–27, 2005.
-  P. Blanc and L. Wald, “A review of earth-viewing methods for in-flight assessment of modulation transfer function and noise of optical spaceborne sensors,” 2009.
-  M. Pagnutti, S. Blonski, M. Cramer, D. Helder, K. Holekamp, E. Honkavaara, and R. Ryan, “Targets, methods, and sites for assessing the in-flight spatial resolution of electro-optical data products,” Canadian Journal of Remote Sensing, vol. 36, no. 5, pp. 583–601, 2010.
-  A. Jalobeanu, L. Blanc-Féraud, and J. Zerubia, “Estimation of blur and noise parameters in remote sensing,” in ICASSP, 2002, vol. 4, pp. 3580–3583.
-  K. Kohm, “Modulation transfer function measurement method and results for the orbview-3 high resolution imaging satellite,” 2004, pp. 12–23.
-  M. Crespi and L. De Vendictis, “A procedure for high resolution satellite imagery quality assessment,” Sensors, vol. 9, no. 5, pp. 3289–3313, 2009.
-  M. Luxen and W. Förstner, “Characterizing image quality: Blind estimation of the point spread function from a single image,” ISPRS, vol. 34, pp. 205–210, 2002.
-  A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding and evaluating blind deconvolution algorithms,” CVPR Workshops, pp. 1964–1971, 2009.
-  Q. Shan, J. Jia, and A. Agarwala, “High-quality motion deblurring from a single image,” ACM Transactions on Graphics, vol. 27, no. 3, pp. 1, 2008.
-  J. Pan, Z. Hu, Z. Su, and M. H. Yang, “L0-Regularized Intensity and Gradient Prior for Deblurring Text Images and beyond,” TPAMI, vol. 39, no. 2, pp. 342–355, 2015.
-  Planet Team, “Planet Application Program Interface: In Space for Life on Earth,” San Francisco, CA., 2017, https://api.planet.com.
-  J. Anger, G. Facciolo, and M. Delbracio, “Blind image deblurring using the l0 gradient prior,” Image Processing On Line, 2019.
-  L. I. Rudin and S. Osher, “Total variation based image restoration with free local constraints,” ICIP, vol. 1, pp. 31–35, 1994.
-  D. Krishnan and R. Fergus, “Fast image deconvolution using hyper-laplacian priors,” NIPS, pp. 1–9, 2009.
-  S. Durand, F. Malgouyres, and B. Rougé, “Image deblurring, spectrum interpolation and application to satellite imaging,” Tech. Rep., 2000.