WAHRSIS: A Low-cost, High-resolution Whole Sky Imager With Near-Infrared Capabilities

05/21/2016 ∙ by Soumyabrata Dev, et al. ∙ Nanyang Technological University

Cloud imaging using ground-based whole sky imagers is essential for a fine-grained understanding of the effects of cloud formations, which can be useful in many applications. Some such imagers are available commercially, but their cost is relatively high, and their flexibility is limited. Therefore, we built a new daytime Whole Sky Imager (WSI) called Wide Angle High-Resolution Sky Imaging System. The strengths of our new design are its simplicity, low manufacturing cost and high resolution. Our imager captures the entire hemisphere in a single high-resolution picture via a digital camera using a fish-eye lens. The camera was modified to capture light across the visible as well as the near-infrared spectral ranges. This paper describes the design of the device as well as the geometric and radiometric calibration of the imaging system.




1 Introduction

Whole sky imagers are becoming popular amongst the research community for a variety of applications and domains, such as aviation, weather prediction, and solar energy. The resulting images are of higher resolution than what can be obtained from satellites, and the upwards pointing nature of the camera makes it easy to capture low-altitude clouds. They thus provide a useful complement to satellite images.

Our specific objective is to use such imagers in the analysis and prediction of signal attenuation due to clouds. Ground-to-satellite and ground-to-air communication signals suffer substantial attenuation in the atmosphere: rain, clouds, atmospheric particles, and water vapor along the signal path all degrade the quality in various ways [1, 2]. Analyzing and predicting those effects thus requires accurate information about cloud formations along the communication path, and an efficient data-acquisition technique is needed to detect and track the various types of clouds. Images captured by geostationary satellites are commonly used for this task, but those devices provide only a limited spatial resolution. Such an approach generally fails in countries like Singapore, which has a small land mass and where weather phenomena are often very localized. Furthermore, the downward-pointing nature of satellite images is a significant disadvantage for capturing the lower cloud layers [3].

Several models of automatic whole sky imagers have been developed at the Scripps Institution of Oceanography, University of California, San Diego, with cloud detection in mind [4]. Yankee Environmental Systems commercializes the TSI-880, which is used by many institutions [5, 6]. Those imagers provide a good starting point for cloud analysis, but their design and fabrication are controlled by a company, making specialized adaptations difficult or impossible. Moreover, those devices are expensive (around 30,000-35,000 US$ for the TSI-880) and use very low-resolution capture devices. These limitations encouraged us to build our own.

In this paper, we propose a novel design of a whole sky imager called WAHRSIS (Wide Angle High-Resolution Sky Imaging System). It uses mostly off-the-shelf components, available on the market for an overall price of around 2,500 US$. The simple overall design allows the various parts to be mounted in-house. The resulting images are of high resolution and can be used for various applications.

The paper is organized as follows: Section 2 presents the overall design of the device, including the various components and imaging system. Section 3 describes the geometric and radiometric calibration of the camera. Conclusions and future work are summarized in Section 4.

2 WAHRSIS Design

2.1 Components

Figure 1 shows a drawing of WAHRSIS. Its main components are the following:

  • The sun-blocker occludes the light coming directly from the sun and thus reduces the glare in the circumsolar region of the captured image. It is composed of a main arm, which is rotated around the outside of the box by a motor. A polystyrene ball is fixed on top of a stem, which is attached to the main arm via a second motor rotating along the other axis.

  • An Arduino board controls the two motors of the sun blocker. The algorithm is briefly explained in Appendix A.

  • A built-in laptop supervises the whole process. It controls the Arduino board through a serial communication port and the camera through a USB connection and the Canon Digital Camera SDK. The captured images are stored on its hard drive and can be accessed via an Internet connection. As an alternative, a workstation situated outside the box can also be used.

  • The outer casing, a hermetically sealed box, prevents moisture and dirt from affecting the internal components. A transparent dome at its top houses the fish-eye lens of the camera.

  • The camera and lens are discussed in more detail in Section 2.2 below.

Figure 1: Drawing of WAHRSIS.

The device operates autonomously. However, periodic manual maintenance may occasionally be needed, for example to address wear and tear of the motor gears. We have currently built one prototype of WAHRSIS, which we placed on the roof-top of our university building. We plan to position several models across different parts of Singapore. Table 1 details all the components and their respective prices at the time of purchase [7].

Items Cost (in US$)
Arduino Mega 90
Arduino Mega 2560-control board 60
Bipolar 5.18:1 Planetary Gearbox Stepper 25
Bipolar 99.55:1 Planetary Gearbox Stepper 35
Stepper Motor Driver 20
Real Time Clock 15
Big Easy Stepper Motor Driver 30
12 V DC Power Supply 50
Base Hard plastic or Acrylic (Water-proof housing) 120
Plastic Dome 25
Metal Arm 50
Sun Blocker 5
Cables and accessories 40
Sigma 4.5 mm F2.8 EX DC HSM Circular Fisheye Lens 950
Canon EOS Rebel T3i (600D) camera body 360
Alteration for near-infrared sensitivity 300
Built-in laptop 350
Total Cost 2525
Table 1: Cost Analysis for all the components of WAHRSIS

2.2 Imaging System

The imaging system of WAHRSIS consists of a Canon EOS Rebel T3i (a.k.a. EOS 600D) camera body and a Sigma 4.5mm F2.8 EX DC HSM Circular Fisheye Lens with a field of view of 180 degrees. It captures images at a resolution of 5184×3456 pixels. Due to the lens design and sensor size, the entire scene is captured within a circular region of the sensor (see Figure 2). A custom program running on the built-in laptop uses the Canon Digital Camera SDK to control the camera. Images can be captured automatically at regular time intervals, and the camera settings can be adjusted as needed.

The sensor of a typical digital camera absorbs near-infrared light quite effectively, so much so that an infrared-blocking filter is placed behind the lens. We had this filter physically removed and replaced by a piece of clear glass, which passes all wavelengths. The reasoning is that near-infrared light is less susceptible to attenuation due to haze [8], a common phenomenon in the atmosphere and especially around large cities.

Haze consists of small particles in suspension in the air. Those have a scattering effect on incident light that is modeled by Rayleigh's law: the intensity of the scattered light satisfies I_s ∝ I_0 / λ^4, where I_0 is the intensity of the incident light and λ its wavelength. We can thus reduce this effect by considering longer wavelengths, such as the ones in the near-infrared. This model is, however, only valid for particles smaller than the wavelength of the light. Most clouds consist of larger particles; their resulting scattering effect obeys Mie's law, which is wavelength-independent. For this reason, we expect our modified camera to provide sharper images of cloud formations.
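The λ^-4 dependence can be made concrete with a quick calculation (the wavelengths below are illustrative values, not measurements from the paper):

```python
# Rough illustration of Rayleigh's law, I ∝ 1/λ^4: how much weaker
# scattering by haze is at a near-infrared wavelength than at a blue one.
# The wavelengths (in nanometres) are illustrative choices.

def rayleigh_scattering_ratio(lambda_a_nm: float, lambda_b_nm: float) -> float:
    """Ratio I_a / I_b of scattered intensities at two wavelengths."""
    return (lambda_b_nm / lambda_a_nm) ** 4

# Scattering at 850 nm (near-infrared) relative to 450 nm (blue):
ratio = rayleigh_scattering_ratio(850.0, 450.0)  # roughly 0.08
```

A near-infrared ray is thus scattered more than an order of magnitude less than a blue one, which is why the modified camera is expected to see through haze more clearly.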

3 WAHRSIS Calibration

The images captured by WAHRSIS are of high resolution and cover the full sky hemisphere. However, the scene is distorted by the fish-eye lens, and the colors are not rendered correctly due to the alteration for near-infrared sensitivity. Various calibration stages are thus required. (The source code of the various calibration processes for our sky camera WAHRSIS is available online at https://github.com/Soumyabrata/WAHRSIS.) Section 3.1 discusses white balancing, Section 3.2 describes the geometric calibration, and Section 3.3 details the vignetting correction.

3.1 White Balancing

The camera of WAHRSIS is sensitive to near-infrared light and thus sees beyond the visible spectrum. Color calibration in the traditional sense is therefore less meaningful for our camera. However, white balancing is still necessary, as we explain here.

The sensitivity of the various channels depends on the Bayer filter in front of the sensor. It is known that the red pixels show more near-infrared leakage than the blue or green ones. The automatic white balance settings were engineered for an unmodified camera; as a result, images captured in automatic white balance mode appear reddish, as shown in Figure 2(a). Fortunately, our camera also provides a custom mode. It relies on a known white patch in the scene, for which the camera computes the incident-light quantization parameters so that this patch is rendered white in the captured picture. The resulting image after custom white balancing looks visually plausible, as shown in Figure 2(b).
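The idea behind the custom mode can be sketched as follows. This is not the camera's internal algorithm, only a minimal gray-world-style illustration: per-channel gains are derived from a patch known to be white, then applied to the whole image.

```python
import numpy as np

# Minimal sketch (not the in-camera algorithm): derive per-channel gains
# from a patch that should appear white, then apply them to the image.

def white_balance_gains(patch: np.ndarray) -> np.ndarray:
    """patch: HxWx3 RGB values of a region known to be white or grey."""
    means = patch.reshape(-1, 3).mean(axis=0)
    return means.max() / means       # scale every channel up to the brightest

def apply_gains(image: np.ndarray, gains: np.ndarray) -> np.ndarray:
    return np.clip(image * gains, 0.0, 1.0)

# A reddish patch, as produced by near-infrared leakage into the red channel:
patch = np.full((4, 4, 3), [0.9, 0.6, 0.55])
gains = white_balance_gains(patch)   # red gain 1.0, green/blue boosted
balanced = apply_gains(patch, gains)
```

With these gains the formerly reddish patch comes out neutral, mirroring what the custom mode achieves in-camera before quantization.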

(a) Auto
(b) Custom
Figure 2: Captured images using automatic and custom white balance settings.

Without white balancing, the red channel of our camera is prone to over-saturation and color clipping due to the additional near-infrared light reaching the sensor. The red channel of Figure 2(a) contains 24.5% of clipped values (i.e., pixels at the maximum value of 255), whereas the same channel in Figure 2(b) contains only 0.7% of them. This shows the importance of the custom white balancing, which compensates quite well for the near-infrared alteration.
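The clipped-value statistic quoted above is simply the fraction of pixels saturated at the 8-bit maximum; a sketch:

```python
import numpy as np

# Fraction of pixels in one channel that are saturated at the 8-bit
# maximum of 255, as used for the clipping percentages quoted above.

def clipped_fraction(channel: np.ndarray, max_value: int = 255) -> float:
    return float(np.count_nonzero(channel >= max_value) / channel.size)

# Toy example: 1 saturated pixel out of 4 -> 0.25.
red = np.array([[255, 120], [80, 254]], dtype=np.uint8)
frac = clipped_fraction(red)
```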

3.2 Geometric Calibration

A geometric calibration of the capturing device needs to be performed in order to obtain a precise model of the distortion introduced by the imaging system, which in our case includes the fish-eye lens and the dome [9]. This consists of determining the intrinsic parameters of the camera, which relate the pixel coordinates with their 3D correspondences in a camera coordinate system. The extrinsic parameters, on the other hand, express the translations and rotations required to use other 3D world coordinate systems, such as the ones spanned by the checkerboard borders. The intrinsic parameters give all the information needed to relate each pixel of a resulting image with the azimuth and elevation angles of the incident light ray and vice versa, which is essential for measuring the physical extent of clouds.
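The pixel-to-angle mapping enabled by the intrinsic parameters can be illustrated with an ideal equisolid fisheye model (the actual calibration fits a polynomial instead, see Section 3.2.2; the focal length in pixels and the image center below are assumed values, not the calibrated ones):

```python
import math

# Illustrative pixel -> (azimuth, elevation) conversion for an ideal
# equisolid fisheye, r = 2 f sin(theta/2). WAHRSIS uses a fitted
# polynomial instead, but the geometry of the mapping is the same.
# `f_pixels` (focal length in pixels) and the center are assumed values.

def pixel_to_angles(x, y, cx, cy, f_pixels):
    """Return (azimuth, elevation) in degrees for pixel (x, y)."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)                                   # radius on sensor
    theta = 2.0 * math.asin(min(r / (2.0 * f_pixels), 1.0))  # incident angle
    azimuth = math.degrees(math.atan2(dy, dx))
    elevation = 90.0 - math.degrees(theta)
    return azimuth, elevation

# The pixel at the image center corresponds to the zenith (elevation 90°):
az, el = pixel_to_angles(1000.0, 1000.0, 1000.0, 1000.0, 700.0)
```

Inverting this mapping (angles to pixel) is equally direct, which is what makes physical measurements of cloud extent possible from a calibrated image.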

3.2.1 Fisheye Calibration Background

The modeling of a typical (non-fisheye) camera uses the pinhole model. However, this is not applicable to our case because of the very wide capturing angle. A schematic representation of the refraction of an incident ray in a fisheye lens is shown in Figure 3. Most common approaches relate the incident angle (θ) with the radius on the image plane (r), assuming that the process is independent of the azimuth angle.

Figure 3: Schematic representation of object space and image plane

Several theoretical models relating those two values exist, with f denoting the focal length:

  • Stereographic projection: r = 2f tan(θ/2)

  • Equidistance projection: r = f θ

  • Equisolid angle projection: r = 2f sin(θ/2)

  • Orthogonal projection: r = f sin θ

  • Perspective projection: r = f tan θ

Fisheye lens manufacturers usually claim to follow one of these models, but the corresponding equations cannot be matched exactly in practice. They can, however, be approximated by a polynomial, by virtue of Taylor's theorem. Some calibration techniques make use of this, as it easily allows a slight deviation from the projection equation. Two additional polynomial functions can also be used to cope with possible radial and tangential distortions caused by the lens.
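For reference, the theoretical projections listed above are easy to implement and compare; note that for small incident angles they all reduce to r ≈ f·θ, which is why a polynomial fit can absorb the differences:

```python
import math

# The theoretical fisheye projections, each mapping the incident angle
# theta (radians) to the image-plane radius r for a focal length f.

def stereographic(theta, f): return 2.0 * f * math.tan(theta / 2.0)
def equidistance(theta, f):  return f * theta
def equisolid(theta, f):     return 2.0 * f * math.sin(theta / 2.0)
def orthogonal(theta, f):    return f * math.sin(theta)
def perspective(theta, f):   return f * math.tan(theta)   # pinhole model

# At small angles all models agree (r ≈ f * theta):
f, theta = 1.0, 0.01
radii = [p(theta, f) for p in (stereographic, equidistance, equisolid,
                               orthogonal, perspective)]
```

At large angles the models diverge sharply (the perspective projection even blows up at θ = 90°), which is why the choice of model matters for a 180-degree lens.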

Most calibration techniques use a checker-board with a known pattern. Shah et al. [10] introduce a calibration method where such a polynomial is used; furthermore, they model radial and tangential distortions. However, the optical center of the camera has to be found using a low-power laser beam, which is cumbersome in practice. Bakstein et al. [11] use a spherical retina model, but do not provide any extrinsic parameter estimation technique and rely on cylindrical patterns. Sturm et al. [12] introduce a very generic model, where a camera is described by the coordinates of a set of rays and a mapping between those rays and the image pixels; however, the authors experienced difficulties with a fish-eye lens, especially in the side view. Kannala et al. [13] use polynomials to describe the calibration and the radial and tangential distortions, but they also assume that the positions of the pattern on the checker-board are known. Finally, Scaramuzza et al. [14] provide an algorithm estimating both the intrinsic camera parameters and the extrinsic parameters describing the positions of the checker-board. They provide a full MATLAB implementation of their method, making it the easiest to use. We have chosen this method and enhanced it for our needs.

3.2.2 Calibration Method

The calibration toolbox proposed by Scaramuzza et al. [14] takes as input a number of images containing a known checker-board pattern. The first step is corner detection, which is performed by the toolbox; user interaction is required when the automatic detection algorithm fails. Those points are then used to find the parameters by a least-squares linear minimization, followed by a non-linear refinement with a maximum-likelihood criterion. The toolbox uses a polynomial to model the ray refraction, as well as a matrix transformation and a translation to cope with small axis misalignments and the digitizing process. The intrinsic parameters thus consist of the coefficients of the polynomial as well as the values of the matrix and the translation parameters, which describe the position of the center of the image.

Figure 4 shows a sample of the images we used for the calibration.

Figure 4: Sample of the checker-board images used for the calibration

The toolbox also outputs the re-projection errors, i.e. the differences between the original point locations computed during corner detection and the ones resulting from the estimated parameters. This is done on the same images as the ones used for the parameter estimation. However, we noticed that the returned errors do not necessarily reflect the quality of the calibration, as the re-projection sometimes estimates the coordinates of a grid corner more accurately than the corner detection or the user input. Furthermore, we would like to estimate a general error which does not rely only on the locations of the checker-boards in the images used for the fitting. We also observed that the choice of input images has a significant influence on the output of the algorithm. We thus modified the calibration process in the following way: we randomly split our captured checker-board images into a training set of 10 images and a validation set of 6 images. We compute the parameters using the training set and the re-projection error using the validation set. We repeat this process 50 times with different randomized splits and finally retain the parameters leading to the smallest error on the validation set. In this way we reduce the uncertainty of the whole process.
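The split-and-validate scheme can be sketched as follows. The `calibrate` and `reprojection_error` callables are placeholders standing in for the toolbox routines, not real functions of the toolbox:

```python
import random

# Sketch of the randomized split-and-validate selection described above.
# `calibrate` and `reprojection_error` are placeholders for the toolbox
# calls: fit parameters on a training set, score them on held-out images.

def select_best_calibration(images, calibrate, reprojection_error,
                            n_train=10, n_trials=50, seed=0):
    rng = random.Random(seed)
    best_params, best_err = None, float("inf")
    for _ in range(n_trials):
        shuffled = images[:]
        rng.shuffle(shuffled)
        train, val = shuffled[:n_train], shuffled[n_train:]
        params = calibrate(train)               # fit on the training set
        err = reprojection_error(params, val)   # score on validation set
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err

# Demo with numeric stand-ins: "calibration" = mean of the training values,
# "error" = distance of that mean from the validation mean.
params, err = select_best_calibration(
    list(range(16)),
    calibrate=lambda train: sum(train) / len(train),
    reprojection_error=lambda p, val: abs(p - sum(val) / len(val)))
```

Keeping the parameters with the smallest held-out error, rather than the smallest training error, is what guards against a lucky (or unlucky) choice of input images.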

Our fish-eye lens (Sigma 4.5mm F2.8 EX DC HSM) was designed by the manufacturer to follow the equisolid projection for visible light, but may be less accurate in the near-infrared spectrum. There is also a transparent dome on top of the camera lens, which may have an additional refracting effect on the incident rays. The toolbox models the calibration function as a polynomial and can thus incorporate these effects when fitting the coefficients. We set the maximum degree of the polynomial to 4, as advised by the authors of the toolbox. Indeed, we noticed a larger re-projection error with smaller degrees, whereas when the maximum degree is set higher, the coefficients of the terms above degree 4 are almost zero.

Figure 5(a) shows the result of the resulting calibration compared to the various theoretical projection models. Figure 5(b) shows the distance in pixels between our estimated calibration parameters and the various projection models. We see that we reach a distance of more than 16 pixels for angles around 40 degrees. We also find that the maximum viewing angle of the whole device is 178.7 degrees, slightly less than hemispheric.

(a) Relationship between the incident angle of a light ray and the corresponding distance from the image center.
(b) Difference between equisolid model and our imaging system.
Figure 5: Result of the calibration process compared to the various theoretical projection models.

Figure 6 shows the re-projection error for all the points of the grids in the validation set. A moving average with a window of 50 pixels is also shown. The average re-projection error for our model is 3.87 pixels. We can see a minor increase of the error with distance from the image center (and thus the elevation angle of the incident ray).

Figure 6: Re-projection errors as a function of the distance to the image center (crosses: raw data; line: moving average).

3.2.3 Chromatic Aberration

The refractive index of lenses is wavelength-dependent, which means that light rays from different parts of the spectrum converge at different points on the sensor. Chromatic aberration refers to the color distortions introduced to the image as a result of this phenomenon. Since the camera of WAHRSIS is sensitive to near-infrared, a larger part of the light spectrum is captured by the sensor, making the device more prone to this issue. Figure 7 shows chromatic aberration on an image captured by the camera.

Figure 7: Example of chromatic aberration. Notice the green and magenta artifacts at the edges of the squares.

The calibration process described in the previous section uses grayscale images. In order to obtain more accurate calibration functions, we apply the same process on each of the red, green, and blue channels individually. As shown in Figure 8, we observe small differences in the resulting calibration functions, especially for the red channel. Visually, the difference is quite striking, as can be seen in Figure 9.

Figure 8: Difference between the calibration function of each color channel and the grayscale calibration function.
(a) Grayscale calibration.
(b) Color-channel-specific calibration.
Figure 9: Section of an image taken with WAHRSIS that was rectified using grayscale (left) and color-channel-specific (right) calibration. The chromatic aberration in the right image is much less noticeable.

3.3 Vignetting Correction

Vignetting refers to a darkening toward the edges of the image which is introduced during the capturing process. It has several origins. Natural vignetting is due to the incident light rays reaching the camera sensor at varying angles; it is commonly modeled by the cosine-fourth law. The camera of WAHRSIS is particularly prone to this effect due to its wide angle. Optical vignetting describes the shading caused by the lens cylinder itself, which blocks part of the off-axis incident light; this effect is aperture-dependent [15].
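The cosine-fourth law is simple enough to evaluate directly, and shows why a wide-angle system suffers so much from natural vignetting:

```python
import math

# Natural vignetting per the cosine-fourth law: relative illumination
# falls off as cos^4 of the off-axis angle.

def cos4_falloff(theta_deg: float) -> float:
    return math.cos(math.radians(theta_deg)) ** 4

# At 60 degrees off-axis the sensor receives only 1/16 of the
# on-axis illumination:
falloff = cos4_falloff(60.0)   # 0.0625
```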

We use an integrating sphere to analyze this phenomenon. It consists of a large sphere whose interior surface is coated with a diffuse white reflective material, resulting in a uniform light source over the whole surface. An LED light source as well as the camera are placed inside. Figure 10 shows an image captured inside this sphere; notice the higher brightness at the center. This image was taken with the largest available aperture (f/2.8) and constitutes the worst case with regard to vignetting. Smaller apertures result in less significant distortions.

Figure 10: Image captured inside the integrating sphere

Figure 11(a) shows the luminance of the pixels inside the lens circle as a function of the distance from the image center. We normalize those values and then take their inverse. We fit a moving average and use the result as a radius-dependent correction coefficient (Figure 11(b)). We repeat this process for each aperture setting.
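The normalize-invert-smooth pipeline can be sketched as follows; the 50-sample smoothing window is an illustrative choice, and the synthetic cos^4 profile stands in for a real integrating-sphere capture:

```python
import numpy as np

# Sketch of the vignetting-correction pipeline: normalize the luminance
# profile, invert it, and smooth with a moving average to obtain a
# radius-dependent correction coefficient. Window size is illustrative.

def vignetting_correction(radius, luminance, window=50):
    order = np.argsort(radius)
    lum = luminance[order].astype(float)
    lum /= lum.max()                    # normalize to the brightest value
    inv = 1.0 / lum                     # inverse = raw correction factor
    kernel = np.ones(window) / window
    smoothed = np.convolve(inv, kernel, mode="same")  # moving average
    return radius[order], smoothed

# Synthetic cos^4 falloff in place of a real integrating-sphere image:
r = np.arange(200.0)
lum = 100.0 * np.cos(np.radians(r / 4.0)) ** 4
r_sorted, coeff = vignetting_correction(r, lum)
```

Multiplying each pixel by the coefficient at its radius then flattens the brightness profile; a separate coefficient curve is stored per aperture setting.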

(a) Luminance as a function of the distance from the image center
(b) Correction coefficients in red and scatter plot of all the values in blue.
Figure 11: Computation of the vignetting correction coefficients

4 Conclusion

We have presented WAHRSIS, a new whole sky imager whose advantages are its low cost and simple design. We described its components and the calibration of its imaging system. It provides high-resolution images of the whole sky hemisphere, which can be useful for a wide variety of applications.

In our future work, we will use the resulting images for detailed cloud analysis. We plan to deploy several devices across the country to estimate cloud cover, cloud base altitude, cloud movement, etc. We will also investigate the effects of the near-infrared capabilities on these analyses in more detail. Finally, we are working on an improved version of WAHRSIS with a better sun-blocker design.

Appendix A Sun Blocker Positioning

In order to block the light coming directly from the sun, the sun-blocker head should intersect the ray from the sun to the camera. We use the Jean Meeus algorithm [16] to compute the azimuth and elevation angles of this ray; the algorithm has an error of only 0.0003 degrees. The various lengths and angles required for the computations are shown in Figure 12: the length of the main arm, the offset of the motor shaft between the blocker and the arm, the length of the stem under the blocker head, and the distance between the camera center and the arm-motor axis. The two unknowns are the arm angle, measured from the horizon, and the stem angle, measured from the east direction. Trigonometric equations relating these quantities link the two motor angles to the azimuth and elevation of the blocker head.

(a) Side and top views
(b) 3D view
Figure 12: Representation of WAHRSIS with the lengths and angles required for the computation of the motor angles.

The arm and stem angles can each rotate only over a limited range. Combinations of the two angles, sampled at small increments, are used to compute the corresponding azimuth and elevation angles of the blocker head. The combination leading to the smallest difference between those values and the ones obtained from the Jean Meeus algorithm is used.
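The brute-force search can be sketched as follows. The forward model `blocker_angles` is a placeholder for the trigonometric relations of Figure 12 (here a dummy stand-in, so only the search loop itself is illustrated); the angle ranges and 1-degree step are likewise illustrative:

```python
import math

# Sketch of the brute-force positioning search. `blocker_angles` is a
# placeholder for the trigonometric forward model of Figure 12; the dummy
# below simply swaps its inputs so the loop can be demonstrated.

def blocker_angles(alpha, beta):
    """Placeholder forward model: motor angles -> (azimuth, elevation)."""
    return beta, alpha

def find_motor_angles(sun_azimuth, sun_elevation, step=1.0):
    """Grid-search the motor angles minimizing the angular mismatch."""
    best, best_diff = None, float("inf")
    alpha = 0.0
    while alpha <= 90.0:            # illustrative arm-angle range
        beta = 0.0
        while beta < 360.0:         # illustrative stem-angle range
            az, el = blocker_angles(alpha, beta)
            diff = math.hypot(az - sun_azimuth, el - sun_elevation)
            if diff < best_diff:
                best, best_diff = (alpha, beta), diff
            beta += step
        alpha += step
    return best

angles = find_motor_angles(135.0, 42.0)
```

In the real device the sun's azimuth and elevation come from the Jean Meeus algorithm, and the selected pair of angles is sent to the two stepper motors via the Arduino board.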


  • [1] Yeo, J. X., Lee, Y. H., and Ong, J. T., “Performance of site diversity investigated through RADAR derived results,” Antennas and Propagation, IEEE Transactions on 59(10), 3890–3898 (2011).
  • [2] Kumar, L. S., Lee, Y. H., and Ong, J. T., “Truncated gamma drop size distribution models for rain attenuation in Singapore,” Antennas and Propagation, IEEE Transactions on 58(4), 1325–1335 (2010).
  • [3] Shields, J. E., Karr, M. E., Tooman, T. P., Sowle, D. H., and Moore, S. T., “The whole sky imager – a year of progress,” in [8th Atmospheric Radiation Measurement (ARM) Science Team Meeting ], 23–27 (1998).
  • [4] Shields, J. E., Karr, M. E., Johnson, R. W., and Burden, A. R., “Day/night whole sky imagers for 24-h cloud and sky assessment: History and overview,” Applied Optics 52(8), 1605–1616 (2013).
  • [5] Long, C. N., “Accounting for circumsolar and horizon cloud determination errors in sky image inferral of sky cover,” in [15th Atmospheric Radiation Measurement (ARM) Science Team Meeting ], (2005).
  • [6] Souza-Echer, M. P., Pereira, E. B., Bins, L. S., and Andrade, M. A. R., “A simple method for the assessment of the cloud cover state in high-latitude regions by a ground-based digital camera,” Journal of Atmospheric & Oceanic Technology 23(3), 437–447 (2006).
  • [7] Ong, K. W., “Developing cloud tracking and monitoring system,” final year project (fyp) report, Nanyang Technological University, Singapore (2012).
  • [8] Schaul, L., Fredembach, C., and Süsstrunk, S., “Color image dehazing using the near-infrared,” in [Image Processing (ICIP), 16th IEEE International Conference on ], 1629–1632 (Nov 2009).
  • [9] Huo, J. and Lu, D.-R., “Calibration and validation of an all-sky imager,” Atmospheric and Ocean Science Letters 2, 220–223 (2009).
  • [10] Shah, S. and Aggarwal, J., “Intrinsic parameter calibration procedure for a (high-distortion) fish-eye lens camera with distortion model and accuracy estimation,” Pattern Recognition 29(11), 1775–1788 (1996).
  • [11] Bakstein, H. and Pajdla, T., “Panoramic mosaicing with a 180-degree field of view lens,” in [Omnidirectional Vision, 3rd Workshop on ], 60–67, IEEE (2002).
  • [12] Sturm, P. and Ramalingam, S., “A generic concept for camera calibration,” in [Computer Vision (ECCV), European Conference on ], 1–13, Springer (2004).
  • [13] Kannala, J. and Brandt, S., “A generic camera calibration method for fish-eye lenses,” in [Pattern Recognition (ICPR), 17th International Conference on ], 1, 10–13, IEEE (2004).
  • [14] Scaramuzza, D., Martinelli, A., and Siegwart, R., “A toolbox for easily calibrating omnidirectional cameras,” in [Intelligent Robots and Systems (IROS), IEEE/RSJ International Conference on ], 5695–5701, IEEE (2006).
  • [15] van Walree, P., “Vignetting,” http://toothwalker.org/optics/vignetting.html (2002-2013).
  • [16] Reda, I. and Andreas, A., “Solar position algorithm for solar radiation applications,” Solar Energy 76(5), 577–589 (2004).