Geometric calibration of Colour and Stereo Surface Imaging System of ESA's Trace Gas Orbiter

07/03/2017 ∙ by Stepan Tulyakov, et al. ∙ EPFL, Istituto Nazionale di Astrofisica, Universität Bern, Idiap Research Institute

There are many geometric calibration methods for "standard" cameras. These methods, however, cannot be used for the calibration of telescopes with large focal lengths and complex off-axis optics. Moreover, specialized calibration methods for the telescopes are scarce in literature. We describe the calibration method that we developed for the Colour and Stereo Surface Imaging System (CaSSIS) telescope, on board of the ExoMars Trace Gas Orbiter (TGO). Although our method is described in the context of CaSSIS, with camera-specific experiments, it is general and can be applied to other telescopes. We further encourage re-use of the proposed method by making our calibration code and data available on-line.






1 Introduction

Quantity            Value
Optics              3-mirror off-axis anastigmat plus fold mirror
Detector            Raytheon Osprey 2k hybrid CMOS
Filters             675 nm, 485 nm, 840 nm, 985 nm
Focal length        880 mm
F#                  6.52
Pixel size          10 µm × 10 µm
Detector area       2048 × 2048 pixels (2048 × 1350 used)
FOV                 1.33° × 0.88°
Angle w.r.t. nadir  10.0 ± 0.2°
Figure 1: CaSSIS camera specification. The CaSSIS telescope is a three-mirror off-axis anastigmat with a fold mirror. The CaSSIS Filter Strip Assembly (FSA) comprises a Raytheon Osprey 2048 × 2048 hybrid CMOS detector with 4 colour filters mounted on it in a push-frame arrangement. Small dark bands between the filters reduce spectral cross-talk. The detector acquires un-smeared framelets along the ground track; the full along-track image is then assembled on the ground.

On March 15, 2016 the Trace Gas Orbiter (TGO) was launched to Mars, as part of the European Space Agency’s (ESA’s) ExoMars project. Its aim is to find trace gases, which may be evidence of geological or biological activity on Mars. The Colour and Stereo Surface Imaging System (CaSSIS) is TGO’s imaging system that provides visual context for sites identified as potential sources of trace gases. A brief specification of CaSSIS is provided in Tab. 1.

CaSSIS (Thomas et al. 2014, 2017) is a multi-spectral push-frame camera with 4 rectangular colour filters covering its sensor (Fig. 1). As the spacecraft moves along its orbit, each part of a targeted area becomes visible, sequentially, in each filter. By acquiring and mosaicking multiple images (“framelets”), CaSSIS is able to reconstruct a large four-colour image of the targeted area.

CaSSIS is also a stereo camera. It is capable of acquiring two images of a target area from two distinct points on the same orbit. While approaching the target area it acquires the first image, then it gets mechanically rotated and acquires the second image, while departing from the target area. By computing parallax from these two images, one can reconstruct a Digital Elevation Model (DEM) of the target area.

To prepare scientific products, such as colour images and DEMs, from raw CaSSIS images, one needs geometric camera parameters, such as its focal length and an optical distortion model. While their nominal values are known from the technical specification, their actual values might deviate from the nominal ones, due to imprecise manufacturing, mounting, or various incidents during the spacecraft cruise and operation. Therefore, their actual values have to be measured in the controlled environment of the clean room and validated during the commissioning phase in flight. This is the main goal of geometric calibration. Note that the photometric calibration of CaSSIS is described in Roloff et al. 2017.

There are many geometric calibration methods (Hartley and Zisserman 2003; Zhang 1999; Heikkila and Silven 1997; Tsai 1987) and tools (e.g., the MATLAB, OpenCV, and Caltech camera calibration toolboxes) for “standard” cameras. However, these off-the-shelf tools cannot be used for the calibration of telescopes such as CaSSIS, for two reasons. Firstly, most of these tools require images of calibration targets, such as a checkerboard chart. For telescopes with a large focal length, however, such targets would have to be very large and placed very far from the telescope, which is impractical. Secondly, telescopes often have off-axis optical designs with complex optical distortion that cannot be handled by off-the-shelf tools. Therefore, there is a need for specialized calibration methods, which are unfortunately scarce in the literature.

In this paper we describe the calibration method that we developed for CaSSIS. Although our method is described in the context of CaSSIS, it is general and can be applied to other telescopes. We further encourage re-use of the proposed method by making our calibration code and data available on-line.

We first discuss in § 2 the related work, and describe in § 3 the geometric camera model adopted in the paper. In § 4 we explain the distortion model selection procedure based on lens simulation, in § 5 we describe the on-ground calibration using images of a dotted calibration target captured through a collimator, and in § 6 we describe the in-flight calibration using star field images. Finally, in § 7 we show how refined geometric parameters improve the quality of map-projected CaSSIS images.

2 Related Work

2.1 Optical Distortion models

Off-the-shelf calibration tools typically assume a radial or a Brown-Conrady optical distortion model. The radial model (Hartley and Zisserman 2003) is a simple 5 degrees-of-freedom (DOF) model, only accounting for radially symmetric distortion. The Brown-Conrady model (Brown 1966) is more complex, with 7 DOF: in addition to the radially symmetric component, it accounts for tangential decentering distortion. These models, however, cannot represent the complex distortion of a camera with off-axis optical elements, such as CaSSIS, as we show in § 4. Complex distortion is better modeled by a bi-cubic (Kilpelä 1981) or a rational model (Claus and Fitzgibbon 2005), with 20 and 17 DOF, respectively. In our work we adopted the rational distortion model (discussed in § 3).

2.2 Star field calibration

For geometric calibration of a camera one needs images of calibration targets: objects with known real-world coordinates. Since the angular positions of stars are well known and documented in star catalogs (available, e.g., through the VizieR star catalog library), such as 2MASS and Tycho-2, star fields can serve as perfect calibration targets. Indeed, star field calibration is widely used for the star trackers that are an integral part of every spacecraft (Samaan et al. 2001; Pal and Bhat 2014; Junfeng et al. 2005). Star field calibration can also be used for the calibration of consumer-level cameras (Klaus et al. 2004). Unfortunately, all known star field-based calibration methods assume a simplistic optical distortion model and, therefore, cannot be applied to telescope calibration. Before stars from an image can be used for calibration, they must be identified in a star catalog, which fortunately can be done automatically with the Astrometry.net tool (Lang et al. 2010).

3 CaSSIS camera model

The camera model consists of: (1) the intrinsic model, (2) the rational optical distortion model and (3) the extrinsic model. In this section we discuss each part of the camera model in detail.

3.1 Intrinsic Model

The intrinsic model (Hartley and Zisserman 2003, pp. 153-158) describes the transformation from 3D camera frame coordinates (X, Y, Z) to 2D image coordinates (x, y) as follows:

x = f · X / Z + x_0,   y = f · Y / Z + y_0,

where f is the focal length of the camera, measured in pixels, and x_0, y_0 are the coordinates of the principal point in the image. In the case of CaSSIS, we assume that x_0 and y_0 correspond to the center of an image. Therefore, the CaSSIS intrinsic model has only 1 DOF.
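The pinhole projection above can be sketched in a few lines of Python (the focal length in pixels and the principal point below are illustrative values, not the calibrated CaSSIS parameters):

```python
import numpy as np

def project(points_cam, f, x0, y0):
    """Pinhole projection of 3-D camera-frame points onto the image plane.

    f is the focal length in pixels; (x0, y0) is the principal point."""
    X, Y, Z = points_cam.T
    return np.stack([f * X / Z + x0, f * Y / Z + y0], axis=1)

# Illustrative numbers: 880 mm focal length / 10 um pixels = 88000 pixels,
# principal point at the centre of the used 2048 x 1350 detector area.
pts = np.array([[0.0, 0.0, 1000.0],   # on the optical axis
                [1.0, 2.0, 1000.0]])  # slightly off-axis
uv = project(pts, f=88000.0, x0=1024.0, y0=675.0)
```

A point on the optical axis lands exactly on the principal point, which is why, once the principal point is pinned to the image centre, the intrinsic model has a single free parameter, f.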

3.2 Rational optical distortion model

The intrinsic camera model is complemented with an optical distortion model that describes the transformation from the distorted image coordinates (x_d, y_d) to the ideal image coordinates (x, y). We use the rational distortion model (Claus and Fitzgibbon 2005):

x = (A_1 · χ) / (A_3 · χ),   y = (A_2 · χ) / (A_3 · χ),   χ = (x_d², x_d y_d, y_d², x_d, y_d, 1)ᵀ,

where A_1, A_2, A_3 are the rows of a 3 × 6 rational distortion matrix A. Since the model is defined only up to the scale of A, it has 17 DOF.

Interestingly, while the rational model is not analytically invertible, it can represent its own inverse very precisely (Tang et al. 2012). We use this property and simultaneously estimate two rational models: one for distortion and another for correction.
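A minimal sketch of the rational model in Python; the lifted-coordinate formulation follows Claus and Fitzgibbon (2005), and the identity matrix below is only a sanity check, not a calibrated model:

```python
import numpy as np

def lift(x, y):
    # lifted coordinate vector chi = (x^2, x*y, y^2, x, y, 1)
    return np.array([x * x, x * y, y * y, x, y, 1.0])

def rational_undistort(A, xd, yd):
    """Map distorted coordinates (xd, yd) to ideal ones as ratios of quadratics.

    A is the 3 x 6 rational distortion matrix with rows A_1, A_2, A_3."""
    chi = lift(xd, yd)
    return (A[:2] @ chi) / (A[2] @ chi)

# With this A the model reduces to the identity map (no distortion):
A_identity = np.array([[0., 0., 0., 1., 0., 0.],
                       [0., 0., 0., 0., 1., 0.],
                       [0., 0., 0., 0., 0., 1.]])
```

Because scaling A leaves the ratios unchanged, one entry can be fixed (e.g. the last one, as in Tab. 2), leaving 17 DOF.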

3.3 Extrinsic Model

The extrinsic model (Hartley and Zisserman 2003, pp. 155-156) describes the transformation from the reference (spacecraft) frame coordinates X_ref to the camera frame coordinates X_cam as follows:

X_cam = R(θ_x, θ_y, θ_z) · X_ref + t,

where R is the rotation matrix, a function of the 3 Euler angles θ_x, θ_y, θ_z, and t is the translation vector. The translation vector is typically ignored, because the camera is much closer to the reference frame origin than to the scene. The extrinsic model therefore has 3 DOF in total.
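The extrinsic model can be sketched as follows; the x-y-z rotation order is one common convention, assumed here for illustration (the convention actually used for CaSSIS is not specified in this text):

```python
import numpy as np

def euler_to_matrix(rx, ry, rz):
    """Rotation matrix R = Rz @ Ry @ Rx from three Euler angles (radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def to_camera_frame(angles, x_ref):
    # translation ignored, as in the text: X_cam = R(angles) @ X_ref
    return euler_to_matrix(*angles) @ x_ref
```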

4 Distortion model selection

Before CaSSIS was assembled, we were provided with optical distortion data by the telescope manufacturer (RUAG Space Zurich, Switzerland), shown in Tab. 3, computed using a ray-tracing simulation. To find out which distortion model best represents the CaSSIS optical distortion, we fitted the radial, Brown-Conrady, rational and bi-cubic optical distortion models (see § 2.1) to the data, and compared the average Euclidean error of the models using leave-one-out cross-validation.

(a) radial (error = 3.169 pix)
(b) Brown-Conrady (error = 1.585 pix)
(c) rational (error = 0.088 pix)
(d) bi-cubic (error = 0.015 pix)
Figure 2: Distortion models fitted to simulated CaSSIS optical distortion data (see Tab. 3 in the appendix). Vectors show the transformation from the distorted to the ideal image. Contour lines show the magnitude of this transformation. The errors are the average Euclidean distances between the positions of the ideal pixels, as predicted by the model, and their actual positions. Note that the simple radial (a) and Brown-Conrady (b) models, with more than 1 pixel error, fail to represent the CaSSIS distortion, while the bi-cubic (d) and rational (c) models, with less than 0.1 pixel error, both perform well.

The resulting distortion fields and errors for each model are shown in Fig. 2. Simple radial and Brown-Conrady models suffer from more than 1 pixel error, and hence failed to represent the CaSSIS distortion, while bi-cubic and rational models, with less than 0.1 pixel error, performed well. We decided to use the rational model.
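The model comparison above boils down to generic leave-one-out cross-validation. A sketch, using a simple affine distortion model as a stand-in for the four models actually compared:

```python
import numpy as np

def loo_error(fit, predict, X, Y):
    """Leave-one-out cross-validation: fit on all points but one,
    measure the Euclidean error on the held-out point, and average."""
    errors = []
    for i in range(len(X)):
        keep = np.arange(len(X)) != i
        model = fit(X[keep], Y[keep])
        errors.append(np.linalg.norm(predict(model, X[i]) - Y[i]))
    return float(np.mean(errors))

# Stand-in model: affine map fitted by linear least squares.
def fit_affine(X, Y):
    Xh = np.hstack([X, np.ones((len(X), 1))])
    M, *_ = np.linalg.lstsq(Xh, Y, rcond=None)
    return M

def predict_affine(M, x):
    return np.append(x, 1.0) @ M
```

Running `loo_error` once per candidate model on the simulated (distorted, ideal) coordinate pairs yields the per-model errors reported in Fig. 2.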

5 On-ground calibration

After the CaSSIS camera was assembled and tested, we attempted to estimate the distortion model from a single image of a dotted calibration target, as in Claus and Fitzgibbon (2005). Because the focal length of CaSSIS is too large to acquire in-focus images of the target from a reasonable distance, we used a set-up with a collimator (Fig. 4).

After the image was acquired, we applied adaptive thresholding and connected-component detection to identify dots in the image. Then, we found the dots’ centers using a centroid algorithm. Finally, we fitted a regular rectangular grid to the dots’ centers, using a simple algorithm that starts from an arbitrarily-selected dot and expands the grid in the horizontal and vertical directions until no new dots can be added to the grid.
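The dot-extraction step can be sketched as follows; this toy version uses a single global threshold and a 4-connected flood fill in place of the adaptive thresholding and connected-components routine actually used:

```python
import numpy as np

def dot_centroids(img, thresh):
    """Threshold the image and return the centroid (row, col) of every
    connected bright blob."""
    mask = img > thresh
    seen = np.zeros_like(mask, dtype=bool)
    H, W = mask.shape
    centroids = []
    for r0 in range(H):
        for c0 in range(W):
            if mask[r0, c0] and not seen[r0, c0]:
                # flood-fill one blob, collecting its pixel coordinates
                stack, pixels = [(r0, c0)], []
                seen[r0, c0] = True
                while stack:
                    r, c = stack.pop()
                    pixels.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < H and 0 <= cc < W and mask[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                centroids.append(np.mean(pixels, axis=0))
    return centroids
```

The grid-fitting step then greedily attaches the nearest centroid in each direction, starting from an arbitrary seed dot.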

Figure 3: Image of the dotted target overlaid with the fitted grid. Red crosses show dots that were added to the grid. Blue points show dots that were not added to the grid.
Figure 4: On-ground calibration set-up. To acquire an in-focus image of the dotted calibration target from a reasonable distance, we placed it at the focus of the parabolic collimator.

The acquired image with the fitted grid is shown in Fig. 3. Analysis of the grid confirmed the presence of a small optical distortion in the image: the grid rows and columns appeared not as straight lines but as high-order curves. However, we failed to estimate a distortion field resembling Fig. 2(c) using the grid. This is probably because the experimental data was contaminated with a then-unknown residual distortion coming from the off-axis collimator.

6 In-flight calibration

During TGO commissioning and mid-cruise checkout, CaSSIS acquired multiple images of star fields, that we used for in-flight calibration. In § 6.1 we describe our in-flight calibration method and in § 6.2 we show the calibration results.

6.1 Method

Figure 5: Work flow of the in-flight calibration. Ellipses show the input and the output data and rectangles show processing steps. Figure 6: Positions, on the image sensor, of all stars detected in the combined “mcc motor” + “pointing cassis” set. Note that the detector is almost uniformly covered. On the top and bottom parts of the sensor we do not have observations, since they are covered by a non-transparent mask.

The overall work flow of the in-flight calibration is shown in Fig. 5; the individual procedures, performed in the order given, are described below.

Image assembly.

We assemble full-sensor images from several data packets, according to the information in the XML files produced by the telemetry conversion (each image is accompanied by housekeeping data).

De-noising and flattening.

We denoise every image by subtracting the median of several images from each image. This procedure helps us to get rid of fixed-pattern noise and hot pixels. Then we flatten each image by applying a Difference-of-Gaussian (DoG) filter.

Star field recognition.

We perform star detection and recognition using the open-source Astrometry.net library (Lang et al. 2010) and the 2MASS star catalog. The library takes an image of a star field as input, and outputs the coordinates of the stars in the image together with their corresponding coordinates in the equatorial frame J2000.

Date         Name             # images  # stars
2016-04-13   pointing cassis        45      539
2016-06-14   mcc motor              92     2573
2016-04-07   commissioning 2        12      670
Table 1: Datasets summary. Note that the calibration sets consist of sequences of 3-4 almost identical images, acquired within a short time interval. There are 10-60 stars in each image.

False detection removal

In the next step we collect the detected stars from all images and filter out erroneous detections. Since the calibration image sets consist of sequences of 3-4 almost identical images, we mark a star as a false detection if it is not re-detected at a similar position in at least 2 images.
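The filtering rule can be sketched as follows; the 2-pixel matching radius is an assumed value, not one taken from the text:

```python
import numpy as np

def filter_false_detections(detections, radius=2.0, min_images=2):
    """Keep a star only if it appears within `radius` pixels of the same
    position in at least `min_images` images of the sequence.

    `detections` is a list of (N_i, 2) arrays of star positions, one per image."""
    kept = []
    for i, stars in enumerate(detections):
        for star in stars:
            # count the images (including this one) where the star re-appears
            matches = 1 + sum(
                bool(np.any(np.linalg.norm(other - star, axis=1) < radius))
                for j, other in enumerate(detections)
                if j != i and len(other)
            )
            if matches >= min_images:
                kept.append((i, tuple(star)))
    return kept
```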

Camera rotation initialization

We find the camera rotation for every image independently. During the estimation, we fix the focal length of the camera to its nominal value and search for the camera rotation that minimizes the projection error, i.e. the Euclidean distance between the observed and predicted star positions, in each image individually. The optimization is done with the Levenberg–Marquardt algorithm (lsqnonlin in MATLAB). We initialize the optimization with rotation angles from the SPICE kernels (ExoMars Trace Gas Orbiter SPICE kernels, SPICE toolkit). The SPICE kernels contain information about the orientation and position of the spacecraft and its elements, received from its sensors, for any moment in time.
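The per-image rotation refinement is a small nonlinear least-squares problem. In the sketch below, a tiny Gauss-Newton loop with a numerical Jacobian stands in for MATLAB's lsqnonlin, and the star directions, angles and focal length in the example are synthetic:

```python
import numpy as np

def rot(angles):
    # R = Rz @ Ry @ Rx from three Euler angles (one assumed convention)
    rx, ry, rz = angles
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(dirs, f):
    # pinhole projection of star direction vectors (principal point omitted)
    return f * dirs[:, :2] / dirs[:, 2:3]

def refine_rotation(star_dirs, observed_xy, f_nominal, angles0, iters=20):
    """Camera rotation minimizing reprojection error, focal length held fixed."""
    def residuals(a):
        return (project(star_dirs @ rot(a).T, f_nominal) - observed_xy).ravel()

    x = np.asarray(angles0, dtype=float)
    for _ in range(iters):  # Gauss-Newton with a numerical Jacobian
        r = residuals(x)
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += 1e-7
            J[:, j] = (residuals(xp) - r) / 1e-7
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
    return x

# Synthetic check: recover a small known rotation from five stars.
star_dirs = np.array([[0.01, 0.0, 1.0], [0.0, 0.02, 1.0], [-0.015, 0.01, 1.0],
                      [0.02, -0.01, 1.0], [0.005, 0.005, 1.0]])
true_angles = np.array([0.001, -0.002, 0.0005])
observed = project(star_dirs @ rot(true_angles).T, 88000.0)
estimated = refine_rotation(star_dirs, observed, 88000.0, np.zeros(3))
```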

Figure 7: Residual errors in pixels after optical distortion estimation. The average error is 0.66 pixels. Color coding shows the actual error scale, which is the same as in Fig. 8(b). Note that the errors are small and spatially more uniform than the residual errors after BA shown in Fig. 8(b). This suggests that they come from inaccurate star detection.

Iterative Bundle Adjustment (BA)

In this step we search for a refined focal length and rotations that minimize the projection errors for all images simultaneously. The optimization is performed with the Levenberg–Marquardt algorithm, initialized with the focal length and rotation matrices found in the previous step. After each BA iteration, stars that have large residual projection errors compared to their spatial neighbors are rejected as outliers, and BA is performed again until no new outliers are found. Without this outlier rejection, the subsequent optical distortion estimation would fail.
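The reject-and-refit loop can be sketched generically; for brevity this version flags outliers against the global median residual rather than against spatial neighbors, as the actual method does:

```python
import numpy as np

def iterative_fit(fit, residual_norms, data, k=3.0):
    """Refit repeatedly, dropping observations whose residual exceeds
    k times the median inlier residual, until no new outliers are found."""
    mask = np.ones(len(data), dtype=bool)
    while True:
        model = fit(data[mask])
        r = residual_norms(model, data)
        new_outliers = mask & (r > k * np.median(r[mask]))
        if not new_outliers.any():
            return model, mask
        mask &= ~new_outliers

# Toy example: fitting a mean in the presence of one gross outlier.
data = np.array([1.0, 1.1, 0.9, 1.05, 10.0])
model, mask = iterative_fit(lambda d: d.mean(), lambda m, d: np.abs(d - m), data)
```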

Rational optical distortion estimation

In this step we “freeze” the intrinsic and the extrinsic camera models and search for a rational optical distortion model that minimizes the remaining projection error. The optimization is performed with the Levenberg–Marquardt algorithm, initialized with a “no distortion” hypothesis.

6.2 Results

We performed our experiments on 3 datasets: “mcc motor” and “pointing cassis”, both acquired in 2016 during the mid-cruise checkout, and “commissioning 2”, acquired in April 2016 during near-Earth commissioning. We selected these datasets since they contain images of dense star fields acquired with long 1.92-second exposures. We estimated the camera parameters using the combined “mcc motor” and “pointing cassis” set, which we call the training set, and validated the results on the “commissioning 2” set, which we call the validation set.

(a) first BA iteration
(b) fourth (final) BA iteration
Figure 8: Residuals after the first and the fourth BA iterations. Color coding shows the actual scale of the residuals. Crossed-out residuals correspond to the identified outliers. On the top and bottom parts of the sensor we do not have observations, since they are covered by a non-transparent mask. Note that after the first iteration (a), the residuals contain gross outliers, while after the fourth iteration (b) the residuals form a clear spatial pattern suggesting the presence of optical distortion. The average Euclidean error before BA is 3.65 pixels, after the first iteration it is 2.78 pixels, and after the fourth iteration it is 2.56 pixels.

The number of images and recognized stars in every set is shown in Tab. 1. As shown in Fig. 6, the stars from the training set cover the sensor densely and uniformly, allowing for good optical distortion estimation.

Using stars detected in the training set, we refined the camera rotations obtained from SPICE kernels for every image individually, while keeping the focal length of the camera fixed to the nominal. By refining the rotations, we reduced the median Euclidean distance between observed and the predicted star positions in training-set images from 147.41 to 3.42 pixels.

Then, we used the estimated camera rotations and the nominal focal length to initialize the iterative bundle adjustment, which refined the camera rotations and the focal length using all images simultaneously, while ignoring optical distortion. The iterative bundle adjustment converged after 4 iterations. The effect of the iterative outlier rejection scheme is shown in Fig. 8. Note that after the first iteration, the BA residuals contain gross outliers, while after the last iteration the residuals form a clear spatial pattern suggesting the presence of optical distortion. BA reduced the average Euclidean distance between the observed and predicted star positions in the training-set images from 3.56 to 2.56 pixels. The focal length found by BA is 875.93 mm, i.e. slightly shorter than the nominal focal length of 880 mm.

A11 A12 A13 A14 A15 A16
0.0643 0.4091 -0.0011 1.0003 0.0003 -0.0000
A21 A22 A23 A24 A25 A26
-0.0043 0.0635 0.4065 0.0002 0.9952 0.0004
A31 A32 A33 A34 A35 A36
-0.0501 0.0071 -0.0305 0.0636 0.4401 1.0000
Table 2: Parameters of the rational distortion model from § 3.2, estimated using star field images.

Then, we “froze” the focal length and camera rotations and estimated the rational distortion model. The estimated distortion field is shown in Fig. 9(a). Note that its shape resembles the distortion field obtained by fitting the rational model to the lens simulation data in § 4, duplicated for convenience in Fig. 9(b).

(a) from star field images
(b) from simulation
Figure 9: The distortion field estimated from star field images (a) and from the optical simulations (b) described in § 4. Vectors show the transformation from the distorted to the ideal image. Contours show the magnitude of the transformation. Note that the distortion fields are very similar in shape, with an apparent vertical translation of the field as the most obvious difference.

Parameters of the estimated distortion model are shown in Tab. 2. The distortion model fitting reduced the average Euclidean distance between the observed and the predicted star positions in the training-set images from 2.54 to 0.66 pixels. Moreover, as shown in Fig. 7, the residuals after fitting the optical distortion model became small and spatially uniform compared to the bundle adjustment residuals from Fig. 8(b). This suggests that the remaining residual errors probably come from inaccurate star detection.

Finally, we computed the error of the estimated camera model on the separate validation set, which was not used for model estimation (the extrinsic model is effectively ignored in this test). With the refined camera model, the average projection error is 0.47 pixels, while with the nominal camera model the error would be 3.56 pixels. This result suggests that our geometric calibration is valid.

7 Colour image experiment

Figure 10: Work flow of the color image experiment. Ellipses represent data, white rectangular boxes represent the standard ISIS functions, and yellow boxes represent the scripts implemented in Python.

A month after TGO’s Mars orbital insertion, CaSSIS captured several colour images of Mars from the elliptic capture orbit. In order to verify the effectiveness of our calibration, we map-projected these images using both the nominal and the refined geometric models, and compared the quality of the resulting images. The map-projection was performed using the USGS Integrated Software for Imagers and Spectrometers (ISIS). In § 7.1 we describe the work flow of our colour image experiment, and in § 7.2 we discuss the results.

7.1 Method

The work flow of the colour image experiment is shown in Fig. 10, with each individual procedure described below.

First of all, the data packets belonging to a particular sequence and colour band are extracted from the dataset. Next, we correct the optical distortion in every data packet. Then, we convert each data packet to the ISIS “.cub” format (cassis2isis) and add information from the SPICE kernel to each “.cub” (spiceinit). After that, we project all “.cub” files that correspond to a single band of an image sequence into a sinusoidal map (cam2map), while keeping the resolution of the projections consistent. Next, we mosaic all projected “.cub” files into one image corresponding to a single band of the sequence (automos). We repeat this process for every colour band. After that, we select one of the bands as a reference and match the map-projections of all other bands to it (map2map); this is required since, by default, the map projection of every band has its own resolution and coordinate limits. Finally, we combine the individual colour bands into a multi-band cube (cubeit).
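As a sketch, the per-band part of this pipeline could be scripted by generating the ISIS command lines. The file names below are hypothetical, and the parameter names (from=, to=, map=, fromlist=, mosaic=) follow common ISIS conventions but should be checked against the ISIS documentation before use:

```python
def band_commands(packet_files, band):
    """Build ISIS command lines that turn one band's data packets into a
    single sinusoidal-projection mosaic (file names hypothetical)."""
    cmds, projected = [], []
    for packet in packet_files:
        cub = packet.replace(".xml", ".cub")
        proj = cub.replace(".cub", ".proj.cub")
        cmds.append(f"cassis2isis from={packet} to={cub}")
        cmds.append(f"spiceinit from={cub}")
        cmds.append(f"cam2map from={cub} to={proj} map=sinusoidal.map")
        projected.append(proj)
    # automos reads the cube names from a list file, written beforehand
    list_file, mosaic = f"band_{band}.lis", f"band_{band}.cub"
    cmds.append(f"automos fromlist={list_file} mosaic={mosaic}")
    return cmds, projected, mosaic
```

The generated commands would then be run via subprocess, with map2map and cubeit applied analogously across the per-band mosaics.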

7.2 Results

During our experiments we noticed that the map-projected images of the individual colour bands were misaligned along the track by 1-10 pixels, depending on the sequence. This is possibly caused by the off-nadir pointing of the camera relative to its rotation axis being slightly different from the nominal value; it needs to be investigated further. Meanwhile, we checked the validity of the optical distortion model by verifying that the colour band images are distortion-free: we compensated the colour band misalignment with a simple shift and compared the colour band images.
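The shift compensation can be sketched as a brute-force search for the along-track integer offset that maximizes the correlation between two bands (a simple stand-in for whatever matching was actually used):

```python
import numpy as np

def best_shift(reference, band, max_shift=10):
    """Along-track integer shift of `band` that best aligns it to `reference`,
    found by maximizing the Pearson correlation over candidate offsets."""
    best_corr, best_s = -np.inf, 0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(band, s, axis=0)
        corr = np.corrcoef(reference.ravel(), shifted.ravel())[0, 1]
        if corr > best_corr:
            best_corr, best_s = corr, s
    return best_s
```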

The results of this comparison are shown in Fig. 11. As seen from the figure, when we use nominal camera parameters, the projected image (10(a)) has color fringes (close-up #1, 2 and 3) and stitching artifacts (close-up #3), whereas when we use refined parameters, the projected image (10(b)) is almost perfect.

This confirms that the developed calibration method works and improves the quality of the final scientific products.

(a) Nominal parameters
(b) Refined parameters
Figure 11: Map projection of a CaSSIS image (first “framelet” acquired on November 22, 2016 at 16:01:10). Images are shown in false colour (RED → red, NIR → green, BLU → blue channel). On-ground resolution is 35.52 meters/pixel. Colour bands are aligned as described in § 7.2. Note that when we use the nominal camera parameters, the projected image (a) has colour fringes (close-ups #1, 2 and 3) and stitching artifacts (close-up #3), while when we use the refined parameters, the projected image (b) is almost perfect. Some prominent artifacts in the form of vertical and horizontal banding over the image are due to incorrect photometric calibration, not to incorrect geometric calibration; this issue is investigated independently.

8 Conclusion

In this paper we developed a method for geometric calibration of telescopes with large focal length and complex optical distortion. The proposed method was used to refine the nominal parameters of the CaSSIS camera on board ESA’s TGO. As a result, we were able to improve the quality of scientific products, such as color images.

Our method is general and can be used for the calibration of other telescopes. We further encourage re-use of the proposed method by making our calibration code and data available on-line.


The authors wish to thank the spacecraft and instrument engineering teams for the successful completion of the instrument. CaSSIS is a project of the University of Bern and funded through the Swiss Space Office via ESA’s PRODEX programme. The instrument hardware development was also supported by the Italian Space Agency (ASI) (ASI-INAF agreement no. I/018/12/0), INAF/Astronomical Observatory of Padova, and the Space Research Center (CBK) in Warsaw. Support from SGF (Budapest), the University of Arizona (Lunar and Planetary Lab.) and NASA is also gratefully acknowledged. We also acknowledge support from the NCCR PlanetS.


Table 3: The CaSSIS optical distortion data, computed using a ray-tracing simulation (simulation_optical_distortion_predict.csv): the ideal-image coordinates x, y and the distorted-image coordinates x_d, y_d, in mm, given relative to the image center.



  • Brown (1966) Brown, D.C., 1966. Decentering distortion of lenses. Photogrammetric Engineering and Remote Sensing.
  • Claus and Fitzgibbon (2005) Claus, D., Fitzgibbon, A.W., 2005. A rational function lens distortion model for general cameras, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), pp. 213–219.
  • Hartley and Zisserman (2003) Hartley, R., Zisserman, A., 2003. Multiple View Geometry in Computer Vision. Cambridge University Press.
  • Heikkila and Silven (1997) Heikkila, J., Silven, O., 1997. A four-step camera calibration procedure with implicit image correction, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 1997), pp. 1106–1112.
  • Junfeng et al. (2005) Junfeng, X., Wanshou, J., Jianya, G., 2005. On-orbit stellar camera calibration based on space resectioning. XXXVII ISPRS Congress Proceedings.
  • Kilpelä (1981) Kilpelä, E., 1981. Compensation of systematic errors of image and model coordinates. Photogrammetria 37, 15–44.
  • Klaus et al. (2004) Klaus, A., Bauer, J., Karner, K., Elbischger, P., Perko, R., Bischof, H., 2004. Camera calibration from a single night sky image. doi:10.1109/CVPR.2004.1315026.
  • Lang et al. (2010) Lang, D., Hogg, D.W., Mierle, K., Blanton, M., Roweis, S., 2010. Astrometry.net: Blind astrometric calibration of arbitrary astronomical images. The Astronomical Journal 139, 1782.
  • Pal and Bhat (2014) Pal, M., Bhat, M.S., 2014. Autonomous star camera calibration and spacecraft attitude determination. Journal of Intelligent & Robotic Systems, 323–343. doi:10.1007/s10846-014-0068-z.
  • Roloff et al. (2017) Roloff, V., Thomas, N., Cremonese, G., 2017. On-ground performance and calibration of the ExoMars Trace Gas Orbiter CaSSIS imager. Space Science Reviews.
  • Samaan et al. (2001) Samaan, M.A., Griffith, T., Singla, P., Junkins, J.L., 2001. Autonomous on-orbit calibration of star trackers.
  • Tang et al. (2012) Tang, Z., Gioi, R.V., Monasse, P., Morel, J., 2012. Self-consistency and universality of camera lens distortion models. HAL-ENPC.
  • Thomas et al. (2014) Thomas, N., Cremonese, G., Banaszkiewicz, M., Bridges, J., Byrne, S., Da Deppo, V., Debei, S., El-Maarry, M., Hauber, E., Hansen, C., et al., 2014. The Colour and Stereo Surface Imaging System (CaSSIS) for ESA's Trace Gas Orbiter, in: Eighth International Conference on Mars.
  • Thomas et al. (2017) Thomas, N., Cremonese, G., Banaszkiewicz, M., Bridges, J., Byrne, S., Da Deppo, V., Debei, S., El-Maarry, M.R., Hauber, E., Hansen, C.J., Ivanov, A., Markiewicz, W., Massironi, M., McEwen, A.S., Okubo, C., Orleanski, P., Pommerol, A., Wajer, P., Wray, J., 2017. The Colour and Stereo Surface Imaging System (CaSSIS) for the ExoMars Trace Gas Orbiter. Space Science Reviews.
  • Tsai (1987) Tsai, R., 1987. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal on Robotics and Automation 3, 323–344.
  • Zhang (1999) Zhang, Z., 1999. Flexible camera calibration by viewing a plane from unknown orientations, in: Seventh IEEE International Conference on Computer Vision (ICCV 1999), pp. 666–673.