I Introduction
Light detection and ranging (LIDAR) sensors, also known as laser range finders (LRF), are active depth sensors that transmit and receive laser light to measure the distance to objects that reflect light back to the sensor. Since the advent of compact planar LIDARs, such as the Sick LMS 200 or the Hokuyo URG-04LX, they have become an important depth sensor in the perception stack of autonomous mobile robots.
LIDAR sensors exhibit unique noise characteristics. Their measurements are influenced by the distance to the object, the angle of incidence, the surface properties of objects (e.g. reflectivity and color), the environment temperature, and many other factors, which have been studied extensively [1, 2]. The goal of such characterizations is to derive an empirical sensor model "top-down": given LIDAR measurements under varying conditions, such as surface types, inclination angles, etc., they produce a probabilistic model of noise characteristics for a particular sensor. This statistical model then further informs uncertainty-aware localization and mapping algorithms.
In this work, we take a different approach: starting from first principles on how laser light interacts with surfaces under various conditions, we implement a physically plausible simulation that recovers various effects encountered with physical LIDARs, such as spurious measurements and reflected and refracted rays. We simulate the measurement process of continuous-wave LIDARs, where the phase shift between two amplitude-modulated laser light waves is calculated to measure the distance. Using automatic differentiation, we compute the gradients of all parameters in our simulation with respect to the simulated measurements and apply gradient-based optimizers to find the simulation parameters that most closely match the true observations.
To the best of our knowledge, our approach is the first physically plausible simulation that captures the measurement process of a continuous-wave LIDAR. Our real-world experiments show that our proposed model can accurately reproduce light-surface phenomena, such as refraction and specular reflection, that have not been considered in most previous LIDAR models.
II Related Work
Detailed characterizations of 2D LIDAR sensors have been presented for the Hokuyo laser scanner URG-04LX in [3, 4, 2, 5]. The focus of these works is on experimentally analyzing how the LIDAR measurements are influenced by object color, material, incidence angle, etc. Our goal, in contrast, is to model the physics behind light-surface interaction and the LIDAR's measurement process to allow for the automatic inference of the conditions that caused the observed measurements. A detailed analysis of the physical properties of LIDARs is given in [6], where the authors provide suggestions on how certain real-world effects can be modeled in simulation.
LIDARs and other active range finders are typically modelled in robot simulators to allow for the development of localization and mapping algorithms. Instead of accounting for the intricacies unique to these sensors – as done in the works on characterization – most general-purpose simulators for robotics, such as Gazebo [7] and MuJoCo [8], typically implement z-buffering or ray casting to compute the depth and perturb it by zero-mean Gaussian noise, leading to a significant gap between simulation and reality [9]. In this work, we aim for a more detailed simulation that takes into account the interaction of laser light with different surfaces, and many other physical phenomena, as described in Sec. III. Similar to [10], we implement ray tracing to account for reflective surfaces. Additionally, we model the sampling process taking place in continuous-wave LIDARs, as well as the detailed light-surface interaction that considers transmitting materials, such as glass and plastic.
Using gradients of models and algorithms has a long history in engineering, e.g. where gradients of ray-tracing equations have been derived to optimize the design of optical systems [11]. With the advent of automatic differentiation frameworks, accurate gradients can be computed algorithmically, powering model-predictive optimal control algorithms based on robotic simulators [12, 13, 14], design optimization [11], and various other use cases [15, 16]. In the context of simulating vision, prior work in computer graphics has focused on differentiable rendering systems [17, 18, 19, 20, 21] that tackle the problem of inverse graphics. Overall, our framework follows a similar idea of constructing a physics-based model of the real system whose parameters can be efficiently estimated using gradient-based optimization.
III LIDAR Model
In the following section, we describe our simulation of a continuous-wave LIDAR, starting from the physics of light-surface interaction to the particular measurement process in the considered sensor.
III-A Surface Interaction
When the laser light hits an object at a point p, depending on the surface and the angle of inclination, the beam is reflected, scattered, and transmitted.
In computer graphics, researchers and practitioners use the rendering equation [22]

(1)  L_o(p, ω_o) = L_e(p, ω_o) + ∫_Ω f(p, ω_o, ω_i) L_i(p, ω_i) cos θ_i dω_i,

which expresses the radiance L_o of the reflected light ray from point p along the direction ω_o, when a light ray of radiance L_i tracking the direction ω_i collides with a medium at p, integrated over all incoming light directions ω_i from the unit hemisphere Ω. θ_i is the angle made by the incident light ray with the surface normal at p. These quantities are depicted in Figure 2 (left).
The term L_e(p, ω_o) captures the effects of blackbody emissions from the material on which the light was incident. All substances having a temperature above absolute zero emit electromagnetic radiation. The radiance of such electromagnetic radiation with a wavelength λ emitted by a blackbody at a particular temperature can be derived using Planck's law [22, Chapter 12.1.1].
The integral term in Equation 1 quantifies the amount of radiance imbibed by the reflected light ray along ω_o from the light rays incident at p. This term is referred to as a bidirectional reflectance distribution function (BRDF). The Fresnel equations give the fraction of light reflected along the mirror direction ω_r, denoted as F_r(ω_o). Thus, if we set the reflection distribution function f(p, ω_o, ω_i) = F_r(ω_o) δ(ω_i − ω_r) / |cos θ_r|, then the integral reduces to F_r(ω_o) L_i(p, ω_r). Incidentally, F_r is a function of the refractive indices of the participating media and can be computed in closed form [22, Chapter 8.2.1].
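As a concrete illustration, the unpolarized Fresnel reflectance for a dielectric interface can be evaluated in closed form following the formulation in pbrt [22, Chapter 8.2.1]. The function name and interface below are illustrative, not taken from our implementation:

```cpp
#include <algorithm>
#include <cmath>

// Unpolarized Fresnel reflectance F_r for a dielectric interface.
// eta_i, eta_t: refractive indices of the incident and transmitting media;
// cos_theta_i: cosine of the angle between incident ray and surface normal.
double fresnel_dielectric(double cos_theta_i, double eta_i, double eta_t) {
    cos_theta_i = std::clamp(cos_theta_i, 0.0, 1.0);
    double sin_theta_i = std::sqrt(std::max(0.0, 1.0 - cos_theta_i * cos_theta_i));
    double sin_theta_t = eta_i / eta_t * sin_theta_i;   // Snell's law
    if (sin_theta_t >= 1.0) return 1.0;                 // total internal reflection
    double cos_theta_t = std::sqrt(std::max(0.0, 1.0 - sin_theta_t * sin_theta_t));
    double r_parl = (eta_t * cos_theta_i - eta_i * cos_theta_t) /
                    (eta_t * cos_theta_i + eta_i * cos_theta_t);
    double r_perp = (eta_i * cos_theta_i - eta_t * cos_theta_t) /
                    (eta_i * cos_theta_i + eta_t * cos_theta_t);
    // Average the two polarization components for unpolarized light.
    return 0.5 * (r_parl * r_parl + r_perp * r_perp);
}
```

At normal incidence from air into glass (η ≈ 1.5), this yields the familiar reflectance of about 4%; beyond the critical angle inside the denser medium, it returns 1 (total internal reflection).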
Apart from being specularly reflected, light can be scattered and diffused by the interacting surface. Diffuse reflection can be described using a BRDF (see Figure 2 for an illustration of uniform diffusion around a point, highlighted by the turquoise hemisphere). To accurately simulate the diffusive and reflective properties of surfaces, they are modelled as a collection of small microfacets. The distribution of microfacet orientations is described statistically to account for the roughness, specular and diffusive properties of surfaces.
Perfect diffusion is described by the Lambertian model that relates the intensity of the reflected light to the cosine of its incidence angle, resulting in surfaces (parameterized by brightness ρ) that reflect light into all directions equally:

(2)  f(p, ω_o, ω_i) = ρ / π
Besides perfect diffuse reflection, we additionally implement the Oren-Nayar model that represents the microfacet orientation angles by a Gaussian distribution which is parameterized by its standard deviation. Common LIDAR models assume Lambertian surfaces everywhere. In practice, this assumption is often invalidated – particularly when mobile robots are employed in indoor areas, such as office spaces or shopping malls, where many highly reflective or transparent surfaces, such as windows, are present.
A bidirectional transmittance distribution function (BTDF) captures the amount of radiance carried by the light transmitted into the reflecting medium (BSDF(2) in Figure 2 shows light transmission in glass). Since F_r(ω_o) is the fraction of light reflected, 1 − F_r(ω_o) gives the fraction of light transmitted or refracted. Therefore, by introducing Snell's law of refraction to the integral term in Equation 1, a BTDF can be written as

f_t(p, ω_o, ω_i) = (1 − F_r(ω_o)) δ(ω_i − ω_t) / |cos θ_t|,

where ω_t is the refracted direction of the light ray and η_i, η_t are the refractive indices of the participating media.
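The refracted direction ω_t itself follows from the vector form of Snell's law; a minimal sketch (the vector type and function names are illustrative, not our simulator's API):

```cpp
#include <array>
#include <cmath>
#include <optional>

using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Refracted ray direction from the vector form of Snell's law.
// d: unit incident direction pointing toward the surface,
// n: unit surface normal pointing toward the incident medium,
// eta = eta_i / eta_t. Returns nullopt on total internal reflection.
std::optional<Vec3> refract(const Vec3& d, const Vec3& n, double eta) {
    double cos_i = -dot(n, d);
    double sin2_t = eta * eta * (1.0 - cos_i * cos_i);
    if (sin2_t > 1.0) return std::nullopt;  // total internal reflection
    double cos_t = std::sqrt(1.0 - sin2_t);
    Vec3 t;
    for (int k = 0; k < 3; ++k)
        t[k] = eta * d[k] + (eta * cos_i - cos_t) * n[k];
    return t;
}
```

At normal incidence the ray passes straight through; at oblique angles it bends toward the normal when entering the optically denser medium.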
The combination of BTDFs and BRDFs describes the scattering of light upon its encounter with a medium. Any such combination is called a bidirectional scattering distribution function (BSDF). A material is often a combination of multiple BSDFs – for example, transparent plastic, besides diffusing light slightly, has specular reflection and transmission components.
The integral in Equation 1 needs to be solved for each light ray, which can in most cases only be approximated through sampling. Instead of sampling rays from the light source, we assume the LIDAR is the only source of radiance and trace rays starting from the sensor's location. This technique is widely known as Whitted integration [23], a recursive ray-tracing algorithm that generates a tree of light rays (we use a maximum recursion depth of five). At each surface interaction, the ray-tracing tree is expanded by new rays that are cast into the scene according to the reflective and refractive properties defined by the BSDF.
In contrast to the rendering of a typical 2D image, we have to take into account the time it takes for the laser light to be reflected back so that we can simulate the measurement process in a LIDAR. Therefore, for each interaction at point p, we record both the distance d travelled so far along the render tree and the radiance received from p. As the light travels a distance d, its intensity attenuates according to the inverse square law I ∝ 1/d².
Based on the physics-based rendering toolkit pbrt [22], we implement ray tracing with a watertight ray-triangle intersection algorithm and represent all scene geometry as triangle meshes. To efficiently find intersecting meshes, we store the scene geometry in a k-d tree.
III-B Measuring Distance
There are two main types of operating principles for LIDARs [24]: pulsed systems emit a laser pulse and measure the time until the pulse is received to compute the distance, while continuous-wave (CW) LIDARs emit a sinusoidal signal of known wavelength and estimate the distance by measuring the phase difference between the transmitted and received signal (Figure 2 right). CW LIDARs typically require less powerful lasers, thus reducing hardware costs.
The LIDAR which we model in this work is the URG-04LX LRF by Hokuyo Automatic Co., Ltd. Being part of the URG series, as described in [25], it is a low-cost, planar CW LIDAR with a field-of-view of 240°, transmitting laser rays into 682 directions (thus achieving an angular resolution of approximately 0.36°) at a 10 Hz scan frequency. It modulates the amplitude of the signal to measure distance.
For a pulse-ranging LIDAR, given the two-way travel time t of the emitted light, the measured range is computed as d = c·t/2, where c is the speed of light. As we are interested in modeling CW LIDARs, the range is instead computed from the measured phase difference Δφ between the transmitted and received signal. d is proportional to the period time T and the phase difference Δφ:

(3)  d = (c/2) · (Δφ/2π) · T = c Δφ / (4π f)
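This phase-to-range conversion is a one-liner; the sketch below is illustrative, and the 50 MHz modulation frequency in the usage comment is a hypothetical value, not the sensor's actual modulation frequency:

```cpp
#include <cmath>

// Range from the measured phase difference of a CW LIDAR (Equation 3).
// The light travels the distance twice, hence the factor of 2 in the
// denominator: d = (c / 2) * (delta_phi / (2 * pi)) / f_mod.
// delta_phi: phase difference in radians; f_mod: modulation frequency [Hz].
double range_from_phase(double delta_phi, double f_mod) {
    const double c = 299792458.0;  // speed of light [m/s]
    return c * delta_phi / (4.0 * M_PI * f_mod);
}
```

For example, with a hypothetical 50 MHz modulation, a phase shift of π corresponds to c/(4·50 MHz) ≈ 1.5 m, i.e. a quarter of the modulation wavelength.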
In the case of the Hokuyo LRF, the amplitude-modulated laser light is received by an avalanche photodiode [25]. Throughout the ray-tracing process for a single measurement ray, we compute both the actual range traveled up to the light-surface interaction and the received radiance (cf. Subsection III-A). Each such return produces two new received waveforms over 15 periods at the two modulation frequencies, with phase shifts Δφ₁, Δφ₂ (given the actual range d, the actual phase shift for the waveform of modulation frequency f is computed as Δφ = 4π d f / c) and corresponding amplitudes (Figure 2 right). For the two frequencies, we sum up the received waveforms until the ray-tracing process is complete.
Next, the measurement process begins by sampling both waveforms through 30 samples at times tᵢ. The samples are reordered to form a high-resolution waveform (Figure 4 right). Using a simple discrete Fourier series approximation, the phase is computed via

(4)  φ = arctan( (Σᵢ sᵢ cos(2π i / N)) / (Σᵢ sᵢ sin(2π i / N)) ),

where sᵢ denotes the i-th of the N = 30 samples.
Analogously, the phase is computed for the transmitted signal, and the phase difference to the received signal is computed by subtracting both phases. From this phase shift Δφ, the measured distance is computed via Equation 3. The two phase shifts from the waveforms at the two modulation frequencies are compared in order to resolve the ambiguity in the phase difference of a single wave [25].
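The sampling-based phase estimate can be sketched as a single-bin discrete Fourier correlation. The sine-referenced phase convention below is an assumption on our part; the sensor's exact firmware computation is not public:

```cpp
#include <cmath>
#include <vector>

// Phase of a sampled sinusoid via a single-bin discrete Fourier series
// (cf. Equation 4): correlate the N samples with one period of sine and
// cosine and take the arctangent of the quotient. Assumes the samples
// cover exactly one period and follow sin(2*pi*i/N + phi).
double dft_phase(const std::vector<double>& samples) {
    const int n = static_cast<int>(samples.size());
    double s = 0.0, c = 0.0;
    for (int i = 0; i < n; ++i) {
        double a = 2.0 * M_PI * i / n;
        s += samples[i] * std::sin(a);
        c += samples[i] * std::cos(a);
    }
    return std::atan2(c, s);  // phase offset phi
}
```

By the discrete orthogonality of sine and cosine over a full period, this recovers the phase exactly (up to floating-point error) for a clean sinusoid, and degrades gracefully under additive noise.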
LIDARs exhibit spurious measurements (Figure 3): when the laser beam partially hits an obstacle closer to the scanner and another obstacle at a greater distance, the resulting measurement lies somewhere between these two obstacles. This phenomenon follows from the conical shape of the laser beam. As noted by Rosenberger et al. [6], a single infinitesimally thin ray is not sufficient to sample the behavior of beam divergence. Instead, we model the laser beam using three rays which are aligned around the central laser beam ray (Figure 5), matching the beam diameter specified in the sensor's datasheet at a given distance.
IV Experimental Results
[Figure: Visualization of the pose estimation results, obtained by transforming the point cloud of the given depth measurements by the estimated sensor transform.]
Besides retaining various effects from the real world, we demonstrate in our experiments the capability of our simulation to serve as a model that can be fitted to actual measurements from a physical LIDAR. To this end, we focus on scenarios where certain inputs to our model need to be estimated – such as the sensor’s pose, material properties of intersecting surfaces, and parameters internal to the sensing process – so that the resulting simulated measurements closely match the real data.
We investigate several scenarios and replicate them in our simulator to study various applications of the proposed LIDAR model. As shown in Figure 1 (3), we created a cuboid environment of known width, depth, and wall height in which the LIDAR is placed. We track the LRF using a Vicon motion capture system and place markers on the objects we want to track to achieve sub-millimeter ground-truth accuracy.
Throughout the experiments, the optimization objective is

(5)  θ* = argmin_θ ‖z_sim(θ) − z_real‖₂²,

where the reality gap between the simulated measurements z_sim(θ) and the actual measurements z_real is to be minimized by adapting the simulation parameters θ. These parameters are defined separately for each experiment.
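To illustrate the principle behind Equation 5, the following toy sketch fits a single scalar parameter by gradient descent on the squared reality gap, with finite differences standing in for the automatic differentiation and L-BFGS used in the actual pipeline (all names are illustrative):

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Squared reality gap ||z_sim(theta) - z_real||^2 for a scalar parameter.
double objective(const std::function<std::vector<double>(double)>& sim,
                 const std::vector<double>& z_real, double theta) {
    std::vector<double> z = sim(theta);
    double err = 0.0;
    for (std::size_t i = 0; i < z.size(); ++i) {
        double d = z[i] - z_real[i];
        err += d * d;
    }
    return err;
}

// Minimize the objective with plain gradient descent; central finite
// differences approximate the gradient that autodiff would provide.
double fit_parameter(const std::function<std::vector<double>(double)>& sim,
                     const std::vector<double>& z_real,
                     double theta0, double lr, int iters) {
    const double h = 1e-6;
    double theta = theta0;
    for (int it = 0; it < iters; ++it) {
        double g = (objective(sim, z_real, theta + h) -
                    objective(sim, z_real, theta - h)) / (2.0 * h);
        theta -= lr * g;
    }
    return theta;
}
```

With a linear "simulator", the objective is quadratic in θ and gradient descent converges to the true parameter; the real problem differs in that the simulated measurements are a nonlinear, ray-traced function of many parameters.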
Our simulator is implemented in C++ and uses the automatic differentiation framework Stan Math [26] to algorithmically compute gradients in our experiments. We use the implementation of the gradient-based optimization algorithm L-BFGS with Wolfe line search from the Ceres library [27] to optimize Equation 5 with the calculated gradients throughout all experiments.
IV-A Localization in a Known Environment
The LIDAR is placed in a simple cuboid environment where the wall geometries and surface properties are known, and the goal in this experiment is to localize it. Such a scenario is very common in LIDAR odometry, where the current set of range measurements is registered with the point cloud obtained from a previous observation to obtain the rigid transform between the two poses.
Given a set of range measurements, we optimize Equation 5 for the sensor's SE(2) pose. Starting from an initial pose estimate that is offset from the ground-truth pose, we compare our approach to common point cloud registration algorithms. Given the depth information from the current pose (our initial guess), their task is to find the relative transform to a target point cloud, which stems from the actual LIDAR. Our baselines are Iterative Closest Point (ICP) [28], Generalized ICP (GICP) [29], and Normal Distributions Transform (NDT) [30]. We use their implementations from the Point Cloud Library [31].

As shown in Figure 6 (left), the optimization using our proposed model converges to the correct pose after 3-4 iterations. Executed on an Intel Core i7-8700K CPU, our approach converges while the respective point cloud registration algorithms converge after 10-20 iterations (since we do not have access to the iteration-wise performance of these algorithms, we show a flat line in the graph). GICP achieves the highest baseline accuracy with a final pose error of 0.13. While the registration baselines finish the computation one to two orders of magnitude faster than our approach, we note that our current implementation can be sped up considerably through parallelization.
While classical point cloud registration algorithms are generally able to find good solutions for simple scenarios such as the one shown in Figure 6, where the points follow a simple rectangular geometry and the shape largely remains constant under the rigid transformation, depth measurements can become more complex when highly reflective surfaces (e.g., mirrors or glass surfaces) are present. Such conditions are encountered more and more frequently, for example on autonomous vehicles, where the metallic paint on cars greatly violates the Lambertian diffusion assumption underlying most conventional LIDAR-based mapping and localization approaches.
To this end, we investigate an entirely simulated scenario where the LIDAR is exposed to multiple specular reflectors (mirrors), a glass surface, and a diffuse surface (see Figure 7 left). Starting from an initial pose close to the actual pose, the ground-truth pose needs to be inferred given the sensor measurements. The point cloud registration algorithms achieve significantly less improvement over the initial guess than our optimization-based approach. As the point clouds differ significantly, a model that relies solely on their shapes is prone to find suboptimal solutions, while the gradient-based optimization applied to Equation 5 finds the accurate pose within 3-4 iterations (see Figure 7 right).
IV-B Tracking a Mirror
Besides localizing the LIDAR, other objects in the environment can be tracked if they are modelled within the simulator. To this end, we place a mirror near the LIDAR (see Figure 8) and estimate its SE(2) pose using gradient-based optimization. Similar to Subsection IV-A, L-BFGS applied to our model estimates the correct pose by minimizing the L2 norm between the simulated and real depth measurements within six iterations (see Figure 8 right).
Our current approach suffers from the loss of gradient information when the object to be tracked is moved outside the field-of-view of the LIDAR, e.g. when it is behind the walls or in the blind spot of the sensor. Therefore, we restrict the position of the mirror to an area in front of the LIDAR within the wall boundaries. We apply a transformation to the x and y coordinates that squashes the unbounded real optimization variables into a bounded interval and maps the result to the area within the walls in front of the LIDAR. Approaches from the computer graphics community, such as Soft Rasterizer [20], overcome the issue of missing gradient information by relaxing the intersection checks through a convex combination of blurred edges, so that even faces that are hidden contribute to the intersection test.
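One way to realize such a bounded reparameterization is a tanh squashing of each unconstrained coordinate onto an interval; the exact transform we use is an implementation detail, so the sketch below is illustrative:

```cpp
#include <cmath>

// Map an unconstrained optimization variable u to the interval [lo, hi]
// via a smooth, differentiable tanh squashing. Gradients with respect
// to u remain well defined everywhere, which is the point of the trick.
double squash(double u, double lo, double hi) {
    return lo + 0.5 * (hi - lo) * (std::tanh(u) + 1.0);
}
```

The optimizer then works on u directly; u = 0 maps to the interval midpoint, and large |u| saturates smoothly toward the boundaries instead of leaving the admissible area.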
IV-C Diode Calibration and Inferring Surface Properties
As noted in several characterizations of low-energy LIDARs, such as the URG-04LX [4], they suffer from a strong dependence of the measured depth on the received radiance. As shown in Figure 1 (2), the dark sections of the checkerboard appear closer than the more reflective white sections, resulting in a staircase effect. While the exact cause of this effect is unknown to us, we seek to replicate this behavior through a data-driven approach and leverage our model to infer surface properties given the sensor's range measurements.
We model the relationship between radiance and phase shift as a quadratic polynomial and regress its coefficients through optimizing Equation 5. This can be seen as a form of sum-of-squares optimization [32]. The phase shift used to modulate the two continuous waves in Equation 3 is modified as Δφ' = Δφ + c₂I² + c₁I + c₀, where I is the received radiance and c₀, c₁, c₂ are the polynomial coefficients to be estimated. We model the checkerboard using dark and bright stripes of the same dimensions as the real checkerboard (Figure 9 left) and assign Lambertian materials of separate reflectivity coefficients (cf. Equation 2) to the black and white sections.
It should be noted that this problem is particularly difficult since we do not have access to the actual intensities of each return but instead only the filtered distance measurements. Nonetheless, we are able to estimate the two different materials of the checkerboard and converge to polynomial coefficients within 9 iterations (Figure 9 right) that closely match the real measurements (Figure 9 center). When we move the LIDAR from its initial configuration, we see that the simulated measurements still accurately retain the staircase pattern from the real world (Figure 10).
V Conclusion
We have presented a physics-based model of a continuous-wave LIDAR that allows for the optimization-based inference of model parameters. It captures the detailed interaction between the laser light and various surfaces, accounting for geometrical and material properties. Through experiments with real-world measurements from the Hokuyo URG-04LX LRF, we have demonstrated various applications of our model in localization, calibration, and tracking.
Future research is directed towards extending our proposed simulation to model more powerful pulsed LIDAR sensors and investigating its application in real-world domains at a larger scale.
References
 [1] Cang Ye and J. Borenstein, “Characterization of a 2D laser scanner for mobile robot obstacle negotiation,” in IEEE International Conference on Robotics and Automation, vol. 3, May 2002, pp. 2512–2518.
 [2] L. Kneip, F. Tache, G. Caprari, and R. Siegwart, “Characterization of the compact Hokuyo URG-04LX 2D laser range scanner,” in 2009 IEEE International Conference on Robotics and Automation, May 2009, pp. 1447–1454.
 [3] Hirohiko Kawata, Kohei Miyachi, Yoshitaka Hara, Akihisa Ohya, and Shin’ichi Yuta, “A method for estimation of lightness of objects with intensity data from SOKUIKI sensor,” in IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Aug 2008, pp. 661–664.
 [4] Y. Okubo, C. Ye, and J. Borenstein, “Characterization of the Hokuyo URG-04LX laser rangefinder for mobile robot obstacle negotiation,” in Unmanned Systems Technology XI, vol. 7332. International Society for Optics and Photonics, 2009, p. 733212.
 [5] R. Breda, “Experimental measurement of parameters of the spatial scanner Hokuyo URG-04LX,” Przeglad Elektrotechniczny, vol. 88, no. 5b, pp. 132–135, 2012.
 [6] P. Rosenberger, M. Holder, M. Zirulnik, and H. Winner, “Analysis of real world sensor behavior for rising fidelity of physically based lidar sensor models,” in 2018 IEEE Intelligent Vehicles Symposium (IV), June 2018, pp. 611–616.
 [7] N. Koenig and A. Howard, “Design and use paradigms for Gazebo, an open-source multi-robot simulator,” in 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 3, Sep. 2004, pp. 2149–2154.
 [8] E. Todorov, T. Erez, and Y. Tassa, “Mujoco: A physics engine for model-based control,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2012, pp. 5026–5033.
 [9] P. J. Durst, C. Goodin, B. Q. Gates, C. L. Cummins, B. McKinley, J. D. Priddy, P. Rander, and B. Browning, “The need for highfidelity robotics sensor models,” Journal of Robotics, vol. 2011, 2011.
 [10] M. Gschwandtner, R. Kwitt, A. Uhl, and W. Pree, “Blensor: Blender sensor simulation toolbox,” in Advances in Visual Computing, G. Bebis, R. Boyle, B. Parvin, D. Koracin, S. Wang, K. Kyungnam, B. Benes, K. Moreland, C. Borst, S. DiVerdi, C. YiJen, and J. Ming, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp. 199–208.
 [11] D. P. Feder, “Differentiation of ray-tracing equations with respect to construction parameters of rotationally symmetric optics,” J. Opt. Soc. Am., vol. 58, no. 11, pp. 1494–1505, Nov 1968. [Online]. Available: http://www.osapublishing.org/abstract.cfm?URI=josa-58-11-1494
 [12] M. Giftthaler, M. Neunert, M. Stäuble, M. Frigerio, C. Semini, and J. Buchli, “Automatic differentiation of rigid body dynamics for optimal control and estimation,” Advanced Robotics, vol. 31, no. 22, pp. 1225–1237, 2017.
 [13] F. de Avila BelbutePeres, K. Smith, K. Allen, J. Tenenbaum, and J. Z. Kolter, “Endtoend differentiable physics for learning and control,” in Advances in Neural Information Processing Systems, 2018, pp. 7178–7189.
 [14] E. Heiden, D. Millard, H. Zhang, and G. S. Sukhatme, “Interactive differentiable simulation,” CoRR, vol. abs/1905.10706, 2019. [Online]. Available: http://arxiv.org/abs/1905.10706
 [15] J. F. M. Barthelemy and L. E. Hall, “Automatic differentiation as a tool in engineering design,” Structural optimization, vol. 9, no. 2, pp. 76–82, Apr 1995. [Online]. Available: https://doi.org/10.1007/BF01758823
 [16] G. Corliss, C. Faure, A. Griewank, L. Hascoët, and U. Naumann, Eds., Automatic Differentiation of Algorithms: From Simulation to Optimization. New York, NY, USA: SpringerVerlag New York, Inc., 2002.
 [17] M. M. Loper and M. J. Black, “OpenDR: An approximate differentiable renderer,” in European Conference on Computer Vision. Springer, 2014, pp. 154–169.
 [18] P. Henderson and V. Ferrari, “Learning to generate and reconstruct 3D meshes with only 2D supervision,” in British Machine Vision Conference (BMVC), 2018.
 [19] T.M. Li, M. Aittala, F. Durand, and J. Lehtinen, “Differentiable monte carlo ray tracing through edge sampling,” in SIGGRAPH Asia 2018 Technical Papers. ACM, 2018, p. 222.
 [20] S. Liu, W. Chen, T. Li, and H. Li, “Soft rasterizer: Differentiable rendering for unsupervised single-view mesh reconstruction,” arXiv preprint arXiv:1901.05567, 2019.
 [21] M. NimierDavid, D. Vicini, T. Zeltner, and W. Jakob, “Mitsuba 2: A retargetable forward and inverse renderer,” Transactions on Graphics (Proceedings of SIGGRAPH Asia), vol. 38, no. 6, Nov. 2019.
 [22] M. Pharr, W. Jakob, and G. Humphreys, Physically based rendering: From theory to implementation. Morgan Kaufmann, 2016.
 [23] T. Whitted, “An improved illumination model for shaded display,” Commun. ACM, vol. 23, no. 6, pp. 343–349, June 1980. [Online]. Available: http://doi.acm.org/10.1145/358876.358882
 [24] J. Shan and C. K. Toth, Topographic laser ranging and scanning: principles and processing. CRC press, 2018.
 [25] H. Kawata, A. Ohya, S. Yuta, W. Santosh, and T. Mori, “Development of ultra-small lightweight optical range sensor system,” in International Conference on Intelligent Robots and Systems. IEEE/RSJ, 2005, pp. 1078–1083.
 [26] B. Carpenter, M. D. Hoffman, M. Brubaker, D. Lee, P. Li, and M. Betancourt, “The Stan Math library: Reverse-mode automatic differentiation in C++,” arXiv preprint arXiv:1509.07164, 2015.
 [27] S. Agarwal, K. Mierle, and Others, “Ceres solver,” http://ceres-solver.org.
 [28] P. J. Besl and N. D. McKay, “A method for registration of 3d shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, Feb 1992.
 [29] A. Segal, D. Haehnel, and S. Thrun, “Generalized-ICP,” in Robotics: Science and Systems, 2009.
 [30] P. Biber and W. Strasser, “The normal distributions transform: A new approach to laser scan matching,” in Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), vol. 3, Oct 2003, pp. 2743–2748.
 [31] R. B. Rusu and S. Cousins, “3D is here: Point Cloud Library (PCL),” in IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 9–13, 2011.
 [32] P. A. Parrilo, “Chapter 3: Polynomial optimization, sums of squares, and applications,” in Semidefinite Optimization and Convex Algebraic Geometry. SIAM, 2012, pp. 47–157.