What happens for a ToF LiDAR in fog?

03/14/2020 ∙ by You Li, et al. ∙ CEREMA, Renault

This article analyzes the performance of a typical time-of-flight (ToF) LiDAR in fog. By controlling the fog density within the CEREMA Adverse Weather Facility, the relation between ranging performance and fog is investigated both qualitatively and quantitatively. Furthermore, based on the collected data, a machine learning based model is trained to predict the minimum fog visibility that allows successful ranging for this type of LiDAR. The reported experimental results and methods are helpful for deriving ToF LiDAR specifications in the automotive industry.


I Introduction

As an active sensor, a LiDAR (light detection and ranging) illuminates the surroundings by emitting laser pulses. The reflected pulses are then detected by photodetectors such as APDs (avalanche photodiodes) or SPADs (single-photon avalanche diodes). Range measurements are acquired by processing the laser returns with regard to the emitted lasers. For example, the most popular ToF (time-of-flight) LiDARs compute the time differences between the transmitted and received lasers. Knowing the pose of the LiDAR allows calculating 3D Cartesian coordinates from the 1D ranges. Compared with cameras and radar, LiDAR offers much better ranging accuracy and precision [18]. Therefore, LiDAR is widely regarded as a critical sensor for assuring the safety of highly autonomous vehicles. In the DARPA Grand Challenge 2007 – a milestone in autonomous driving history – all of the top 3 teams were equipped with multiple LiDARs. Applications of LiDAR in autonomous driving can be divided into two categories: 1) perception, such as object detection, tracking and recognition [15]; 2) localization and mapping [30].

However, most of these applications assume that LiDARs always operate in perfect environments, ignoring the impact of adverse conditions such as fog, rain or snow. The research on this subject in the literature (e.g. [25, 1, 4]) is insufficient. With the fast progress of autonomous driving systems, the impact of adverse weather on LiDARs becomes non-negligible for deploying fully self-driving cars.

In this paper, we present a performance analysis and modeling of a popular ToF LiDAR, the Velodyne UltraPuck (https://velodynelidar.com/vlp-32c.html), under well-controlled artificial fog. Fig. 1 shows a testing scenario. The contributions of this paper are twofold: (1) compared with previous works in the literature, very detailed experimental results are analyzed both qualitatively and quantitatively; (2) based on the collected data, a machine learning based model is trained to predict when a LiDAR would fail under fog conditions. To the best of the authors' knowledge, this is the first work to quantitatively analyze and model a ToF LiDAR's performance under fog conditions in a data-driven approach.

This paper is organized as follows: Sec. II reviews related research. The theoretical model of a ToF LiDAR and the factors impacting its ranging capability under adverse weather are discussed in Sec. III. The experiments are then described in Sec. IV, and the results are analyzed in Sec. V. Finally, based on the collected data, a machine learning based method is proposed in Sec. V-D to model the performance of the tested LiDAR in fog.

Fig. 1: A scenario of testing LiDAR performance under fog in CEREMA Adverse Weather Platform.

II Literature Review

Fig. 2: An example of a ToF LiDAR system.

In the literature, there are only a few studies on the impacts of adverse weather on LiDARs. Some researchers have tried to develop theoretical models to simulate the behavior of LiDAR under adverse weather. For instance, in [29], the authors compared the range degradation of a 905nm ToF LiDAR with that of a 1550nm one under adverse conditions. Atmospheric extinction coefficients and reflectances of various materials were derived from theoretical models to infer the LiDAR results. A simplified model of LiDAR performance in rain is proposed in [10] for simulation software. However, neither of these works was verified by real experiments. [24] modeled the impact of bad weather on a 905nm ToF LiDAR based on Mie scattering theory. Although the developed model was verified through real experiments, only a few coarse visibility levels were tested due to facility restrictions.

Although theoretical models represent the physical characteristics of a LiDAR, they are built on assumptions and simplifications that rarely hold in real environments. Therefore, some studies emphasized empirical evaluation. In [25], a radar and two LiDARs (SICK and Riegl) were tested in rain, mist and dust conditions. The radar was found to be more robust than the tested LiDARs in such environments. Range errors of the LiDARs were estimated as well.

[21] assessed four different LiDARs in visibility-reduced environments with water vapor or smoke. In [1], various fog conditions were created to test a Velodyne HDL-64E, a LiDAR of 905nm wavelength. A metric named SSIM (Structural Similarity Index Measurement) was used to measure the impact of fog attenuation w.r.t. the fog visibility. [9] tried to quantify the influence of rain on a Velodyne VLP16. Range and intensity changes were both investigated through field tests. However, the rain conditions were not well measured: the utilized weather data is too general to quantitatively analyze LiDAR performance w.r.t. rain density. [13] investigated fog and smoke attenuation for NIR (near infrared) lasers in a 5.5m long atmospheric chamber, for the purpose of optical communication. The tested distances are insufficient for LiDAR applications, and atmospheric attenuation is just one of the factors impacting LiDAR performance under adverse weather. Within the EU project DENSE (aDverse wEather eNvironment Sensing systEm, https://www.dense247.eu/home/), which aims to develop perception sensors that work under bad weather conditions, [16] and [14] tested and benchmarked various range sensors within a well-controlled fog and rain facility at CEREMA. Within the same facility, [4] quantitatively benchmarked a Velodyne HDL64 LiDAR and an IBEO Lux4 LiDAR, both at 905nm wavelength.

III Theoretical Model of a ToF LiDAR and Adverse Weather Impacts

In this section, we summarize the principle of a ToF LiDAR and the factors impacting its performance under adverse weather.

III-A Principle of a ToF LiDAR

As the most popular LiDAR category, ToF (time-of-flight) LiDARs measure distances by calculating the time difference between the emitted laser pulses and the diffused or reflected lasers from obstacles. The equation of a ToF LiDAR is given as:

$$R = \frac{c}{n} \cdot \frac{\Delta t}{2} \tag{1}$$

where $R$ is the measured range, $c$ is the light speed, $n$ is the index of refraction of the propagation medium (approximately 1 for air), and $\Delta t$ is the time gap between the transmitted laser and the received laser.
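As a quick sanity check, here is a minimal sketch of Eq. 1 in Python (the function name is ours, for illustration only):

```python
# Range from the measured round-trip time of a laser pulse, Eq. (1).
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(delta_t_s: float, n: float = 1.0) -> float:
    """Range in meters from round-trip time delta_t_s (s) in a medium
    of refractive index n (approximately 1 for air)."""
    return (C / n) * delta_t_s / 2.0

# A 100 ns round trip corresponds to roughly 15 m in air.
print(tof_range(100e-9))  # ~14.99 m
```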

A typical ToF LiDAR system comprises three parts: the transmitter, the receiver, and the time control and signal processing circuits, as shown in Fig. 2. Driven by the microcontroller, a pulsed laser is transmitted through a transmission medium, air for instance, to illuminate the surroundings. When the emitted laser hits an object, diffused or reflected laser returns are captured by the receiver's optical system and transformed into electrical signals by photodetectors such as an APD (avalanche photodiode). This process can be summarized by the LiDAR's power model:

III-A1 Power model

The power of a received laser return at distance $R$ can be modeled as ([27, 24]):

$$P(R) = \frac{E_P \, c}{2} \cdot \frac{A}{R^2} \cdot \eta \cdot \beta \cdot T(R) \tag{2}$$

where $E_P$ is the total energy of a transmitted laser pulse and $c$ is the light speed. $A$ represents the receiver's optical aperture area. $\eta$ is the overall system efficiency. $\beta$ is the reflectivity of the target's surface, which is determined by the surface properties and the incident angle. In the simple case of Lambertian reflection with a reflectivity of $\rho$, it is given by:

$$\beta = \frac{\rho}{\pi} \tag{3}$$

The final part $T(R)$ denotes the transmission loss through the transmission medium, which is given by:

$$T(R) = \exp\left(-2 \int_0^R \alpha(r)\, dr\right) \tag{4}$$

where $\alpha$ is the extinction coefficient of the transmission medium. The extinction arises because particles within the transmission medium scatter and absorb the laser.
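To make the interplay of these terms concrete, here is a minimal numerical sketch of Eqs. 2-4. It assumes a homogeneous medium, so Eq. 4 reduces to $T(R) = e^{-2\alpha R}$, and ties the extinction coefficient to the meteorological visibility through the common Koschmieder approximation $\alpha \approx 3/V$; all parameter values are illustrative, not the tested sensor's actual characteristics:

```python
import numpy as np

def received_power(R, E_p=1e-6, A=1e-4, eta=0.9, rho=0.1, visibility=50.0):
    """Relative received power at range R (m) from a Lambertian target of
    reflectivity rho, in homogeneous fog of meteorological visibility (m)."""
    c = 3e8
    beta = rho / np.pi            # Eq. (3): Lambertian target reflectivity
    alpha = 3.0 / visibility      # Koschmieder: extinction from visibility
    T = np.exp(-2.0 * alpha * R)  # Eq. (4): two-way transmission loss
    return (E_p * c / 2.0) * (A / R**2) * eta * beta * T  # Eq. (2)

# The return collapses much faster with range in dense fog than in clear air.
R = np.array([5.0, 15.0, 30.0])
print(received_power(R, visibility=1000.0) / received_power(R, visibility=20.0))
```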

(a) Overall structure of CEREMA’s platform
(b) Instruments in CEREMA’s platform.
Fig. 3: CEREMA’s Adverse Weather facility

III-A2 Pulse detection

After transforming the laser returns into electrical signals, a signal processing unit detects the received laser signal among the background noise. Finally, the time difference $\Delta t$ is obtained from the detected return signal, and the range is calculated as in Eq. 1. Eq. 2 reveals that the energy of the received laser decreases quadratically with distance. Simply increasing the power of the transmitted laser is infeasible due to eye-safety restrictions such as IEC 60825 [12]. Therefore, advanced signal processing algorithms capable of detecting the true return signal at low SNR (signal-to-noise ratio) are required. To increase the SNR, a low-pass or band-pass filter is usually applied inside the signal processing circuits [28]. A thresholding algorithm is then applied to the raw data to detect the true return signal. Adaptive thresholding methods capable of learning the statistics of the background noise are widely applied, such as the well-known constant false alarm rate (CFAR) detector [20], or the methods in [19] and [2].
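As an illustration of such adaptive thresholding, below is a minimal sketch of a cell-averaging CFAR detector of the kind cited in [20]. It is a generic textbook variant with illustrative parameters, not the tested LiDAR's internal algorithm:

```python
import numpy as np

def ca_cfar(signal, num_train=8, num_guard=2, scale=5.0):
    """Indices where signal exceeds scale times the local noise estimate,
    with the noise level averaged over training cells on both sides of the
    cell under test, excluding the guard cells."""
    n = len(signal)
    detections = []
    for i in range(num_train + num_guard, n - num_train - num_guard):
        lead = signal[i - num_guard - num_train : i - num_guard]
        lag = signal[i + num_guard + 1 : i + num_guard + 1 + num_train]
        noise = np.mean(np.concatenate([lead, lag]))
        if signal[i] > scale * noise:
            detections.append(i)
    return detections

# Toy waveform: exponential background noise plus one laser return at sample 100.
rng = np.random.default_rng(0)
waveform = rng.exponential(1.0, 200)
waveform[100] += 30.0
print(ca_cfar(waveform))  # expected to contain index 100
```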

III-B Influences of Adverse Weather

From this short review of a ToF LiDAR's principle, it can be inferred that adverse weather, such as fog or rain, enlarges the transmission loss $T(R)$ and hence lowers the received laser power $P(R)$, causing the subsequent signal processing step to fail. Specifically, a LiDAR's performance degrades due to changes of the extinction coefficient $\alpha$ [11] and of the target's reflectivity $\beta$ [17]:

  • Impact on the extinction coefficient $\alpha$: the droplets in fog or rain absorb and scatter the near infrared laser [29]. The severity depends on the water content percentage, the droplet size distribution [24], etc.

  • Impact on the surface reflectivity $\beta$: a wet surface always looks "darker" than a dry surface [17], because a thin film of liquid on an obstacle's wet surface leads to weaker diffuse reflection. The decreased surface reflectivity hence reduces the maximum detection range in adverse weather.

IV Experiments

Apart from the theoretical analysis, we are interested in empirical results of how a ToF LiDAR performs under varying fog conditions. We carried out the tests within the CEREMA Adverse Weather facility (https://www.cerema.fr/fr/innovation-recherche/innovation/offres-technologie/plateforme-simulation-conditions-climatiques-degradees) – a European center for generating controlled adverse weather conditions, as shown in Fig. 3. In our experiments, the popular Velodyne UltraPuck was chosen because of its wide application in autonomous vehicles. A technical summary of this sensor is given in Tab. I. Under fog, various targets were placed at different distances from the LiDAR, and the corresponding LiDAR measures were recorded for further analysis.

Velodyne UltraPuck
Max range: 200 m
Range accuracy: ±5 cm
Horizontal FOV: 360°
Vertical FOV: 40° (−25° to +15°)
Horizontal angular resolution: 0.1°–0.4°
Vertical angular resolution (min): 0.33°
Laser wavelength: 903 nm
Max scan rate: 20 Hz
TABLE I: A summary of the Velodyne UltraPuck

Iv-a CEREMA’s Adverse Weather Facility

The CEREMA Adverse Weather Platform was developed to investigate all transport systems that could be affected by adverse conditions, including fog and rain [6, 8]. It allows reproducing various scenarios, such as detection of vulnerable road users or fixed obstacles, in clear conditions, night conditions, and with various ranges of fog and rain precipitation, over a total length of 30 meters (Fig. 3 (a)). Dedicated to research and development, it is also open to private companies looking for a testing facility with controlled conditions. It has been used for years in partnership or collaborative projects to investigate various scientific topics, such as human perception in adverse conditions, vision system capabilities in fog or rain [3, 22], and computer vision algorithms for object detection [7, 5]. The physical characteristics of the rain and fog produced in the platform are described in a recent study on LiDAR performance in fog and rain [16].

The platform is highly instrumented to evaluate the performance of perception sensors for autonomous vehicles in adverse conditions. Several weather instruments are dedicated to characterizing the atmosphere in fog or rain conditions (as shown in Fig. 3 (b)):

  • a transmissometer, for meteorological visibility in fog from 5 to 1000 m, with 1 Hz recording;

  • an optical granulometer, for fog droplet size distribution from 0.4 to 40 µm, with 1-minute step recording;

  • a rain gauge and a spectro-pluviometer, for rainfall rates from 0.001 to 1200 mm/h, with 1 Hz recording.

IV-B Test Methodology

IV-B1 Artificial fog and visibility data

As shown in Fig. 3 (b), nozzles are distributed inside the chamber. Used at high pressure, these nozzles mechanically produce water droplets similar in size to natural fog. Since the atmosphere is not saturated enough with water, the droplets gradually evaporate; we call this dissipation. Thus, by precisely monitoring in real time the quantity of water injected into the test chamber, it is possible to regulate the meteorological visibility, thanks to the transmissometers (Fig. 3 (b)). At the beginning of each test, we generate a dense fog reaching the minimum available meteorological visibility of 10m. Then, we let the fog gradually dissipate. The fog dissipation leads to an increase of the meteorological visibility until the air becomes clear. (From a meteorological point of view, there is no fog if the meteorological visibility is more than 1000m, but a French road standard considers that there is no fog when the meteorological visibility is more than 400m.) Fig. 4 shows a real example of the visibility recordings during a test of around 600 seconds. The change of visibility reflects the change of the fog's density.

Fig. 4: A sample of visibility recording in an artificial fog: the visibility reaches almost 10m and then gradually increases to more than 300m due to dissipation.
(a) Testing setup inside the fog chamber of CEREMA
(b) Top: used targets (3 calibrated boards, vehicle, a dummy model and 2 traffic signs). Bottom: Velodyne UltraPuck on the table and an example scenario
(c) LiDAR measures for all the targets (at 15m)
(d) Reflectivity distributions of diffuse targets (at 15m in clear weather, fitted by a Gaussian distribution)

(e) Reflectivity distributions of retro-reflected targets (at 15m in clear weather, fitted by a Gaussian distribution)
Fig. 5: Scenarios and targets in testing

IV-B2 Targets

Fig. 5 (a) sketches the setup of our tests within the fog chamber. The Velodyne UltraPuck is placed on a height-adjustable table (as shown in Fig. 5 (b)). It operates at 10 Hz in strongest-return mode. Several typical road targets are placed in the platform at various distances: (1) three well-calibrated Zenith Polymer boards (A, B, C) of increasing reflectivity, (2) a dummy model, (3) a car and (4) two traffic signs (TFS1 and TFS2). The used targets and the corresponding LiDAR measures are shown in Fig. 5 (b) and (c). The Velodyne UltraPuck returns a calibrated reflectivity byte (0-255) for each range measure, making it possible to distinguish retro-reflectors (e.g. road signs, license plates) from diffuse reflectors (e.g. road, tree trunks). The measured reflectivity has either:

  • a value between 0 and 100 for diffuse reflectors, an approximation of the reflectivity based on the ratio of emitted and received laser power;

  • a value between 101 and 255 for retro-reflectors, characterizing a continuum from a dirty or imperfect retro-reflector to a more robust retro-reflector at an ideal angle (this convention is sketched in code after the list).
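A small helper reflecting this convention (the function name is ours, not part of the Velodyne driver API):

```python
def classify_return(reflectivity_byte: int) -> str:
    """Classify a Velodyne calibrated reflectivity byte (0-255)."""
    if not 0 <= reflectivity_byte <= 255:
        raise ValueError("reflectivity byte must be in [0, 255]")
    return "diffuse" if reflectivity_byte <= 100 else "retro-reflector"

print(classify_return(3))    # diffuse (e.g. a low-reflectivity board)
print(classify_return(209))  # retro-reflector (e.g. a traffic sign)
```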

Among the utilized targets, the three calibrated boards, the dummy model, and the car (except the plate region) are diffuse reflectors. Fig. 5 (d) shows fitted Gaussian distributions of the reflectivities of these diffuse targets at 15m, and (e) shows the fitted reflectivity distributions of the three retro-reflectors (car plate and two traffic signs). The three calibrated boards are clearly distinguishable from each other. The dummy model and the car without its plate are similar: both targets' reflectivities range between 0 and 35.

(a) Ground truth of a test scenario (Model, 3 boards, R=15m)
(b) All the LiDAR measures (Model, 3 boards, R=15m, V=55m)
(c) LiDAR measures of targets (Model, 3 boards, R=15m, V=40m)
(d) LiDAR measures of targets (Model, 3 boards, R=15m V=80m)
(e) LiDAR measures of targets (Model, 3 boards, R=20m, V=40m)
(f) LiDAR measures of targets (Model, 3 boards, R=20m, V=80m)
(g) LiDAR measures of targets (Two traffic signs, R=15m,V=15m)
(h) LiDAR measures of targets (Car, R=10m, V=20m)
(i) LiDAR measures of targets (Car, V=40m, R=10m)
(j) LiDAR measures of targets (Car, V=80m, R=10m)
Fig. 6: Examples of LiDAR recordings in several scenarios. Color encodes the reflectivity. The origin of the axes represents the LiDAR.

IV-C LiDAR Recordings

Weather condition | Targets | Target-LiDAR distance
Fog dissipation: from 10m meteorological visibility to clear condition | Dummy model, three boards | 10 distances: 5-25m (every 2.5m), plus 27m
 | Car | 4 distances: 10-25m (every 5m)
 | Two traffic signs | 5 distances: 10-25m (every 5m), plus 22.5m
 | None | Background ground truth
TABLE II: All the tested scenarios

In our tests, one or several targets were placed at varying distances (from 5m to 30m) in front of the LiDAR. All the tested scenarios are summarized in Tab. II. In each test, ground-truth LiDAR data were first logged without fog. Then we started to generate the artificial fog, controlled via the visibility sensors. The LiDAR measurements, which contain range, azimuth angle, ring number, reflectivity and timestamp, were recorded until the targets were fully and stably detected. The logged LiDAR data were synchronized with the meteorological visibility data as well.

For each test, we manually extract the region of interest (ROI) of the lasers hitting the targets. For the Velodyne UltraPuck, every transmitted laser can be indexed by a ring number (between 0 and 31) and an azimuth angle between 0 and 360 degrees, encoded as 0-1800 when operating at 10 Hz. The ring numbers and azimuth angles of the lasers hitting the targets are manually extracted and saved as laser ROIs. Only LiDAR measures within the ROIs are retained for further analysis. For each test in Tab. II, all the recorded data can be represented as:

$$D = \left\{ \left( l_i, r_i^t, \rho_i^t, V^t \right) \;\middle|\; l_i \in \text{ROI},\; t \in [t_s, t_e] \right\} \tag{5}$$

where $l_i$ is the laser index comprising ring number and azimuth angle; $r_i^t$, $\rho_i^t$ and $V^t$ are the range, reflectivity, and visibility measures at time $t$, respectively; $t_s$ and $t_e$ are the start time and end time of the test.
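A minimal sketch of such a record and of the ROI filtering, with hypothetical field names (the actual log format is not specified here):

```python
from dataclasses import dataclass
from typing import List, Set, Tuple

@dataclass
class Measure:
    ring: int            # laser ring number, 0-31 for the UltraPuck
    azimuth_bin: int     # 0-1800 at 10 Hz (0.2 degree per bin)
    range_m: float       # measured range, m
    reflectivity: int    # calibrated reflectivity byte, 0-255
    t: float             # timestamp, s
    visibility_m: float  # synchronized 1 Hz meteorological visibility, m

def filter_roi(measures: List[Measure], roi: Set[Tuple[int, int]]) -> List[Measure]:
    """Keep only the measures whose laser index falls in the manually picked ROI."""
    return [m for m in measures if (m.ring, m.azimuth_bin) in roi]

roi = {(12, 900), (13, 900)}  # hypothetical lasers hitting a target
log = [Measure(12, 900, 14.97, 3, 1.2, 42.0), Measure(5, 10, 3.1, 60, 1.2, 42.0)]
print(len(filter_roi(log, roi)))  # 1
```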

V Analysis of Experimental Results

V-A Modeling the ranging process impacted by fog

According to the power model in Eq. 2, the received laser power from the target is mainly determined by the distance $R$, the surface reflectivity $\beta$ of the target, and the extinction coefficient $\alpha$ of the transmission medium. Since the signal processing unit is a black box embedded inside the sensor, we exclude this factor from consideration. The ranging process of a ToF LiDAR can then be modeled as:

$$r^t = \mathcal{F}\left(R, \beta_{\text{fog}}^t, \alpha_{\text{fog}}^t\right) \tag{6}$$

where $\beta_{\text{fog}}^t$ and $\alpha_{\text{fog}}^t$ are the target's reflectivity and the extinction coefficient during the fog test, and $\beta_{\text{clear}}$ is the target's reflectivity in clear conditions.

Under the assumption of homogeneous fog, the extinction coefficient $\alpha_{\text{fog}}^t$ can be characterized by the meteorological visibility measurement $V^t$ (commonly via Koschmieder's relation $\alpha \approx 3/V$). Meanwhile, as introduced in Sec. III-B, the surface reflectivity is also influenced by the fog, which is again measured by the visibility $V^t$. Therefore, during our tests, the fog impacts on the ranging measures can be modeled as:

$$r^t = \mathcal{G}\left(R, \beta_{\text{clear}}, V^t\right) \tag{7}$$

where $\mathcal{G}$ denotes the specific ranging process under such conditions. Eq. 7 qualitatively illustrates that, during our tests, the LiDAR performance is jointly influenced by three factors: the distance $R$, the target's reflectivity $\beta_{\text{clear}}$, and the severity of the fog $V^t$. Since the internal signal processing unit is unknown, an analytical form of $\mathcal{G}$ can hardly be obtained.

V-B Qualitative analysis

Based on the models in Eq. 2 and Eq. 7, we can infer the following LiDAR characteristics under fog:

Fig. 7: Average range measures and meteorological visibility for randomly selected individual lasers of various targets at 15m. Disappear visibilities are marked at the crossings of the green lines.
  1. For a target at a given distance $R$, the visibility is proportional to the ranging capability: the higher the visibility $V^t$, the bigger the chance that $r^t = R$.

  2. For a given visibility $V$ and a given distance $R$, the surface reflectivity is proportional to the ranging capability: the higher the target's surface reflectivity, the bigger the probability that $r^t = R$.

  3. For a target under a fog of given visibility, the distance is inversely proportional to the ranging capability: the bigger the distance $R$, the smaller the possibility that $r^t = R$.

These three characteristics can be verified by the tests summarized in Tab. II. Fig. 6 visualizes the LiDAR measures of several objects under various visibilities and distances. Fig. 6 (a) and (b) show the difference in LiDAR outputs between clear and foggy environments. The clutter points in (b) are ranging noise caused by the fog, and part of the boards is not detected compared with (a). Fig. 6 (c) and (d) show the range measures for the three calibrated boards and the dummy model at 15m with visibilities of 40m and 80m, respectively. In Fig. 6 (c), board A (the lowest reflectivity, on the left) and the lower part of the model are barely visible, while all targets are detected in (d). Fig. 6 (e) and (f) demonstrate a similar phenomenon when the targets are at 20m. The comparison between (c)-(d) and (e)-(f) verifies the relation between ranging capability and visibility. From Fig. 6 (c) to (f), we also observe that objects with higher reflectivity appear earlier than others. Board C, with the strongest reflectivity (in the middle), is detected before the other two boards, as seen by comparing (e) with (f). Comparing (f) with (d), we find that, under the same visibility, board A and the lower part of the model do not appear in (f), yet these parts are detected in (d) where the distance is smaller. This reveals the third property summarized above, which can also be seen by comparing (c) with (e).

Fig. 6 (g) shows the testing results of the two traffic signs at 15m. Since the reflectivities of the two traffic signs are much higher than those of the other targets (as in Fig. 5 (e)), they are fully detected even at a visibility of 15m – much earlier than the three boards, which are only fully detected at a visibility of 80m as shown in (d). In the car tests shown in Fig. 6 (h)-(j), it is no surprise that the car's plate appears earlier than the other parts. Because the tyres and windows of the car have the lowest reflectivities, those parts are still invisible to the LiDAR when the other parts are detected, as shown in Fig. 6 (j). All three qualitative properties summarized above are thus verified in the test examples.

V-C Quantitative analysis

(a) The minimum visibilities for three boards
(b) The minimum visibilities for the two traffic signs
(c) The minimum visibilities for the model (divided by upper part and lower part)
(d) The minimum visibilities for the car (divided by plate, strong reflection and weak reflection parts)
Fig. 8: The minimum visibilities for different objects with regard to various distances

The above qualitative analysis gives a general picture of the Velodyne UltraPuck's performance with regard to distance, targets and fog density. In this section, we quantitatively evaluate the behavior of the Velodyne UltraPuck in foggy environments.

V-C1 Individual ranging process

Being synchronized with the meteorological visibility data, the ranging process of each individual laser within the ROI can be visualized. Since the LiDAR runs at 10 Hz while the visibility data is recorded at 1 Hz, we utilize the average range measure over every second:

$$\bar{r}^t = \frac{1}{10} \sum_{k=1}^{10} r^{t,k} \tag{8}$$

where $r^{t,k}$ is the $k$-th of the ten range measures of a laser during second $t$.

Fig. 7 (a)-(i) visualize $\bar{r}^t$ for several randomly selected lasers within the ROI, their true ranges without fog, and the synchronized visibility measures. Fig. 7 (a) plots $\bar{r}^t$ (red) and $V^t$ (blue) for a laser hitting Board A (ground-truth reflectivity of 3 as measured by the LiDAR) at 15m. In the beginning, due to the low visibility, the measured range starts from false values much smaller than the true value (15m). Then, along with the fog dissipation, $\bar{r}^t$ gradually increases until it reaches the true value. After reaching the true value, although $\bar{r}^t$ sometimes deviates from the ground truth, it is generally stable. The trend of the range measure getting increasingly close to the true value as the visibility grows is clearly observed. A similar tendency can be seen in (b) to (i), which are samples of LiDAR measurements for the other targets.
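A minimal sketch of the per-second averaging of Eq. 8, used to align the 10 Hz range stream of one laser with the 1 Hz visibility recordings (timestamps and values are illustrative):

```python
import numpy as np

def per_second_average(timestamps, ranges):
    """Average the range measures of one laser within each whole second."""
    seconds = np.floor(np.asarray(timestamps)).astype(int)
    grouped = {}
    for s, r in zip(seconds, ranges):
        grouped.setdefault(s, []).append(r)
    return {s: float(np.mean(v)) for s, v in grouped.items()}

t = np.arange(0.0, 2.0, 0.1)  # 10 Hz timestamps over 2 seconds
r = 15.0 + 0.02 * np.random.default_rng(1).standard_normal(len(t))
print(per_second_average(t, r))  # one averaged range per second
```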

V-C2 Disappear visibility

Beyond quantitatively demonstrating the relation between the range measures and the meteorological visibility, Fig. 7 (a)-(i) lets us identify the time when the LiDAR measures start to return true values. By associating this time with the recorded meteorological visibility, we can define a disappear visibility representing the ranging capability:

Definition 1

Given a certain distance $R$ and a target's surface reflectivity $\beta$, the disappear visibility $V_{dis}$ is the minimum visibility that allows the corresponding ranging process to return the true distance measure. For an individual laser within the ROI in our tests, its disappear visibility is:

$$V_{dis} = V^{t^*}, \quad \text{subject to} \quad t^* = \min \left\{ t \;\middle|\; |\bar{r}^{t'} - R| < \epsilon \ \ \forall t' \geq t \right\}$$

where $\epsilon$ is a small threshold that decides whether the measured range equals the truth or not. In Fig. 7 (a), the green lines point out the disappear visibility (77m) for a scanned point on Board A (reflectivity 3) at a 15m distance. The disappear visibilities for the other tests are also shown in Fig. 7 (b)-(i).

$V_{dis}$ is an important indicator describing a LiDAR's measurability in fog, with two-fold meanings or usages:

  • For a given obstacle at a given distance, $V_{dis}$ can be used for benchmarking different types of LiDARs. A low $V_{dis}$ represents a good anti-interference capability in fog.

  • For a given LiDAR, $V_{dis}$ indicates its functionality in fog. Therefore, for a natural fog with measured visibility $V$, comparing $V$ with $V_{dis}$ can provide an evaluation of the operational feasibility of a LiDAR-based autonomous vehicle. A sketch of computing $V_{dis}$ from the synchronized recordings follows this list.
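A sketch of our reading of Definition 1: scan the synchronized (visibility, averaged range) series of one laser and return the smallest visibility from which the range stays within $\epsilon$ of the true distance for the rest of the test (the values below are illustrative):

```python
import numpy as np

def disappear_visibility(visibility, avg_range, true_range, eps=0.1):
    """Earliest visibility from which |r - true_range| < eps holds onward."""
    v = np.asarray(visibility, dtype=float)
    r = np.asarray(avg_range, dtype=float)
    ok = np.abs(r - true_range) < eps
    for t in range(len(ok)):   # earliest t with correct measures for all t' >= t
        if ok[t:].all():
            return v[t]
    return None                # the laser never stably measured the true range

vis = [20, 40, 60, 77, 90, 120]                 # dissipating fog, m
rng_m = [6.2, 9.8, 14.2, 15.01, 14.98, 15.02]   # averaged ranges, m (truth 15 m)
print(disappear_visibility(vis, rng_m, 15.0))   # 77
```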

Fig. 8 (a)-(d) visualize $V_{dis}$ for all the tested objects. The 3 calibrated boards and the 2 traffic signs can be assumed to have homogeneous surface reflectivity, while for the car and the dummy model we group the measured reflectivities into similar clusters. The average reflectivities of the targets (at 15m) are: Board A: 2.75, Board B: 22.2, Board C: 45.54, model upper part: 17.7, model lower part: 2.67, traffic sign 1: 169.9, traffic sign 2: 209.2, car plate: 133.04, car strong part: 15.6, car weak part: 1.17.

These experimental results show that the disappear visibility is principally influenced by the surface reflectivity: in general, the stronger the reflectivity, the lower the disappear visibility. In Fig. 8 (a), the order of the disappear visibilities of Boards A/B/C aligns with the order of their average reflectivities: $V_{dis}^{A} > V_{dis}^{B} > V_{dis}^{C}$. This effect is repeatedly verified by the average disappear visibilities of the model, the car and the traffic signs. As traffic sign 2 has the highest reflectivity measures, it is not surprising that it has the lowest disappear visibility.

V-D Modeling disappear visibility by machine learning

Having discussed the experimental results of the LiDAR measures (Fig. 6, 7) and the disappear visibilities (Fig. 8), we are interested in modeling the disappear visibility based on the recorded data. However, for a specific LiDAR, deriving an analytical form of $V_{dis}$ for a certain target at a certain distance is too complicated to be achieved. Therefore, based on the dataset recorded in CEREMA's adverse weather facility, we propose a data-driven method to model $V_{dis}$ for the tested LiDAR.

From Eq. 7 and the definition of $V_{dis}$, we can infer that $V_{dis}$ is influenced by the pair $(R, \beta)$:

$$V_{dis} = g(R, \beta) \tag{9}$$

where $g$ is a function capturing the relationship between $V_{dis}$ and $(R, \beta)$.

Fig. 9: The histograms of ranges (left) and reflectivities (right) for the recorded data

V-D1 Gaussian Process Regression (GPR)

A Gaussian process (GP) [23] is a non-parametric machine learning tool that does not impose an explicit functional form between inputs and outputs; it can be viewed as an infinite-dimensional Gaussian distribution. A GP model is entirely defined by a mean function $m(\mathbf{x})$ and a covariance function $k(\mathbf{x}, \mathbf{x}')$:

$$f(\mathbf{x}) \sim \mathcal{GP}\left(m(\mathbf{x}), k(\mathbf{x}, \mathbf{x}')\right) \tag{10}$$

Usually we assume $m(\mathbf{x}) = 0$. One of the most popular usages of GPs is regression. Given a training set of $n$ observations $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$, where $\mathbf{x}_i$ denotes a $d$-dimensional input vector and $y_i$ denotes a scalar output, we assume the observations are corrupted by additive i.i.d. Gaussian noise with variance $\sigma_n^2$: $y = f(\mathbf{x}) + \varepsilon$, $\varepsilon \sim \mathcal{N}(0, \sigma_n^2)$. We are interested in making inferences about the relationship between inputs and outputs. Under the GP framework, inference for a test point $\mathbf{x}_*$ involves computing the predictive mean $\bar{f}_*$ and variance $\mathbb{V}[f_*]$:

$$\bar{f}_* = \mathbf{k}_*^{\top} (K + \sigma_n^2 I)^{-1} \mathbf{y} \tag{11}$$
$$\mathbb{V}[f_*] = k(\mathbf{x}_*, \mathbf{x}_*) - \mathbf{k}_*^{\top} (K + \sigma_n^2 I)^{-1} \mathbf{k}_* \tag{12}$$

where $K$ is the $n \times n$ covariance matrix of the training inputs and $\mathbf{k}_*$ is the vector of covariances between the test point and the training inputs. Training a GP amounts to optimizing the hyperparameters $\theta$ to maximize the log marginal likelihood:

$$\log p(\mathbf{y} \mid X, \theta) = -\frac{1}{2} \mathbf{y}^{\top} (K + \sigma_n^2 I)^{-1} \mathbf{y} - \frac{1}{2} \log \left| K + \sigma_n^2 I \right| - \frac{n}{2} \log 2\pi \tag{13}$$
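For concreteness, here is a compact numpy sketch of the posterior equations Eq. 11 and Eq. 12, using a squared-exponential kernel for brevity (the model trained in this paper uses the Matérn 3/2 kernel introduced below):

```python
import numpy as np

def rbf(X1, X2, ell=1.0, sf=1.0):
    """Squared-exponential covariance between two sets of input vectors."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_posterior(X, y, Xs, ell=1.0, sf=1.0, sn=0.1):
    """Posterior mean and variance at test inputs Xs given noisy data (X, y)."""
    K = rbf(X, X, ell, sf) + sn**2 * np.eye(len(X))
    Ks = rbf(Xs, X, ell, sf)
    mean = Ks @ np.linalg.solve(K, y)                      # Eq. (11)
    var = rbf(Xs, Xs, ell, sf).diagonal() - np.sum(
        Ks * np.linalg.solve(K, Ks.T).T, axis=1)           # Eq. (12)
    return mean, var

X = np.array([[0.0], [1.0], [2.0]])   # toy 1D training inputs
y = np.array([0.0, 1.0, 0.5])         # toy noisy observations
print(gp_posterior(X, y, np.array([[1.5]])))
```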

Given the definition of a GP in Eq. 10, the function $g$ of Eq. 9 can be modeled as a 2D Gaussian process:

$$g(R, \beta) \sim \mathcal{GP}\left(m([R, \beta]), k([R, \beta], [R', \beta'])\right) \tag{14}$$

In this paper, we use the collected experimental data to learn $g$ through this Gaussian process representation.

V-D2 Training GP model

The average range measures of each second (Eq. 8) over all the lasers within the ROIs are used to remove jitter. In our tests, 3853 distinctive samples are collected. The collected dataset is not evenly distributed: as shown in Fig. 9, there are more samples at short ranges and low reflectivities (note that we do not have enough samples with reflectivities from 60 to 100). 565 training samples were manually selected across the covered ranges and reflectivities. In Fig. 10 (a), the blue circles are the training data when the range is around 10m.

The Matérn 3/2 kernel function is chosen because its finite differentiability allows it to match physical processes more realistically [26]:

$$k(\mathbf{x}, \mathbf{x}') = \sigma_f^2 \left(1 + \frac{\sqrt{3}\, r}{\ell}\right) \exp\left(-\frac{\sqrt{3}\, r}{\ell}\right), \quad r = \lVert \mathbf{x} - \mathbf{x}' \rVert \tag{15}$$

where $\sigma_f$ and $\ell$ are the hyperparameters. The training is realized with the GPML toolbox, with $R$ and $\beta$ normalized into [0, 1] for training. Fig. 10 shows the training results when $R = 10$m.
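As an illustration of this setup, here is a sketch in Python using scikit-learn in place of the GPML toolbox: a 2D GP over normalized $(R, \beta)$ inputs with a Matérn 3/2 kernel, trained on synthetic data that merely mimics the observed trend ($V_{dis}$ grows with distance and shrinks with reflectivity); none of the numbers below are the paper's measurements:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(0)
R = rng.uniform(10, 30, 200)        # range, m
beta = rng.uniform(0, 255, 200)     # calibrated reflectivity byte
v_dis = 2.5 * R - 0.15 * beta + rng.normal(0, 2, 200)  # toy disappear visibility

X = np.column_stack([(R - 10) / 20, beta / 255])  # normalize inputs into [0, 1]
kernel = Matern(length_scale=[0.3, 0.3], nu=1.5) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, v_dis)

# Predict V_dis with a 95% (2 sigma) band for a low-reflectivity target at 20 m.
mean, std = gp.predict(np.array([[0.5, 3 / 255]]), return_std=True)
print(mean[0], mean[0] - 2 * std[0], mean[0] + 2 * std[0])
```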

R \ β |   1  |   5  |  10  |  30  |  50  |  80  | 120  | 180  | 250
10m   | 54.7 | 48.3 | 40.6 | 25.9 | 22.8 | 21.0 | 13.0 | 11.9 | 10.4
15m   | 65.3 | 57.6 | 47.4 | 37.5 | 32.7 | 25.3 | 18.5 | 12.4 | 10.7
20m   | 74.7 | 72.4 | 70.2 | 62.9 | 54.5 | 42.5 | 29.7 | 18.9 | 13.8
25m   | 103.6 | 94.1 | 82.2 | 71.9 | 66.4 | 50.2 | 37.5 | 23.7 | 16.2
TABLE III: Predicted $V_{dis}$ (in meters) for some typical distances R and reflectivities β
R \ β   | [0-100] | [100-255]
0-10m   | 2.4%    | 0
10-15m  | 11.9%   | 0
15-20m  | 15.2%   | 1.0%
20-25m  | 19.7%   | 0
Overall | 7%      | 0
TABLE IV: Failure rates of the predictions
(a) An example of the trained GP at $R = 10$m, compared with the collected dataset. The blue circles represent the samples utilized to train the GP model. The yellow polyline and shaded region are the predictive mean and the 95% (2σ) confidence region. Crosses are the collected data used for verification.
(b) Predictions of $V_{dis}$ by the trained GP model for $R$ from 10m to 30m and $\beta$ between 0 and 255.
Fig. 10: Trained Gaussian process and its predictions
Distance \ Reflectivity | [0-10) | [10-20) | [20-30) | [30-40) | [40-50) | [50-100) | [100-200) | [200-255] | Overall [0-255]
10m-15m | 8.03m  | 6.58m  | 6.27m  | 3.65m  | 1.76m | 2.19m  | 2.52m | 0.55m | 4.41m
15m-20m | 10.49m | 7.74m  | 8.25m  | 1.59m  | 2.35m | 2.25m  | 1.43m | 0.40m | 6.79m
20m-25m | 23.13m | 17.70m | 12.99m | 10.62m | 4.75m | 14.61m | 4.23m | 1.58m | 12.88m
25m-30m | 32.50m | 18.24m | 10.49m | 13.43m | 4.36m | 17.06m | 2.81m | 2.40m | 14.09m
TABLE V: Average prediction errors of the trained GP model

V-D3 Results

The trained GP model is used to predict $V_{dis}$ for all reflectivities [0-255] and ranges between 10m and 30m, as shown in Fig. 10 (b); some typical predictions are listed in Tab. III. To evaluate the accuracy of the GP-based prediction, we compare the predictions with the real values of the 3853 samples. In this comparison, when the real measured $V_{dis}$ is outside the 95% confidence region of the prediction, it is classified as a failed prediction. Tab. IV reports the failure rates. It shows that our GP model is quite stable for retro-reflective objects ($\beta$ in [100-255]), while for diffuse targets ($\beta$ in [0-100]) the failure rate increases with distance. Since the dataset contains more samples at short distances, the overall failure rate is 7% on the total dataset.

For the non-failed predictions, we take the absolute difference between the prediction and the real value as the predictive error. The results are shown in Tab. V. Similar to the failure rates, the prediction errors increase with distance, particularly for low-reflectivity targets. Meanwhile, the prediction errors for retro-reflective targets ($\beta$ in [100-255]) are stable at around 2 meters. The fact that low-reflectivity targets are more prone to disturbance than retro-reflective ones explains this effect: the ranging process for long-range, weakly reflective targets is much noisier than for short-range, strongly reflective ones.

VI Conclusion

In this paper, experimental results of a typical ToF LiDAR in fog are demonstrated. Starting from the ranging principle, the factors impacting a ToF LiDAR in fog are investigated. Furthermore, we quantitatively evaluate the experimental results and propose the concept of "disappear visibility". Following a data-driven principle, we use a Gaussian process to model the distribution of the disappear visibility. This method is meaningful for evaluating the safety of a LiDAR-based autonomous vehicle in fog conditions.

In the future, we want to test more targets (especially ones with reflectivities between 50 and 100) at longer distances, up to more than 100m. Also, more factors impacting the disappear visibility, such as the incident angle between the laser and the target's surface, will be considered.

Acknowledgment

This research was funded by the European Union under the H2020 ECSEL Programme as part of the DENSE project (Grant Agreement ID: 692449). DENSE is a joint European project which is sponsored by the European Commission under a joint undertaking. The project was also supported by Groupe RENAULT. We gratefully acknowledge the support from CEREMA and Velodyne.

References

  • [1] I. Ashraf and Y. Park (2018) Effects of fog attenuation on lidar data in urban environment. In SPIE Smart Photonic and Optoelectronic Integrated Circuits, Cited by: §I, §II.
  • [2] M. Beer, J. F. Haase, J. Ruskowski, and R. Kokozinski (2018) Background light rejection in SPAD-based LiDAR sensors by adaptive photon coincidence detection. Sensors 12, pp. 4338–4354. Cited by: §III-A2.
  • [3] F. Bernardin, R. Bremond, V. Ledoux, M. Pinto, S. Lemonnier, V. Cavallo, and M. Colomb (2014) Measuring the effect of the rainfall on the windshield in terms of visual performance. Accident Analysis and Prevention 63, pp. 83–88. Cited by: §IV-A.
  • [4] M. Bijelic, T. Gruber, and W. Ritter (2018) A benchmark for lidar sensors in fog: is detection breaking down?. In IEEE Intelligent Vehicles Symposium (IV), Cited by: §I, §II.
  • [5] M. Bijelic, F. Mannan, T. Gruber, W. Ritter, K. Dietmayer, and F. Heide (2019) Seeing Through Fog Without Seeing Fog: Deep Sensor Fusion in the Absence of Labeled Training Data. CoRR abs/1902.0. External Links: 1902.08913, Link Cited by: §IV-A.
  • [6] M. Colomb, K. Hirech, P. André, J.J. Boreux, P. Lacôte, and J. Dufour (2008) An innovative artificial fog production device improved in the European project “FOG”. Atmospheric Research 87, pp. 242–251. Cited by: §IV-A.
  • [7] K. Dahmane, P. Duthon, F. Bernardin, M. Colomb, N. Essoukri Ben Amara, and F. Chausse (2016) The Cerema pedestrian database : A specific database in adverse weather conditions to evaluate computer vision pedestrian detectors. In 7th Conference on Sciences of Electronics, Technologies of Information and Telecommunications (SETIT), Cited by: §IV-A.
  • [8] P. Duthon, F. Bernardin, F. Chausse, and M. Colomb (2016) Methodology used to evaluate computer vision algorithms in adverse weather conditions.. In Proceedings of 6th Transport Research Arena, Cited by: §IV-A.
  • [9] A. Filgueira, H. Gonzalez-Jorge, S. Laguela, L. Diaz-Vilarino, and P. Arias (2017) Quantifying the influence of rain in lidar performance. Measurement 95, pp. 143–148. Cited by: §II.
  • [10] C. Goodin et al. (2019) Predicting the Influence of Rain on LIDAR in ADAS. Electronics 8, pp. 89–98. Cited by: §II.
  • [11] B. Hassler (1998-12) Atmospheric transmission models for infrared wavelengths. Ph.D. Thesis, Linkoping University. Cited by: §III-B.
  • [12] IEC 60825-1 Safety of laser products – Part 1: Equipment classification and requirements. International Electrotechnical Commission. Cited by: §III-A2.
  • [13] M. Ijaz, Z. Ghassemlooy, and E. Bentley (2013) Modeling of fog and smoke attenuation in free space optical communications link under controlled laboratory conditions. Journal of Lightwave Technology 31, pp. 1720–1726. Cited by: §II.
  • [14] M. Jokela, M. Kutila, and P. Pyykönen (2019) Testing and validation of automotive point-cloud sensors in adverse weather conditions. Applied Sciences 9, pp. 2341–2355. Cited by: §II.
  • [15] S. Kraemer et al. (2018) LiDAR based object tracking and shape estimation using polylines and free-space information. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Cited by: §I.
  • [16] M. Kutila, P. Pyykonen, H. Holzhuter, M. Colomb, and P. Duthon (2018) Automotive LiDAR performance verification in fog and rain. In 21st IEEE International Conference on Intelligent Transportation Systems (ITSC), Cited by: §II, §IV-A.
  • [17] J. Lekner and M. C. Dorf (1988) Why some things are darker when wet. Applied Optics 27, pp. 1278–1280. Cited by: 2nd item, §III-B.
  • [18] J. L. Leonard. (2008) A perception-driven autonomous urban vehicle. Journal of Field Robotics 25, pp. 727–774. Cited by: §I.
  • [19] Cited by: §III-A2.
  • [20] T. Ogawa and G. Wanielik (2016) ToF-LiDAR signal processing using the CFAR detector. Advances in Radio Science 14, pp. 161–167. Cited by: §III-A2.
  • [21] J. Pascoal, L. Marques, and A. Almeida (2008) Assessment of laser range finders in risky environments. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Cited by: §II.
  • [22] N. Pinchon, O. Cassignol, A. Nicolas, F. Bernardin, P. Leduc, R. Bremond, G. Julien, and J. P. Tarel (2016) All-weather vision for automotive safety: which spectral band?. In International Forum on Advanced Microsystems for Automotive Applications, Cited by: §IV-A.
  • [23] C. E. Rasmussen and C. Williams (2006) Gaussian processes for machine learning. The MIT Press. Cited by: §V-D1.
  • [24] R. H. Rasshofer, M. Spies, and H. Spies (2011) Influences of weather phenomena on automotive laser radar systems. Advances in Radio Science 9, pp. 49–60. Cited by: §II, 1st item, §III-A1.
  • [25] J. Ryde and N. Hillier (2009) Performance of Laser and Radar ranging device in adverse environmental conditions. Journal of Field Robotics 26, pp. 712–727. Cited by: §I, §II.
  • [26] M. L. Stein (1999) Interpolation of spatial data. Springer. Cited by: §V-D2.
  • [27] U. Wandinger (2005) Introduction to LiDAR. Springer Series in Optical Sciences, Springer 102. Cited by: §III-A1.
  • [28] A. D. Whalen (1971) Detection of signals in noise. Academic Press. Cited by: §III-A2.
  • [29] J. Wojtanowski and M. Kaszczuk (2014) Comparison of 905nm and 1550nm semiconductor laser rangefinders' performance deterioration due to adverse environmental conditions. Opto-Electronics Review 22, pp. 183–190. Cited by: §II, 1st item.
  • [30] J. Zhang and S. Singh (2014-07) LOAM: lidar Odometry and Mapping in Real-time. In Robotics: Science and Systems, Cited by: §I.