Simultaneous localization and mapping (SLAM) is widely used by robots in industrial, service, and rescue applications, to name a few. To improve SLAM results, light detection and ranging (Lidar) is used for its high accuracy, long detection range, and high stability. However, reflective materials like glass and mirrors cause problems for Lidar. They can not only make the sensor report a wrong range to the reflecting obstacle, potentially causing a collision, but also produce wrong reflected points, which introduce errors into the maps generated by a SLAM algorithm. For example, Fig 1 shows a SLAM result in a scene full of glass. The red points were reflected by the window and are placed outside of the room by the SLAM algorithm, which is oblivious to the reflection. Obviously, those reflected points significantly degrade the map quality. Reflection detection in Lidar is therefore very important for robotics and especially for SLAM. Detecting and removing the reflections can improve the map quality of SLAM and the safety of autonomous robots.
The paper is organized as follows. Section I introduces the background and related work. Section II introduces the sensor and reflection modeling. In Section III, we present methods to detect and remove reflections and build the map. Experiments, results, and discussion are presented in Section IV. Finally, conclusions are given in Section V.
The experiments and data collection are done with the MARS Jackal Mapper, a fully hardware-level synchronized mapping robot platform for 3D mapping and SLAM built by the ShanghaiTech Mobile Autonomous Robotic Systems Lab (MARS Lab). The robot features powerful onboard computation, RGB cameras, IMUs, robot odometry, and two Velodyne HDL-32E Lidars (one installed vertically and the other horizontally). For reflection detection, however, we only use data from the horizontally scanning Velodyne.
We recorded the test and experimental data by driving the robot through the second floor of the STAR Center (ShanghaiTech Automation and Robotics Center). The map of the STAR Center and the path of the robot are shown in Fig 2. The red line approximates the robot path and the blue lines mark the main glass panes, while some small windows and glass panels are not marked. The data covers different kinds of reflection scenes, including windows, glass railings, glass doors, and floor-to-ceiling windows. These are common situations that usually cause problems for indoor SLAM. We use this data to test our detection method and evaluate the results.
I-B Related Works
There are already a number of papers on reflection detection in Lidar. Most of them use 2D Lidar. Others post-process data acquired by a 3D laser scanner together with a camera. Little work has been done on reflection detection using 3D Lidars such as the Velodyne scanners.
The approaches of these works can be clustered into four categories. The first detects reflection via material characteristics: different materials behave differently when illuminated by a laser, including reflection, scattering, and absorption, and these characteristics help detect reflective surfaces. The second uses mirror symmetry: a reflected point follows the rule of specular mirroring, which can be used to detect mirrors. The third detects common phenomena caused by reflection, for example a square shape with a hole in its center, or a gap at the edge of the reflective material. The last uses special sensor readings, such as multi-echo Lidar returns and intensity values.
Shao-Wen Yang and Chieh-Chih Wang used mirror symmetry in 2D Lidar to resolve mirror reflections. They first use a distance-based criterion to determine gaps in a laser scan. Then they use a Gaussian model to predict a potential mirror. After that, they use the Euclidean distance function to calculate the likelihood of the mirror for verification. Finally, they use ICP to match the reflected points and find the mirror.
Rainer Koch and his group have done work with 2D multi-echo Lidar. They used a Hokuyo 30LX-EW Lidar, which records up to three echoes of the returning light wave, with distances and intensities. They first integrate a pre-filter and a post-filter into TSD SLAM to detect reflections in 2D SLAM. They also use reflective intensity values to detect transparent and specular reflective materials and improve their 2D SLAM. For 3D mapping they rotated the 2D Lidar by mounting it on a motor.
Ruisheng Wang used a Velodyne 3D Lidar to detect windows while driving outdoors. He first clustered the Lidar points and then detected the facades of the buildings. After that he calculated surface normals using PCA to detect potential windows. Finally, he projected potential window points to localize the windows.
Jae-Seong Yun and Jae-Young Sim propose a method to remove reflections from large-scale 3D point clouds. They find glass points on the unit sphere according to the number of echo pulses. Then they estimate the reliability of each detected glass point and use reflection symmetry to remove the virtual points.
II Sensor Modeling
II-A Lidar Sensor
Light detection and ranging (Lidar) uses light to measure the range of obstacles. The Lidar sensor we use is a Velodyne HDL-32E, widely used in SLAM and autonomous driving. The sensor has three modes of processing the laser beam pulses: dual return mode, strongest return mode, and last return mode. Upon receiving the light from a single laser beam in one direction, the sensor analyzes the strongest and the last return and calculates their distance and intensity. The sensor then returns values according to the chosen mode. In dual return mode it reports both the strongest and the last return. If the strongest return is the same as the last return, the second-strongest return is reported as the strongest. In any case, whether a return is the strongest or the last, it needs sufficient intensity or it is ignored. This feature is useful for reflection detection.
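This return-selection behavior can be sketched as follows. The function below is an illustrative simplification (the real sensor operates on analog pulse peaks, and the name `dual_return` and the intensity floor are our own):

```python
def dual_return(echoes, min_intensity=1.0):
    """Pick the reports of a dual-return Lidar from candidate echoes.

    `echoes` is a list of (distance, intensity) pairs for one beam.
    Echoes below `min_intensity` are dropped, mimicking the sensor's
    intensity floor. Returns (strongest, last); if the strongest echo
    is also the last, the second-strongest is reported as strongest.
    """
    valid = [e for e in echoes if e[1] >= min_intensity]
    if not valid:
        return None, None
    last = max(valid, key=lambda e: e[0])           # farthest echo
    by_intensity = sorted(valid, key=lambda e: e[1], reverse=True)
    strongest = by_intensity[0]
    if strongest == last and len(by_intensity) > 1:
        strongest = by_intensity[1]                 # second strongest
    return strongest, last
```

For the three-peak example of Fig (c), the glass echo is dropped from the report whenever it is neither strongest nor last, exactly as described above.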
II-B Reflection Model of Different Materials
Different materials have different reflectivities and optical properties. Normal materials like wood, walls, or clothes mainly reflect laser light diffusely, with little absorption and specular reflection, which is ideal for Lidars. Reflective materials such as mirrors or glass specularly reflect the incident laser light. Glass mainly exhibits specular reflection and transmission, with a little diffuse reflection of the laser light. Using these optical properties we can build a reflection model of how laser light interacts with glass.
Work on lasers hitting different materials has been performed. Research on the behavior of light hitting glass at different angles is presented in  and . Fig (a) shows the reflection model of a laser beam interacting with glass. When the laser beam hits the glass, there are three possible outcomes for the returning laser beam. When the laser beam hits the glass almost perpendicularly, the received intensity is high; this is the peak of the return intensity. As the angle of incidence decreases, the intensity drops quickly. As shown in Fig (b), if the angle of incidence decreases to a certain degree (also depending on the distance), the return intensity becomes too low to detect. As glass is transmissive, some of the light passes through it. If there is something behind the glass, it diffusely reflects the light, and the returning light can pass through the glass again to the receiver. So a Lidar can measure the distance of objects behind the glass, with intensity weakened by passing through the window twice. Glass also reflects specularly, so some of the light is reflected by the glass at the angle of incidence. If the specularly reflected light hits something in front of the glass, it is reflected back along the reverse path to the receiver, and thus the Lidar may get a reflected reading of an object, with decreased intensity. The sensor, unaware of the reflection, reports the reflected point as located behind the glass.
Fig (c) shows an example of the dual return mode of the Velodyne 3D laser. In this case the sensor measures three peaks. The first return is from the glass, because it is the nearest and thus its time of flight is the shortest. The second return is from the obstacle in front of the glass, and in this case it is the strongest. The last return is from the light that passes through the glass and is reflected by the obstacle behind it; it is last in this example because the path to the obstacle behind the glass is longer than the path to the obstacle in front of it. So in this example, the strongest return gives a reflected point and the last return gives the object behind the glass, while the glass itself is ignored because it is neither the strongest nor the last return.
From this analysis, we can conclude that: 1. The return from the glass must be the nearest point of either the strongest or the last return. Moreover, if there is more than one return, the return from the glass can only be the strongest point, not the last point. 2. The intensity returned from the glass is strongest when the incident laser beam is perpendicular to the glass and decreases as the angle of incidence decreases. 3. Ignoring special conditions such as fog or smoke, if the strongest and last points differ, there must be a piece of glass in this direction (except for a few points with measurement errors, which may have differing strongest and last points even without glass).
We identified two phenomena as useful for reflection detection, which we motivate below by analyzing example scans.
II-C Intensity Peak Analysis
As mentioned before, if there is more than one return, the return from the glass can only appear in the strongest point cloud, not in the last point cloud. So for this method we only analyze the strongest point cloud.
The first point cloud is a classroom with some windows, marked as No. 1 on the robot path in Fig 2. In this scene there is no intensity peak: the robot is lower than the windows, which prevents the Velodyne from firing perpendicularly at them.
The second point cloud, shown in Fig (a), is a corridor with a glass railing, marked as No. 2 on the robot path in Fig 2. Since the background obstacles return enough intensity, the glass can only be observed in the strongest point cloud as an intensity peak where it is perpendicular to the Velodyne.
The third point cloud, shown in Fig (b), is a corridor with many floor-to-ceiling windows, No. 3 on the robot path in Fig 2. The background obstacles are far enough away that they do not return enough intensity. As a result, most of the glass can be observed, with the intensity peak at its center.
These three examples show that an intensity peak indeed occurs when the sensor is perpendicular to the reflective material. However, this cue is not always available for a robot that is not tall enough, since it may not be able to fire laser beams perpendicularly at the glass.
II-D Dual Return Reflection Analysis
To highlight the dual return analysis we captured two point clouds at two different places and separated the last and the strongest point clouds. For the analysis, we manually selected the windows and removed (cropped) the points in front of the windows. Then we counted all remaining points: the points on the windows, the wrongly reflected points, and the points of obstacles behind the windows.
In the strongest point cloud of the cropped classroom scene, about 25% of the points are on the windows, within roughly -20 to +20 degrees of perpendicular to the glass. About 45% of the points are reflected points and about 30% are outside obstacles. In the last point cloud, about 1% of the points are on windows, about 30% are reflected points, and 69% are outside obstacles.
In the strongest point cloud of the corridor scene, about 35% of the points are on the windows, within roughly -16 to +16 degrees of perpendicular to the glass. About 16% of the points are reflected points and about 50% are outside obstacles. In the last point cloud, about 10% of the points are on windows, about 5% are reflected points, and 85% are outside obstacles.
From these two scene analyses we observe the following phenomena: 1. The last point cloud has fewer points on windows than the strongest point cloud. 2. The last point cloud has fewer reflected points than the strongest point cloud. 3. The last point cloud has more outside obstacle points than the strongest point cloud.
We define the set of pure glass points to be G, the set of reflected points to be R, the set of outdoor obstacle points to be O, the set of normal, unreflected indoor points to be I, the set of last return points to be L, and the set of strongest return points to be S.
First we consider the case where, measured from the glass, the outdoor obstacle is farther than the mirrored indoor obstacle. Along such a beam the glass return is nearest, the reflected point lies between the glass and the outdoor obstacle, and the outdoor obstacle produces the last return: if a point is in L and differs from the strongest return, it is in O; if a point is in S and differs from the last return, it is in G or R; if the strongest and last returns coincide, the point is in O. When S and L differ, we have S ⊆ G ∪ R and L ⊆ O.
Secondly, if, again measured from the glass, the outdoor obstacle is closer than the mirrored indoor obstacle, the reflected point produces the last return: if a point is in L and differs from the strongest return, it is in R; if a point is in S and differs from the last return, it is in G or O; if the strongest and last returns coincide, the point is in R. When S and L differ, we have S ⊆ G ∪ O and L ⊆ R.
Since we can use the phenomena above to detect the glass, and we gain knowledge of the depth of the scene and of the outdoor obstacles through the scan, we may be able to classify the points in a scan into either O or R. Once we have the coefficients of the reflective plane fitted using G, we can mirror the reflections back to where the real objects are, achieving greater scan point utilization and actually also mapping areas not in the line of sight of the scanner.
Following the phenomena and rules concluded above, we design a pipeline to process the point cloud and detect reflections.
Process Every Scan:
1. Process the Velodyne packet and convert it to organized point clouds.
2. Detect reflection using two approaches:
   a. Intensity Peak: find an intensity peak and obtain a perpendicular infinite plane.
   b. Dual Return: choose the nearer of differing strongest/last points and fit an infinite plane.
3. Find Boundaries: get the boundaries of the reflective infinite plane based on the frame.
4. Classify Points and Mirror Reflected Points.
5. Integrate into SLAM Framework: use the filtered point cloud to do SLAM and obtain the transform.
III-A Process Velodyne Data Packet and Convert to Organized Point Cloud
We use PCL to store, reorder, and process the point clouds. We first separate the packets from the Velodyne into strongest and last returns. These point clouds are unorganized. An organized point cloud can be considered a 2D matrix. The Velodyne HDL-32E has 32 rings, so we set the number of rows to 32. The azimuth step is 2.79 mrad and there are about 2251 points per ring, so the size of the organized point cloud is 32 × 2251. We organize these point clouds according to azimuth and channel id.
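The organization step can be sketched in numpy (our implementation uses PCL; the function name and point-tuple layout below are illustrative):

```python
import numpy as np

RINGS, COLS = 32, 2251             # HDL-32E: 32 rings, ~2251 azimuth steps
AZ_STEP = 2 * np.pi / COLS         # ≈ 2.79 mrad per step

def organize(points):
    """Scatter an unorganized cloud into a RINGS x COLS x 4 array.

    `points` is an iterable of (x, y, z, intensity, ring) tuples.
    Each point is binned by its azimuth angle; cells with no
    measurement stay NaN.
    """
    cloud = np.full((RINGS, COLS, 4), np.nan)
    for x, y, z, intensity, ring in points:
        az = np.arctan2(y, x) % (2 * np.pi)      # azimuth in [0, 2*pi)
        col = int(az / AZ_STEP) % COLS
        cloud[ring, col] = (x, y, z, intensity)
    return cloud
```

With this layout, vertical neighbors of a point (same azimuth, adjacent rings) are simply adjacent rows of the matrix, which the intensity peak verification below relies on.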
III-B Intensity Peak
To find the intensity peak, we first choose the horizontal ring of the Velodyne among the 32 rings of the whole organized point cloud and traverse all points in the ring. We check whether the intensity increases from a low threshold to a maximum threshold and then decreases in the same way. Also, the distance between two adjacent points in the sequence should be less than a small threshold, because the points of the intensity peak in the horizontal ring on the glass are close together. If there is a gap in distance, the sequence is invalid or ends at the gap. There may be multiple potential peaks, so we verify the points near each potential peak: we choose the points at the same azimuth in the two rings above and the two rings below and check that the intensity follows the same pattern vertically. If this is verified, we store these points and later fit a plane through them. Algorithm 1 below describes this procedure.
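The per-ring traversal can be sketched as follows. This is a simplified single-ring version without the vertical verification step, and the threshold values are illustrative, not the paper's:

```python
def find_intensity_peaks(ring, low=10.0, high=80.0, max_gap=0.3):
    """Find candidate glass peaks in one Lidar ring.

    `ring` is a sequence of (distance, intensity) pairs ordered by
    azimuth. A valid peak rises from below `low` past `high` and
    falls back below `low`, while consecutive points stay within
    `max_gap` metres of each other (points on glass are spatially
    close). Returns (start, end) index ranges of candidate peaks.
    """
    peaks, start, seen_high = [], None, False
    for i in range(1, len(ring)):
        gap = abs(ring[i][0] - ring[i - 1][0])
        if gap > max_gap:                 # a distance gap ends the run
            start, seen_high = None, False
            continue
        inten = ring[i][1]
        if start is None:
            if inten >= low:
                start = i                 # sequence begins rising
        elif inten >= high:
            seen_high = True              # reached the max threshold
        elif inten < low:
            if seen_high:
                peaks.append((start, i))  # rose past high, fell back
            start, seen_high = None, False
    return peaks
```

Each returned range marks points to keep for the subsequent plane fit.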
We use random sample consensus (RANSAC) to fit the planes, obtaining the inliers of the plane as well as the plane parameters.
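In our implementation this is done with PCL's RANSAC segmentation; the idea can be illustrated with a minimal numpy sketch (iteration count and inlier tolerance are illustrative values):

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Fit a plane a*x + b*y + c*z + d = 0 to an N x 3 array by RANSAC.

    Returns (coeffs, inlier_mask); `tol` is the inlier distance in
    metres and (a, b, c) is a unit normal.
    """
    rng = np.random.default_rng(seed)
    best_coeffs, best_inliers = None, np.zeros(len(points), bool)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                     # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)  # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_coeffs = np.append(normal, d)
            best_inliers = inliers
    return best_coeffs, best_inliers
```

The same routine serves both the intensity peak and the dual return branches of the pipeline.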
III-C Dual Return
As described in the dual return analysis, we can use the dual return method to find reflections. First, we use both organized point clouds separated from the dual return packets. In each ring, we check whether the strongest and last points of the same beam are the same. If they are, we keep them. If not, there is glass in that direction, and the closer of the strongest and last points may be the one on the glass. We use RANSAC to fit the planes, which only succeeds if a plane has enough inliers. Algorithm 2 below describes this procedure.
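The per-beam comparison can be sketched as follows (a vectorized illustration; the function name and the disagreement tolerance `eps` are our own):

```python
import numpy as np

def glass_candidates(strongest, last, eps=0.1):
    """Find beams whose strongest and last returns disagree.

    `strongest` and `last` are N x 3 arrays of points, row i holding
    the two returns of beam i (NaN rows for missing returns, which
    compare as False and are skipped). Where the returns differ by
    more than `eps` metres there is glass in that direction, and the
    nearer of the two returns is the glass candidate.
    """
    diff = np.linalg.norm(strongest - last, axis=1) > eps
    r_s = np.linalg.norm(strongest, axis=1)
    r_l = np.linalg.norm(last, axis=1)
    nearer = np.where((r_s < r_l)[:, None], strongest, last)
    return nearer[diff], diff
```

The candidate points are then handed to the RANSAC plane fit, and the boolean mask records the scan directions that contain glass.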
III-D Find Boundary
Through the two methods above we obtain planes from the intensity peak, planes from the dual return, and the angles that contain glass from the dual return. Using this information we can estimate the boundaries of the glass. We hypothesize that the glass is installed in a metal frame or in a wall, so it should have a frame-like boundary. The Velodyne only has 32 rings vertically, so the upper and lower boundaries may not be observed directly, but the left and right boundaries can be observed if there is a line of sight (LoS). Since we have the planes, we search for the leftmost and rightmost points that are near the plane and in the direction of the glass, and choose the nearest such boundary for each plane as the left and right boundary. For the upper and lower boundaries, we check the angles that contain glass in each ring and choose the uppermost (lowermost) ring containing glass as the upper (lower) boundary.
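The ring/azimuth bookkeeping can be sketched as follows. This is a simplified illustration: it only takes the extreme flagged indices, whereas the actual method additionally searches for the nearest boundary points near the fitted plane:

```python
def glass_boundaries(glass_flags):
    """Estimate the glass extent from per-ring glass angle flags.

    `glass_flags[ring]` lists the azimuth indices in which the dual
    return test reported glass for that ring. The lowermost and
    uppermost rings containing glass give the vertical boundary; the
    extreme azimuth indices give the left/right boundary.
    """
    rings = [r for r, cols in enumerate(glass_flags) if cols]
    if not rings:
        return None                       # no glass seen in any ring
    left = min(min(cols) for cols in glass_flags if cols)
    right = max(max(cols) for cols in glass_flags if cols)
    return {"lower": rings[0], "upper": rings[-1],
            "left": left, "right": right}
```

The resulting boundary rectangle, intersected with the fitted infinite plane, bounds the region used for point classification.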
III-E Classify Points
Having found glass with boundaries, we need to classify all the points. We classify the points as inside objects I, glass G, reflected points R, and obstacles behind the glass O. The points lying on the glass plane within its boundary are G. The points in front of the glass plane are I, and the points behind the glass consist of both R and O. Since we detected the glass, it is simple to separate I and G, but it is not easy to distinguish R and O. Here we propose a three-step method to decide whether a point is in R or in O. We classify points using all detected glass planes and ignore a plane if the number of outside points classified by that plane is too low.
The first and second steps determine the points definitely belonging to O. In the first step, we mirror the inside points against the glass plane; outside points farther away than the mirrored points are considered outside obstacles O. In the second step, we mirror the points behind the glass back against the glass plane (to the inside) and trace the laser beams in the direction of the mirrored points to check whether there is another point farther along; if so, the origin of that mirrored point is an outside obstacle. The third step is identical to the second, except that we now have the outside obstacle points, so we can determine further points in O by tracing the laser beams in the direction of the outside points and checking whether there is another point farther along the same direction. The third step can be confused by multiple layers of parallel glass (as in an enclosed balcony), but fortunately in most cases such interfering points do not have enough intensity, so we can ignore them.
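Step one can be sketched as follows, under the simplifying assumption that ranges are compared globally rather than per beam direction (the paper compares along each beam; function names are illustrative):

```python
import numpy as np

def mirror(points, plane):
    """Reflect N x 3 `points` across plane (a, b, c, d), |(a,b,c)| = 1."""
    n, d = np.asarray(plane[:3]), plane[3]
    return points - 2 * (points @ n + d)[:, None] * n

def step_one_outside(inside, behind, plane):
    """Step one (coarse range-only sketch): points behind the glass
    that are farther from the sensor than every mirrored inside point
    are marked as outside obstacles O."""
    horizon = np.linalg.norm(mirror(inside, plane), axis=1).max()
    return np.linalg.norm(behind, axis=1) > horizon
```

Steps two and three refine this mask by ray tracing in the organized point cloud, checking for a farther point along the same beam direction.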
III-F Reflection Back Mirroring
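The back-mirroring step reflects each detected reflection point across the fitted glass plane, placing the virtual point where the real object stands. A minimal sketch, assuming plane coefficients (a, b, c, d) with (a, b, c) a unit normal:

```python
import numpy as np

def mirror_back(points, plane):
    """Mirror reflected points back across the glass plane.

    `plane` holds the coefficients (a, b, c, d) of
    a*x + b*y + c*z + d = 0 with (a, b, c) of unit length. Each
    point p maps to p' = p - 2 * (p . n + d) * n, the Householder
    reflection across the plane.
    """
    n = np.asarray(plane[:3], float)
    d = float(plane[3])
    points = np.asarray(points, float)
    return points - 2 * (points @ n + d)[:, None] * n
```

Applying `mirror_back` to the points classified as R recovers surfaces that are out of the scanner's direct line of sight, such as the back sides of the pillars in the classroom scene.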
III-G Integration to SLAM Framework
To understand how our method can improve SLAM quality, we integrated it into a SLAM framework. For 3D SLAM we use HDL Graph SLAM, a state-of-the-art SLAM method that is robust in different environments. We add the inside points, the mirrored points, the glass points, and the outside obstacle points to the SLAM point cloud. We modified HDL Graph SLAM to run on this SLAM point cloud and obtain the transform. We save the positions of the detected glass planes with the transform in the map frame. When the previous methods cannot find a glass plane, we use the saved glass planes, based on the current pose, to classify the points. After SLAM we have a good-quality map without glass interference and with points classified into different labels. We can also use the SLAM transform to build a map of the glass for obstacle avoidance and further use.
IV Experiments and Discussion
We recorded the experiment data by driving the robot around the second floor of the STAR Center. The map and the path of the robot are shown in Fig 2. The experiment data includes the raw Velodyne packets and the original dual point clouds at 10 Hz.
Photos of the three scenes in the map are shown in the following. Fig (a) is a classroom with some windows, Fig (b) is a corridor with a glass railing, and Fig (c) is a corridor with many floor-to-ceiling windows.
IV-A Glass Detection and Point Classification
We evaluated our glass detection and point classification in the three scenes. Fig (a) and Fig (c) show the result of this procedure with points colored according to their classification. The black points are the real objects inside, the green points are glass, the blue points are obstacles outside, and the red points are the reflections mirrored back against the glass. As shown in Fig (a), the red points almost perfectly match the pillars inside, so we actually achieved mapping of the back side of the pillars via the reflection in the windows! To demonstrate how we use the intensity peak to find a glass plane, we colored the points in the lower part of Fig (b) according to their intensities; there is an obvious intensity peak in the lower part of this scan. We fitted and marked the glass plane using the points in the intensity peak and found its boundary using the gaps. The points in the upper part of this subfigure are colored red to represent the points behind the glass. Moreover, the metal frame of the glass railing is also reflective and can sometimes cause false positives for the intensity peak method, but these false positives are discarded when classifying the points.
IV-B SLAM Experiment
We integrated the method into HDL Graph SLAM and ran the mapping twice on all the recorded data, first using the original dual point cloud and then using the SLAM point cloud described above. For comparison and statistics, we labeled the points in the original SLAM manually.
Table I: Point counts in the different SLAM results.

  Outside obstacle points                164,336     8.09%
  Inside obstacle points               1,792,397    88.26%
  Unclassified reflection points           2,799     0.16%
  Inside obstacle points               1,672,622    94.72%
  Outside obstacle points                 63,528     3.60%
  – correctly marked outside points       47,601     2.70%
Fig (a) is the SLAM result with the original point cloud. Fig (b) is the SLAM result with the classified point cloud. Table I shows the counts of points in the different SLAM results. From the results, we find that the original SLAM has far more reflection points than the classified SLAM, which shows that our method is effective. Most of the glass points are detected and classified. Many reflection points have been mirrored back in the classified result, and the reflections of the ceiling are also detected successfully, which makes it possible to know the room height from just one scan that is not supposed to see the ceiling. However, there are still a few unfiltered reflection points in the classified results. This is because we may not observe the full glass pane and can only find its nearest boundary, so we cannot remove reflections outside the boundary of the glass. For the obstacles in the classified SLAM, our method marks points behind the glass railing as outside obstacles. Some outside obstacles are not marked because those points have the same distance in the strongest and last point clouds and thus cannot be classified. Most importantly, the mirrored reflection points successfully map objects not in the line of sight; for example, the back side of the three pillars and the ceiling in the classroom cannot be observed in the original SLAM.
We developed a method for reflection detection and utilization using a dual return Lidar, which is useful for indoor SLAM. The method is able to detect different reflective materials, such as glass railings, glass doors, and floor-to-ceiling windows. The intensity peak approach can successfully detect the glass. Using the dual return method, we can successfully find the glass plane and the range of scan angles that contains glass. Classification of the points is achieved with the boundary of the glass fitted through the detected planes. This classification also enables us to mirror the reflected points back, achieving mapping of objects outside the line of sight and field of view. Finally, integrating our method into an existing SLAM framework demonstrated its usefulness and the improvement in map quality.
-  (2016) Past, present, and future of simultaneous localization and mapping: toward the robust-perception age. IEEE Transactions on robotics 32 (6), pp. 1309–1332. Cited by: §I.
-  (2019) Towards generation and evaluation of comprehensive mapping robot datasets. In Workshop on Dataset Generation and Benchmarking of SLAM Algorithms for Robotics and VR/AR, 2019 IEEE International Conference on Robotics and Automation (ICRA), Cited by: §I-A.
-  (1981-06) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24 (6), pp. 381–395. External Links: Cited by: §III-B.
-  (2013) Visagge: visible angle grid for glass environments. In Robotics and Automation (ICRA), 2013 IEEE International Conference on, pp. 2213–2220. Cited by: §II-B.
-  (1958) Unitary triangularization of a nonsymmetric matrix. Journal of the ACM (JACM) 5 (4), pp. 339–342. Cited by: §III-F.
-  (2016) Localization of a mobile robot using a laser range finder in a glass-walled environment. IEEE Transactions on Industrial Electronics 63 (6), pp. 3616–3627. Cited by: §II-B.
-  (2017) Detection and purging of specular reflective and transparent object influences in 3d range measurements. ISPRS Arch 42, pp. 377–384. Cited by: §I-B.
-  (2016) Detection of specular reflections in range measurements for faultless robotic slam. In Robot 2015: Second Iberian Robotics Conference, pp. 133–145. Cited by: §I-B.
-  (2017) Identification of transparent and specular reflective material in laser scans to discriminate affected measurements for faultless robotic slam. Robotics and Autonomous Systems 87, pp. 296–312. Cited by: §I-B, §II-B.
-  (2019-02) A portable 3d lidar-based system for long-term and wide-area people behavior measurement. International Journal of Advanced Robotic Systems 16, pp. . External Links: Cited by: §III-G.
-  (2011-May 9-13) 3D is here: Point Cloud Library (PCL). In IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China. Cited by: §III-A.
-  (2011) Window detection from mobile lidar data. In Applications of Computer Vision (WACV), 2011 IEEE Workshop on, pp. 58–65. Cited by: §I-B.
-  (2011) On solving mirror reflection in lidar sensing. IEEE ASME Transactions on Mechatronics 16 (2), pp. 255. Cited by: §I-B.
-  (2018) Reflection removal for large-scale 3d point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4597–4605. Cited by: §I-B.