A Point Cloud-Based Method for Automatic Groove Detection and Trajectory Generation of Robotic Arc Welding Tasks

04/26/2020 ∙ by Rui Peng, et al. ∙ IEEE

In this paper, in order to pursue high-efficiency robotic arc welding tasks, we propose a method based on point cloud acquired by an RGB-D sensor. The method consists of two parts: welding groove detection and 3D welding trajectory generation. The actual welding scene could be displayed in 3D point cloud format. Focusing on the geometric feature of the welding groove, the detection algorithm is capable of adapting well to different welding workpieces with a V-type welding groove. Meanwhile, a 3D welding trajectory involving 6-DOF poses of the welding groove for robotic manipulator motion is generated. With an acceptable error in trajectory generation, the robotic manipulator could drive the welding torch to follow the trajectory and execute welding tasks. In this paper, details of the integrated robotic system are also presented. Experimental results prove application value of the presented welding robotic system.







I Introduction

Nowadays, industrial robotic manipulators are used extensively in factories around the world, with the teach-playback method dominating the robotic welding field. More specifically, in order to manipulate a robotic arm for welding tasks, a human operator needs to set every path point, with precise 3D position and 3D orientation, in advance. Although CAD-based approaches [norberto2004cad] can generate rather accurate welding trajectories, they still require a great deal of human intervention, as does the teach-playback method. These conventional robotic welding applications are already unable to cope with the growing welding demand of the construction industry.

To improve robotic welding efficiency, sensors are used to assist manipulators in automatically locating the welding groove. Prevailing sensors include vision sensors [li2009measurement, liu2014iterative, diao2017passive], RGB-D sensors [li2016new, rodriguez2017feasibility, ahmed2016object], infrared sensors, and laser sensors. Laser sensors, however, are much more expensive than vision sensors, so vision sensors are more appealing for cost-conscious research. During actual robotic welding tasks, it is essential to plan a high-precision trajectory [ma2010robot] along the welding groove between two workpieces to achieve acceptable welding quality in complex and unpredictable situations. Recently, 2D imaging-based methods have become useful in assisting robotic welding in various industrial environments [rao2018real], but for common welding tasks, 2D vision sensors capture only a single image frame of the workpieces. An image acquisition system using a CCD video camera was established by L. Nele et al. [nele2013image] for real-time weld seam tracking.

However, 2D image processing algorithms, which rely on color information, cannot deal with dramatic environmental brightness variation, which is particularly severe in welding fields. Researchers therefore prefer 3D vision sensors, which provide abundant information about the welding environment. Furthermore, with the advancement of the Point Cloud Library (PCL) [rusu20113d], specifically designed for 3D point cloud processing, it is possible to extract and locate the weld seam region in the surface point cloud of welding workpieces. A stereo camera, as a 3D sensor, has been used to realize weld seam reconstruction and tracking [xu2017welding], with tracking and planning accuracy good enough for welding requirements. An unorganized point cloud-based edge and corner recognition approach [ahmed2018edge] has been proven applicable to robotic welding, with accuracy beyond several related 3D point cloud processing algorithms. Methods that combine depth data and RGB images have also been proposed [song2014sliding]; these methods employ low-cost 3D sensors (e.g., Intel RealSense) [yang2018pixor]. By integrating an RGB-D sensor into the robotic system [maiolino2017flexible], the controller is able to cope with a dynamic welding environment. Li et al. [jing2016rgb] proposed a welding groove detection approach based on an RGB-D sensor, which employs RGB images to recognize the weld groove and a point cloud to acquire the pose of the targeted weld groove.

One of the main problems in the aforementioned research works based on 3D sensors is that their experimental results lack sufficient testing of different types of welding workpieces; moreover, the targeted welding groove is simple. Inspired by [patil2019extraction], the proposed method in this paper focuses on welding groove detection and 3D motion trajectory generation. Experimental results involve three aspects: runtime efficiency, groove detection accuracy, and trajectory execution. The resulting performance of the system on four types of workpieces with V-type grooves proves feasibility in actual welding applications.

The main contributions of this paper are as follows:

  1. Develop an integrated intelligent robotic system to automatically execute welding tasks without much human intervention.

  2. Propose a point cloud-based welding groove detection algorithm for unpredictable workpieces.

  3. Implement the automatic 3D welding trajectory generation method on a 6-DOF robotic arm.

II Robot System Implementation

The integrated robotic system contains a robotic manipulator (Universal Robot 3), an RGB-D camera (Intel RealSense D415), and a welding torch. The end-effector and the whole experimental platform are shown in Fig. 1. The welding torch is tightly installed on the end-wrist of the robotic manipulator by a supportive metal structure to ensure stable welding execution. In addition, the RGB-D camera is physically attached to the welding torch rather than fixed at a position within the welding platform, so the camera moves with the torch. This design enables the system to flexibly cope with unpredictable welding situations. It also makes hand-eye calibration convenient for obtaining the accurate transformation matrix from the camera frame to the UR3 base frame.

Fig. 1: (Left) End-effector of the proposed welding robotic system comprises an RGB-D camera and welding torch. The camera is situated above the welding torch. (Right) Experimental welding platform.
Fig. 2: Workflow of the proposed robotic system: (1) the RGB-D camera captures the surface point cloud of the workpiece; (2) the welding groove detection algorithm locates the welding groove region in the input point cloud; (3) the trajectory generation method processes the groove point set and outputs a 3D welding trajectory; (4) the manipulator automatically executes the welding motion while tracking the generated trajectory.

The system running procedure is shown in Fig. 2. The PC, a Lenovo ThinkPad with an Intel i5 CPU, runs Ubuntu 16.04. ROS [quigley2009ros], an open-source robot operating system, is used to build a software framework that makes it convenient to develop algorithm modules. The welding groove detection algorithm relies entirely on PCL to process the point cloud and extract the geometric feature. The MoveIt [chitta2012moveit] package, as one part of the software system, is used to control the UR3 robotic arm with motion planning and collision avoidance.

III Welding Groove Detection

The welding groove detection algorithm extracts the groove region by computing the geometric feature of the input point cloud, which represents the surface profile of the welding workpiece. The geometric feature is defined as the surface variation, i.e., the extent of surface slope change. Compared with flat and smooth regions, the groove region has higher surface variation (see Fig. 3).

Fig. 3: Geometric feature diagram of surface. (a) Flat region. (b) Groove region.

In order to mathematically describe the surface variation, an efficient 3D feature histogram was designed for the algorithm. A surface variation descriptor is then computed for each point of the input point cloud. By setting a threshold value on the descriptor, the groove region can be separated from the other regions.

III-A Input Point Cloud Preprocessing

Fig. 4: The RGB-D camera captures the surface point cloud of the workpiece with a straight-line welding groove. (a) A robotic manipulator moves so that the camera can capture the complete point cloud. (b) The surface point cloud is shown in the simulation (Rviz). (c) The actual workpiece is viewed from the side. (d) The raw point cloud is organized.

First, the RGB-D camera attached to the robotic arm moves to an appropriate position and captures one frame of a raw point cloud covering the whole surface of the workpiece, as shown in Fig. 4. The raw point cloud is then taken as the input of the welding groove detection algorithm.

In general, there is noisy data in the raw point cloud owing to camera hardware factors, so the point cloud needs smoothing before proceeding to the next step. PCL provides a Moving Least Squares (MLS) surface reconstruction method to smooth the point cloud surface and reduce noisy data. Fig. 5 shows the point cloud of the test workpiece from Fig. 4 before and after the smoothing process; to some extent, it demonstrates that the point cloud surface becomes more even and gentle after smoothing.
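To illustrate the idea of neighborhood-based denoising, the following is a deliberately simplified sketch in Python/numpy: each point is replaced by the centroid of its radius neighborhood. This is an assumption-laden stand-in, not PCL's actual MLS (which fits a local polynomial surface); the function name is illustrative.

```python
import numpy as np

def smooth_point_cloud(points, radius):
    """Crude neighborhood-averaging denoiser: replace each point by the
    centroid of all points within `radius` of it. PCL's MLS instead fits
    a local polynomial surface; this only illustrates the principle."""
    smoothed = np.empty_like(points)
    for i, p in enumerate(points):
        # boolean mask of all points inside the sphere of given radius
        mask = np.linalg.norm(points - p, axis=1) <= radius
        smoothed[i] = points[mask].mean(axis=0)
    return smoothed
```

On a zig-zag noisy line, the averaged surface has visibly smaller height deviations, which mirrors the before/after effect shown in Fig. 5.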

Fig. 5: Raw point cloud smoothing. (a) Point cloud before smoothing has too many noisy regions. (b) Point cloud after smoothing has more of an even surface.

Surface normal computation is a fundamental part of welding groove detection, and PCL offers a mathematical method to estimate the surface normal of each point. Theoretically, given a point cloud cluster, computing one point's normal amounts to estimating the normal of a plane tangent to the point cloud surface and passing through that point. Simply put, one point's neighborhood (a sphere with a constant radius centered on the point) is a small cluster of the point cloud that can be fitted to a plane, and the normal of this plane is regarded as the normal of the point.
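The least-squares plane fit described above can be sketched with PCA: the eigenvector of the neighborhood covariance matrix with the smallest eigenvalue is the plane normal. This is a minimal numpy sketch of the standard recipe, not PCL's implementation; the function name is illustrative.

```python
import numpy as np

def estimate_normal(points, center, radius):
    """Estimate the surface normal at `center`: gather the neighborhood,
    fit a least-squares plane via PCA, and take the eigenvector of the
    smallest covariance eigenvalue as the plane (and point) normal."""
    nbrs = points[np.linalg.norm(points - center, axis=1) <= radius]
    centered = nbrs - nbrs.mean(axis=0)
    cov = centered.T @ centered / len(nbrs)      # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues ascending
    normal = eigvecs[:, 0]                       # smallest-variance axis
    return normal / np.linalg.norm(normal)
```

Note the sign of the normal is ambiguous from the fit alone; real pipelines orient it toward the sensor viewpoint.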

Fig. 6: Surface normal map. Each white arrow represents the normal of each point. (a) Groove region. (b) Flat region. (c) Edge region. (d) Surface point cloud. (e) Groove region.

Then, a normalized map of the surface normals (shown in Fig. 6) for the smoothed point cloud is obtained by iterating over every point with least-squares plane fitting. Each point normal, shown as a white arrow in Fig. 6, is a unit vector defined as:

$\mathbf{n}_i = (n_x, n_y, n_z), \quad \|\mathbf{n}_i\| = 1$    (1)

where $p_i$ represents the $i$-th point in the organized point cloud and $\mathbf{n}_i$ is its normal.

Inspired by [rusu2010fast] regarding object recognition, a descriptive method called the groove feature histogram (GFH) was designed to quantify the extent of surface variation. Two types of GFH are considered for each point in a point cloud: the local GFH and the global GFH.

III-B Local GFH

The neighborhood of each point is defined as a sphere of constant radius $r$ centered on the point. Within this neighborhood, the kdTree search method is used to find the neighbor points (those whose Euclidean distance to the central point is not more than $r$). The neighbor points are regarded as an individual point set, in which the normal of each point can be obtained from the surface normal map (Fig. 6). The central point is then paired with every other neighbor point, and each pair has two unit vectors forming one included angle. Fig. 7 shows one point's neighborhood with one such pair.

Fig. 7: (Left) The neighborhood of one point is defined as a sphere of radius $r$. Within the neighborhood, the red point is the central point of the sphere and the blue points are neighbor points. (Right) The central point with its normal $\mathbf{n}_c$ is paired with one neighbor point with its normal $\mathbf{n}_j$. $\alpha_j$ is the included angle between $\mathbf{n}_c$ and $\mathbf{n}_j$.
Fig. 8: Local groove feature histogram. The groove region has higher variation extent than the flat region. X-axis shows the number of pairs in one point’s neighbor. Y-axis shows the included angle of one pair.

Generally, since the 3D position of every point is related to the manipulator base coordinate system through a rigid transformation, all the point normals are expressed in the same coordinate frame (see Fig. 7 [Right]). Furthermore, the included angle $\alpha_j$ of the $j$-th pair within the center point's neighborhood is computed as:

$\alpha_j = \arccos(\mathbf{n}_c \cdot \mathbf{n}_j)$    (2)

where $\mathbf{n}_c$ is the normal of the center point and $\mathbf{n}_j$ is the normal of the $j$-th neighbor point paired to the center point. By using Equation (2) to iterate over every pair in the neighborhood, all the computation results are arranged as a set $\{\alpha_1, \alpha_2, \dots, \alpha_k\}$ ($k$ is the number of neighbor points), which builds up the local GFH of the center point. Fig. 8 shows the local GFH of one point from the groove region and one point from the flat region.
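Equation (2) is just an arccos of dot products between unit normals. A short numpy sketch of the local GFH computation (function name illustrative, assuming the normals are already unit length):

```python
import numpy as np

def local_gfh(n_c, neighbor_normals):
    """Local GFH of one point (Eq. 2): the set of included angles between
    the center point's unit normal `n_c` and each neighbor's unit normal."""
    cos_a = np.clip(neighbor_normals @ n_c, -1.0, 1.0)  # clip guards arccos
    return np.arccos(cos_a)
```

In a flat region all angles cluster near zero; across a groove wall they spread out, which is exactly the contrast visible in Fig. 8.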

III-C Global GFH

For the global GFH, the whole point cloud needs to be taken into account. Therefore, the unit benchmark normal $\mathbf{n}_b$ representing the main direction of the whole point cloud is defined as:

$\mathbf{n}_b = \dfrac{\sum_{i=1}^{N}\mathbf{n}_i}{\left\|\sum_{i=1}^{N}\mathbf{n}_i\right\|}$    (3)

where $\mathbf{n}_i$ is the $i$-th point normal of the point cloud and $N$ is the total number of points.

Returning to the neighborhood of the previous point (the central point in Fig. 7), each point normal in the neighborhood is paired with the benchmark normal $\mathbf{n}_b$ (Fig. 9).

Fig. 9: Every point normal of the neighborhood is paired with the benchmark normal. For example, for one pair (the central point normal $\mathbf{n}_c$ and the benchmark normal $\mathbf{n}_b$), $\beta$ is the included angle between $\mathbf{n}_c$ and $\mathbf{n}_b$.
Fig. 10: Global groove feature histogram. The groove region has a higher variation extent than the flat region. X-axis shows the number of pairs in one point’s neighbor. Y-axis shows the included angle of one pair.

Under the same principle as the local GFH, the included angle $\beta_j$ of the $j$-th pair in the neighborhood is computed as:

$\beta_j = \arccos(\mathbf{n}_b \cdot \mathbf{n}_j)$    (4)

Therefore all the resultant included angles are put into one set $\{\beta_1, \beta_2, \dots, \beta_k\}$, which builds up the global GFH of the central point. Fig. 10 shows the global GFH of one point from the groove region and one point from the flat region.
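Equations (3) and (4) can be sketched together: the benchmark normal is the renormalized mean of all point normals, and the global GFH measures each neighborhood normal against it. Function names are illustrative.

```python
import numpy as np

def benchmark_normal(all_normals):
    """Unit benchmark normal (Eq. 3): the mean direction of every point
    normal in the cloud, renormalized to unit length."""
    s = all_normals.sum(axis=0)
    return s / np.linalg.norm(s)

def global_gfh(n_b, neighbor_normals):
    """Global GFH of one point (Eq. 4): angles between each normal in the
    point's neighborhood and the benchmark normal `n_b`."""
    cos_b = np.clip(neighbor_normals @ n_b, -1.0, 1.0)
    return np.arccos(cos_b)
```

Because $\mathbf{n}_b$ summarizes the whole workpiece, the global GFH flags regions that deviate from the dominant surface orientation even when they are locally smooth.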

III-D Surface Variation Descriptor

According to the analysis of the local GFH (Section III-B) and global GFH (Section III-C), the variations of the two histograms of one point are defined respectively as:

$V_l = \dfrac{1}{k}\sum_{j=1}^{k}(\alpha_j - \bar{\alpha})^2, \qquad V_g = \dfrac{1}{k}\sum_{j=1}^{k}(\beta_j - \bar{\beta})^2$    (5)

where $\bar{\alpha}$ is the average value of the local GFH, $\bar{\beta}$ is the average value of the global GFH, and $k$ is the number of pairs in the point's neighborhood. Then the surface variation descriptor $D_i$ for the $i$-th point $p_i$ of the point cloud is defined as:

$D_i = V_l(p_i) + V_g(p_i)$    (6)

where $V_l(p_i)$ is the variation of the local GFH of $p_i$ and $V_g(p_i)$ is the variation of the global GFH of $p_i$.

By computing $D_i$ for every point, a map representing the surface variation extent of the whole point cloud is obtained, as shown in Fig. 11. It intuitively shows the extent of surface variation over the whole point cloud, although there are noisy places. The blue regions, such as the groove region and the edges, have high variation, while the white regions are almost flat.

Fig. 11: Surface Variation Map. (a) Front view. (b) Side view.

By analyzing the surface variation map, the points of the groove region, whose descriptor values exceed a threshold of 4.5-5, are concentrated and almost tightly connected. All points below the descriptor threshold are deleted, so that only the groove point set remains. The groove detection results (blue region) are shown in Fig. 12. In the next step, the groove point set is used to generate the 3D welding trajectory for the robotic arm.

Fig. 12: Groove Detection Result. (a) Front view. (b) Side view.

Compared with the PFH descriptor (computational complexity $O(nk^2)$) [alexandre20123d], the surface variation descriptor considers only angular values instead of combining angular values and the distance of each pair. One of the two components of the surface variation descriptor, the global GFH, is related to all the point normals, meaning that it can adapt to different types of workpieces. Since PFH focuses on local geometric features, it is not as adaptive as the global GFH. Moreover, the other component, the local GFH, is a simplified SPFH [rusu2009fast] without the distance information. Therefore, the proposed surface variation descriptor reduces the computational complexity to $O(nk)$, where $n$ is the number of points in the point cloud and $k$ is the number of pairs in each point's neighborhood. In actual experiments, the proposed algorithm runs in real time.

IV Welding Trajectory Generation

Based on the groove point set (marked in blue) obtained by the groove detection algorithm (see Fig. 12), the 3D welding trajectory is generated. The welding motion direction depends closely on the layout of the groove point set. Along this direction, the point set is evenly segmented into 50-60 consecutive regions of the same width (colored blue, green, and red for distinction), as shown in Fig. 13. The width of each segmented region is related to the total length of the welding groove, and each segmented region generates a single way point. Together, all the way points form the final 3D welding trajectory.
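One way to realize this segmentation is to project the groove points onto their principal axis and split that 1-D coordinate into equal-width bins. This is a sketch under the assumption that the welding direction coincides with the first PCA axis of the groove point set; the paper does not specify its exact construction.

```python
import numpy as np

def segment_groove(points, n_segments):
    """Split the groove point set into `n_segments` consecutive
    equal-width regions along its principal direction."""
    centered = points - points.mean(axis=0)
    # first right-singular vector = direction of greatest spread
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    t = centered @ vt[0]                         # coordinate along the groove
    edges = np.linspace(t.min(), t.max(), n_segments + 1)
    bins = np.clip(np.digitize(t, edges) - 1, 0, n_segments - 1)
    return [points[bins == k] for k in range(n_segments)]
```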

Fig. 13: Groove point set segmentation. (a) Front view of segmented groove region. (b) Side view of segmented groove region.

The sum of the distances from the way point of each segmented region to every point of the same region should be as small as possible. The accuracy of the generated way points depends on the groove detection results: if a segmented region does not perfectly match the actual groove, its way point will deviate from the central position. The problem can be defined as a cost function for each segmented region:

$f(\mathbf{w}) = \sum_{j=1}^{m} \|\mathbf{w} - \mathbf{p}_j\|$    (7)

where $\mathbf{w}$ is the unknown way point, $\mathbf{p}_j$ is the $j$-th point of the segmented region, and $m$ is the number of points in the region. When $f(\mathbf{w})$ reaches its minimum, $\mathbf{w}$ is taken as the way point. During experimentation, gradient descent is used to minimize the function; specifically, 1e-4 is set as the threshold for aborting the iteration and the maximum number of iterations is set to 1000.
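Minimizing a sum of Euclidean distances yields the geometric median of the segment. A gradient-descent sketch with the paper's stopping criteria (the step size `lr` is an assumed hyperparameter not given in the text):

```python
import numpy as np

def way_point(segment, lr=0.05, tol=1e-4, max_iter=1000):
    """Minimize f(w) = sum_j ||w - p_j|| (Eq. 7) by gradient descent,
    starting from the segment centroid. The minimizer is the geometric
    median of the segment's points."""
    w = segment.mean(axis=0)
    for _ in range(max_iter):
        diff = w - segment
        dist = np.maximum(np.linalg.norm(diff, axis=1), 1e-12)  # avoid /0
        grad = (diff / dist[:, None]).sum(axis=0)   # sum of unit vectors
        step = lr * grad
        w = w - step
        if np.linalg.norm(step) < tol:              # abort threshold
            break
    return w
```

For a symmetric segment the centroid is already the minimizer, so the loop exits immediately; on skewed segments the iteration pulls the way point toward the central position described above.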

After iterating over every segmented region of the groove point set using Equation (7), the final welding trajectory generation result is shown in Fig. 14.

Fig. 14: Welding trajectory generation. (a) Generated welding trajectory inside the segmented groove point set. (b) The trajectory inside the raw point cloud.

The 3D orientation of each way point of the generated trajectory is represented as a unit vector $\mathbf{o}$, defined as:

$\mathbf{o} = \dfrac{\sum_{j=1}^{m}\mathbf{n}_j}{\left\|\sum_{j=1}^{m}\mathbf{n}_j\right\|}$    (8)

where $\mathbf{n}_j$ is the $j$-th point normal of the segmented region. Then the unit vector is transformed into three Euler angles by the Eigen library. Ultimately, a 3D welding trajectory with both 3D positions and orientations for the welding groove in the workpiece is generated. The trajectory is then sent to the manipulator controller to execute the actual tracking motion.
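Equation (8) is again a renormalized mean, this time over the normals of one segment only. A minimal sketch (function name illustrative; the subsequent vector-to-Euler conversion is left to Eigen in the actual system and is not reproduced here):

```python
import numpy as np

def way_point_orientation(segment_normals):
    """Orientation of a way point (Eq. 8): the unit vector along the
    mean direction of the segment's point normals."""
    s = segment_normals.sum(axis=0)
    return s / np.linalg.norm(s)
```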

V Results

An open welding environment (Fig. 1) with four types of welding workpieces was prepared for the experiments. Fig. 15 shows the experimental welding workpieces, which are straight-line, curve-line, box, and cylinder types, respectively. Wooden workpieces were chosen as experimental objects because they conveniently simulate real welding grooves such as the straight and curved types. The workpiece material has no effect on the proposed algorithm, because the input 3D point cloud contains no optical properties.

Fig. 15: Experimental welding workpieces with their welding groove marked as red regions. According to the shape of the welding groove, the workpiece is defined as: (a) straight-line; (b) curve-line; (c) box; (d) cylinder.

In order to objectively evaluate the performance of the proposed method, three vital elements are considered:

  1. The whole processing runtime from the raw point cloud to generating the motion trajectory.

  2. The overlapping rate of the detected groove region and the actual groove region.

  3. The disparity between the generated trajectory and standard welding trajectory.

The processing runtime, as an essential factor, evaluates the efficiency of the proposed robotic system for automatic welding. The overlapping rate illustrates the accuracy of the welding groove detection algorithm. The disparity is assessed through actual automatic welding execution. Thus, focusing on the three aforementioned elements, each workpiece shown in Fig. 15 was tested. In fact, although the RGB-D camera is well calibrated, an error in the depth measurement still exists.

V-a Processing Runtime Results

To measure the processing runtime of the proposed method on the surface point cloud of one workpiece, the C++ function clock() is used to capture the system start and end times of the method. Because the running performance of the PC varies, each workpiece was tested ten times, and each runtime from inputting the raw point cloud to generating the motion trajectory was recorded (Table I).

Types          N        T (s)
Straight-line  265800   14.09
Curve-line     266497   14.08
Box            127031   7.51
Cylinder       121429   6.05

N = number of points. T = average runtime.

TABLE I: Results of processing runtime for each workpiece

V-B Groove Detection Results

To evaluate the accuracy of the welding groove detection algorithm, the results of the algorithm need to be compared with the ground truth (the actual groove region defined by the authors). Inspired by the concept of IoU (intersection over union) in 2D image processing, the 3D overlapping rate of the detected groove region and the actual groove region is introduced:

$R = \dfrac{N_o}{N_d + N_a - N_o}$    (9)

where $N_d$ is the number of points of the detected groove region, $N_a$ is the number of points of the actual groove region, and $N_o$ is the number of points of the overlapping region between the two.
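Equation (9) can be sketched over point sets directly. Here exact point identity stands in for region membership, which is an assumption for illustration; on real clouds a small distance tolerance would decide whether two points coincide.

```python
import numpy as np

def overlap_rate(detected, actual):
    """3D overlapping rate (Eq. 9): an IoU over point sets.
    N_o points in both regions, over the union N_d + N_a - N_o."""
    d = {tuple(p) for p in np.asarray(detected)}
    a = {tuple(p) for p in np.asarray(actual)}
    n_o = len(d & a)
    return n_o / (len(d) + len(a) - n_o)
```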

The welding groove detection process for each workpiece in Fig. 15 is shown in Fig. 16, and the detection accuracy (defined by 3D overlapping rate) results are presented in Table II.

Fig. 16: Welding groove detection workflow. (1) Input raw point cloud of each workpiece. (2) Global normal map. (3) Surface variation map by groove feature histogram (GFH). (4) Filtering results (detected groove region). (5) Ground truth of groove region in raw point cloud. (6) Overlapping map by adding detected groove region and ground truth of groove region. Eventually the overlapping map is used to compute the 3D overlapping rate to evaluate the accuracy of the groove detection algorithm.

Types          T1     T2     T3     T4     T5     Avg
Straight-line  92.74  93.01  92.81  92.49  91.35  92.48
Curve-line     81.24  82.58  82.13  81.87  82.29  82.02
Box            81.48  82.65  82.02  80.49  81.97  81.72
Cylinder       63.27  61.79  64.69  67.57  65.75  64.61

T1-T5 = accuracy (%) of each test. Avg = average accuracy.

TABLE II: Results of groove detection accuracy for each workpiece

For the straight-line, curve-line, and box types, the proposed welding groove detection approach achieves quite high detection accuracy. For the cylinder type, however, the performance is not acceptable, owing to poor robustness on model surfaces that are far from planar.

V-C Motion Execution Results

As discussed in Section IV, motion trajectory generation is based on the point set of the detected groove region (see Fig. 16). The robotic manipulator then drives the welding torch along the motion trajectory to execute the welding tasks. Fig. 17 presents the actual motion execution of the robotic manipulator.

Fig. 17: Actual motion execution of the robotic manipulator. (a)-(d) show the straight-line, curve-line, metal box, and metal cylinder workpieces, respectively; the motion order is from left to right.

As shown in Fig. 17, the motion of the manipulator fits the generated trajectory tightly, without significant mismatch. By using the Cartesian path planning of the MoveIt [chitta2012moveit] package, the welding torch is able to complete the welding tasks well, smoothly following the trajectory of the groove. However, it is difficult to evaluate the welding torch's motion accuracy with external devices such as Vicon [clarisse1986vicon]; in this case, evaluating the actual welding quality is the most reasonable course of action.

According to the running performance (runtime and detection accuracy) of the proposed method, the robotic system has the capability to realize automatic welding tasks efficiently, as compared to the conventional teach-playback method.

VI Conclusions

This paper presents an integrated robotic system for industrial welding with an automatic groove detection and trajectory generation method. The system is composed of a robotic manipulator (Universal Robot 3), an RGB-D camera (RealSense D415), and a welding torch, and it shows good flexibility when facing different welding situations. The software framework is built on ROS, with 3D point cloud processing as its key component. Four types of general welding workpieces were tested. After evaluating the accuracy between the welding trajectory generated by the proposed method and the ground truth of the welding groove, the motion execution performance demonstrates the feasibility of the designed robotic system. However, the proposed method cannot cope with larger 3D point clouds of welding workpiece surfaces, owing to their more complex geometric regions and noise. In future work, a neural network could be introduced in place of the geometric feature-based method to improve the robustness and accuracy of the welding groove detection algorithm. We may also add further functionality to the robot, e.g., the capability to interact with the workpiece.