For safe human-robot interaction, the robot must acquire information about its environment via exteroceptive sensors such as lidars, cameras, and radars. The placement of these sensors in the environment determines the sensing volume coverage of the robot workspace. Sensors mounted on the robot provide information from the robot's perspective while removing the constraint of planning sensor placement in the environment. They also provide direct observations without the need to apply transformations to extract the relevant distance information associated with the human. This was implemented in our previous works.
One way of maintaining the safety of a human operator during human-robot interaction is the speed and separation monitoring (SSM) methodology. To achieve SSM, the minimum separation distance and the relative velocity between the robot and the human must be determined. Our previous work shows the implementation of an SSM safety configuration using three sensor arrays/rings, each consisting of eight Time-of-Flight laser-ranging sensors (also known as single-unit solid-state lidars), fastened around the robot links as shown in Figure 1.
Each array is considered an augmentation of the robot body, such that each observation incoming from an array is interpreted as an extension of the kinematic chain of the robot. This enables the sensing strategy to leverage the robot motion and provide exclusive coverage of the areas of the workspace where the robot is and where it is headed. The setup also allows flexibility in on-robot placement; the arrays can be positioned anywhere on the robot links to achieve optimal sensing coverage. Unlike 2D scanning lidars, which provide planar information on separation and relative speed, this approach provides 3D information with respect to the robot joint positions.
Determining the minimum distance between two bodies is a non-trivial problem and central to any speed and separation monitoring methodology. To ensure correct and accurate measurements, the ToF sensor arrays should be able to sense a volume of the robot workspace. This study analyses the volume coverage of the ToF sensor arrays and presents a methodology to calculate the sensing volume using octree-based volumetry.
Using off-robot sensors that are positioned around the robot or on the human operator to maximize sensor coverage of the workspace has been the focus of many recent works. Recently, a 2D lidar was used in conjunction with an IMU-based human motion tracking setup. In other work, the authors used RGB-D cameras and proposed a novel approach to compute minimum distances in depth space instead of Cartesian space, and also introduced the idea of approximating the robot body with a few key points. We rely on the contributions of the aforementioned works, in which the authors provided metrics for speed and separation monitoring, with two 2D lidars used to track the human position with respect to a suspended manipulator.
All the approaches mentioned so far have exclusively used proximity- or inertial-based sensing modalities extrinsic to the robot. However, other authors introduced a new type of intrinsic-perspective capacitive sensor that encouraged close operation between the human and the robot. Related works assessed the placement and orientation of IR distance sensors on a robot manipulator and implemented a kinetostatic safety assessment algorithm, respectively. A reactive collision avoidance strategy was also implemented using on-robot proximity sensors. Other authors used distance sensors for potential fields and tested sensor placement on the robot body to examine the sensing volume coverage of the workspace. Further work segments the volume of the operating workspace using point clouds, with application to safe human-robot interaction. Unlike the work in which infrared (IR) distance sensors were placed individually, in this work Time-of-Flight sensor arrays mounted on the robot links as 'rings' are used.
This work analyzes the sensing volume coverage of the robot workspace as well as the shared human-robot collaborative workspace for various configurations of ToF rings. It presents a methodology for volumetry using octrees to quantify the detection/sensing volume of the sensors, and shows how this coverage can be used to determine the placement of ToF rings based on the task-specific human-robot interaction.
The remainder of the paper is organized as follows: Section II describes the methodology for calculating sensing volume coverage for Time-of-Flight (ToF) range sensor configurations. The experiment setup is described in Section III. The results of the experiments are shown and discussed in Section IV. Conclusions are drawn and future work discussed in Section V.
II-A Time-of-Flight Sensor Array Setup
In this work, we analyse the sparsity of ToF sensors per ring, along with the number and placement of rings on the robot links, and their effect on the sensing volume coverage. Each ToF sensor in a ToF ring is a single-unit solid-state lidar with a maximum detection range of and a field-of-view (FOV) of . The sensing volume of each sensor in a ToF ring can be represented as a cone of height and angle degrees, as shown in Fig. 2. More details about the sensor setup and its use for safer HRC can be found in our previous works.
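The numeric range and beam-angle values did not survive extraction, but the cone model itself is generic. As a minimal sketch, the ideal detection volume of one sensor's FOV cone can be computed as follows; the 2 m range and 25-degree FOV used in the example are illustrative assumptions, not the paper's values.

```python
import math

def tof_cone_volume(det_range: float, fov_deg: float) -> float:
    """Ideal detection volume of a ToF sensor modelled as a right circular
    cone: apex at the sensor, height equal to the maximum detection range,
    full opening angle equal to the field-of-view."""
    half_angle = math.radians(fov_deg / 2.0)
    base_radius = det_range * math.tan(half_angle)  # cone radius at max range
    return math.pi * base_radius ** 2 * det_range / 3.0

# Illustrative (assumed) parameters: 2 m range, 25-degree FOV
v = tof_cone_volume(2.0, 25.0)
```

The per-sensor cone volumes are only an upper bound on useful coverage, since cones from neighbouring sensors in a ring may overlap.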
II-B Sensing Volumes
Maximum sensing volume is the ideal workspace, in Cartesian coordinates, that the sensors should cover around the robot to ensure safe human-robot interaction. Sensing volume coverage is the subset of this volume that is covered by the FOV of the ToF sensor rings. The maximum sensing volume differs based on the task, the application, and the amount of human-robot interaction. In this work, four different maximum volumes are suggested, shown in Figure 4. They are described as follows:
Operating Workspace Volume (): This is the operating workspace of the robot. The robot used here is a UR10 and its maximum reaching workspace is a sphere of radius . The sensing volume coverage of can be used to determine how much the ToF sensors cover near the robot. For tasks that require the human to be in close proximity to the robot, the ToF setup that gives maximum coverage of this volume can be chosen.
Tool (Tool Control Point, TCP) Volume (): This is the sphere defined around the TCP of the robot. Here the sphere radius is the maximum detection range of the ToF sensor, i.e. . The TCP velocity and the distance from the TCP to the human are the main quantities considered for safety in HRC. Hence, for scenarios where the robot performs Speed and Separation Monitoring (SSM), the coverage in this volume can be used to choose a ToF sensor configuration.
Operating Workspace + Tool Volume (): This is the combined volume in the workspace. The coverage in this combined volume can be used to determine the optimal ToF sensor configuration that covers both the far and near volumes of the robot.
Shell Volume (): This is a tubular volume, or shell, of fixed radius. The shell is defined along the curve comprising all the robot link endpoints, starting at the base link and ending at the TCP. This shell represents a more exact volume over which the sensing volume coverage should be maximized.
The FOV volume of a single sensor on a ToF ring can be written as . The combined sensing volume is the union over all ToF sensors in all the ToF rings. The overlap of this combined volume with the aforementioned volumes is used to determine the sensing coverage of a ToF sensor setup configuration.
The inner volume of the robot is the space occupied by the robot itself. Here we approximate it as a shell around the robot; for the UR10, a shell of inner radius is assumed (based on the maximum width of the bounding box of the largest UR10 link). This volume is subtracted from all of the volumes above.
To calculate the shell volume, the robot pose is interpolated with a Bezier curve through the end points of the robot links (refer to Figure 5). This is detailed further in the following sections.
II-B1 Robot Pose as a Bezier Curve
For this work, a piece-wise Bezier curve approach has been used to generate a curve representing the robot pose. This means that the portion of a line segment between two points is not interpolated where it is collinear (see Figure 5). This is helpful because interpolation is needed only around the joints of the robot.
The piece-wise Bezier interpolation determines at which point on the line segment between two control points the Bezier interpolation needs to begin. This is defined by the Bezier interpolation factors and the number of interpolation points (refer to Figure 6). Readers can refer to the V-REP API and the literature on Bezier curve fitting for more details.
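This is not the authors' V-REP implementation, but the idea can be sketched: segments stay straight except near each interior joint, where a quadratic Bezier blend rounds the corner. The `blend` fraction and `samples` count below are stand-ins for the paper's interpolation factors and number of interpolation points.

```python
def bezier_point(ctrl, t):
    """Evaluate a Bezier curve at parameter t via De Casteljau's algorithm."""
    pts = [list(p) for p in ctrl]
    while len(pts) > 1:
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts[:-1], pts[1:])]
    return pts[0]

def lerp(a, b, t):
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

def rounded_polyline(wps, blend=0.25, samples=4):
    """Piece-wise curve through robot link endpoints: straight in the middle
    of each segment, with a quadratic Bezier blend around each interior joint."""
    path = [list(wps[0])]
    for i in range(1, len(wps) - 1):
        p_in = lerp(wps[i - 1], wps[i], 1.0 - blend)   # blend start on incoming segment
        p_out = lerp(wps[i], wps[i + 1], blend)        # blend end on outgoing segment
        path.append(p_in)
        for k in range(1, samples):
            path.append(bezier_point([p_in, list(wps[i]), p_out], k / samples))
        path.append(p_out)
    path.append(list(wps[-1]))
    return path
```

For example, an L-shaped pose through `[[0, 0], [1, 0], [1, 1]]` keeps the segment interiors straight and rounds only the corner at `[1, 0]`, mirroring the interpolation around the robot joints.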
II-B2 Shell Volume
Alternatively, the shell volume can be computed according to the Pappus centroid theorem.
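The equation that followed this sentence did not survive extraction. A plausible reconstruction based on the generalized Pappus theorem, with assumed symbols \(r_s\) for the shell radius and \(B(t)\) for the interpolated pose curve, is a disc of area \(A = \pi r_s^2\) swept along the curve's arc length \(L\):

```latex
V_{\text{shell}} \;=\; A \cdot L
\;=\; \pi r_s^{2} \int_{0}^{1} \left\lVert \frac{\mathrm{d}B(t)}{\mathrm{d}t} \right\rVert \,\mathrm{d}t
```

This holds when the curve's radius of curvature everywhere exceeds \(r_s\), so that the swept discs do not self-intersect.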
Determining the volume covered by the shell analytically can be computationally expensive and is difficult to quantify, especially when intersections with other volumes are involved. Hence, an approximation using octree-based volumetry is used.
II-B3 Octree-based Volumetry
Octrees are used for the voxel representation of arbitrary shapes in 3D Cartesian space. The octree representation of a volume region is denoted . The volume can be calculated from an octree by counting the number of voxels in . If a voxel is a cube with side length , then the volume of the region occupied by the octree can be written as . Octrees also support merging voxels with, or subtracting voxels from, other octrees. Before a shape is converted into an octree, it is decimated into discs of varying radius spaced by the voxel size . This is done because V-REP represents shapes as hollow, and the inside volume of the shape needs to be included. An octree-based volumetry pipeline for the FOV cone of a ToF sensor is shown in Figure 8.
II-C Coverage of a ToF Sensor Configuration
Given a maximum coverage volume , the coverage of a ToF sensor configuration () with field-of-view volume can be written as:
Alternatively, as V-REP allows only the addition and subtraction of voxels from octrees, Equation 5 can be rewritten as:
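The symbols in Equations 5 and 6 did not survive extraction; a plausible reading is that coverage is the fraction of the maximum volume intersected by the sensor FOVs, and that the intersection is obtained with subtraction alone. A set-based sketch of that identity:

```python
def coverage(v_max, v_fov):
    """Fractional sensing coverage |V_fov ∩ V_max| / |V_max| computed with
    subtraction only, mirroring an octree API that supports just voxel
    addition and removal: A ∩ B = A - (A - B)."""
    uncovered = v_max - v_fov       # voxels of V_max outside the sensor FOVs
    covered = v_max - uncovered     # V_max minus its uncovered part = intersection
    return len(covered) / len(v_max)

# Toy voxel sets: 10 voxels of V_max, FOV overlapping the last 5
v_max = {(i, 0, 0) for i in range(10)}
v_fov = {(i, 0, 0) for i in range(5, 20)}
print(coverage(v_max, v_fov))  # 0.5
```

The same two subtractions apply directly to octree voxel sets, which is presumably why the rewritten form suits the V-REP API.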
III Experiments and Validation
The experiment setup is a generic robot pick-and-place task of placing 10 products in a box (refer to previous work). The robot movement involves moving the base joint degrees between the pick and place positions on the tables (refer to Figure 2). This task was chosen because the base joint of a robot has the largest braking distance when moving at high joint speeds, which results in a radial motion of the Tool Control Point (TCP), i.e. the end-effector.
In this work, the coverage is measured at the robot pose where the robot is least safe, i.e. moving at its highest speed during the task. The reasoning is that the ToF sensor arrays should then have the maximum coverage to detect and anticipate the human operator in the workspace. This setup is task-specific, but it can be extended to any task that requires coverage either near to or farther from the robot, depending on the human-robot interaction during the task. Hence, this study considers different volumes that represent the ideal maximum coverage both near and farther from the robot. A UR10 robot was simulated, and the octree-based volume calculations were done using V-REP.
In Speed and Separation Monitoring (SSM) based collaborative tasks, the minimum distance calculations and the directed velocities of the human and the robot are generally computed with respect to the base and the TCP of the robot. For this reason, the sensing volume coverage in a sphere representing the operating workspace () and in a detection sphere centered at the TCP () are analyzed. However, according to the ISO standards, and also in other works, the minimum distance and directed speeds can be taken w.r.t. any point on the robot. Therefore, a more exact sensing volume coverage is analyzed using a shell () around the robot itself, which changes with the robot pose.
The sensing volume coverage is measured by determining the overlap of the ToF sensor array volume with the maximum ideal volume , which can be (refer to Section II-B).
III-A Sensing Volume for ToF Sensor Arrays
The detection volume of a ToF laser-ranging sensor is modelled as a cone with a field-of-view given by the beam angle of degrees and a detection range, i.e. cone height, of . To verify that the detection volume can be approximated as a cone, a simple test was performed: the laser beam emitted by a ToF sensor was projected onto a whiteboard, the image of the projection was enhanced, and the contour of the projection was checked. As shown in Figure 9, the projection shape approximates a circle, which validates modelling the ToF sensor detection volume as a cone. Figure 10 shows the sensing volumes for different ToF sensor configurations.
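The octree approximation of such a cone can be sanity-checked against the analytic cone volume. The sketch below voxelizes a cone (apex at the origin, axis along +z) on a regular grid and compares the voxel-count volume with the closed-form value; the 2 m range, 25-degree beam angle, and 1 cm voxel size are assumed illustrative values, since the paper's figures did not survive extraction.

```python
import numpy as np

# Illustrative (assumed) sensor parameters: 2 m range, 25-degree beam angle
h, half = 2.0, np.deg2rad(12.5)
s = 0.01                                    # voxel edge length (assumed)
r_base = h * np.tan(half)

# Voxel centers on a regular grid; a center (x, y, z) lies inside the cone
# when 0 <= z <= h and x^2 + y^2 <= (z * tan(half))^2.
z_centers = np.arange(s / 2, h, s)
xy = np.arange(-r_base, r_base + s, s)
X, Y = np.meshgrid(xy, xy)
R2 = X ** 2 + Y ** 2
count = sum(int(np.count_nonzero(R2 <= (zc * np.tan(half)) ** 2))
            for zc in z_centers)

v_voxel = count * s ** 3                    # octree-style volume estimate
v_exact = np.pi * r_base ** 2 * h / 3.0     # analytic cone volume
```

The relative error between `v_voxel` and `v_exact` shrinks as `s` decreases, which is the accuracy/computation trade-off inherent in the octree volumetry pipeline of Figure 8.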
III-B Sensing Coverage Measurements
In order to analyse and compare the sensing volume coverage, the following measurements were taken:
Impact on sensing coverage for ToF configurations with different numbers of rings, for all , as shown in Figure 10 (top row). The configurations compared were single rings on the elbow and shoulder robot links with 8 and 16 sensors per ring (, ), dual rings with varying (), and three rings (), i.e. rings mounted at the ends of the robot links at an angle as well as at the center of the robot link.
Sensing coverage for ToF configurations with varying , for all . The is varied in the range of . This measures the impact of a change in on the coverage in the near and farther zones of the robot.
Sensing coverage for ToF configurations with varying , for the shell volume with varying radius (examples shown in Figure 11).
The results of these comparisons are shown and discussed in the following section.
Octree-based approximations were used to calculate the sensing volumes as well as the ToF sensor array volume . A measurement for a shell of radius is shown as an example in Figure 12. Figure 12 (top left) shows , where the gray discs represent the Bezier-interpolated points, whereas the straight links are represented with other colors (refer to Section II-B1). The for the configuration is shown in Figure 12 (top right), where the red, green, and blue cones represent the sensing volumes of the ToF rings mounted on the tool, elbow, and shoulder links of the UR10 robot, respectively. The octree approximations and are shown in Figure 12 (bottom left). For visual clarity, is shown as a violet point cloud in which the points represent the centers of the voxels in the octree. The left-over volume of not covered by the ToF rings is shown in Figure 12 (bottom right). Using Eq. 6, the sensing volume coverage was calculated (see Section II-B3).
The first set of measurements calculated for ToF sensor configurations to observe the impact of increasing the number of sensors per ring, i.e. to , and increasing the number of rings per link, i.e. , and . The results are shown in the bar graph in Figure 14. The observations were as expected: coverage increases with more sensors per ring and more rings per link. Another observation was that for , the coverage is similar to . This is because the coverage of the two rings at overlaps (as shown in Figure 10), so they behave like a single ring in the center. Thus, a change in impacts the sensing volume coverage.
To further observe the impact of a change in on the sensing volume coverage , is varied from to for the ToF configuration. The results are shown in Figure 15. It is observed that as the overlap of the volumes of a given set of ToF rings on a link increases, drops. As observed before, the coverage of is the minimum and is equivalent to . The maximum coverage is given at and for the ToF sensor configuration.
In order to observe coverage in the range of to from the robot for varying in the ToF configuration, a shell-based volume of radius was considered. In previous work on an SSM implementation for safety using ToF sensors, and were used as distance thresholds for varying the speed of the robot. Thus, if the human is working in close proximity, a sensor configuration with more coverage closer to the robot can be used; conversely, for farther distances and better anticipation of a human encroaching on the robot workspace, farther coverage becomes important. Hence, the sensing volume coverage is calculated with varying and . The results are shown in Figure 16. It can be observed that as increases, the coverage near the robot also increases. For and , the coverage .
In order to maximize both the closer and farther coverage, a sensor configuration that combines and , i.e. , placing three rings on the elbow and shoulder links of the robot, was also implemented and its sensing volume coverage measured. It results in over coverage for all (shown in Figure 14), and coverages of and for with shell radii of and , respectively. This is shown in Figure 13. The leftover can be compared with Figure 12 (bottom right) to see the difference in coverage.
The minimum distance accuracy for a human moving in the robot workspace, for the experiment described previously and the aforementioned ToF sensor configurations, is shown in Figure 17. The Root Mean Square Error (RMSE) and the maximum distance error of the sensor-measured minimum human-robot distance w.r.t. the absolute minimum distance, i.e. the ground truth (the distance between the closest points on the robot and the human), are shown. It was observed that as the sensing volume coverage increases with the number of sensors per ring and the number of rings per link, the error decreases.
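The two error metrics above are standard; as a minimal sketch (the distance samples in the example are hypothetical, not from the paper's experiment):

```python
import math

def distance_errors(measured, ground_truth):
    """RMSE and maximum absolute error of sensor-measured minimum
    human-robot distances against ground-truth closest-point distances."""
    diffs = [m - g for m, g in zip(measured, ground_truth)]
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    max_err = max(abs(d) for d in diffs)
    return rmse, max_err

# Hypothetical distance samples in metres
rmse, max_err = distance_errors([1.02, 1.48, 0.95], [1.00, 1.50, 1.00])
```

RMSE summarizes the typical deviation over the whole trajectory, while the maximum error bounds the worst-case underestimate or overestimate of the separation distance, which is the safety-relevant quantity in SSM.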
In this paper, a methodology to quantify the sensing volume coverage of on-robot Time-of-Flight laser-ranging sensor arrays/rings was presented. Measurements of this sensing volume coverage were presented for various ToF sensor configurations. It was observed that the configuration of dual rings mounted on the robot links at angles of and degrees gave the optimal coverage for the closer and farther regions. Increasing the number of sensors per ring and reducing the blind spots increases both the coverage and the minimum distance accuracy, and increasing the number of rings per link also improves the sensing volume coverage of the robot.
Ongoing work is validating the effect of these configurations on minimum distance accuracy with different-sized objects and humans in the closer and farther zones of the robot workspace.
The authors are grateful to the staff of Multi Agent Bio-Robotics Laboratory (MABL) and the CM Collaborative Robotics Research (CMCR) Lab for their valuable inputs.
-  B. Siciliano and O. Khatib, Springer handbook of robotics. Springer, 2016.
-  S. Kumar, C. Savur, and F. Sahin, “Dynamic Awareness of an Industrial Robotic Arm Using Time-of-Flight Laser-Ranging Sensors,” in 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Oct. 2018, pp. 2850–2857.
-  S. Kumar, S. Arora, and F. Sahin, “Speed and separation monitoring using on-robot time–of–flight laser–ranging sensor arrays.” arXiv:1904.07379v1, Apr. 2019.
-  ISO, “ISO/TS 15066:2016 - robots and robotic devices – collaborative robots.” [Online]. Available: http://www.iso.org/
-  R. Szeliski, “Rapid octree construction from image sequences,” CVGIP: Image understanding, vol. 58, no. 1, pp. 23–32, 1993.
-  M. Safeea and P. Neto, “Minimum distance calculation using laser scanner and IMUs for safe human-robot interaction,” Robotics and Computer-Integrated Manufacturing, vol. 58, pp. 33–42, Aug. 2019.
-  F. Flacco, T. Kröger, A. D. Luca, and O. Khatib, “A Depth Space Approach for Evaluating Distance to Objects - with Application to Human-Robot Collision Avoidance,” Journal of Intelligent and Robotic Systems, vol. 80, pp. 7–22, 2015.
-  J. A. Marvel, “Performance Metrics of Speed and Separation Monitoring in Shared Workspaces,” IEEE Transactions on Automation Science and Engineering, vol. 10, pp. 405–414, 2013.
-  T. Schlegl, T. Kröger, A. Gaschler, O. Khatib, and H. Zangl, “Virtual whiskers — Highly responsive robot collision avoidance,” in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nov. 2013, pp. 5373–5379.
-  N. M. Ceriani, G. B. Avanzini, A. M. Zanchettin, L. Bascetta, and P. Rocco, “Optimal placement of spots in distributed proximity sensors for safe human-robot interaction,” in 2013 IEEE International Conference on Robotics and Automation, May 2013, pp. 5858–5863.
-  B. Lacevic and P. Rocco, “Kinetostatic danger field - a novel safety assessment for human-robot interaction,” in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct. 2010, pp. 2169–2174.
-  N. M. Ceriani, A. M. Zanchettin, P. Rocco, A. Stolt, and A. Robertsson, “Reactive task adaptation based on hierarchical constraints classification for safe industrial robots,” IEEE/ASME Transactions on Mechatronics, vol. 20, no. 6, pp. 2935–2949, Dec 2015.
-  M. J. Rosenstrauch and J. Krüger, “Safe human robot collaboration — operation area segmentation for dynamic adjustable distance monitoring,” in 2018 4th International Conference on Control, Automation and Robotics (ICCAR), April 2018, pp. 17–21.
-  J. A. Marvel and R. Norcross, “Implementing speed and separation monitoring in collaborative robot workcells,” Robotics and Computer-Integrated Manufacturing, vol. 44, pp. 144–155, Apr. 2017.
-  E. Rohmer, S. P. N. Singh, and M. Freese, “V-REP: A versatile and scalable robot simulation framework,” in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nov. 2013, pp. 1321–1326.
-  L. Shao and H. Zhou, “Curve fitting with Bezier cubics,” Graphical Models and Image Processing, vol. 58, no. 3, pp. 223–232, 1996.
-  T. Huckaby, “Curved axis revolutions,” May 2013. [Online]. Available: https://www.researchgate.net/publication/264309866_Curved_Axis_Revolutions
-  A. W. Goodman and G. Goodman, “Generalizations of the theorems of pappus,” The American Mathematical Monthly, vol. 76, no. 4, pp. 355–366, 1969. [Online]. Available: http://www.jstor.org/stable/2316426