Sensing Volume Coverage of Robot Workspace using On-Robot Time-of-Flight Sensor Arrays for Safe Human Robot Interaction

by Shitij Kumar, et al.

In this paper, an analysis of the sensing volume coverage of the robot workspace, as well as the shared human-robot collaborative workspace, for various configurations of on-robot Time-of-Flight (ToF) sensor array rings is presented. A methodology for volumetry using octrees to quantify the detection/sensing volume of the sensors is proposed. The change in sensing volume coverage from increasing the number of sensors per ToF sensor array ring, as well as the number of rings mounted on a robot link, is also studied. Considerations of the maximum ideal volume around the robot workspace that a given ToF sensor array ring placement and orientation setup should cover for safe human robot interaction are presented. The sensing volume coverage measurements in this maximum ideal volume are tabulated, and observations on various ToF configurations and their coverage of the close and far zones of the robot are made.








I Introduction

For a safe human robot interaction to occur, the robot must possess information about its environment via exteroceptive sensors such as lidar(s), cameras and radars [1]. The placement of these sensors in the environment determines the sensing volume coverage of the robot workspace. Using sensors that are mounted on the robot can provide information from the robot's own perspective, whilst removing the constraints of planning the placement of sensors in the environment. They can also provide direct observations without the need to apply transformations to elicit relevant distance information associated with the human. This was implemented in our previous works [2] and [3].

One of the ways of maintaining the safety of a human operator during human-robot interaction is the speed and separation monitoring (SSM) methodology [4]. To achieve SSM, the minimum separation distance and the relative velocities between the robot and the human must be determined. Our previous work [3] shows the implementation of an SSM safety configuration using three sensor arrays/rings, each consisting of eight Time-of-Flight laser-ranging sensors (also known as single-unit solid state lidar(s)), fastened around the robot links as shown in Figure 1.

Fig. 1: The UR10 robot with Time-of-Flight sensor arrays mounted on the robot link centers. Each array has eight single-unit lidar(s). This sensing system is modelled in simulation to determine its sensing coverage. [3]

Each array is considered an augmentation to the robot body, such that each observation incoming from an array is interpreted as an extension of the kinematic chain of the robot. This enables the sensing strategy to leverage the robot motion and provide exclusive coverage of the areas in the workspace where the robot is, and is headed to. The setup also allows flexibility in on-robot placement; the arrays can be positioned anywhere on the robot links to achieve an optimal sensing coverage. Unlike 2D scanning lidars, which provide planar information about separation and relative speed, this approach provides 3D information with respect to the robot joint positions.

Determining the minimum distance between two bodies is a non-trivial problem and central to any speed and separation monitoring methodology. In order to ensure correct and accurate measurements, the ToF sensor arrays should have the ability to sense a volume of the robot workspace. This study analyses the volume coverage of the ToF sensor arrays and presents a methodology to calculate sensing volume using octree based volumetry [5].

Using off-robot sensors positioned around the robot or on the human operator to maximize sensor coverage of the workspace has been the focus of many recent works. Recently, a 2D lidar was used in conjunction with an IMU-based human motion tracking setup [6]. In [7], the authors used RGB-D cameras and proposed a novel approach to compute minimum distances in depth space instead of Cartesian space, and also introduced the idea of approximating the robot body using a few key-points. We build on the aforementioned contributions and on [8], where the authors provided metrics for speed and separation monitoring using two 2D lidars to track the human position with respect to a suspended manipulator.

All the approaches mentioned so far have exclusively used proximity- or inertial-based sensing modalities extrinsic to the robot. However, in [9], the authors introduced a new type of intrinsic-perspective capacitive sensor that encouraged close operation between the human and the robot. In [10] and [11], the authors assessed the placement and orientation of IR distance sensors on a robot manipulator and implemented a kinetostatic safety assessment algorithm, respectively. A reactive collision avoidance strategy was also implemented in [12] using on-robot proximity sensors. In [11] and [10], the authors used distance sensors for potential fields and tested the sensors' placement on the robot body to examine the sensing volume coverage of the workspace. In [13], the authors segment the volume of the operating workspace using point-clouds, with application to safe human robot interaction. Unlike the work in [10], where infrared (IR) distance sensors were placed individually, in this work Time-of-Flight sensor arrays mounted on robot links as 'rings' were implemented [2].

This work analyzes the sensing volume coverage of the robot workspace, as well as the shared human-robot collaborative workspace, for various configurations of ToF rings. It presents a methodology for volumetry using octrees to quantify the detection/sensing volume of the sensors, and shows how the resulting coverage can be used to guide the placement of ToF rings based on the task-specific human robot interaction.

The remainder of the paper is organized as follows: Section II describes the methodology for calculating sensing volume coverage for Time-of-Flight (ToF) range sensor configurations. The experiment setup is described in Section III. The results of the experiments are shown and discussed in Section IV. Conclusions are drawn and future work discussed in Section V.

II Methodology

II-A Time-of-Flight Sensor Array Setup

In this work, we analyse the sparsity of ToF sensors per ring, and the number and placement of rings on the robot links, and their effect on the sensing volume coverage. Each ToF sensor in a ToF ring is a single-unit solid-state lidar with a maximum detection range of and a field-of-view (FOV) of . The sensing volume of each sensor in a ToF ring can be represented as a cone of height and angle degrees, as shown in Figure 2. More details about the sensor setup and its use for safer HRC can be found in our previous works [2] and [3].
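The conical FOV model above can be sketched as a point-membership test. The sensor pose, half-angle, and range values used below are illustrative placeholders, not the actual sensor parameters (which were stripped from this text; see [2] and [3]):

```python
import numpy as np

def in_fov_cone(point, apex, axis, half_angle_deg, max_range):
    """Check whether a 3D point lies inside a sensor's conical FOV.

    The cone has its apex at the sensor position, opens along `axis`,
    and is truncated at `max_range` (the sensor's detection range).
    """
    v = np.asarray(point, dtype=float) - np.asarray(apex, dtype=float)
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    dist_along_axis = v.dot(axis)            # projection onto the cone axis
    if dist_along_axis < 0 or dist_along_axis > max_range:
        return False                         # behind the sensor or out of range
    dist = np.linalg.norm(v)
    if dist == 0:
        return True                          # the apex itself
    cos_angle = dist_along_axis / dist       # angle between v and the axis
    return bool(cos_angle >= np.cos(np.radians(half_angle_deg)))
```

For example, `in_fov_cone([0, 0, 1.0], [0, 0, 0], [0, 0, 1], 12.5, 2.0)` reports a point one meter straight ahead of a hypothetical sensor as inside its cone.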

Fig. 2: (a) The ToF sensor rings with 8 sensor nodes, i.e. n1_8_0. There is a loss of coverage both far from and near the robot. The simulated representation of the ToF ring mounted on the tool is shown (bottom). (b) The ToF sensor rings with 16 sensor nodes, i.e. n1_16_0, showing overlapped coverage that compensates for the lost coverage.
Fig. 3: The sensor configurations for single rings, i.e. n1_16_0, and double rings, i.e. n2_16_ (on the shoulder and elbow links of the UR 10), used to measure sensing volume coverage. This configuration aims to cover the volume near the robot.

Different ToF sensor configurations are used to quantify the effect of blind-spots on sensing volume coverage. For brevity and ease of reference, the different ToF sensor setup configurations shown in Figures 2 and 3 are represented as follows:

II-B Sensing Volumes

Fig. 4: The Maximum Sensing Volumes used to quantify the sensing volume coverage of the sensors. The volume occupied by the robot itself is subtracted during the sensing volume coverage analysis.

Maximum Sensing Volume is the ideal workspace in Cartesian coordinates that the sensors should cover around the robot to ensure safe human robot interaction. Sensing volume coverage is the subset of this volume that is covered by the FOV of the ToF sensor rings. The maximum sensing volume differs based on the task, the application and the amount of human robot interaction. In this work, four different maximum volumes are suggested; they are shown in Figure 4 and described as follows:

  • Operating Workspace Volume (): This is the operating workspace of the robot. The robot used here is a UR 10 and its maximum reachable workspace is a sphere of radius . The sensing volume coverage of this volume can be used to determine how much the ToF sensors cover near the robot. For tasks that require the human to be in close proximity to the robot, the ToF setup that gives maximum coverage of this volume can be considered.

  • Tool (Tool Control Point - TCP) Volume (): This is the sphere defined around the TCP of the robot. Here the sphere radius is the maximum detection range of the ToF sensor, i.e. . The TCP velocity and the distance from the TCP to the human are the main safety considerations in HRC [14], [4]. Hence, for scenarios where the robot performs Speed and Separation Monitoring (SSM) [3], coverage in this volume can be used to choose a ToF sensor configuration.

  • Operating Workspace + Tool Volume (): This is the combined volume in the workspace. To determine the ToF sensor configuration that gives coverage for both the far and near volumes of the robot, the coverage in this combined volume can be used.

  • Shell Volume (): This is a tubular volume, or shell, of fixed radius. The shell is defined along the curve through all the robot link endpoints, starting at the base link and ending at the TCP. This shell represents a more exact volume over which the sensing volume coverage should be maximized.
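As a rough sketch, each of these maximum-volume regions can be expressed as a point-membership predicate. The base position, reach, sensor range, and shell radius below are hypothetical placeholders (the paper's numeric values were lost in extraction):

```python
import numpy as np

def in_operating_workspace(p, base, reach):
    """Operating workspace: sphere of the robot's reach centred at the base."""
    return bool(np.linalg.norm(np.asarray(p, float) - np.asarray(base, float)) <= reach)

def in_tool_volume(p, tcp, sensor_range):
    """Tool volume: sphere of the ToF detection range centred at the TCP."""
    return bool(np.linalg.norm(np.asarray(p, float) - np.asarray(tcp, float)) <= sensor_range)

def in_combined_volume(p, base, reach, tcp, sensor_range):
    """Union of the operating workspace and the tool volume."""
    return in_operating_workspace(p, base, reach) or in_tool_volume(p, tcp, sensor_range)

def dist_to_segment(p, a, b):
    """Distance from point p to the line segment a-b."""
    p, a, b = (np.asarray(x, float) for x in (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def in_shell_volume(p, link_endpoints, r):
    """Shell: points within radius r of the curve through the link endpoints
    (approximated here by the polyline between consecutive endpoints)."""
    return min(dist_to_segment(p, a, b)
               for a, b in zip(link_endpoints, link_endpoints[1:])) <= r
```

The shell predicate approximates the Bezier curve of Section II-B1 by the raw polyline, which suffices for a membership sketch.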

The FOV volume of each sensor on a ToF ring is a cone, and the combined sensing volume is the union over all ToF sensors in all the ToF rings. The overlap of this combined volume with the aforesaid maximum volumes is used to determine the sensing coverage of a ToF sensor setup configuration.

The Inner Volume of the robot is the space occupied by the robot itself. Here we approximate it as a shell around the robot; for the UR 10, a shell of inner radius is assumed (based on the maximum width of the bounding box of the largest UR 10 link). This volume is subtracted from all the volumes above.

Fig. 5: Generating the Shell Volume (of radius in the image) along the Bezier curve generated from the UR10 robot link end points. The Bezier-interpolated segments are shown in grey, and the segments where no interpolation was done are shown in a different color.

In order to calculate the Shell Volume, a Bezier interpolation of the robot pose using the end points of the robot links is performed (see Figure 5). This is detailed further in the following sections.

II-B1 Robot Pose as a Bezier Curve

For this work, a piece-wise Bezier curve approach is used to generate a curve representing the robot pose. In essence, the parts of the line segments between two points that are collinear are not interpolated (see Figure 5); this is helpful because interpolation is only needed around the joints of the robot.

Fig. 6: The Bezier Curve Interpolation for defining the curve given three control points [15].

A piece-wise Bezier interpolation determines at what point on the line segment between two control points the Bezier interpolation needs to be done. This is defined by the Bezier interpolation factors and the number of interpolation points (refer to Figure 6). The reader can refer to the V-REP API [15] and [16] for more details.
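The piece-wise scheme can be sketched as follows. This is not the V-REP API's own implementation; the `blend` factor below plays the role of the interpolation factors mentioned above, and its value is only illustrative:

```python
import numpy as np

def quadratic_bezier(p0, p1, p2, n_points=10):
    """Sample a quadratic Bezier curve defined by three control points."""
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def piecewise_bezier_pose(endpoints, blend=0.2, n_points=10):
    """Round the corners of the polyline through the robot link endpoints.

    Straight runs are kept as line segments; around each joint, a quadratic
    Bezier blends between points offset by `blend` along the two adjacent
    segments, mirroring the piece-wise scheme described in the text.
    """
    pts = [np.asarray(p, dtype=float) for p in endpoints]
    curve = [pts[0]]
    for prev, corner, nxt in zip(pts, pts[1:], pts[2:]):
        a = corner + blend * (prev - corner)   # entry point of the blend
        b = corner + blend * (nxt - corner)    # exit point of the blend
        curve.append(a)
        curve.extend(quadratic_bezier(a, corner, b, n_points)[1:-1])
        curve.append(b)
    curve.append(pts[-1])
    return np.vstack(curve)
```

For collinear triples the blended arc degenerates to a straight line, so only the joints are effectively rounded, as in Figure 5.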

II-B2 Shell Volume

Fig. 7: The shell volume calculation using the washer method along a curve - Curved Axis Solids of Revolution.

This volume can be calculated by sweeping a solid of revolution along a curve, also known as 'Curved Axis Solids of Revolution' [17] (refer Figure 7). Using the washer method along the arc length s of the curve of total length L, with the shell's outer radius r_out and the inner radius r_in occupied by the robot itself, it can be formulated as:

V_shell = ∫₀^L π (r_out² − r_in²) ds

Alternatively, according to the Pappus Centroid Theorem [18], the volume equals the annulus cross-section area multiplied by the distance L travelled by its centroid along the curve:

V_shell = π (r_out² − r_in²) L

Determining the volume covered by the shell above, and especially its intersections with the other volumes, can be computationally expensive and difficult to quantify. Hence, an approximation using octree-based volumetry is used [5].
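For the special case where the curve's bending radius is large compared to the shell radius, the Pappus-style closed form reduces to annulus cross-section area times curve length, which can serve as a sanity check for the octree approximation. A minimal sketch, with the polyline through the link endpoints standing in for the Bezier curve:

```python
import numpy as np

def shell_volume_pappus(link_endpoints, r_out, r_in=0.0):
    """Approximate the shell volume as annulus area times curve length.

    Valid when the curve bends gently relative to r_out and the tube
    does not self-intersect; the curve is approximated by the polyline
    through the given link endpoints.
    """
    pts = np.asarray(link_endpoints, dtype=float)
    length = float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))
    area = np.pi * (r_out ** 2 - r_in ** 2)   # annulus cross-section
    return area * length
```

For a straight curve of length 2 with outer radius 1 and no inner cavity, this returns the cylinder volume 2π.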

II-B3 Octree based Volumetry

Fig. 8: The octree-based volumetry pipeline for a cone shape. The function represents a given shape as an octree and the operator quantifies the volume occupied by the octree; the analytical volume of the cone and the volume reported by the octree can then be compared.

Octrees are used for the voxel representation of any shape in 3D Cartesian space. The volume of a region can be calculated from its octree by counting the number of voxels it contains: if a voxel is a cube with side length s and the octree contains N occupied voxels, the volume of the region is approximately N·s³. Octrees can also be used to merge or subtract voxels from other octrees. Before a shape is converted into an octree, it is decimated into discs of varying radius spaced by the voxel size; this is done because V-REP represents shapes as hollow, and the inside volume of the shape needs to be calculated. An octree-based volumetry pipeline for the FOV cone of a ToF sensor is shown in Figure 8.
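The voxel-counting idea can be illustrated outside V-REP with a dense grid standing in for the octree (an octree stores the same occupancy adaptively). The cone below is axis-aligned for simplicity:

```python
import numpy as np

def cone_volume_voxelized(radius, height, voxel=0.02):
    """Approximate the volume of a z-aligned cone (apex at the origin,
    opening towards +z) by counting occupied voxels: volume ~= N * voxel**3."""
    # Voxel-centre grid covering the cone's bounding box.
    xs = np.arange(-radius, radius, voxel) + voxel / 2
    zs = np.arange(0.0, height, voxel) + voxel / 2
    x, y, z = np.meshgrid(xs, xs, zs, indexing="ij")
    # A voxel is "occupied" if its centre lies inside the cone: at height z
    # the cone's cross-section is a disc of radius radius * z / height.
    inside = np.sqrt(x ** 2 + y ** 2) <= radius * (z / height)
    return float(inside.sum()) * voxel ** 3
```

Shrinking the voxel size drives the count towards the analytical volume π r² h / 3, which is exactly the comparison made in the pipeline of Figure 8.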

II-C Coverage of a ToF sensor configuration

Given a maximum coverage volume V_max, the coverage C of a ToF sensor configuration with combined field-of-view volume V_FOV can be written as:

C = Vol(V_max ∩ V_FOV) / Vol(V_max)

Alternatively, as V-REP allows only the addition and subtraction of voxels from octrees, the intersection can be obtained by subtraction, V_max ∩ V_FOV = V_max − (V_max − V_FOV), so the coverage can be re-written as:

C = (Vol(V_max) − Vol(V_max − V_FOV)) / Vol(V_max)

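With octrees represented as sets of voxel indices, this subtract-only computation is straightforward. A minimal sketch (the voxel sets here are toy data, not exported from V-REP):

```python
def coverage(v_max_voxels, v_fov_voxels):
    """Sensing coverage from voxel sets, using only subtraction:
    C = (|V_max| - |V_max \\ V_FOV|) / |V_max|.

    Each argument is a set of integer voxel indices (i, j, k); all
    voxels are assumed to share the same side length, which cancels
    out of the ratio.
    """
    leftover = v_max_voxels - v_fov_voxels      # voxels the sensors miss
    return (len(v_max_voxels) - len(leftover)) / len(v_max_voxels)
```

For example, a maximum volume of four voxels of which the sensor cones occupy two gives a coverage of 0.5; FOV voxels outside the maximum volume do not contribute.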
III Experiments and Validation

The experiment setup is a generic robot pick-and-place task of placing 10 products in a box (refer to previous works [2], [3]). The robot movement involves moving the base joint degrees between the pick and place positions on the tables (refer Figure 2). This task was chosen because the base joint of a robot has the largest braking distance when moving at high joint speeds, which results in a radial motion of the Tool-Control-Point (TCP), i.e. the end-effector.

In this work, the coverage is measured at the robot pose at which the robot is least safe, i.e. moving at its highest speed during the task. The reasoning is that at this pose the ToF sensor arrays should have the maximum coverage to detect and anticipate a human/operator in the workspace. This setup is task specific, but can be extended to any task that requires coverage either near to or farther from the robot, depending on the human robot interaction during the task. Hence, in this study, different volumes are considered that represent the ideal maximum coverage both near and farther from the robot. For this study, a UR 10 robot was simulated and the octree-based volume calculations were done using V-REP [15].

In Speed and Separation Monitoring (SSM) based collaborative tasks [14], the minimum distance calculations and the directed velocities of the human and robot are generally taken with respect to the base and the TCP of the robot. That is why the sensing volume coverage in a sphere representing the operating workspace and in a detection sphere centered at the TCP is analyzed. However, according to the ISO standards [4], and also in other works [7], the minimum distance and directed speeds can be taken w.r.t. any point on the robot. Therefore, a more exact sensing volume coverage is analyzed using a shell around the robot itself, which changes with the robot pose.

The sensing volume coverage is measured by determining the overlap of the ToF sensor arrays' volume with the maximum ideal volume, which can be any of the volumes defined in Section II-B.

Fig. 9: Verification that the Time-of-Flight sensor node sensing volume can be modelled as a cone.

III-A Sensing Volume for ToF Sensor Arrays

The detection volume of a ToF laser-ranging sensor is modelled as a cone, with a field-of-view given by the beam angle of degrees and the detection range, i.e. the cone height, of . To verify that the detection volume can be approximated as a cone, a simple test was done: the laser beam emitted by a ToF sensor was projected onto a white board, the image of the projection was enhanced, and the contour of the projection was checked. As shown in Figure 9, the projection shape approximates a circle, which validates modelling the ToF sensor detection volume as a cone. Figure 10 shows the different sensing volumes for the different ToF sensor configurations.

Fig. 10: The ToF sensor ring setups. The top row shows the 3 major ToF sensor configurations: single rings with 8 and 16 sensors respectively, and dual rings on the shoulder and elbow links of the robot. The bottom row shows the angle variation of the sensors on the ring.

III-B Sensing Coverage Measurements

In order to analyse and compare the sensing volume coverage, the following measurements were taken:

  • Impact on sensing coverage for ToF configurations with different numbers of rings, for all maximum volumes, as shown in Figure 10 (top row). The configurations compared were single rings on the elbow and shoulder robot links with 8 and 16 sensors per ring, dual rings with varying angle, and three rings, i.e. mounting rings at the ends of the robot links at an angle as well as at the center of the robot link.

  • Sensing coverage for ToF configurations with varying ring angle, for all maximum volumes. The angle is varied in the range of . This measures the impact of a change in angle on the coverage in the near and farther zones of the robot.

  • Sensing coverage for ToF configurations with varying ring angle, for the shell volume with varying radius (examples shown in Figure 11).

Fig. 11: The ToF sensor ring setups used to determine coverage for a changing ring angle with a shell of varying radius.

The results of these comparisons are shown and discussed in the following Section.

IV Results

Fig. 12: An example of the octree-based approximation for calculating sensing volume coverage for a shell volume: the shell (top left), the sensor FOV volume of the configuration (top right), the octree approximation (bottom left), and the volume in the shell not covered by the sensors (bottom right).

Octree-based approximations were used to calculate the sensing volumes as well as the ToF sensor array volume. A measurement for a shell of radius is shown as an example in Figure 12. Figure 12 (top left) shows the shell, where the gray discs represent the Bezier-interpolated points, whereas the straight links are represented with other colors (refer Section II-B1). The sensor FOV volume for the configuration is shown in Figure 12 (top right), where the red, green and blue cones represent the sensing volumes of the ToF rings mounted on the tool, elbow and shoulder links of the UR10 robot, respectively. The octree approximations are shown in Figure 12 (bottom left); for visual clarity, the octree is shown as a violet pointcloud, where the points represent the centers of the voxels. The left-over volume of the shell not covered by the ToF rings is shown in Figure 12 (bottom right). Using the coverage equation from Section II-C, the sensing volume coverage was calculated (see Section II-B3).

Fig. 13: An Octree based approximation for calculating sensing volume coverage for shell volume of radius for configuration .

The first set of measurements calculated the coverage of ToF sensor configurations, to observe the impact of increasing the number of sensors per ring and increasing the number of rings per link. The results are shown in the bar-graph in Figure 14. The observations were as expected: coverage increases with more sensors per ring and more rings per link. Another observation is that for some dual-ring configurations the coverage is similar to a single ring; this is because the coverage of the two rings overlaps (as shown in Figure 10) and thus behaves similarly to a single ring in the center. Thus, a change in the ring angle impacts the sensing volume coverage.

Fig. 14: Sensing volume coverage of the ToF sensor configurations for all maximum volumes, showing the impact of increasing the number of sensors per ring and increasing the number of rings per link.

To further observe the impact of the ring angle on the sensing volume coverage, the angle is varied for the ToF configuration. The results are shown in Figure 15. It is observed that as the overlap of the volumes for a given set of ToF rings on a link increases, the coverage drops. As observed before, the coverage at the angle with full overlap is the minimum, and equivalent to a single center ring. The maximum coverage is obtained at the ToF sensor configuration with angle .

Fig. 15: Sensing volume coverage for all maximum volumes, showing the impact of increasing the ring angle in the ToF sensor configurations.
Fig. 16: Sensing volume coverage for the shell volume with varying radius and varying ring angle in the ToF sensor configurations.

In order to observe the coverage in the near and far ranges of the robot for a varying ring angle, a shell-based volume of varying radius was considered. In the previous work [3], in the SSM implementation for safety using ToF sensors, distance thresholds were used for varying the speed of the robot. Therefore, if the human is working in close proximity, a sensor configuration that has more coverage closer to the robot can be used. Conversely, for farther distances and better anticipation of a human encroaching on the robot workspace, farther coverage becomes important. Hence, the sensing volume coverage is calculated with varying shell radius and ring angle. The results are shown in Figure 16. It can be observed that as the angle increases, the coverage near the robot also increases.

In order to maximize both the closer and farther coverage, a sensor configuration that combines the two, i.e. placing three rings on the elbow and shoulder links of the robot, is also implemented and its sensing volume coverage measured. It results in over coverage for all maximum volumes (shown in Figure 14), and a coverage of and for the shell volume with radii of and respectively. This is shown in Figure 13. The leftover volume can be compared to Figure 12 (bottom right) to see the difference in coverage.

The minimum distance accuracy for a human moving in the robot workspace, for the experiment described in [2] and the aforementioned ToF sensor configurations, is shown in Figure 17. The Root Mean Square Error (RMSE) and the Maximum Distance Error between the minimum human-robot distance measured by the sensors and the absolute minimum distance, i.e. the ground truth (the distance between the closest points on the robot and the human), are shown. It was observed that as the sensing volume coverage increases with the number of sensors per ring and the number of rings per link, the error decreases.
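The two error metrics can be computed directly from paired distance samples; the sample values used in checking the sketch below are illustrative, not the paper's data:

```python
import numpy as np

def distance_errors(measured, ground_truth):
    """RMSE and maximum absolute error between the minimum human-robot
    distances reported by the sensors and the ground-truth distances
    (closest points between the robot and the human)."""
    err = np.asarray(measured, dtype=float) - np.asarray(ground_truth, dtype=float)
    rmse = float(np.sqrt(np.mean(err ** 2)))
    max_err = float(np.max(np.abs(err)))
    return rmse, max_err
```

Both metrics shrink as the sensing volume coverage grows, since fewer human positions fall into blind-spots where the sensed minimum distance overestimates the true one.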

Fig. 17: Root Mean Square Error (RMSE) and Maximum Distance Error between the minimum human-robot distance measured by the sensors and the absolute minimum distance, i.e. the ground truth, for different ToF sensor configurations.

V Conclusion

In this paper, a methodology to quantify the sensing volume coverage of on-robot Time-of-Flight laser-ranging sensor arrays/rings was presented. Measurements of this sensing volume coverage were presented for various ToF sensor configurations. It was observed that the configuration of dual rings mounted on the robot links at angles of and degrees gave the optimal coverage for the closer and farther regions. Increasing the number of sensors per ring and reducing the blind-spots increases the coverage and the minimum distance accuracy, and increasing the number of rings per link also improves the sensing volume coverage of the robot.

The current ongoing work is validating the effect on minimum distance accuracy of these configurations with different-sized objects and humans in the workspace, in the closer and farther zones of the robot workspace.


Acknowledgment

The authors are grateful to the staff of the Multi Agent Bio-Robotics Laboratory (MABL) and the CM Collaborative Robotics Research (CMCR) Lab for their valuable inputs.


  • [1] B. Siciliano and O. Khatib, Springer handbook of robotics.   Springer, 2016.
  • [2] S. Kumar, C. Savur, and F. Sahin, “Dynamic Awareness of an Industrial Robotic Arm Using Time-of-Flight Laser-Ranging Sensors,” in 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Oct. 2018, pp. 2850–2857.
  • [3] S. Kumar, S. Arora, and F. Sahin, “Speed and separation monitoring using on-robot time–of–flight laser–ranging sensor arrays.”   arXiv:1904.07379v1, Apr. 2019.
  • [4] ISO, “ISO/TS 15066:2016 - robots and robotic devices – collaborative robots.” [Online]. Available:
  • [5] R. Szeliski, “Rapid octree construction from image sequences,” CVGIP: Image understanding, vol. 58, no. 1, pp. 23–32, 1993.
  • [6] M. Safeea and P. Neto, “Minimum distance calculation using laser scanner and IMUs for safe human-robot interaction,” Robotics and Computer-Integrated Manufacturing, vol. 58, pp. 33–42, Aug. 2019.
  • [7] F. Flacco, T. Kröger, A. D. Luca, and O. Khatib, “A Depth Space Approach for Evaluating Distance to Objects - with Application to Human-Robot Collision Avoidance,” Journal of Intelligent and Robotic Systems, vol. 80, pp. 7–22, 2015.
  • [8] J. A. Marvel, “Performance Metrics of Speed and Separation Monitoring in Shared Workspaces,” IEEE Transactions on Automation Science and Engineering, vol. 10, pp. 405–414, 2013.
  • [9] T. Schlegl, T. Kröger, A. Gaschler, O. Khatib, and H. Zangl, “Virtual whiskers — Highly responsive robot collision avoidance,” in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nov. 2013, pp. 5373–5379.
  • [10] N. M. Ceriani, G. B. Avanzini, A. M. Zanchettin, L. Bascetta, and P. Rocco, “Optimal placement of spots in distributed proximity sensors for safe human-robot interaction,” in 2013 IEEE International Conference on Robotics and Automation, May 2013, pp. 5858–5863.
  • [11] B. Lacevic and P. Rocco, “Kinetostatic danger field - a novel safety assessment for human-robot interaction,” in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct. 2010, pp. 2169–2174.
  • [12] N. M. Ceriani, A. M. Zanchettin, P. Rocco, A. Stolt, and A. Robertsson, “Reactive task adaptation based on hierarchical constraints classification for safe industrial robots,” IEEE/ASME Transactions on Mechatronics, vol. 20, no. 6, pp. 2935–2949, Dec 2015.
  • [13] M. J. Rosenstrauch and J. Krüger, “Safe human robot collaboration — operation area segmentation for dynamic adjustable distance monitoring,” in 2018 4th International Conference on Control, Automation and Robotics (ICCAR), April 2018, pp. 17–21.
  • [14] J. A. Marvel and R. Norcross, “Implementing speed and separation monitoring in collaborative robot workcells,” Robotics and Computer-Integrated Manufacturing, vol. 44, pp. 144–155, Apr. 2017.
  • [15] E. Rohmer, S. P. N. Singh, and M. Freese, “V-REP: A versatile and scalable robot simulation framework,” in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nov. 2013, pp. 1321–1326.
  • [16] L. Shao and H. Zhou, “Curve fitting with bezier cubics,” Graphical models and image processing, vol. 58, no. 3, pp. 223–232, 1996.
  • [17] T. Huckaby, “Curved axis revolutions,” May 2013. [Online].
  • [18] A. W. Goodman and G. Goodman, “Generalizations of the theorems of pappus,” The American Mathematical Monthly, vol. 76, no. 4, pp. 355–366, 1969. [Online]. Available: