Robotics Meets Cosmetic Dermatology: Development of a Novel Vision-Guided System for Skin Photo-Rejuvenation

05/21/2020 · Muhammad Muddassir, et al.

In this paper, we present a novel robotic system for skin photo-rejuvenation procedures, which can uniformly deliver the laser's energy over the skin of the face. The robotised procedure is performed by a manipulator whose end-effector is instrumented with a depth sensor, a thermal camera, and a cosmetic laser generator. To plan the heat-stimulating trajectories for the laser, the system computes the surface model of the face and segments it into seven regions that are automatically filled with laser shots. We report experimental results with human subjects to validate the performance of the system. To the best of the authors' knowledge, this is the first time that facial skin rejuvenation has been automated with robot manipulators.


I Introduction

There are two main ageing processes that affect a person's skin condition [Journals:Holck2003]: ageing due to the biological clock, which to this day is considered irreversible, and photo-ageing, which results from exposure to ultraviolet radiation from the sun; the latter is widely considered to be treatable, to some extent [Oblong2009, NARURKAR2009281, OBLONG2009301]. With the aim of "reversing" skin damage, in the past decades people have turned to so-called beauty clinics to receive various types of non-invasive dermatological procedures. These treatments are typically performed with cosmetic instruments based on laser light [Goldberg1997], intense pulsed light [Journals:Babilas2010], radio-frequency [Journals:Lolis2012], etc. Worldwide, the beauty industry has seen an exponential increase in the demand for aesthetic skin rejuvenation treatments.

A typical rejuvenation procedure conducted in these beauty clinics is shown in Fig. 1, where the dermatologist (or "skin technician" [Journals:RESNECK2008]) visually examines the skin condition to determine the type of treatment to be performed along with the appropriate laser light parameters [Journals:Eldomyati2011]. A non-ablative instrument is then manipulated with repetitive motion patterns over different areas of the face to stimulate the skin tissues; it must be activated with the exact amount of energy and time to produce the expected result without causing damage [Goldberg1999]. The complete rejuvenation procedure lasts around 25 minutes; it is a tedious and tiring task for practitioners, who must perform it several times in a single day, a situation that contributes to the existing high turnover rate of experienced professionals in the industry [RESNECK200450]. These issues clearly show the need to develop robotic systems that can automate the manipulation of such instruments.

Fig. 1: Conventional skin photo-rejuvenation procedure: (a) examination of the facial skin condition, (b) manipulation of the handpiece over the forehead, (c) manipulation of the handpiece on the left jaw.

Cosmetic dermatology is currently an under-explored application field in robotics as compared to medical robots [nathoo2005touch, berkelman2004body, haidegger2008future, kwoh1988robot, beasley2012medical, MHwang2019, CLi2020, Vazquez2019]. It presents many interesting challenges and opportunities. Note that there are very few commercially available robots for these types of applications [Journals:Draelos2011]. One such system is the ARTAS Follicular Unit Extraction (FUE) robot that can remove healthy follicles from a donor and autonomously transplant them onto the patient’s scalp [bernstein2012integrating]. Another example is reported in [Lim2014], which presents a dermatology system to perform hair removal; This system can automatically conduct the task by using a manipulator that activates a laser instrument over regions defined by a supervising practitioner [Lim2014, Lim2015, Lim2017, park2015method, Koh2017]. However, none of these robotic systems is specifically designed for facial skin rejuvenation procedures.

To provide a feasible solution to this open problem, in this paper we present an innovative system capable of autonomously performing the skin rejuvenation procedure on human faces. The developed system is composed of a 6-DOF robot manipulator with a custom-built end-effector that carries a cosmetic laser instrument. The system is equipped with an RGB-D camera to reconstruct the facial geometric model and a thermal sensor to monitor the procedure. To the best of the authors' knowledge, this is the first time that a robotic approach to facial skin rejuvenation has been reported in the literature. The original contributions are:

  1. Development of a specialised mechanical prototype for cosmetic procedures.

  2. Design of a new sensor-based method for controlling the stimulation of skin tissues.

  3. Experimental validation of the developed robotic system with human subjects.

The rest of the manuscript is organised as follows: Sec. II presents the proposed prototype; Sec. III describes the sensing system; Sec. IV introduces the control algorithm; Sec. V reports the experiments; Sec. VI gives the final conclusions.

Fig. 2: (left) Proposed setup of the robotised facial skin rejuvenation system. (right) Exploded view of the customised end-effector.

II Robotising Skin Photo-Rejuvenation

II-A Common Practice

The operational principle of the skin photo-rejuvenation procedure is the thermal stimulation of the collagen in the skin [OBLONG2009301]. This is done by transferring the laser's energy into the skin tissue, a slow process during which the laser energy can keep accumulating. Fig. 1(a) depicts the conventional setup to perform skin photo-rejuvenation at beauty clinics. The procedure starts with a close examination of the client's skin by a practitioner, who then empirically sets the parameters of the laser generator machine based on the skin tone and condition. These parameters include the laser diameter, the laser energy and the fluence (laser energy per square centimetre) [Farkas2013]. Fig. 1(b) and 1(c) show how the laser handpiece is positioned normal to the skin surface. To operate on a particular skin region, the practitioner typically follows an S-shaped path and uses a foot pedal to switch the laser on/off. Throughout this manuscript, we refer to one instance of delivering the laser light energy onto the skin surface as a "laser shot".
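As a worked illustration of the fluence definition (the pulse energy and spot diameter below are assumed values for the example, not the machine settings used in this study):

$$F = \frac{E}{\pi(\phi/2)^2}, \qquad \text{e.g.}\;\; E = 0.2\,\text{J},\;\; \phi = 3\,\text{mm} \;\Rightarrow\; F = \frac{0.2\,\text{J}}{\pi\,(0.15\,\text{cm})^2} \approx 2.8\,\text{J/cm}^2$$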

While delivering the laser light energy, the practitioner has to monitor the trace of each laser shot to avoid overlaps and gaps between shots. If the laser shots are not distributed uniformly, overlapping shots can cause serious side effects (e.g. erythema, hyper-pigmentation and crusts), whereas gaps between shots lead to a sub-optimal stimulation of the skin. In our study, we used a Q-switched Nd:YAG (1064 nm) laser with a pulse duration of 6–20 ns and an adjustable repetition rate of 1–10 Hz. The 1064 nm wavelength lies within the infra-red spectrum (invisible to the human eye); for the operator's convenience, the laser generator used in this study fires a low-energy flash of visible light with each laser shot. Even with this visual aid, an experienced practitioner can easily lose track of the degree of stimulation induced in the skin.

II-B Surface Coverage by Laser Shots

The problem of performing photo-rejuvenation over a skin surface can be described as filling a surface with circles of equal radii (the radius of each circle being equal to the radius of the laser spot). For a human, the intuitive way to fill a surface with circles is to start placing them from one boundary of the surface and proceed towards the other. For a machine to perform this task, however, a rigorous mathematical description is required. Mathematically, the problem can be defined as an optimisation that finds the optimal locations of same-radius circles while minimising the distance between the boundaries of neighbouring circles. It can be written as the following constrained optimisation problem:

$$\min_{\mathbf{c}_1,\dots,\mathbf{c}_n} \;\sum_{i \neq j} \Big( d(\mathbf{c}_i, \mathbf{c}_j) - 2r \Big) \tag{1}$$
$$\text{s.t.}\quad \mathbf{c}_i \in \mathcal{S}, \qquad \mathcal{S}_l \le \mathbf{c}_i \le \mathcal{S}_u \tag{2}$$

Here $\mathbf{c}_i$ denotes the centre of a circle in Cartesian coordinates, $d(\cdot,\cdot)$ the distance function, and $\mathcal{S}$ the surface, where $\mathcal{S}_l$ and $\mathcal{S}_u$ define the lower and upper boundaries of the surface $\mathcal{S}$; $r$ is the radius of the laser light.

In this article, the closed-form solution of the optimisation problem in Eq. (1) is not sought directly. Instead, a sampling-based method is developed to compute the centres of the circles of radius $r$. The reasons for solving the optimisation problem via a sampling-based method are:

  • A well-defined description of the surface is needed to solve Eq. (1), but in the proposed system the facial surface is captured in the form of a point cloud. The point cloud can be parametrised into a mathematical surface function, yet this complicates the definition of the upper and lower boundaries of the surface. Fig. 5(b) shows that the boundary of each part of the facial model contains both lines and curves; thus, defining the boundaries of the surface is more complicated than parametrising the surface in our case.

  • In most sampling-based methods, local optimality of the solution is guaranteed. Collectively, these local solutions can nearly satisfy the global optimisation problem.

  • Unlike a global optimisation, each local solution is stable and satisfies the constraints locally, which ultimately avoids any divergence from the exact solution.

This method is further explained in Sec. IV. The validation tests conducted in Sec. V demonstrate the optimality of the solutions obtained with the sampling-based method.

II-C Proposed Structural Setup

Fig. 2 conceptually depicts the proposed robotic system and its various components. A robot manipulator is placed on top of a wheeled platform that provides flexibility in positioning the system around the beauty clinic. A custom-made rejuvenation end-effector was also developed for the robotic system; its purpose is to carry all the sensors and the laser handpiece during the procedure. This new end-effector was designed considering the following requirements:

  • The skin area that is stimulated by the laser shots must be observable by the depth and thermal cameras.

  • The structure must be rigid enough to maintain a stable view with both cameras during manipulation motions.

  • The laser should be able to stimulate any point over the surface of the face.

  • The structure must be compact.

To fulfil the above requirements, a rectangular shell structure was designed to house the laser generator and the sensors, as shown in Fig. 2. The tilted placement of the thermal and depth cameras on the custom end-effector ensures the visibility of the laser shot for both cameras, whereas the extended structure for the depth camera avoids any occlusion that could be caused by the thermal camera. For the sake of simplicity, we refer to the custom-made rejuvenation end-effector simply as the end-effector of the robot manipulator throughout this paper.

II-D System Architecture

Fig. 3 illustrates the detailed control and structural architecture of the proposed robotic prototype. Two PCs are used to execute the automated rejuvenation procedure; one runs Windows 10 and the other Linux (Ubuntu 16.04). Both systems communicate over a Redis server through a TCP/IP socket. The prototype utilises an Orbbec Astra Mini S depth camera to reconstruct the geometric model of the face. To monitor the thermal changes produced by a laser shot, the prototype acquires thermal data from a FLIR Lepton 3.5 thermal camera with a PureThermal 2 board. Both vision sensors are fixed on the external case structure, as shown in Fig. 2. The stainless-steel protective case houses the laser generator, and an aluminium fixture attaches this protective case to the robot manipulator. To manipulate the laser cosmetic instrument over the facial skin, a UR5 manipulator from Universal Robots is used. The control box of the robot is placed inside the mobile platform; the Linux PC communicates with the robot's servo controller via a TCP/IP socket.

All the proposed algorithms (except the user control interface and the thermal measurement) run on the Linux PC under ROS [quigley2009ros]. The complete system can be divided into two main subsystems: a sensing system and a control system. Both are interconnected and responsible for dedicated tasks. The sensing system performs the computation of the geometric model of the face, the model's segmentation into different regions and the processing of the thermal measurements, whereas the control system is responsible for planning the stimulation trajectories, synchronising the laser shots, and controlling the motion of the robot's end-effector. A user interface is also developed so that the practitioner can operate the system from a control screen.

Fig. 3: Architecture of the proposed photo-rejuvenation system. (a) Linux PC. (b) Windows PC. (c) Orbbec Astra Mini S depth camera. (d) PhidgetInterfaceKit 8/8/8 w/6 Port Hub. (e) SSR Relay Board. (f) laser generator. (g) custom end-effector. (h) UR5 robot manipulator. (i) FLIR Lepton 3.5 with PureThermal 2.

III Sensing System

III-A Facial Geometric Model Computation

The facial surface model reconstruction starts by searching for a face in the scene with the face detector reported in [kazemi2014one, king2012dlib]. Once a face is detected, the system extracts the key facial landmarks, such as the eyes, eyebrows, nose, lips and face boundary, using [kazemi2014one]. The mean position of the left and right eye is taken as the position of the detected face in the scene. Let $\mathbf{p}_l$ and $\mathbf{p}_r$ be the position vectors of the left and right eye of the face in the camera coordinate frame, respectively. The position vector of the detected face can then be computed as $\mathbf{p}_f = \frac{1}{2}(\mathbf{p}_l + \mathbf{p}_r)$. Since the rotation matrix of a coordinate frame consists of three orthonormal column vectors, i.e. $\mathbf{R} = [\mathbf{r}_1\;\mathbf{r}_2\;\mathbf{r}_3]$, the rotation matrix $\mathbf{R}_f$ that defines the orientation of the face in the camera coordinate frame can be computed by obtaining three orthonormal vectors as follows:

$$\mathbf{R}_f = [\,\mathbf{r}_1 \;\; \mathbf{r}_2 \;\; \mathbf{r}_3\,] \tag{3}$$

where $\mathbf{r}_1 = (\mathbf{p}_r - \mathbf{p}_l)/\lVert \mathbf{p}_r - \mathbf{p}_l \rVert$, $\mathbf{r}_3$ is the unit normal of the detected face, and $\mathbf{r}_2 = \mathbf{r}_3 \times \mathbf{r}_1$. Now the pose of the face in the camera coordinate frame is:

$${}^{c}\mathbf{T}_f = \begin{bmatrix} \mathbf{R}_f & \mathbf{p}_f \\ \mathbf{0}^{\top} & 1 \end{bmatrix} \tag{4}$$

where $\mathbf{R}_f$ is a $3\times 3$ rotation matrix and $\mathbf{p}_f$ is a $3\times 1$ translation vector. After estimating the pose of the face in the camera coordinate frame, it is transformed to the robot's base coordinate frame using:

$${}^{b}\mathbf{T}_f = {}^{b}\mathbf{T}_e \,{}^{e}\mathbf{T}_c \,{}^{c}\mathbf{T}_f \tag{5}$$

Here ${}^{b}\mathbf{T}_f$ defines the pose of the face in the robot's base coordinate frame, ${}^{b}\mathbf{T}_e$ is the homogeneous transformation from the robot's base to the end-effector, ${}^{e}\mathbf{T}_c$ is the transformation from the end-effector to the camera, and ${}^{c}\mathbf{T}_f$ defines the pose of the face in the camera coordinate frame.
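A minimal Python/NumPy sketch of this pose computation; the exact construction of the orthonormal triplet and the source of the face normal are assumptions, since the text only states that three orthonormal vectors are obtained:

```python
import numpy as np

def face_pose_from_eyes(p_l, p_r, n_f):
    """Build the homogeneous face pose cTf in the camera frame (Eqs. (3)-(4)).
    p_l, p_r: 3D eye positions (camera frame); n_f: outward face normal
    (assumed here to come from the landmark detector)."""
    p_f = 0.5 * (p_l + p_r)                          # face position
    r1 = (p_r - p_l) / np.linalg.norm(p_r - p_l)     # eye-to-eye axis
    r3 = n_f / np.linalg.norm(n_f)                   # face normal
    r2 = np.cross(r3, r1)
    r2 /= np.linalg.norm(r2)                         # completes the frame
    r1 = np.cross(r2, r3)                            # re-orthogonalise
    T = np.eye(4)
    T[:3, :3] = np.column_stack((r1, r2, r3))
    T[:3, 3] = p_f
    return T

def to_robot_base(bTe, eTc, cTf):
    """Eq. (5): chain the transforms to express the face in the robot base."""
    return bTe @ eTc @ cTf
```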

Commercial depth cameras have a limited field of view (FOV); thus, a facial model without holes cannot be reconstructed from a single view. To completely reconstruct the facial model, the sensing system estimates a number of viewpoints around the detected face. Capturing images from different viewpoints avoids possible occlusions and provides more structural detail of the surface of a human face. These viewpoints are estimated by incrementing and decrementing a predefined angle around the detected face, latitudinally and longitudinally. Algorithm 1 estimates the different viewpoints around the detected face pose ${}^{b}\mathbf{T}_f$.

Input
      $d_{min}$: minimum range of the depth camera
      $\theta$: angle in rad around the $x$-axis or $y$-axis
      $n$: number of viewpoints
Output
      $V = \{v_1, \dots, v_n\}$: viewpoints
Routine

1: for $i \leftarrow 1$ to $n$ do
2:     if latitudinal viewpoint then
3:         rotate the face pose about its vertical axis by the signed multiple of $\theta$ for view $i$
4:         offset the rotated pose by $d_{min}$ along its viewing axis to obtain $v_i$
5:     else if longitudinal viewpoint then
6:         rotate the face pose about its horizontal axis by the signed multiple of $\theta$ for view $i$
7:         offset the rotated pose by $d_{min}$ along its viewing axis to obtain $v_i$
8:     end if
9:     append $v_i$ to $V$
10: end for
Algorithm 1 Estimate the viewpoints

Here, $d_{min}$ defines the minimum range of the depth camera; this distance is chosen because the measurement error of depth cameras is directly proportional to the measured distance. For the scanning process in this article, we took 13 viewpoints: one front view, eight latitudinal views (4 on the left- and 4 on the right-hand side) and four longitudinal views (2 up and 2 down) around the detected face. Adjacent latitudinal and longitudinal views are separated from each other by the angle $\theta$. The estimated viewpoints around the face are shown in Fig. 4, and a sketch of the estimation is given below.
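Since the update steps of Algorithm 1 are not fully specified above, the following Python sketch re-creates its intent under stated assumptions: latitudinal views rotate the face frame about its y-axis, longitudinal views about its x-axis, and the camera backs off by d_min along the viewing axis:

```python
import numpy as np

def estimate_viewpoints(bTf, d_min, theta, n_lat=8, n_lon=4):
    """Hedged re-creation of Algorithm 1: one front view plus n_lat
    latitudinal and n_lon longitudinal views around the face pose bTf."""
    def rot_y(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    def rot_x(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    steps = [('lat', 0.0)]                                  # front view
    steps += [('lat', k * theta) for k in range(-n_lat // 2, n_lat // 2 + 1) if k]
    steps += [('lon', k * theta) for k in range(-n_lon // 2, n_lon // 2 + 1) if k]
    views = []
    for kind, ang in steps:
        R = rot_y(ang) if kind == 'lat' else rot_x(ang)
        V = bTf.copy()
        V[:3, :3] = bTf[:3, :3] @ R                         # rotate the gaze
        # back off along the rotated viewing axis by the camera's minimum range
        V[:3, 3] = bTf[:3, 3] - V[:3, :3] @ np.array([0.0, 0.0, d_min])
        views.append(V)
    return views   # 13 viewpoints with the defaults, as in the text
```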

Fig. 4: Estimating the viewpoints using Algorithm 1. (a) depth camera. (b) estimated viewpoints. (c) pose of a human face

To acquire the visual data from the estimated viewpoints, the control system commands the end-effector to visit each estimated viewpoint $v_i$. After reaching a viewpoint, the sensing system captures the visual data and extracts only the region containing the detected face from the captured RGB and depth images; these regions from both images are then converted into a point cloud. Each point cloud is expressed in the camera coordinate frame at the pose the depth camera had when the data was acquired; thus, each point cloud has an offset transformation of its origin with respect to the point clouds captured from the other viewpoints. To align all the point clouds, the relative pose between the viewpoints should be known. Since the control system receives continuous feedback of the end-effector pose from the robot manipulator, the relative pose between viewpoints can be computed from the corresponding end-effector poses. Additionally, Point-to-Plane ICP [low2004linear] followed by Coloured ICP [park2017colored] is applied to the point clouds captured from the several viewpoints. Furthermore, to remove noisy data points, voxel-grid down-sampling with a fixed voxel leaf length is performed on the final facial model; this down-sampling keeps the density consistent and removes holes from the final facial model. Each point in a point cloud is a structure of three vectors, denoted as $\mathbf{x} = (\mathbf{p}, \mathbf{n}, \mathbf{c})$, where $\mathbf{p}$ is the position vector, $\mathbf{n}$ represents the normal vector and $\mathbf{c}$ defines the colour.
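A condensed sketch of this registration pipeline using the Open3D API (v0.12+); the correspondence-distance thresholds are assumptions, and the odometry-derived poses serve as the initial alignment:

```python
import open3d as o3d

def merge_viewpoint_clouds(clouds, poses, voxel_size=0.001):
    """Align per-viewpoint clouds with point-to-plane ICP followed by
    coloured ICP, then voxel-down-sample the merged facial model."""
    target = clouds[0].transform(poses[0])
    target.estimate_normals()
    for cloud, pose in zip(clouds[1:], poses[1:]):
        src = cloud.transform(pose)            # odometry-based initial guess
        src.estimate_normals()
        icp = o3d.pipelines.registration.registration_icp(
            src, target, max_correspondence_distance=0.01,
            estimation_method=o3d.pipelines.registration
                .TransformationEstimationPointToPlane())
        cicp = o3d.pipelines.registration.registration_colored_icp(
            src.transform(icp.transformation), target,
            max_correspondence_distance=0.005)
        target += src.transform(cicp.transformation)
    # voxel-grid down-sampling keeps the point density consistent
    return target.voxel_down_sample(voxel_size)
```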

III-B Automatic Face Segmentation

In this paper, an algorithm to automatically segment the human face into seven regions is also proposed, see Fig. 5. The segmented regions comprise the left cheek, left jaw, right cheek, right jaw, forehead, nose and upper lip. This segmentation of the human face is based on the fact that the laser energy level required to stimulate the skin depends on the thickness of the fat layer beneath it. The fat layer beneath the facial skin is uneven across the face, e.g. the cheek region has a thicker fat layer than the forehead region. Therefore, dividing the face into segments helps to set the appropriate energy level for the laser, depending on the region to be stimulated. The segmented model also makes it possible to compute, separately for each region, a trajectory for the laser instrument that smoothly follows the curved surface of the skin with the required normal orientation to it; note that if such geometric constraints are not considered within the trajectory generation, the manipulated instrument can potentially collide with protruding areas of the face (e.g. the nose).

The initial facial model computed with the depth camera is in the form of a point cloud of unorganised 3D data (neighbouring points in space are not necessarily adjacent in computer indexing, and vice versa). To plan a rejuvenation trajectory over the surface, the control system needs to first determine which 3D points lie in each of the different regions. As a solution to this problem, we developed an algorithm that uses 2D facial landmarks to cluster the unstructured data into seven point clouds, see Fig. 5. Let $\mathcal{G}_k$ (for $k = 1, \dots, 7$) denote the $k$th region of the facial model and $\mathbf{x}_i$ an arbitrary $i$th point in $\mathcal{G}_k$.

Initially, the auto-segmentation algorithm detects the facial landmarks using [kazemi2014one, king2012dlib] from the RGB image captured at the first viewpoint $v_1$; this viewpoint faces the front of the face and hence provides the most reliable view for detecting the landmarks (no facial feature is occluded). Once these key landmarks have been detected, the algorithm draws polygons on the image plane to define the seven regions, as shown in Fig. 5(a). Then, a sorting routine runs seven parallel processes to back-project each 3D point of the facial model onto the image plane, using the camera's perspective projection relation:

$$\lambda \begin{bmatrix} \mathbf{u}_i \\ 1 \end{bmatrix} = \mathbf{K}\, [\,\mathbf{R} \;|\; \mathbf{t}\,] \begin{bmatrix} \mathbf{p}_i \\ 1 \end{bmatrix} \tag{6}$$

Here $\mathbf{u}_i$ is a $2\times 1$ column vector that denotes the projection of a 3D point $\mathbf{p}_i$ on the image plane, $\mathbf{K}$ is the matrix of intrinsic parameters, and $\mathbf{R}$ and $\mathbf{t}$ are the extrinsic parameters (the rotation matrix and translation vector) of the camera; all of these are known from pre-calibration. As illustrated in Fig. 5(b), the algorithm then tests whether a given projected point lies in the polygon $P_k$ using the method reported in [o1998computational]. Fig. 5(c) depicts the segmented 3D facial model (in the form of a point cloud). The whole process to segment a facial model is sketched in Algorithm 2.

Input
      $\mathcal{F}$: point cloud of the face model
      $N$: number of points in $\mathcal{F}$
      $\mathbf{x}_i$: single point of the face model, $\mathbf{x}_i \in \mathcal{F}$
Output
      $\mathcal{G}_k$: point cloud of the $k$th segment, $k = 1, \dots, 7$
Routine

1: Detect the facial landmarks
2: Draw the polygons $P_k$ on the image plane
3: for $k \leftarrow 1$ to $7$ do in parallel
4:     for $i \leftarrow 1$ to $N$ do
5:         $\mathbf{u}_i \leftarrow$ projection of $\mathbf{p}_i$ via Eq. (6)
6:         if $\mathbf{u}_i \in P_k$ then
7:             push $\mathbf{x}_i$ into $\mathcal{G}_k$
8:         end if
9:     end for
10: end for
Algorithm 2 Automatic Face Segmentation
Fig. 5: Segmentation of the facial model using 2D facial landmarks. (a) polygons define each region. (b) back-projection of a 3D point onto the 2D image plane. (c) the point cloud segmented into seven segments $\mathcal{G}_1, \dots, \mathcal{G}_7$.
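A minimal sketch of the back-projection and point-in-polygon test of Algorithm 2; the polygon dictionary and the use of Matplotlib's even-odd containment test are illustrative choices, not the paper's implementation:

```python
import numpy as np
from matplotlib.path import Path   # point-in-polygon test

def segment_face(points, K, R, t, polygons):
    """Project each 3D point onto the image plane (Eq. (6)) and assign
    it to the region whose 2D landmark-derived polygon contains it."""
    cam = R @ points.T + t.reshape(3, 1)   # camera-frame coordinates
    uv = K @ cam                           # perspective projection
    uv = (uv[:2] / uv[2]).T                # normalise by depth -> (N, 2)
    segments = {}
    for name, poly in polygons.items():    # seven regions, as in Fig. 5(a)
        mask = Path(poly).contains_points(uv)
        segments[name] = points[mask]
    return segments
```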

III-C Temperature Measurement

In the proposed prototype, the sensing system acquires thermal feedback from a thermographic infra-red camera during the skin rejuvenation procedure. The thermographic (thermal) camera is attached to the custom end-effector of the robot manipulator, and each pixel in its image corresponds to a real temperature value of the observed scene. During the skin rejuvenation procedure, the thermal camera continuously monitors the thermal changes occurring on the facial skin, especially before and after a laser shot.

The thermal camera used in the proposed prototype generates a voltage value corresponding to the observed temperature. The object's black-body-equivalent voltage can be recovered as [flirets3xx]:

$$U_{obj} = \frac{1}{\epsilon\,\tau}\,U_{tot} \;-\; \frac{1-\epsilon}{\epsilon}\,U_{refl} \;-\; \frac{1-\tau}{\epsilon\,\tau}\,U_{atm} \tag{7}$$

Thermographic cameras are set to an emittance value $\epsilon$ according to the object to be measured ($\tau$ denotes the atmospheric transmittance). The measured output voltage of the camera, $U_{tot}$, is converted into the equivalent voltage $U_{obj}$ that would have been obtained had the measurement been performed on a black body, by removing the theoretical contributions of the environment: the reflected radiation $U_{refl}$ and the atmospheric radiation $U_{atm}$. The conversion from the black-body voltage output to temperature relies on proprietary FLIR calibration curves, which are calibrated against objects as close as possible to a theoretical black body.
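A direct transcription of the reconstructed Eq. (7), assuming the standard FLIR measurement formula with emissivity $\epsilon$ and atmospheric transmittance $\tau$:

```python
def object_voltage(u_tot, emissivity, tau, u_refl, u_atm):
    """Eq. (7): recover the black-body-equivalent voltage U_obj by removing
    the reflected and atmospheric contributions from the measured U_tot."""
    return (u_tot / (emissivity * tau)
            - (1.0 - emissivity) / emissivity * u_refl
            - (1.0 - tau) / (emissivity * tau) * u_atm)
```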

IV Control System

IV-A Path Planning

To plan the paths for each segment, a sampling-based path planning algorithm is devised. This algorithm takes the point cloud of a surface as input and outputs a set of vertical or horizontal paths (in the form of point clouds). These paths aim to fill the input surface with laser shots (according to the laser diameter $\phi$). The paths can be planned vertically or horizontally, depending on the segment being operated on. For example, the forehead is a wide segment, so horizontal paths decrease the number of lane changes for the robot manipulator; the nose region, by contrast, is taller than it is wide, so vertical paths are more appropriate there.

Fig. 6 illustrates how the path planning algorithm divides the segmented region $\mathcal{G}_k$ into $m$ strips along the $y$-axis, where $S_{k,j}$ is the $j$th strip of the $k$th segment:

$$m = \left\lceil \frac{y_{max} - y_{min}}{\phi} \right\rceil \tag{8}$$

$y_{max}$ and $y_{min}$ are the maximum and minimum values along the $y$-axis, respectively. The width of each strip is equal to the diameter of the laser shots $\phi$, so a line passing through the centre of each strip can be considered an optimal path for the robot end-effector; optimal in the sense that, if the robot manipulator follows it, no overlapping of laser shots occurs between the paths of a segment. Such a path could be computed with a polynomial fit, but a polynomial fit cannot guarantee that the fitted curve passes through the centre of a strip of points. Furthermore, the points in each strip are sparsely distributed, which can bias the fit and make the generated path deviate from the centre of the strip. That is why the following method has been devised, which ensures that the planned paths always pass through the centre of each strip.

After a strip is partitioned from the point cloud of a segment, a kernel $\kappa$ of width $\delta_s$ and height $\delta_p$ is placed at one end of the strip, where $\delta_s$ and $\delta_p$ are the laser shot separation distance and the inter-path distance, respectively. The kernel is a buffer of points that is re-populated after every incremental/decremental step, where each step is equal to the laser diameter $\phi$. Mathematically, $\kappa$ can also be defined as a set of points:

$$\kappa = \left\{ \mathbf{x}_i \in S_{k,j} \;:\; |x_i - x_\kappa| \le \tfrac{\delta_s}{2},\;\; |y_i - y_\kappa| \le \tfrac{\delta_p}{2} \right\} \tag{9}$$

where $(x_\kappa, y_\kappa)$ is the centre of the kernel and $(x_i, y_i)$ are the planar coordinates of $\mathbf{x}_i$; a point $\mathbf{x}_i$ is in $\kappa$ if it lies within this extent.

In Fig. 6, a kernel $\kappa$ in the second strip from the top moves from left to right with an incremental step. The position of $\kappa$ in a strip can be defined as:

$$x_\kappa = x_{min} + l\,\phi, \qquad x_{min} \le x_\kappa \le x_{max}, \quad l = 0, 1, 2, \dots \tag{10}$$

where $x_{min}$ and $x_{max}$ are the minimum and maximum values along the $x$-axis of a strip $S_{k,j}$.

Now, the paths for a segment are denoted $\Gamma_k$, and a path point can be defined as

$$\gamma = (\mathbf{p}_\gamma, \mathbf{n}_\gamma) \tag{11}$$

where $\mathbf{p}_\gamma$ and $\mathbf{n}_\gamma$ are the position and normal vectors of $\gamma$. These two entities are obtained from the average of the position and normal vectors of the points inside the kernel $\kappa$:

$$\mathbf{p}_\gamma = \frac{1}{N_\kappa} \sum_{\mathbf{x}_i \in \kappa} \mathbf{p}_i \tag{12}$$
$$\mathbf{n}_\gamma = \frac{1}{N_\kappa} \sum_{\mathbf{x}_i \in \kappa} \mathbf{n}_i \tag{13}$$

$N_\kappa$ is the number of points inside the kernel. The kernel can move in either direction (left or right), but it sweeps in opposite directions for adjacent strips; this condition forces the algorithm to generate an S-shaped path, as in the sketch below. Fig. 7 illustrates the results of the different algorithms (from scanning to path planning).
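A compact sketch of the strip-and-kernel sweep (Eqs. (8)–(13)); the choice of partitioning along y and sweeping along x, and the kernel extent, follow the assumptions stated in the text above:

```python
import numpy as np

def plan_strip_paths(points, normals, phi, delta_s):
    """Partition a segment into strips of width phi along y (Eq. (8)),
    then slide an averaging kernel along x in alternating directions,
    emitting one averaged path point per step (Eqs. (12)-(13))."""
    y_min, y_max = points[:, 1].min(), points[:, 1].max()
    n_strips = int(np.ceil((y_max - y_min) / phi))
    path = []
    for j in range(n_strips):
        sel = (points[:, 1] >= y_min + j * phi) & \
              (points[:, 1] < y_min + (j + 1) * phi)
        strip_p, strip_n = points[sel], normals[sel]
        if strip_p.size == 0:
            continue
        xs = np.arange(strip_p[:, 0].min(), strip_p[:, 0].max(), phi)
        if j % 2:                      # sweep opposite ways: S-shaped path
            xs = xs[::-1]
        for x_k in xs:
            in_kernel = np.abs(strip_p[:, 0] - x_k) <= delta_s / 2
            if not in_kernel.any():
                continue
            p_g = strip_p[in_kernel].mean(axis=0)
            n_g = strip_n[in_kernel].mean(axis=0)
            path.append((p_g, n_g / np.linalg.norm(n_g)))
    return path
```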

Fig. 6: (left) Partition of the point cloud along the $y$-axis. $\phi$: laser diameter. $\delta_s$: laser shot separation distance. $\delta_p$: inter-path distance. $\kappa$: averaging kernel. (right) $\kappa$ increments with step $\phi$ through a strip of points.
Fig. 7: From facial reconstruction to path planning. (a) raw facial model obtained from scanning. (b) clean facial model, unnecessary points discarded using the GUI. (c) and (d) 3D segmentation. (e) planned path for each region.

IV-B Robot Control Framework

The required manipulation of the end-effector for the rejuvenation procedure is achieved by a 6-DOF industrial robot manipulator. This manipulator can achieve a repeatability of about ±0.1 mm when controlled by its built-in joint/Cartesian position controller, which is why all the manipulation tasks in the experiments of Sec. V use position control. Each point in a planned path is a structure of two vectors, $\mathbf{p}_\gamma$ and $\mathbf{n}_\gamma$. For this dermatological procedure, the laser handpiece should be normal to the skin surface at the time of a laser shot, which maximises the energy transfer to the skin during each shot. $\mathbf{p}_\gamma$ and $\mathbf{n}_\gamma$ are converted into a position command for the robot manipulator: $\mathbf{p}_\gamma$ directly defines the desired position of the robot end-effector, while the orientation is derived from the normal vector. A rotation matrix is composed of three mutually orthonormal column vectors, i.e. $\mathbf{R} = [\mathbf{r}_1\;\mathbf{r}_2\;\mathbf{r}_3]$. Let us consider that $\mathbf{r}_3$ and the (negated) normal $\mathbf{n}_\gamma$ are parallel. The cross product of the second column of the rotation matrix of the robot base and the third column of the desired rotation matrix at a path point gives $\mathbf{r}_1$, and $\mathbf{r}_2$ can then be computed as $\mathbf{r}_2 = \mathbf{r}_3 \times \mathbf{r}_1$. Fig. 8 illustrates this vector calculation. The robot manipulator used in the proposed prototype only accepts rotations in the axis-angle representation. Suppose a unit vector $\mathbf{k}$ is parallel to the rotation axis of a rigid-body rotation defined by a rotation matrix $\mathbf{R}$:

$$\mathbf{R}\,\mathbf{k} = \mathbf{k} \tag{14}$$

Then the vector $\mathbf{k}$ is calculated as [baker2012matrix]

$$\mathbf{k} = \frac{1}{2\sin\theta} \begin{bmatrix} R_{32} - R_{23} \\ R_{13} - R_{31} \\ R_{21} - R_{12} \end{bmatrix} \tag{15}$$

The magnitude of $\mathbf{k}$ is equal to one, and $\theta$ is the angle through which the rigid body rotates around $\mathbf{k}$ [baker2012matrix], where $\theta$ is computed as

$$\theta = \cos^{-1}\!\left( \frac{\operatorname{tr}(\mathbf{R}) - 1}{2} \right) \tag{16}$$

The axis-angle representation of the rotation of a rigid body is then

$$\boldsymbol{\omega} = \theta\,\mathbf{k} \tag{17}$$

To define the pose of the robot end-effector for each path point, the pose vector is

$$\mathbf{q}_\gamma = \begin{bmatrix} \mathbf{p}_\gamma \\ \boldsymbol{\omega} \end{bmatrix} \in \mathbb{R}^6 \tag{18}$$

where $\mathbf{q}_\gamma$ is the pose vector of the end-effector for each path point $\gamma$. The computed pose vectors are then sent to the robot manipulator via a TCP/IP socket. The velocity and acceleration of the end-effector are empirically constrained. A sketch of this conversion is given below.
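A sketch of the conversion from a path point to the axis-angle pose command of Eqs. (14)–(18); the hint vector used to resolve the remaining rotational freedom about the normal is an assumption:

```python
import numpy as np

def pose_vector(p_g, n_g, r2_hint=np.array([0.0, 1.0, 0.0])):
    """Build the 6-D pose command [p, theta*k] with the tool z-axis
    anti-parallel to the surface normal (r2_hint is a hypothetical choice)."""
    r3 = -n_g / np.linalg.norm(n_g)          # point the laser at the skin
    r1 = np.cross(r2_hint, r3)
    r1 /= np.linalg.norm(r1)
    r2 = np.cross(r3, r1)
    R = np.column_stack((r1, r2, r3))
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))  # Eq. (16)
    if np.isclose(theta, 0.0):
        omega = np.zeros(3)
    else:
        k = np.array([R[2, 1] - R[1, 2],
                      R[0, 2] - R[2, 0],
                      R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))     # Eq. (15)
        omega = theta * k                                             # Eq. (17)
    return np.concatenate((p_g, omega))                               # Eq. (18)
```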

Fig. 8: Calculation of the three column vectors $\mathbf{r}_1$, $\mathbf{r}_2$ and $\mathbf{r}_3$ of a rotation matrix from a normal vector $\mathbf{n}_\gamma$.

IV-C Laser Shot Control

To deliver the laser energy uniformly over the skin surface being operated on, control of the laser firing instant is paramount. To define a laser-firing instant, an impulse function $\delta(\cdot)$ is employed. Its argument $e$ is derived from the distance covered by the robot end-effector from the position of the last laser shot $\mathbf{p}_s$ to the current position $\mathbf{p}_e$, so $e = \lVert \mathbf{p}_e - \mathbf{p}_s \rVert - \delta_s$, where the separation distance $\delta_s$ is nominally set to the diameter $\phi$ of a laser shot. This impulse function outputs 1 only when its argument becomes zero. The instant of a laser shot can then be defined as

$$f = \delta\big( \lVert \mathbf{p}_e - \mathbf{p}_s \rVert - \delta_s \big) \tag{19}$$

A laser shot instant only occurs while the robot is following the pose vectors $\mathbf{q}_\gamma$; otherwise, the output of $\delta(\cdot)$ is not considered. The pose vector of the end-effector when a laser shot occurs can thus be defined as

$$\mathbf{q}_s = \mathbf{q}_\gamma \;\big|\; f = 1 \tag{20}$$

$\mathbf{q}_s$ denotes the pose vector of the end-effector at the instant the laser shot occurred. This laser fire control relies on $e$, which depends only on the current position of the robot end-effector and the position of the last laser shot; after every laser shot, $e$ is reset to zero. This simple control law distributes the laser shots uniformly, as sketched below.
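In discrete-time control the impulse condition of Eq. (19) is realised as a threshold test, for example as in the following sketch (the class interface is illustrative):

```python
import numpy as np

class LaserShotTrigger:
    """Fire whenever the end-effector has travelled delta_s since the
    last shot, i.e. when the argument of Eq. (19) reaches zero."""
    def __init__(self, delta_s):
        self.delta_s = delta_s
        self.p_last = None

    def update(self, p_now):
        """Return True (fire) once the travelled distance reaches delta_s."""
        if self.p_last is None or \
           np.linalg.norm(p_now - self.p_last) >= self.delta_s:
            self.p_last = p_now.copy()   # resets the distance argument e to zero
            return True
        return False
```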

IV-D User Control Interface

A user control interface is developed for the end-user (physician or practitioner). In this interface, the reconstructed facial model is visualised for user interaction. The user can discard unnecessary or floating points present in the facial model before feeding the model to the automatic face segmentation algorithm; this ensures that noisy points in the model do not affect the paths planned from the segments.

The user control interface allows the user to initialise the procedure for any segmented region, and a selection tool is also provided in case the user wants to operate on a custom-defined region of skin. Besides the visualisation of the reconstructed facial model, the thermal data is also registered on the surface of the reconstructed model during the procedure; a colour from a custom colour scale is assigned to each thermal value before it is registered on the surface of the model.

V Results

The accuracy and precision with which the proposed robotic prototype delivers the laser energy on the skin surface depend on two key factors: the accurate reconstruction of the facial model and a minimal error in the transformation matrix ${}^{e}\mathbf{T}_c$ from the robot end-effector to the depth camera. The end-effector paths that perform the rejuvenation procedure are computed from the reconstructed facial model; errors in the facial model directly perturb the path planning process, which can ultimately lead to a sub-optimal rejuvenation procedure. The rigid-body transformation from the end-effector to the camera is equally significant in the proposed system: the facial model is reconstructed in the camera coordinate frame, and if ${}^{e}\mathbf{T}_c$ is not precise, the paths planned from the reconstructed face model cannot be projected accurately into the robot coordinate frame. For example, if the desired position to fire a laser shot is on the edge of an eyebrow, a displacement of a few millimetres can harm the aesthetic condition of the face; re-firing laser shots on a region where shots have already been fired is also undesirable. That is why the validation of the reconstructed facial model, the transformation ${}^{e}\mathbf{T}_c$ and the laser shot instances is performed in this section, together with the testing of the proposed prototype on human subjects.

V-A Validation of the Reconstructed Facial Model

To validate the accuracy of the proposed facial model reconstruction method, we scanned a mannequin's face with an industrial-grade handheld 3D scanner, the HandySCAN 3D from Creaform; given this scanner's high accuracy, its scan is considered the ground truth, as shown in Fig. 9(a). Fig. 9(b) and (c) show the face reconstructed by our method and its comparison with the ground truth, where CloudCompare [girardeau2015cloud] is used to compare the two point clouds. In Fig. 9(c), the purple colour represents the lowest error and the yellow colour corresponds to the highest error. The mean and standard deviation of the error are both in the sub-millimetre range. These low error metrics increase the confidence in the facial model reconstructed by the proposed method and hence the reliability of the planned paths.

Fig. 9: Error plot of the reconstructed face (units in metres). Facial model reconstructed by (a) the HandySCAN 3D from Creaform and (b) our system. (c) error between the two models.

V-B Cross-Calibration

To evaluate the accuracy of the transformation ${}^{e}\mathbf{T}_c$, a cross-calibration technique is used, as shown in Fig. 10. An AR marker on a metal plate is fixed beside the robot base. The transformation ${}^{b}\mathbf{T}_m$ from the robot base to the AR marker is known in advance. Initially, we obtained the transformation ${}^{e}\mathbf{T}_c$ between the end-effector and the depth camera using [tsai1989new]. To achieve a more accurate ${}^{e}\mathbf{T}_c$, an iterative optimisation technique is implemented:

$$\min_{{}^{e}\mathbf{T}_c} \; \big\lVert \mathbf{t}_m - \hat{\mathbf{t}}_m \big\rVert \tag{21}$$

$\mathbf{t}_m$ and $\hat{\mathbf{t}}_m$ are the translation vectors of ${}^{b}\mathbf{T}_m$ and ${}^{b}\hat{\mathbf{T}}_m$, respectively; these are the poses of the same AR marker obtained from two different transformation chains. ${}^{b}\mathbf{T}_m$ is the direct transformation from the robot base to the AR marker and is considered the ground truth, whereas ${}^{b}\hat{\mathbf{T}}_m$ is obtained through the transformation chain ${}^{b}\mathbf{T}_e \,{}^{e}\mathbf{T}_c \,{}^{c}\mathbf{T}_m$. ${}^{b}\mathbf{T}_e$ is computed from the robot manipulator's odometry and can introduce a maximum error of 0.1 mm. ${}^{c}\mathbf{T}_m$ defines the transformation from the camera to the marker and is measured directly by observing the AR marker with the camera; the depth camera used in this study (an Orbbec Astra Mini S) has an accuracy of 1 mm. Hence ${}^{e}\mathbf{T}_c$ is the only transformation in the chain that does not directly measure a physical quantity, so it is tuned until the condition $\mathbf{t}_m \approx \hat{\mathbf{t}}_m$ is satisfied.
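A sketch of the refinement of Eq. (21) with SciPy; parametrising the correction of ${}^{e}\mathbf{T}_c$ by three translations and three small rotations is an assumption, as the paper does not state the parametrisation:

```python
import numpy as np
from scipy.optimize import least_squares

def refine_eTc(eTc0, samples, bTm):
    """Tune eTc so the marker pose predicted via bTe @ eTc @ cTm matches
    the directly measured bTm; only the translation residual is used,
    as in Eq. (21). `samples` is a list of (bTe, cTm) pairs."""
    def rot(rx, ry, rz):
        Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
        Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
        Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def residual(x):
        eTc = eTc0.copy()
        eTc[:3, 3] += x[:3]                       # translation correction
        eTc[:3, :3] = eTc0[:3, :3] @ rot(*x[3:])  # small-angle rotation correction
        res = [(bTe @ eTc @ cTm)[:3, 3] - bTm[:3, 3] for bTe, cTm in samples]
        return np.concatenate(res)

    return least_squares(residual, np.zeros(6)).x  # correction parameters
```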

Fig. 10: Setup to validate the transformation between end-effector and camera.

V-C Laser Shot Separation Distance Test

The laser shot control is evaluated by a separation distance test. In this test, a sheet of paper printed with black ink is placed on cardboard, and the robot end-effector follows a linear path while firing laser shots of 3 mm diameter onto it. A different laser separation distance was assigned for each run; the laser fire control was evaluated with three separation distances: 0.01 m, 0.005 m and 0.002 m. The pose of the end-effector at each laser shot instant was recorded using Eq. (20). In Fig. 11, these laser shot instances are plotted as red circles and the path followed by the robot end-effector as a black line. Furthermore, the distance between each pair of shots was also measured with a vernier calliper to validate the measurements, as shown in Fig. 12.

Fig. 11: Laser shot separation 0.01 m, 0.005 m and 0.002 m (top to bottom)
Fig. 12: Laser shot separation distance test: the values shown in blue are the measurements from the vernier calliper.

V-D Thermal Data Measurement and its Registration

When a laser fire instant occurs, a request to record the temperature of the skin at the laser shot position is sent from the Linux PC to the Windows PC. Only a small region of the thermal image around the shot is then considered to measure the temperature of the skin where the laser was fired, as shown in Fig. 13. To represent the current temperature of the skin, the thermal measurements are plotted as coloured points on the reconstructed facial model, with the colours picked from the custom thermal scale according to the temperature measurement. In Fig. 13, a red pointer is visualised to indicate the laser shot; the position of this pointer is updated in real time with the pose data coming from the robot manipulator.

Fig. 13: Thermal registration and feedback on control interface.

V-E Test on Human Subjects

The proposed automated skin photo-rejuvenation procedure was performed on two human volunteers (Ethics Approval Reference Number: HSEARS20190606002, Human Subjects Ethics Sub-committee, Departmental Research Committee, The Hong Kong Polytechnic University, Hong Kong). Before performing the automated procedure on the human volunteers, the system was tested repeatedly on a mannequin's face. Although the effectiveness, accuracy and robustness of the proposed system can be evaluated by performing the automated procedure on a mannequin's face, the mannequin is made of plastic and exhibits a different trend of temperature changes under laser exposure. That is why we invited two volunteers, in order to analyse the change of temperature caused by laser shots on the surface of real skin. Both volunteers wore protective glasses to avoid direct contact of the laser with their eyes or eyelids; their eyebrows were covered with medical paper tape, and their heads were covered with a strip of thick cotton cloth to avoid laser exposure to their hair.

To validate the proposed system, we chose only the forehead region for the automated procedure. For the automated procedure performed on both human subjects, a Q-switched Nd:YAG laser of 1064 nm wavelength with a 5 mm laser diameter was used; the energy was set to 500 and the fluence to 10. In Fig. 14, the path followed by the robot end-effector is represented by a black line, and the centre of each red circle marks the instant of a laser shot; the diameter of each red circle is equal to the laser diameter $\phi$. The path and the laser shot instances are based on real-time data recorded while performing the procedure. The 3D plots shown in Fig. 14 illustrate the effectiveness of the procedure in terms of uniform laser distribution and path consistency.

Test | N | δ_s (m) | d (m) | μ (m) | σ² | A (%)
SDT1 | 11 | 0.01 | 0.1110 | 0.01009 | 2.09 | 77.832
SDT2 | 26 | 0.005 | 0.131 | 0.00505 | 2.11 | 77.940
SDT3 | 71 | 0.002 | 0.146 | 0.00205 | 4.05 | 76.388
HS1 | 205 | 0.005 | 1.017 | 0.00503 | 2.98 | 78.385
HS2 | 204 | 0.005 | 1.019 | 0.00504 | 2.97 | 77.846

SDT: separation distance test. HS: human subject. N: number of laser shots. δ_s: pre-defined laser shot separation distance. d: distance covered by the end-effector. μ: mean distance between laser shots. σ²: variance of the laser shot distances. A: area covered by the laser shots.

TABLE I: Evaluation of separation distance consistency
Fig. 14: Robot motion and laser firing instances (units are in meters)

Table I presents the mean and variance, along with the calculated coverage, for each test. The mean value for each test is close to the pre-defined separation distance, and accordingly all tests have a small variance. The highest variance is recorded for SDT3, where the pre-defined separation distance is 2 mm; this can be improved by decreasing the end-effector's speed, which gives the system a wider window of time to acquire the end-effector position from the robot manipulator. The last column of Table I lists the area covered by the laser shots. It is worth mentioning that a circle inscribed in a square covers only π/4 ≈ 78.54% of the total area of the square; comparing the values in column A against this bound gives a fairer picture of covered versus coverable area. Fig. 15 shows the manipulation of the end-effector over the surface of the forehead skin while delivering the laser energy uniformly. In Fig. 16, the automated rejuvenation procedure is performed on the human subjects: Fig. 16(a) shows a bird's-eye view of the procedure; Fig. 16(b) shows the rejuvenation procedure running in the simulation environment; Fig. 16(c) is the view from the depth camera attached to the end-effector; in Fig. 16(d), the thermal values recorded after each laser shot are registered on the facial model; and Fig. 16(e) depicts the complete facial thermal image, with the latest laser shot highlighted by a black circle.

Fig. 15: (a)-(f) The robot end-effector manipulating over the surface of the skin while delivering the laser energy uniformly.
Fig. 16: The robotised skin rejuvenation system performing the procedure automatically. (a) recording camera view. (b) robot manipulator simulation running parallel to the real robot motion. (c) eye-in-hand view. (d) UI registering the thermal data on the reconstructed facial model after a laser shot. (e) thermal camera view.

VI Conclusion

In this article, we have demonstrated an automated facial skin rejuvenation robotic prototype. To accomplish the manipulation task, an industrial-grade robot manipulator with a custom end-effector was used; the custom end-effector is equipped with a depth camera, a thermal camera and a laser generator. A method was presented to estimate the pose of the human face and the viewpoints around the face. An accurate facial model was reconstructed from the point cloud data captured from the different estimated viewpoints, and the 3D facial model was segmented into seven regions using 2D facial landmarks. The optimal paths of the robot manipulator were then extracted from each segment; the optimal path ensures that no overlapping of laser shots between paths occurs. To control the spacing of laser shots while following the optimal path, a control law was devised to fire the laser after a predefined separation distance.

The reconstructed facial model was compared with the ground truth, showing that the error distribution of the reconstructed facial model is in the sub-millimetre range. Initially, the facial model is constructed in the camera coordinate frame and then transformed into the robot coordinate frame. To avoid the error that could be introduced by ${}^{e}\mathbf{T}_c$, an iterative optimisation technique is utilised; the error reduction is achieved by observing the same AR marker through two transformation chains and minimising the difference between them. The laser separation distance was also evaluated by firing laser shots on a plain piece of paper; the separation between shots was measured with a vernier calliper, and the shots were plotted on a graph to check for possible overlapping. All the subsystems were thus tested, from facial model reconstruction to manipulation of the end-effector while firing laser shots.

The results in Fig. 14 show that the robotic skin rejuvenation system filled the operated regions with laser shots uniformly while following a smooth path. This uniform laser distribution demonstrates a potential improvement in the outcome of the procedure that cannot be achieved by a human operator. The proposed methods have some limitations. The proposed robotic prototype is an open-loop system, so it cannot incorporate any change in the pose of the reconstructed facial model: if the subject moves their head slightly, the system is unaware of this discrepancy and cannot compensate in the planned path. Thus, real-time monitoring of the pose of the face during manipulation is one of the main challenges and an objective of our future work. Since the laser fire controller uses only the local information of two positions, $\mathbf{p}_e$ and $\mathbf{p}_s$, overlapping of shots may occur at the edges of the paths or at lane changes. Future work also includes fusing the current skin tissue temperature into the control of laser firing; this will enable the robotic system to decide whether or not to fire at a given instant. Furthermore, we observed that the temperature measurement of the skin surface alone may not be sufficient to develop a laser fire controller. Because the measurements depend on the emittance of the observed object, each temperature measurement may deviate from the real value [flirets3xx], and the room temperature also influences the temperature of the skin surface. Another reason is the low thermal diffusivity of the outermost skin layer, the stratum corneum, which makes the outer layer of the skin a good insulator. For these reasons, the temperature changes occurring inside the skin tissue due to the laser-skin interaction cannot be predicted from the temperature values on the skin surface. A clinical study will be conducted to compare the efficiency of the skin photo-rejuvenation procedure performed by the proposed robotic prototype with that of a human operator.

References