Robust 2 1/2D Visual Servoing of a Cable-Driven Parallel Robot Thanks to Trajectory Tracking

01/16/2020 ∙ by Zane Zake, et al. ∙ LS2N ∙ Inria

Cable-Driven Parallel Robots (CDPRs) are a kind of parallel robots that have cables instead of rigid links. Implementing vision-based control on CDPRs leads to a good final accuracy despite modeling errors and other perturbations in the system. However, unlike final accuracy, the trajectory to the goal can be affected by the perturbations in the system. This paper proposes the use of trajectory tracking to improve the robustness of 2 1/2 D visual servoing control of CDPRs. Lyapunov stability analysis is performed and, as a result, a novel workspace, named control stability workspace, is defined. This workspace defines the set of moving-platform poses where the robot is able to execute its task while being stable. The improvement of robustness is clearly shown in experimental validation.


I Introduction

A special kind of parallel robot, named Cable-Driven Parallel Robot (CDPR), has cables instead of rigid links. The main advantages of CDPRs are their large workspace, low mass in motion, high velocity and acceleration capacity, and reconfigurability [1]. The main drawback of CDPRs is their poor positioning accuracy. Multiple approaches to deal with this drawback can be found in the literature. The most common one is the improvement of the CDPR model. Since cables are not rigid bodies, creating a precise CDPR model is a tedious task, because it needs to include, for example, pulley kinematics, cable sag, elongation and creep [2] [3] [4]. Besides, cable-cable and cable-platform interferences can affect the accuracy of a CDPR. To avoid those interferences, studies have been done on the definition of the CDPR workspace [1]. When modeling has been deemed unsuitable or insufficient, sensors have been used to gain knowledge about some of the system parameters. For example, angular sensors can be used to retrieve the cable angle [5]; cable tension sensors can be used to assess the current payload mass and the location of the center of gravity [6]; color sensors can be used to detect regularly spaced color marks on cables to improve cable length measurement [7]. Of course, exteroceptive sensors can be used to measure the moving-platform (MP) pose accurately. To the best of our knowledge, few studies exist on the use of vision to control CDPRs and improve their accuracy. For instance, four cameras are used in [8] to precisely detect the MP pose of a large-scale CDPR. Furthermore, additional stereo-camera pairs were used to detect cable sagging at their exit points from the CDPR base structure. Similarly, [9] used a six infra-red camera system to detect the MP pose of a CDPR used in a haptic application. A camera can also be mounted on the MP to see the object of interest. In this case, control is performed with respect to the object of interest.
Thus the MP pose is not directly observed. Such a control algorithm for a three-DOF translational CDPR has been introduced in [10], and it has been extended to six-DOF CDPRs in [11], where the authors used a pose-based visual servoing (PBVS) control scheme. The robustness of this control scheme to perturbations and uncertainties in the robot model was analyzed. The stability analysis of this controller was extended in [12] to find the limits of perturbations that do not render the system unstable. As a conclusion, as long as the perturbations are kept within these limits, they do not affect the MP accuracy at its final pose. However, even if perturbation levels are kept within the boundaries, they have an undesirable effect along the trajectory to the goal.

To further improve the robustness and the achievement of the expected trajectory, planning and tracking of a trajectory can be used. Trajectory planning and tracking take advantage of the stability and robustness to large perturbations of classical visual servoing approaches in the vicinity of the goal [18]. Indeed, when the difference between current and desired visual features is small, the behavior of the system approaches the ideal one, no matter the perturbations. With the implementation of trajectory planning and tracking, the desired features vary along the planned trajectory, keeping the difference between current and desired visual features small at all times.

Under perfect conditions, the PBVS control used in [11] and [12] leads to a straight-line trajectory of the target center-point in the image, which means that the target is likely not to be lost during task execution. Unfortunately, even under perfect conditions the camera trajectory is not a straight line. To have a straight-line trajectory for both the target center-point in the image and the camera in the robot frame, a hybrid visual servoing control, named 2½D visual servoing (2½D VS) [13] [14], has been selected in this paper. It combines the use of 2D and 3D features in order to profit from the benefits of PBVS and Image-Based Visual Servoing (IBVS), while suffering the drawbacks of neither.

Accordingly, this paper deals with robust 2½D VS of a CDPR thanks to trajectory tracking. It allows us to ensure the predictability of the trajectory to the goal. Furthermore, it improves the overall robustness of the system. In addition, it was found in [12] that the stability of the system with given perturbations depends also on the MP pose in the base frame. As a consequence, a novel workspace, named Control Stability Workspace (CSW), is defined. This workspace gives the set of MP poses where the robot is able to execute its task, while being stable from a control viewpoint.

This paper is organized as follows. Section II presents the vision-based control strategy for a CDPR. Section III is dedicated to the addition of trajectory planning and tracking in the control strategy. Stability of both control types is analyzed in Section IV. Section V is dedicated to the definition of a novel workspace named Control Stability Workspace. Section VI describes the experimental results obtained on a small-scale CDPR. Finally, conclusions are drawn in Section VII.

II 2½D Visual Servoing of Cable-Driven Parallel Robots

II-A CDPR Kinematics

The schematic of a spatial CDPR is shown in Fig. 1. The camera is mounted on the MP, therefore the homogeneous transformation matrix ^pT_c between the MP frame F_p and the camera frame F_c does not change with time. On the contrary, the homogeneous transformation matrices ^bT_p between the base frame F_b and the MP frame F_p, and ^cT_o between the camera frame F_c and the object frame F_o, change with time.

Fig. 1: Schematic of a spatial CDPR with eight cables, a camera mounted on its MP and an object in the workspace

The length of the i-th cable is the 2-norm of the vector ^b l_i pointing from cable exit point A_i to cable anchor point B_i, namely,

l_i = ‖^b l_i‖_2    (1)

with

^b l_i = ^b t_p + ^b R_p ^p b_i − ^b a_i    (2)

where ^b u_i is the unit vector of ^b l_i that is expressed as:

^b u_i = ^b l_i / l_i    (3)

^b a_i is the Cartesian coordinates vector of cable exit point A_i expressed in F_b; ^p b_i is the Cartesian coordinates vector of cable anchor point B_i expressed in F_p; ^b R_p and ^b t_p are the rotation matrix and translation vector from F_b to F_p.

The cable velocities l̇ are obtained upon differentiation of Eq. (2) with respect to (w.r.t.) time:

l̇ = A ^p v_p    (4)

where ^p v_p is the MP twist expressed in its own frame F_p, l̇ is the cable velocity vector, and A is the Forward Jacobian matrix of the CDPR, whose i-th row is defined as [15]:

A_i = [ ^p u_i^T   (^p b_i × ^p u_i)^T ]    (5)

where i = 1, …, m with m = 8 for a spatial CDPR with eight cables. Thus the Jacobian A is an (m × 6)-matrix, here (8 × 6).
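As an illustration, the inverse kinematics of Eqs. (1)–(5) can be sketched in a few lines of Python. The function name, array layout and frame conventions below are illustrative choices, not from the paper; the Jacobian rows follow the form of Eq. (5) with the cable vector of Eq. (2).

```python
import numpy as np

def cable_kinematics(R_bp, t_bp, a_b, b_p):
    """Sketch of CDPR inverse kinematics for m cables.

    R_bp, t_bp : rotation (3x3) and translation (3,) of the MP frame in the base frame.
    a_b        : (m, 3) cable exit points A_i expressed in the base frame.
    b_p        : (m, 3) cable anchor points B_i expressed in the MP frame.
    Returns cable lengths l (m,) and the (m x 6) forward Jacobian A.
    """
    m = a_b.shape[0]
    lengths = np.zeros(m)
    A = np.zeros((m, 6))
    for i in range(m):
        # Cable vector from exit point A_i to anchor point B_i, in the base frame (Eq. (2))
        l_b = t_bp + R_bp @ b_p[i] - a_b[i]
        lengths[i] = np.linalg.norm(l_b)      # cable length, Eq. (1)
        u_b = l_b / lengths[i]                # unit vector, Eq. (3)
        u_p = R_bp.T @ u_b                    # expressed in F_p, since the twist is in F_p
        A[i, :3] = u_p                        # translational part of the row, Eq. (5)
        A[i, 3:] = np.cross(b_p[i], u_p)      # rotational part of the row, Eq. (5)
    return lengths, A
```

For a single cable hanging straight down from an exit point one meter above the anchor, the length is 1 m and only the vertical component of the MP twist changes it, which is a quick sanity check of the sign conventions.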

II-B 2½D Visual Servoing

The control scheme considered in this paper is shown in Fig. 2. An image is retrieved from the camera and processed with a computer vision algorithm, from which the current feature vector is defined as s = [^{c*}t_c^T, x_g, y_g, θu_z]^T [13] [14]. Here, ^{c*}t_c is the translation vector between the desired camera frame F_{c*}¹ and the current camera frame F_c; x_g and y_g are the image coordinates of the object center g; θu_z is the third component of the θu vector, where u is the axis and θ is the angle of the rotation matrix ^{c*}R_c. An error vector e is defined by comparing s to the desired feature vector s*, namely

e = s − s*    (6)

¹In this paper, the superscript * denotes the desired value, e.g. desired feature vector s*. Similarly, c* in F_{c*} refers to the desired camera frame.
Fig. 2: Control scheme for visual servoing of a CDPR

As mentioned in the introduction, in perfect conditions, this choice of visual features leads to a straight-line trajectory of the camera (because ^{c*}t_c is part of s), as well as a straight-line trajectory of the object center-point g in the image (as (x_g, y_g) is also part of s). The translational degrees of freedom are used to realize the 3D straight line of the camera, while the rotational degrees of freedom are devoted to the realization of the 2D straight line of point g.

To decrease the error e, an exponential decoupled form is selected:

ė = −λ e    (7)

with a positive adaptive gain λ that is computed at each iteration, depending on the current value of ‖e‖ [10]. The derivative of the error e can be written as a function of the Cartesian velocity ^c v_c of the camera, expressed in F_c:

ė = L_s ^c v_c    (8)

where L_s is the interaction matrix given by [13] [14] [16]:

L_s = [ ^{c*}R_c    0_{3×3} ; L_v    L_ω ]    (9)

with:

(10)
(11)

whose terms involve the components of the third row of the rotation matrix ^{c*}R_c:

(12)

Finally, injecting (7) into (8), the instantaneous velocity of the camera in its own frame can be expressed as:

^c v_c = −λ L̂_s^{-1} e    (13)

where L̂_s^{-1} is the inverse of the estimation of the interaction matrix L_s. Note that the inverse is used directly, because L_s is a (6 × 6)-matrix that is of full rank for 2½D VS [16].

II-C Kinematics and Vision

To control the CDPR by 2½D VS, it is necessary to combine the models shown in Sections II-A and II-B. It is done by expressing the MP twist ^p v_p as a function of the camera velocity ^c v_c:

^p v_p = ^p Ad_c ^c v_c    (14)

where ^p Ad_c is the adjoint matrix that is expressed as [17]:

^p Ad_c = [ ^p R_c    [^p t_c]_× ^p R_c ; 0_{3×3}    ^p R_c ]    (15)

where [^p t_c]_× denotes the skew-symmetric matrix associated with the translation vector ^p t_c.

Finally, the model of the system shown in Fig. 2 is written from Eqs. (4), (8) and (14):

ė = L_s ^c Ad_p A⁺ l̇    (16)

where A⁺ is the Moore-Penrose pseudo-inverse of the Jacobian matrix A and ^c Ad_p = (^p Ad_c)^{-1}.

Upon injecting (14) and (13) into (4), the output of the control scheme, i.e. the cable velocity vector l̇, takes the form:

l̇ = −λ Â ^p Âd_c L̂_s^{-1} e    (17)

where Â and ^p Âd_c are the estimations of A and ^p Ad_c, resp.
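The chain of Eqs. (13), (14) and (4) that produces the cable velocity command of Eq. (17) can be sketched as follows. This is a minimal sketch under stated assumptions: the twist ordering is (linear; angular), the adjoint follows the block form of Eq. (15), and all matrix arguments are placeholders for calibrated estimates.

```python
import numpy as np

def adjoint(R_pc, t_pc):
    """6x6 adjoint matrix mapping a camera twist to an MP twist (block form of Eq. (15))."""
    def skew(v):
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R_pc
    Ad[:3, 3:] = skew(t_pc) @ R_pc
    Ad[3:, 3:] = R_pc
    return Ad

def cable_velocity_command(e, lam, Ls_hat, Ad_pc_hat, A_hat):
    """Cable velocity command in the spirit of Eq. (17)."""
    vc = -lam * np.linalg.inv(Ls_hat) @ e   # camera velocity, Eq. (13)
    vp = Ad_pc_hat @ vc                     # MP twist, Eq. (14)
    return A_hat @ vp                       # cable velocities, Eq. (4)
```

With identity estimates the command reduces to −λ e, which is the expected degenerate case of the exponential decrease of Eq. (7).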

III Trajectory Planning and Tracking

It is well known that perturbations in the system, even those that do not cause loss of stability, have an undesirable effect on the trajectory. This was shown in [11] [12] for PBVS and it is also true for the 2½D VS controller. Trajectory planning and tracking can be used to increase the robustness of the chosen control w.r.t. modeling errors [18] and to preserve the straight-line shape of the trajectory [19].

Indeed, the larger ‖e‖, the bigger the effect of modeling errors on the system behavior. When tracking a chosen trajectory, at each iteration the error becomes e(t) = s(t) − s*(t). Consequently, when t = 0 s we have e = 0. Since s* is now time-varying, the control scheme needs to be slightly changed. More precisely, instead of (8) we now have [16] [19]:

ė = L_s ^c v_c − ∂s*/∂t    (18)

Hence, the new control scheme is shown in Fig. 3.

Fig. 3: Control scheme for VS with trajectory tracking of a CDPR

The new model of the system shown in Fig. 3 is written from Eqs. (4), (18) and (14):

ė = L_s ^c Ad_p A⁺ l̇ − ∂s*/∂t    (19)

Injecting (14) and (18) into (4) and expressing the cable velocity vector l̇ leads to:

l̇ = Â ^p Âd_c L̂_s^{-1} ( −λ e + ∂s*/∂t )    (20)

The success of any trajectory tracking is based on the time available to complete the task. The higher the trajectory time T, the more accurate the trajectory tracking. Indeed, the larger T, the lower the MP velocity, and the smaller the path step between two iterations. This leads to a smaller difference between consecutive desired features, which in turn means a smaller difference between s and s*, thus a better path following.

III-A Implementation for 2½D VS

The implementation of the trajectory planning and tracking for 2½D VS is shown in Algorithm 1. There are three distinct phases, the first being the initialization, the second being the trajectory planning, and the third being the trajectory tracking. During the initialization phase, the final desired object pose and center-point are defined. They are used to compute the final feature vector s*_f. Similarly, the initial feature vector s_0 is defined based on the initial pose and center-point of the object of interest that are measured and recorded. This allows us to compute the full error:

e_Δ = s*_f − s_0    (21)

and trajectory time:

T = max_k ( |e_{Δ,k}| / v̄_k )    (22)

where e_{Δ,k} stands for the k-th component of e_Δ; v̄_k stands for the k-th component of the desired average velocity v̄.

The current desired feature vector s* varies at a constant velocity ṡ* that is expressed as:

ṡ* = e_Δ / T    (23)

At the trajectory planning phase, we define s*(t_j) for t_j ∈ [0, T]. At the beginning, when t = 0 s, it is clear that s*(0) = s_0. Then for t_j = j Δt, where j ∈ {1, …, N} and Δt is the time interval between two iterations, the trajectory planning is expressed as:

s*(t_j) = s_0 + t_j ṡ*    (24)

As a consequence, we can set in (20):

∂s*/∂t = ṡ* for t ≤ T,  ∂s*/∂t = 0 for t > T    (25)

The third phase iterates until the difference  reaches a defined threshold. At each iteration, the current feature vector  is computed from the current object pose and the current object center-point coordinates. The current desired feature vector  is retrieved from trajectory planning algorithm. This allows us to compute the current error , which is then used as input of the control scheme.

1:  Initialization
2:     Set the desired object pose and center-point coordinates
3:     Define final feature vector
4:     Read and record initial object pose  and center-point coordinates 
5:     Define initial feature vector
6:     Compute trajectory time from (22)
7:     Compute the constant velocity as in (23)
8:  End of Initialization 
9:  
10:  Trajectory Planning
11:     
12:     
13:     for  record
14:        
15:     end for
16:  End of Trajectory Planning 
17:  
18:  Trajectory Tracking
19:     while  do
20:        Retrieve current desired feature vector
21:        Compute current feature vector
22:        Compute current error
23:        Compute current , and
24:        Compute using (20) and send to CDPR
25:     end while
26:  End of Trajectory Tracking 
Algorithm 1: Trajectory planning and tracking
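The planning phase of Algorithm 1 can be sketched as a constant-velocity interpolation in feature space, in the spirit of Eqs. (21)–(24). The function name and argument layout are illustrative; per-component desired average velocities are assumed positive.

```python
import numpy as np

def plan_trajectory(s0, s_star_f, v_avg, dt):
    """Constant-velocity feature-space trajectory sketch (Eqs. (21)-(24)).

    s0, s_star_f : initial and final desired feature vectors (1-D arrays).
    v_avg        : desired average velocity per feature component (positive values).
    dt           : time interval between two iterations.
    Returns the sampled desired features s*(t_j) and the trajectory time T.
    """
    e_full = s_star_f - s0                           # full error, Eq. (21)
    T = np.max(np.abs(e_full) / v_avg)               # trajectory time, Eq. (22)
    s_dot = e_full / T                               # constant velocity, Eq. (23)
    n = int(np.ceil(T / dt))
    traj = [s0 + j * dt * s_dot for j in range(n)]   # sampled s*(t_j), Eq. (24)
    traj.append(s_star_f.copy())                     # hold the final value beyond T
    return traj, T
```

Taking the maximum over components in Eq. (22) guarantees that no feature has to move faster than its desired average velocity; the slowest component dictates the trajectory time.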

IV Stability Analysis

The ability of a system to successfully complete its tasks can be characterized by its stability. By analyzing system stability, it is possible to find the limits of perturbation on different variables that the system is able to withstand, that is, to determine whether the system is able to converge accurately to its goal despite the perturbations [20].

In this paper, Lyapunov analysis is used to determine the stability of the closed-loop system.

IV-A 2½D Visual Servoing

The following closed-loop equation is obtained from (16) and (17):

ė = −λ L_s ^c Ad_p A⁺ Â ^p Âd_c L̂_s^{-1} e    (26)

From (26), a sufficient condition to ensure the system stability is [20]:

Π = L_s ^c Ad_p A⁺ Â ^p Âd_c L̂_s^{-1} > 0    (27)

Indeed, if this condition is satisfied, the error e will always decrease to finally reach e = 0.

IV-B Trajectory Tracking with 2½D Visual Servoing

When trajectory tracking is involved, the closed-loop equation is written by injecting (20) into (19). Then, by using (25), we obtain:

ė = −λ Π e + (Π − I_6) ∂s*/∂t    (28)

The stability criterion Π keeps the form defined in (27). However, even if Π is positive definite, the error e will decrease iff the estimations are sufficiently accurate so that

Π ∂s*/∂t ≈ ∂s*/∂t    (29)

Otherwise tracking errors will be observed. This can be explained by a simple example from [16], where a scalar differential equation ė = −λ e + b, which is a simplification of (28), is analyzed. Its solution e(t) = (e(0) − b/λ) e^{−λt} converges towards b/λ. Increasing λ reduces the tracking error. However, if λ is too high, it can render the system unstable. Therefore, it is necessary to keep b as small as possible.

Most importantly, as the current desired feature vector s* regularly approaches the final desired feature vector s*_f, the desired feature vector velocity ∂s*/∂t becomes 0 as stated in (25), which makes the tracking errors vanish at the end.
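The scalar tracking-error example from [16] can be checked numerically. Writing it as ė = −λe + b with illustrative values of λ and b, a forward-Euler integration converges to the predicted steady-state error b/λ, and a larger gain yields a smaller residual error.

```python
def integrate_tracking_error(lam, b, e0=0.0, dt=1e-3, t_end=10.0):
    """Forward-Euler integration of e_dot = -lam * e + b.

    The analytical steady state is b / lam: a constant perturbation b
    leaves a residual tracking error that shrinks as the gain lam grows.
    """
    e = e0
    for _ in range(int(t_end / dt)):
        e += dt * (-lam * e + b)
    return e
```

This mirrors the argument in the text: the residual b/λ shrinks with λ, but in the real closed loop λ cannot be increased arbitrarily without risking instability, so keeping the perturbation term small is the safer lever.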

V Control Stability Workspace

Before using a CDPR, one needs to know its workspace. Among the existing workspaces [21] [22], the static feasible workspace (SFW) is the simplest one and is formally expressed as [1]:

W_SFW = { p : ∃ τ ∈ T, W τ + w_g = 0 }    (30)

Namely, the workspace W_SFW is the set of all MP poses p for which there exists a vector of cable tensions τ within the cable tension space T such that the CDPR can balance the gravity wrench w_g. Here, W is the wrench matrix and it is related to the robot Jacobian as W = −A^T.

This workspace is a kineto-static workspace that shows all the poses that the MP is physically able to attain. In addition, it is important to evaluate the CDPR ability to reach a pose from a control perspective.

In [12] it was concluded that the results of stability analysis were dependent on the size of the MP workspace. The smaller the desired workspace, the larger the tolerated perturbations within system stability. The MP pose and stability analysis are related to each other, because the MP pose shows up in the stability criterion  through the Jacobian matrix  in the form of rotation matrix  and translation vector .

According to the stability analysis of 2½D VS control, presented in Section IV, the corresponding workspace, named Control Stability Workspace (CSW), is defined as follows:

W_CSW = { p : ∀ δ, ‖δ‖ ≤ δ_max, Π > 0 }    (31)

The workspace W_CSW is the set of all MP poses p for which the stability criterion Π is positive definite for any vector of perturbations δ that is within the bounds δ_max. It means that for any MP pose within its CSW, the robot controller will be able to guide the MP to its goal.

It is of interest to create a compound workspace that takes into account both the controller and the kineto-static performance of the robot. Indeed, on the one hand, a MP pose can belong to W_SFW while being outside of W_CSW, namely, it is in a static equilibrium, but it will fail to reach the goal. On the other hand, a MP pose can belong to W_CSW while being outside of W_SFW, namely, the robot controller will make the MP reach the goal although the MP is not in a static equilibrium. Thus we define a compound workspace as the intersection of W_SFW and W_CSW:

W_SFW ∩ W_CSW    (32)

The compound workspace is the set of all MP poses p for which there exists a vector of cable tensions τ within the cable tension space T such that the CDPR can balance the gravity wrench, and for which, for any vector of perturbations δ within the bounds δ_max, the stability criterion Π is positive definite.
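The two membership tests underlying Eqs. (30) and (31) can be sketched for a sampled pose. This is a sketch under stated assumptions: the static-equilibrium test of Eq. (30) is posed as a linear feasibility problem solved with SciPy's `linprog`, and positive definiteness of the (generally non-symmetric) criterion Π is checked through the eigenvalues of its symmetric part, a sufficient condition for x^T Π x > 0.

```python
import numpy as np
from scipy.optimize import linprog

def in_sfw(W, w_g, tau_min, tau_max):
    """SFW test (Eq. (30)): does tau in [tau_min, tau_max]^m exist
    such that W @ tau + w_g = 0?"""
    m = W.shape[1]
    res = linprog(c=np.zeros(m),            # pure feasibility problem, no cost
                  A_eq=W, b_eq=-w_g,
                  bounds=[(tau_min, tau_max)] * m,
                  method="highs")
    return res.success

def is_positive_definite(Pi):
    """Sufficient CSW-style test: symmetric part of Pi is positive definite."""
    eigs = np.linalg.eigvalsh(0.5 * (Pi + Pi.T))
    return bool(np.all(eigs > 0))
```

A pose would be kept in the compound workspace of Eq. (32) only when `in_sfw` holds and `is_positive_definite` holds for every sampled perturbation vector within the bounds.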

VI Experimental Setup and Validation

Stability criterion (27) is robot model dependent. Thus, ACROBOT, the CDPR prototype used for experimental validation, is presented in Section VI-A. Workspace  is computed in Section VI-C based on the numerical analysis of the stability criterion (27). Finally, experimental results are shown in Section VI-D.

VI-A CDPR prototype ACROBOT

CDPR prototype ACROBOT is shown in Fig. 4. It is assembled in a suspended configuration, so that all the cable exit points are located at the four corners above the MP. Cables are 1.5 mm in diameter, assumed to be massless and non-elastic. The frame of the robot is a 1.2 m × 1.2 m × 1.2 m cube. The MP size is 0.1 m × 0.1 m × 0.07 m and its mass is 1.5 kg.

Fig. 4: ACROBOT: a CDPR prototype located at IRT Jules Verne, Nantes

A camera is mounted on the MP facing the ground. As a simplification of the vision part, AprilTags [23] are used as objects and are put in various places on the ground. Their recognition and localization are done by algorithms available in the ViSP library [24]. The robot is controlled to arrive directly above a chosen AprilTag.

VI-B Constant and varying perturbations in the system

Two types of perturbations are considered depending on whether they change during task execution or not. Here is a list of perturbed parameters that do not change during the task execution:

  • ^pT_c: the pose of the camera in the MP frame F_p can be perturbed due to hand-eye calibration errors. It affects the adjoint matrix ^pAd_c;

  • ^p b_i: the Cartesian coordinates vector of cable anchor points expressed in F_p can be perturbed due to manufacturing errors. It affects the estimation of the Jacobian matrix A;

  • ^b a_i: the Cartesian coordinates vector of cable exit points expressed in F_b. Since pulleys are not modeled, there is a small difference between the modeled and the actual cable exit points. It affects the estimation of the Jacobian matrix A.

(a)
(b)
(c)
Fig. 5: Workspace visualizations for ACROBOT: a) SFW; b) CSW for 2½D VS with minimal perturbations in the system and constant MP orientation; c) CSW for 2½D VS with non-negligible perturbations in the system and MP rotation up to 30° about any arbitrary axis.

Here is a list of the perturbed parameters that vary during the task execution:

  • s: the feature vector requires the current AprilTag Cartesian pose in F_c and the image coordinates of its center-point g. Those terms are computed from image features and are thus corrupted by noise. The smaller the AprilTag in the image, the larger the estimation error. It affects the interaction matrix L_s;

  • ^bT_p: the transformation matrix between frames F_b and F_p is estimated by exponential mapping:

    (33)

    Since the estimate is computed from the commanded twist, which is perturbed by modeling errors, and since the computed twist does not correspond exactly to the achieved one due to the time-response of the low-level controller, the estimated ^bT_p differs from the true one. Furthermore, the initial position is only coarsely known². It affects the Jacobian matrix A.

²The knowledge of the initial MP pose is usually difficult to acquire when working with CDPRs. The usual approach is to always finish a task at a known home pose. This can be impossible due to a failed experiment or an emergency stop. Furthermore, great care must be taken when measuring the home pose, which in the case of ACROBOT was done by hand.

VI-C Workspace of ACROBOT

The constant orientation static feasible workspace of ACROBOT was traced thanks to the ARACHNIS software [25] and is shown in Fig. 5(a).

CSW for ACROBOT is shown in Fig. 5(b). Here, for the sake of comparison we also constrain the MP to the same constant orientation. Furthermore, we also take into account hand-eye calibration errors in the camera pose in the MP frame F_p, which are simulated as 0.01 m along and 3° about any arbitrary axis. Finally, the MP pose is assumed to be estimated coarsely, allowing for an error of 0.05 m in translation and 10° in rotation along and about any arbitrary axis.

Figure 5(c) shows a smaller CSW, where the system will remain stable with non-negligible perturbations. Namely, we add a 0.19 m translational error and an 8.5° rotational error along and about any arbitrary axis to the initial MP pose. Furthermore, we also simulate a bad hand-eye calibration by adding a 30° error to the camera pose in F_p. Finally, since we are interested in changing the orientation of the MP, the CSW shown in Fig. 5(c) allows for up to 30° rotation of the MP about any arbitrary axis.

VI-D Experimental Validation

An experimental setup was designed to validate the proposed approach. For 2½D VS we used an adaptive gain λ [10]:

λ(‖e‖) = (λ_0 − λ_∞) e^{−(λ'_0 / (λ_0 − λ_∞)) ‖e‖} + λ_∞    (34)

where:

  • ‖e‖ is the 2-norm of the error at the current iteration;

  • λ_0 is the gain tuned for very small values of ‖e‖;

  • λ_∞ is the gain tuned for very high values of ‖e‖;

  • λ'_0 is the slope of λ at ‖e‖ = 0.

These coefficients have been tuned at the following values: , and .
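The adaptive gain of Eq. (34) is available, e.g., in the ViSP library; a minimal sketch in Python, with illustrative tuning values, is:

```python
import numpy as np

def adaptive_gain(e_norm, lam0, lam_inf, lam0_slope):
    """Adaptive gain of Eq. (34):
    lam(||e||) = (lam0 - lam_inf) * exp(-lam0_slope * ||e|| / (lam0 - lam_inf)) + lam_inf

    lam0       : gain for very small errors (lam(0) = lam0).
    lam_inf    : gain for very large errors (asymptote).
    lam0_slope : slope of lam at ||e|| = 0.
    """
    return (lam0 - lam_inf) * np.exp(-lam0_slope * e_norm / (lam0 - lam_inf)) + lam_inf
```

The gain interpolates smoothly between a high value near convergence (fast final decrease of the error) and a low value far from the goal (limited velocities at the start of the task).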

For the controller with trajectory tracker, a constant gain λ has been set, since the error is always small. Additionally, the trajectory time T for the planner is set to be equal to the execution time of the classic 2½D VS in order to ease the comparison of the results. Finally,  s.

The initial values are the following:

and final desired values are selected to be:

where  denotes the MP pose in the base frame F_b;  denotes the AprilTag pose in the camera frame F_c; and  stands for the AprilTag center-point coordinates in the image. Note that the AprilTag pose and center-point were measured, while the MP pose was estimated through the exponential mapping explained in Section VI-B. Therefore, the estimated MP poses are shown as a reference to Fig. 5(c), but are not used in the control.

Two perturbation sets are defined. The former corresponds to the CSW shown in Fig. 5(c) and includes: a perturbation of the initial MP pose of 0.19 m along and  about an arbitrary axis; and a perturbation on the camera orientation expressed in F_p of  about an arbitrary axis. The second set includes: a perturbation of the initial MP pose of 0.13 m along and 9.5° about an arbitrary axis; a perturbation of the camera pose in F_p of 0.05 m along the y axis and 12.5° about an arbitrary axis; and a perturbation of 0.005 m in a random direction for each cable exit point A_i and anchor point B_i.

Figure 6 shows the experimental results (see also the accompanying video). Figure 6(a) shows the trajectories of the AprilTag center-point in the image, while Fig. 6(c) shows the 3D trajectories of the camera in the frame F_b. Additionally, the deviation from the straight-line trajectory in the image and in F_b is shown in Figs. 6(b) and 6(d), respectively. Each controller, the classic 2½D VS and the one with trajectory tracking (named “Traj. tracking” in Fig. 6), was tested without added perturbations and under the effect of each of the two perturbation sets. Each experiment was repeated 15 times and the results are combined in a bar graph shown in Fig. 7.

(a)
(b)
(c)
(d)
Fig. 6: 2½D VS experiments on ACROBOT: a) the trajectory of AprilTag center-point in the image; b) The pixel deviation from the ideal straight-line trajectory; c) the trajectory of the camera in the frame ; d) the deviation from the ideal straight-line 3D trajectory.
Fig. 7: Bar graph showing the max and mean deviation from the ideal object center-point trajectory in the image and the ideal camera trajectory in F_b, with and without voluntarily added perturbations. Classical 2½D VS without added perturbation (A), under the effect of the first perturbation set (C) and the second (E); 2½D VS with trajectory tracker without added perturbation (B), under the effect of the first perturbation set (D) and the second (F).

Under good conditions, the behavior is as expected, namely, we see straight-line trajectories both in 3D and in the image. When no perturbation is added, the behavior of the 2½D VS controller with and without trajectory tracking is similar. For both controllers the deviation does not surpass 0.01 m and 10 pixels. The superiority of trajectory tracking can be clearly seen when the system is perturbed. Each of the perturbation sets forces the classic 2½D VS to produce deviations from the ideal trajectories. The first set leads to a higher deviation on the 3D trajectory (orange line in Fig. 6(d)), while the second has a more pronounced effect on the trajectory in the image (brown line in Fig. 6(b)). On the contrary, the perturbation sets have a minimal effect on the trajectories produced by the controller with trajectory tracker, as depicted by the gray and cyan lines in Fig. 6. Indeed, for the 3D trajectory the three lines corresponding to the trajectory tracking controller remain very near each other. The behavior is slightly worse in the image, where one perturbation set leads to about an 18 pixel error (gray line). However, it is three times smaller than the almost 55 pixel error (brown line) obtained with the classic 2½D VS under the same perturbations in Fig. 6(b).

Figure 7 shows the max and mean deviation from the ideal 2D and 3D trajectories for both controllers subject to the perturbation cases. When there is no perturbation, the behavior of the controller without and with trajectory tracker is similar (groups A and B). No matter the perturbation set, the errors are at least three times smaller when the trajectory tracker is used (groups C and D for the first set; groups E and F for the second). Furthermore, the 3D trajectory deviation (and, for the first set, the deviation of the trajectory in the image) remains similar to that of the trajectory tracker without perturbation.

VII Conclusions

This paper dealt with the use of trajectory planning and tracking with 2½D Visual Servoing for the control of Cable-Driven Parallel Robots. First, the proposed controller aims to increase the robustness of the system with respect to perturbations and errors in the robot model. Furthermore, it ensures the straight-line motion of both the center-point of the AprilTag in the image and the camera in the base frame.

Furthermore, a Control Stability Workspace (CSW) was defined and computed for a CDPR prototype ACROBOT, based on the stability analysis of the full system under 2½D visual servoing control. The effect of perturbations on CSW size was highlighted.

The improvement of robustness due to the use of trajectory planning and tracking was clearly shown in experimental validation. While both systems, namely, without and with trajectory tracking, remain stable and achieve the set goal, the trajectory produced by the former is clearly affected by perturbations.

A further improvement would be developing a control law that allows us to detect and counteract the modeling errors, instead of increasing robustness to these errors.

References

  • [1] L. Gagliardini, S. Caro, M. Gouttefarde, A. Girin, “Discrete Reconfiguration Planning for Cable-Driven Parallel Robots”, in Mechanism and Machine Theory, vol. 100, pp. 313–337, 2016.
  • [2] V. L. Schmidt, “Modeling Techniques and Reliable Real-Time Implementation of Kinematics for Cable-Driven Parallel Robots using Polymer Fiber Cables”, Ph.D. dissertation, Fraunhofer Verlag, Stuttgart, Germany, 2017.
  • [3] J. P. Merlet, “Singularity of Cable-Driven Parallel Robot With Sagging Cables: Preliminary Investigation”, in ICRA, pp. 504–509, 2019.
  • [4] N. Riehl, M. Gouttefarde, S. Krut, C. Baradat, F. Pierrot. “Effects of non-negligible cable mass on the static behavior of large workspace cable-driven parallel mechanisms”, in ICRA, pp. 2193–2198, 2009.
  • [5] A. Fortin-Côté, P. Cardou, A. Campeau-Lecours, “Improving Cable-Driven Parallel Robot Accuracy Through Angular Position Sensors”, in IEEE/RSJ Int. Conf on Intelligent Robots and Systems (IROS), pp. 4350–4355, 2016.
  • [6] E. Picard, S. Caro, F. Claveau, F. Plestan, “Pulleys and Force Sensors Influence on Payload Estimation of Cable-Driven Parallel Robots”, in Proceedings - IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Madrid, Spain, October, 1–5 2018.
  • [7] J. P. Merlet, “Improving cable length measurements for large CDPR using the Vernier principle”, in International Conference on Cable-Driven Parallel Robots, pp. 47–58, Springer, Cham, 2019.
  • [8] T. Dallej, M. Gouttefarde, N. Andreff, P.-E. Hervé, P. Martinet, “Modeling and Vision-Based Control of Large-Dimension Cable-Driven Parallel Robots Using a Multiple-Camera Setup”, in Mechatronics, vol. 61, pp. 20–36, 2019.
  • [9] R. Chellal, L. Cuvillon, E. Laroche, “A Kinematic Vision-Based Position Control of a 6-DoF Cable-Driven Parallel Robot”, in Cable-Driven Parallel Robots, pp. 213–225, Springer, Cham, 2015.
  • [10] R. Ramadour, F. Chaumette, J.-P. Merlet, “Grasping Objects With a Cable-Driven Parallel Robot Designed for Transfer Operation by Visual Servoing”, in ICRA, pp. 4463–4468, IEEE, 2014.
  • [11] Z. Zake, F. Chaumette, N. Pedemonte, S. Caro, “Vision-Based Control and Stability Analysis of a Cable-Driven Parallel Robot”, in IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1029–1036, 2019.
  • [12] Z. Zake, S. Caro, A. Suarez Roos, F. Chaumette, N. Pedemonte, “Stability Analysis of Pose-Based Visual Servoing Control of Cable-Driven Parallel Robots”, in International Conference on Cable-Driven Parallel Robots, pp. 73–84, Springer, Cham, 2019.
  • [13] F. Chaumette, E. Malis, “2 1/2 D visual servoing: a possible solution to improve image-based and position-based visual servoings”, in ICRA, pp. 630-635, IEEE, 2000.
  • [14] V. Kyrki, D. Kragic, H. Christensen, “New shortest-path approaches to visual servoing”, in IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pp. 349–354, 2004.
  • [15] A. Pott, “Cable-Driven Parallel Robots: Theory and Application”, vol. 120., Springer, Cham, 2018.
  • [16] F. Chaumette, S. Hutchinson, P. Corke, “Visual Servoing”, in Handbook of Robotics, 2nd edition, O. Khatib B. Siciliano (ed.), pp. 841–866, Springer, 2016.
  • [17] W. Khalil, E. Dombre, “Modeling, Identification and Control of Robots”, Butterworth-Heinemann, 2004, pp. 13–29.
  • [18] Y. Mezouar, F. Chaumette, “Path planning for robust image-based control”, in IEEE Trans. on Robotics and Automation, vol. 18, no. 4, pp. 534–549, August 2002.
  • [19] F. Berry, P. Martinet, J. Gallice, “Trajectory generation by visual servoing”, in IROS, Grenoble, France, September 1997.
  • [20] H. K. Khalil, Nonlinear systems, Macmillan publishing Co., 2nd ed., New York 1996.
  • [21] E. Stump, V. Kumar, “Workspaces of Cable-Actuated Parallel Manipulators”, in Journal of Mechanical Design, vol. 128, no. 1, pp. 159–167, 2006.
  • [22] R. Verhoeven, “Analysis of the workspace of tendon-based stewart platforms”, Ph.D. dissertation, Univ. Duisburg-Essen, 2004.
  • [23] E. Olson, “AprilTag: A robust and flexible visual fiducial system”, in ICRA, pp. 3400–3407, IEEE, 2011.
  • [24] É. Marchand, F. Spindler, F. Chaumette, “ViSP for visual servoing: a generic software platform with a wide class of robot control skills”, in IEEE Robotics & Automation Magazine, vol. 12, no. 4, pp. 40–52, 2005.
  • [25] A. L. C. Ruiz, S. Caro, P. Cardou, F. Guay, “ARACHNIS: Analysis of Robots Actuated by Cables with Handy and Neat Interface Software”, in Cable-Driven Parallel Robots, pp. 293–305, Springer, Cham, 2015.