System Design and Control of an Apple Harvesting Robot

10/21/2020 ∙ by Kaixiang Zhang, et al. ∙ USDA / Michigan State University

There is a growing need for robotic apple harvesting due to the decreasing availability and rising cost of labor. Toward the goal of developing a viable robotic system for apple harvesting, this paper presents the synergistic mechatronic design and motion control of a robotic apple harvesting prototype, which lays a critical foundation for future advancements. Specifically, we develop a deep learning-based fruit detection and localization system using an RGB-D camera. A three degree-of-freedom manipulator is then designed with a hybrid pneumatic/motor actuation mechanism to achieve fast and dexterous movements. A vacuum-based end-effector is used for detaching apples. These three components are integrated into a robotic apple harvesting prototype with simplicity, compactness, and robustness. Moreover, a nonlinear velocity-based control scheme is developed for the manipulator to achieve accurate and agile motion control. Experiments are conducted to demonstrate the performance of the developed apple harvesting robot.


I Introduction

Apple harvesting is a physically strenuous and labor-intensive task. It is estimated that the seasonal agricultural workforce required for apple harvesting in the U.S. exceeds 10 million worker hours annually, accounting for about 15% of the total production cost [11]. The long-term profitability and sustainability of the apple industry have been eroded by the decreasing availability and rising cost of labor. Moreover, manual picking exposes workers to high risks of ergonomic injury and musculoskeletal pain, as it involves extensive repetitive body motions and awkward postures (especially when picking fruits at high locations or deep in the canopy, and when repeatedly ascending and descending ladders with heavy loads) [10]. As such, there is an imperative need for automated apple picking to address the aforementioned concerns.

Existing automated apple harvesting systems can be generally categorized as shake-and-catch harvesting [28, 8, 17, 18] or fruit-by-fruit harvesting [1, 9, 6, 32]. A shake-and-catch system vibrates the branches or trunk of the tree to detach apples, and a catching device is then employed to catch the falling fruit. These systems are efficient in detaching apples; however, they cannot avoid the bruising caused by apple-to-apple, apple-to-tree, and apple-to-container collisions, and hence they have not been adopted by the apple industry [17]. Fruit-by-fruit selective harvesting systems, on the other hand, are developed with the aid of mechatronic and robotic technologies and generally consist of a vision-based perception component, a manipulator, and an end-effector. The manipulator of such a system picks apples sequentially, so fruit damage can be reduced substantially. However, fruit-by-fruit harvesting requires multi-disciplinary advances to enable various synergistic functionalities, including perception, manipulation, and control. Specifically, the perception module typically exploits sensors such as cameras and lidars mounted on the robot to detect and localize the apples [14, 29, 30]. With the target positions provided by the perception system, the control system directs the manipulator to approach the fruit. A specialized end-effector (e.g., a gripper or vacuum tube) then detaches the apple from the tree and drops it into a receiving device or container [25]. As fruit-by-fruit robotic picking is more appealing to the apple industry, it is the focus of this paper.

Over the past two decades, several fruit-by-fruit robotic apple harvesting systems have been developed [1, 9, 6, 32]. For example, a 7 degree-of-freedom (DOF) industrial manipulator and a silicone funnel-shaped gripper with an internal camera were developed in [1]. The system scans the orchard canopy from 40 look-out positions, and at each position the ripe apples are detected and picked one by one in a looped task. In [32], an apple harvesting robot is developed with a global camera, a 7 DOF manipulator, and a finger-based end-effector. Instead of attaching the camera to the end-effector as in [1] and [9], the global camera, independent of the other parts of the harvesting system, is employed to provide a larger field of view. Both robotic systems reported in [1] and [32] require a 7 DOF manipulator to approach the fruit. While a 7 DOF manipulator provides high maneuverability, it is overly complicated and expensive for practical use.

During the apple picking process, the manipulator needs to approach fruits located at various positions within the workspace, and it thus requires a robust and accurate motion control scheme. Several advances have been made in manipulator control for robotic harvesting. For example, a two-step control method is developed in [1], where the manipulator is initially adjusted such that the camera’s optical axis points straight to the apple and it is then controlled to reach out to the apple along the optical axis. This two-step method can be used in unstructured orchard environments, but it leads to discontinuous manipulation motion and low harvesting efficiency. Another control scheme is developed in [32], which regulates the end-effector along a horizontal path or inclined path to reach the fruits. This scheme is only effective for V-trellis orchard architectures and would not be suitable for other modern structured orchards.

Despite the aforementioned efforts, there are still no commercially available robotic harvesting systems for tree fruits because the developed systems remain unsatisfactory in performance, too complicated or expensive to be economically viable, and unreliable or inefficient in real orchard environments [26, 14]. To lay a foundation for automated apple harvesting, this paper presents the development of a new robotic apple harvesting prototype. Specifically, an RGB-D camera is exploited as the primary sensor, and a deep learning-based perception module is developed for apple detection and localization. A 3 DOF manipulator and a vacuum-based end-effector are designed to approach and detach the apple, respectively. Different from previous studies that rely on high-DOF industrial manipulators, the developed 3 DOF manipulator has a simple and compact structure while providing high picking efficiency. Furthermore, a motion control scheme is designed to ensure that the manipulator approaches apples with satisfactory accuracy. Experiments are conducted to illustrate the performance of the integrated system.

The main contributions of this paper include the following. Firstly, we present the synergistic development of a robotic apple harvesting prototype that is simple in design, fast in actuation, and efficient in fruit picking. Secondly, a nonlinear control strategy is designed by fully exploiting the mechanical structure of the manipulator, which avoids discontinuous manipulation motion and accomplishes more agile apple approaching compared to [1, 32]. Last but not least, the experimental studies validate the design concept and underpin future research.

The remainder of this paper is organized as follows. Section II presents the system design of the apple harvesting robot whereas Section III details the motion control scheme. Experiment results are provided in Section IV. Finally, conclusions are drawn in Section V.

II System Design

Our developed harvesting robot prototype is illustrated in Fig. 1. The hardware consists primarily of three modules: an Intel RealSense RGB-D camera, a 3 DOF manipulator, and a vacuum-based end-effector. Auxiliary units (e.g., power supply, vacuum pump) are placed on the rear side of the system (near the lower left corner in the image). These units are connected to a laptop (Intel i5-6700 CPU and 16 GB RAM). The Robot Operating System (ROS) is utilized to integrate the entire system and to facilitate communication and control among the modules. Below is a detailed description of each module.

Fig. 1: The developed robotic apple harvesting prototype.

II-A Visual Perception

The first and foremost task of automated apple harvesting is the detection and localization of fruits on the tree. Apple detection is to segment apples from the background (i.e., foliage, branches, and trunks), while fruit localization refers to calculating the three-dimensional spatial position of the detected apples relative to the camera frame. In our system, an Intel RealSense D435i RGB-D camera is used to capture the environmental information because of its compactness, low cost, and the accuracy of its active stereo depth sensing [20].

Fig. 2: An example of using the Mask R-CNN based algorithm for detecting ’Gala’ apples, where green bounding boxes represent identified apples.

Apple detection and localization in the complex orchard environment is a challenging task due to partial occlusions by foliage and branches, varying lighting conditions, and color variations across varieties and ripeness levels. To address these challenges and achieve robust and accurate apple detection, a deep learning method based on Mask R-CNN [16] is utilized. Mask R-CNN is a state-of-the-art deep neural network-based object detection algorithm that has found great success in various applications, including vehicle detection [3], nuclei segmentation [34], and fruit detection [12]. It exploits a mask branch network to enhance end-to-end classification and segmentation. To train the network, we collected a comprehensive orchard image dataset for two apple varieties (i.e., 'Gala' and 'Blondee') under sunny and cloudy weather conditions at different times of day (9 am, noon, and 3 pm) from a commercial orchard in Sparta, Michigan, USA during the 2019 harvest season. A total of 1,243 images were collected, of which 933 were used for training the Mask R-CNN and the remaining 310 for validation. The detection algorithm based on Mask R-CNN achieved a fruit identification accuracy of 92.7% on the test dataset. A detailed report on the implementation of the Mask R-CNN algorithm for apple detection and localization is given in [5]. Fig. 2 shows an example of the detection results, where green boxes mark the identified apples.

With the detected apples in bounding boxes, apple locations are computed by incorporating the depth information from the Intel RealSense RGB-D camera. Specifically, for each detected apple, the disparity map is leveraged to generate a range matrix in the corresponding bounding box. The mean value of the range matrix is then taken as the apple's depth. Combining this depth with the pixel coordinates of the bounding box center, the Cartesian position of the apple can be determined via back-projection [15]. This process is repeated for each apple region to obtain the positions of all detected apples in the image.
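As a concrete illustration of this localization step, the sketch below averages the depth readings inside a detected bounding box and back-projects the box center through a pinhole camera model. The intrinsics (FX, FY, CX, CY) and the function name are illustrative placeholders, not values from the actual camera calibration.

```python
# Hypothetical pinhole intrinsics (focal lengths in pixels and principal
# point); real values come from the camera's factory calibration.
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0

def localize_apple(bbox, depth_map):
    """Estimate an apple's 3D position (meters, camera frame) from its
    bounding box and a per-pixel depth map: average the valid depth
    readings inside the box, then back-project the box center."""
    u_min, v_min, u_max, v_max = bbox
    # Mean of the valid (non-zero) depth readings inside the bounding box.
    depths = [depth_map[v][u]
              for v in range(v_min, v_max)
              for u in range(u_min, u_max)
              if depth_map[v][u] > 0]
    z = sum(depths) / len(depths)
    # Back-project the box center through the pinhole model.
    u_c, v_c = (u_min + u_max) / 2.0, (v_min + v_max) / 2.0
    x = (u_c - CX) * z / FX
    y = (v_c - CY) * z / FY
    return (x, y, z)
```

In practice the RealSense SDK exposes calibrated intrinsics per stream, so the constants above would be read from the device rather than hard-coded.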

II-B Manipulator Design

With the target apple locations provided by the perception system discussed above, a 3 DOF manipulator is then designed and assembled to efficiently reach the target locations. As presented in Fig. 3, the manipulator consists of two revolute joints and one prismatic joint. The two revolute joints create a pan-and-tilt mechanism and are affixed to the prismatic base. This design provides a simple and compact mechanical structure, which not only offers sufficient DOF for primary pick and place tasks but also facilitates highly efficient motion control.

Fig. 3: Proposed 3 DOF manipulator with two revolute joints and one prismatic joint.

The pan-and-tilt mechanism is versatile and has been widely used in robotic systems [27, 23]. As shown in Fig. 4, the pan-and-tilt module in our system contains two revolute joints driven by NEMA 23 Teknic ClearPath servo motors operating at a maximum velocity of 4,000 RPM and a peak torque of 2 Nm. The tilt (vertical) joint is driven through a 90-degree worm gearbox with an 80:1 ratio and 10.17 Nm holding torque, while the pan (horizontal) movement relies on a parallel-shaft gearbox with a 45:1 ratio and 10.02 Nm holding torque. The parallel-shaft gearbox is a Molon gearbox modified to accommodate the motor's large drive shaft. The two revolute joints are linked by an aluminum plate so that the axes of rotation of the two gearboxes are perpendicular to each other.

Fig. 4: CAD model of the pan-and-tilt module constructed with two revolute joints.

The velocity of the revolute joints (i.e., the servo motors) can be adjusted via variable-frequency pulses ranging from 0 to 500 kHz. The pulse signals are generated by an Arduino Uno micro-controller. Communications between the Arduino interfaces and the servo motors are established through the serial node provided by the ROS environment. Furthermore, to achieve closed-loop control, the positions of the revolute joints need to be measured. This position information cannot be accessed through the peripheral I/O ports of the servo motors, so an additional sensing scheme is needed. By default, the motor's user-settable counts-per-revolution setting gives an exact representation of the distance the shaft travels per pulse; hence, counting the pulses yields the position of the revolute joints. Based on this observation, a Teensy 3.6 micro-controller is used as a counter of the pulse signals, and the real-time positions of the revolute joints are calculated from the counting results. The Teensy 3.6 runs at a clock rate of 256 MHz, which provides accurate pulse counting.
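The pulse-to-position bookkeeping can be sketched as follows. The 800 counts-per-revolution figure is an assumed ClearPath setting (the parameter is user-settable), while the 80:1 ratio matches the worm gearbox described above.

```python
# Assumed motor setting: 800 step pulses per motor-shaft revolution.
COUNTS_PER_REV = 800
# Worm gearbox ratio from the tilt joint described in the text.
GEAR_RATIO = 80

def joint_angle_deg(pulse_count):
    """Convert a cumulative pulse count into the output-shaft angle in
    degrees: each pulse advances the motor 360/COUNTS_PER_REV degrees,
    and the gearbox divides that motion by GEAR_RATIO."""
    motor_deg = pulse_count * 360.0 / COUNTS_PER_REV
    return motor_deg / GEAR_RATIO
```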

As shown in Fig. 3, a prismatic joint is added as the base of the pan-and-tilt module to extend the depth of the manipulator's workspace. Specifically, the prismatic joint is a pneumatically actuated Lintra rodless air cylinder with a stroke length of 0.61 m and a slide carriage. The pneumatic system is driven by a 30-gallon air compressor, which enables the slide carriage to travel the entire stroke length in less than one second. High speed is the main advantage of, and the reason for choosing, pneumatic actuation over a screw-based linear stage or a rack-and-pinion system. Moreover, the Enfield Technologies S2 valve positioning system allows easy control of the prismatic joint through a standard voltage scheme. The carriage position along the stroke can be read from a Balluff BTL6 MicroPulse transducer, and the control signals are generated with Arduino and ROS interfaces.

Finally, a hollow aluminum link is installed on the pan-and-tilt module to ensure that the end-effector can reach the apple locations. The length and diameter of the link are 0.71 m and 0.04 m, respectively. This link also acts as the vacuum tube for grasping apples during harvesting, as in [24].

II-C End-effector Design

Common issues in fruit-harvesting end-effector designs include failure to isolate clustered fruit [22], insufficient gripping strength [2], low harvesting efficiency due to high cycle times [7], and damage to the fruit, canopy structures, or the end-effector itself caused by bulky mechanical components [19]. End-effector design is thus a significant challenge, and a wide range of design concepts have been studied with varying degrees of effectiveness and efficiency. In our system, a vacuum-based end-effector is utilized. Vacuum-based end-effectors have been shown to be effective in grasping and detaching tree fruits while minimizing bruising [13]. Additionally, when the manipulator does not approach the apple accurately, the vacuum-based end-effector can tolerate the approaching error, since it can attract the fruit within a certain distance when sufficient vacuum flow is provided. This is important for in-field applications, where unpredictable environmental factors (e.g., instantaneous movement of fruits due to wind and the traveling robot platform, or uneven orchard terrain) may adversely affect fruit localization and robot control. Selecting an appropriate end-effector diameter is critical, so that it achieves the flow rate and vacuum pressure needed to grip and detach fruits of different sizes while remaining agile enough to navigate in and out of the tree canopy. Through preliminary studies, an end-effector diameter of 0.04 m was determined to be adequate for the current robotic system.

The rear end of the end-effector tube is connected to a Craftsman electric wet/dry vacuum via a flexible, expandable tube. During fruit picking, the vacuum machine operates in continuous mode and is rated at a peak of 5.5 HP.

III Motion Control

For an apple harvesting system, the manipulator needs to approach the apples located at different positions with high accuracy and flexibility. To achieve this goal, a motion control strategy is presented in this section by fully exploiting the mechanical structure of the developed 3 DOF manipulator.

III-A Kinematic Model

Fig. 5: Kinematical description of the 3 DOF manipulator.

The kinematical description of the 3 DOF manipulator is shown in Fig. 5. Let p = [x, y, z]^T be the position of the end-effector expressed in terms of the base frame of the manipulator. According to the kinematical diagram shown in Fig. 5 and the Denavit–Hartenberg convention [31], the following forward kinematics function can be obtained:

(1)

where l1, l2, l3 are the link lengths and θ1, θ2, d are the joint variables. The values of the link lengths and joint variables are listed in Table I. It is clear that (1) characterizes the position of the end-effector p as a function of the joint parameters (θ1, θ2, d). From (1) and the joint ranges in Table I, it turns out that

(2)

which characterizes the inverse kinematics, i.e., the calculation of the joint parameters (θ1, θ2, d) from the end-effector position p. Generally, gradient-based optimization solvers [35, 33] can be used to compute the inverse kinematics of a manipulator. However, since the developed 3 DOF manipulator has a simple and exploitable structure, its inverse kinematics is given by the analytical expression in (2), which avoids an iterative and complex optimization procedure that is more time consuming and can induce numerical errors.
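To make the closed-form route concrete, the sketch below pairs a simplified forward model with its analytical inverse. The geometry here (a single link of length L mounted at height H on the pan-tilt unit, with the prismatic travel d along the x-axis) is a deliberate simplification of the paper's DH model in (1)-(2), so the constants and expressions are illustrative rather than the actual kinematics.

```python
import math

# Illustrative constants (meters): H stands in for the vertical offset
# of the tilt axis, L for the end-effector link length.
H, L = 0.0635, 0.6985

def forward(theta1, theta2, d):
    """Simplified forward kinematics: prismatic travel d along x,
    pan theta1 about the vertical axis, tilt theta2 out of the
    horizontal plane."""
    x = d + L * math.cos(theta2) * math.cos(theta1)
    y = L * math.cos(theta2) * math.sin(theta1)
    z = H + L * math.sin(theta2)
    return (x, y, z)

def inverse(x, y, z):
    """Closed-form inverse kinematics for the same simplified model,
    valid for pan/tilt angles within (-pi/2, pi/2): the tilt follows
    from z alone, the pan from y, and the prismatic travel from x."""
    theta2 = math.asin((z - H) / L)
    theta1 = math.asin(y / (L * math.cos(theta2)))
    d = x - L * math.cos(theta2) * math.cos(theta1)
    return (theta1, theta2, d)
```

The point of the sketch is the one in the text: each joint can be solved in sequence with no iteration, which is what the analytical expression (2) buys over a gradient-based solver.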

Parameter              Value
Link l1                0.0635 m
Link l2                0.0889 m
Link l3                0.6985 m
Revolute joint θ1
Revolute joint θ2
Prismatic joint d
TABLE I: Model parameters of the 3 DOF manipulator

III-B Controller Development

As described in Section II-A, the proposed perception algorithm provides apple locations expressed in the camera frame. Using conventional calibration techniques [4], the transformation matrix between the camera frame and the base frame of the manipulator can be determined, and the apple location in the manipulator coordinate frame can then be calculated via this transformation. Let p_d = [x_d, y_d, z_d]^T be the detected apple position. The control objective is to regulate the end-effector to approach p_d from the home position. Note that the revolute joint parameters θ1, θ2 and the prismatic joint parameter d are driven by distinct actuation mechanisms (motor-based and pneumatic), and hence different control schemes are designed for the two joint types. Specifically, the revolute joints θ1, θ2 are regulated with a velocity-based control method, while a position-based controller is used to adjust the prismatic joint d. The velocity-based method generates explicit speed commands that smoothly adjust the revolute joints based on real-time position feedback, and is thus more accurate and robust than position-based control. However, position feedback suitable for velocity control is not available for the current pneumatic prismatic joint; therefore, a position-based control method is used for it.
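The frame change described above amounts to a single matrix-vector product with the calibrated homogeneous transform. In this sketch the transform (a yaw rotation plus a translation) is a made-up stand-in for the actual hand-eye calibration result.

```python
import math

def make_transform(yaw, tx, ty, tz):
    """Build a 4x4 homogeneous transform: rotation about z by `yaw`
    followed by a translation (tx, ty, tz)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c,  -s,  0.0, tx],
            [s,   c,  0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def to_base_frame(T, p_cam):
    """Map a 3D point expressed in the camera frame into the
    manipulator base frame by applying the homogeneous transform T."""
    ph = list(p_cam) + [1.0]  # homogeneous coordinates
    return tuple(sum(T[i][j] * ph[j] for j in range(4)) for i in range(3))
```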

The velocity-based control scheme for the revolute joints is presented next. Based on (1), the end-effector position along the y-axis and z-axis is determined by θ1 and θ2. Therefore, the revolute joints are driven to ensure that [y, z]^T converges to [y_d, z_d]^T. To achieve this task, a quintic function [31] is exploited to generate a reference trajectory, which transforms the regulation problem into a trajectory tracking problem. More precisely, the quintic function-based reference trajectory has the following form:

r(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5     (3)

where a_0, ..., a_5 are coefficients of the quintic function and t denotes time. Given the initial position [y(0), z(0)]^T, the desired position [y_d, z_d]^T, and a time domain [0, t_f], the reference trajectory satisfies

r(0) = [y(0), z(0)]^T,  r(t_f) = [y_d, z_d]^T,  ṙ(0) = ṙ(t_f) = 0,  r̈(0) = r̈(t_f) = 0.     (4)

Based on (3) and (4), the coefficients a_0, ..., a_5 can be calculated. Note that the constraints presented in (4) ensure that the initial and final positions of the reference trajectory are [y(0), z(0)]^T and [y_d, z_d]^T, respectively. Therefore, actuating the revolute joints to make [y, z]^T follow the reference trajectory leads to the convergence of [y, z]^T to [y_d, z_d]^T. The introduction of the reference trajectory also brings several additional advantages. First, the reference trajectory is continuously differentiable, which ensures that the end-effector approaches the desired position along a smooth path. Second, by adjusting the parameter t_f, the velocity profile of the reference trajectory can be modified, so that the end-effector reaches the desired position within a specified time interval. Based on (1), the time derivative of [y, z]^T can be calculated as:

[ẏ, ż]^T = J(θ1, θ2) [ω1, ω2]^T     (5)

where ω1, ω2 are the angular velocity inputs of the revolute joints θ1 and θ2, respectively, and J(θ1, θ2) denotes the Jacobian of (1) with respect to (θ1, θ2). Furthermore, the error signals are constructed as

e = [e_y, e_z]^T = [y, z]^T − r(t)     (6)

Based on (5), (6), and by virtue of Lyapunov-based control techniques [21], the velocity controller is designed as

[ω1, ω2]^T = J^{-1}(θ1, θ2) (ṙ(t) − K e),  K = diag(k1, k2)     (7)

where k1, k2 are positive constant gains. The velocity controller (7) ensures that the end-effector position along the y-axis and z-axis tracks the reference trajectory asymptotically; the detailed stability analysis is given in Appendix A.
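A scalar sketch of this scheme is given below: a quintic reference satisfying the boundary conditions of (4) is generated in closed form, and a single-integrator joint (standing in for the two revolute joints, with the Jacobian collapsed away) tracks it under a feedback law of the same shape as (7). The gain, time step, and initial error are illustrative.

```python
def quintic_coeffs(p0, pf, tf):
    """Closed-form quintic coefficients for r(0)=p0, r(tf)=pf with
    zero velocity and acceleration at both endpoints."""
    dp = pf - p0
    return [p0, 0.0, 0.0,
            10.0 * dp / tf**3,
            -15.0 * dp / tf**4,
            6.0 * dp / tf**5]

def quintic(a, t):
    """Reference position and velocity at time t."""
    r = sum(ai * t**i for i, ai in enumerate(a))
    rdot = sum(i * ai * t**(i - 1) for i, ai in enumerate(a) if i > 0)
    return r, rdot

def track(p0=0.5, pf=0.0, tf=2.0, k=5.0, dt=0.001):
    """Scalar analogue of the velocity controller: the joint obeys
    x_dot = u, and u = rdot - k*(x - r) gives e_dot = -k*e, so the
    tracking error decays along the quintic reference."""
    a = quintic_coeffs(p0, pf, tf)
    x, t = p0 + 0.1, 0.0           # start with a 0.1 tracking error
    while t < tf:
        r, rdot = quintic(a, t)
        u = rdot - k * (x - r)     # feedforward + feedback velocity command
        x += u * dt                # forward-Euler integration of x_dot = u
        t += dt
    return abs(x - pf)             # final error at the target
```

The feedforward term ṙ keeps the joint moving along the reference, while the feedback term removes the initial mismatch, mirroring the two terms of (7).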

We next present the position-based scheme for the prismatic joint. As mentioned in Section II-B, the prismatic joint is driven by a pneumatic system, and a voltage-based proportional-integral (PI) controller is utilized to regulate the prismatic joint parameter d. Specifically, given the desired position p_d, the inverse kinematics (2) is used to calculate the corresponding desired joint parameter. Then, based on this desired value, the embedded PI controller adjusts the prismatic joint to ensure that d converges to it.
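As a rough illustration of this loop, the sketch below closes a PI law around an idealized first-order model of the pneumatic carriage. The plant time constant and gains are invented for the example and are not identified from the real valve and cylinder.

```python
def simulate_pi(target=0.4, kp=8.0, ki=16.0, dt=0.001, tf=3.0):
    """Toy PI position loop for the prismatic joint: the pneumatic
    carriage is idealized as a first-order velocity plant
    v_dot = (u - v) / tau, with the valve command u produced by a PI
    law on the position error. Returns the final position error."""
    tau = 0.05                          # assumed actuator time constant (s)
    pos, vel, integ, t = 0.0, 0.0, 0.0, 0.0
    while t < tf:
        err = target - pos
        integ += err * dt               # integral of the position error
        u = kp * err + ki * integ       # PI valve command
        vel += (u - vel) / tau * dt     # first-order actuator response
        pos += vel * dt                 # carriage position update
        t += dt
    return abs(target - pos)
```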

IV Performance Evaluation

In this section, comprehensive experiments are reported to demonstrate the performance of the developed robotic apple harvesting system. We first validate the motion control scheme developed in Section III and then evaluate the integrated system in apple picking scenarios.

IV-A Motion Control Validation

Since different control schemes are employed for the revolute and prismatic joints, their performance is evaluated separately in the following.

The velocity controller (7) designed for the revolute joints θ1 and θ2 is tested first. We use open-loop velocity control and position control as benchmarks to facilitate the performance evaluation. In particular, the open-loop velocity controller is given by

[ω1, ω2]^T = J^{-1}(θ1, θ2) ṙ(t)     (8)

Comparing (7) with (8), it can be seen that the developed controller exploits the feedback errors to achieve closed-loop control, while the open-loop velocity controller omits the feedback error terms. The position control method utilizes the positioning mode provided by the NEMA 23 Teknic ClearPath servo motors to rotate the revolute joints. More specifically, given the desired position of the end-effector, the corresponding desired joint values θ1 and θ2 can be calculated via the inverse kinematics (2), and the algorithm embedded in the positioning mode then regulates the servo motors toward the desired joint values. To conduct a thorough comparison, three cases are selected. In each case, the manipulator is driven by each of the three methods from the same home position to a target position. The desired positions for the three cases are chosen as:

According to (2), the corresponding desired joint values can be calculated as follows:

Note that the prismatic joint parameter d is set to zero in these three cases so as to solely validate the control performance of the revolute joints. Moreover, to measure the control accuracy, a QR code is attached to the end-effector, as shown in Fig. 6. The QR code can be detected and localized by the RGB-D camera stably and precisely, so the final position of the end-effector can be determined via QR code identification. Each control method is run 5 times in each case, and the average distance error between the final position of the end-effector and the given desired position is calculated. The corresponding results are shown in Table II. The proposed velocity control scheme actuates the revolute joints θ1 and θ2 with higher accuracy in all three cases than the other two methods.

Fig. 6: Experimental setup for control validation.
                             Case 1    Case 2    Case 3
Open-loop velocity control      7.1      14.7      10.8
Position control                6.2       8.6       2.9
Proposed velocity control       4.8       8.0       1.9
TABLE II: Comparison of the distance errors between the final position and the desired position of the manipulator for different revolute joint control approaches

The evaluation of the position control scheme for the prismatic joint is presented next. To test the prismatic joint separately, the revolute joints θ1 and θ2 are set to zero while the prismatic joint d is actuated to achieve the following desired values:

The movement of the prismatic joint is measured with the aid of the QR code, and the measured positions agree closely with the desired values above. Moreover, the position control scheme regulates the prismatic joint to the given desired values within one second in all tests, which satisfies the speed requirement for practical applications.

IV-B Apple Harvesting Validation

To evaluate the performance of the integrated robotic apple harvesting system, picking tests are conducted in a laboratory setting with artificial apple trees, as illustrated in Fig. 7. An apple is hung on a movable aluminum frame. By adjusting the frame, the apple can be placed at arbitrary positions in the manipulator's workspace. The vision system is installed at the rear upper position of the manipulator to detect and localize the fruit.

Fig. 7: Experimental setup for apple harvesting validation.
Fig. 8: Snapshots of a harvesting cycle at different time instants (from 0 s to 4 s).
Fig. 9: Approaching performance of the manipulator with the apple located in different regions of the workspace.

A total of 60 picking experiments were completed. In each experiment, the apple is randomly placed in the workspace. The vision system first identifies the apple and determines its location. Once the apple is localized, the manipulator is controlled to approach the fruit, and the vacuum-based end-effector is then actuated to detach the apple. Finally, the manipulator returns to the home position with the detached fruit. Snapshots of a complete picking cycle are presented in Fig. 8, which shows that the developed robotic apple harvesting system can accomplish the primary picking functions. Fig. 9 gives three samples of the harvesting tests, in which the manipulator follows different inclined angles to reach apples located in diverse regions of the workspace. Different from [32], which regulates the manipulator along fixed paths to approach a specific area, the developed control scheme can agilely adjust the manipulator to reach an apple placed arbitrarily in the workspace, which is a key capability for automated apple picking in structured and unstructured orchard environments. In all 60 harvesting tests, the manipulator approached the apple accurately, with the final approaching error being less than 2 cm, which is considered acceptable based on our prior laboratory and field tests. Within this error range, the vacuum-based end-effector detached the fruit and held it firmly while the manipulator returned to the home position in all tests. Moreover, on average, 0.3 seconds are required to detect and localize all fruits in one image, the manipulator takes approximately 2.0 seconds to approach an individual apple, and fruit detaching with an open-loop command is set at 1.0 second.

IV-C Discussion on Future Work

While the developed prototype demonstrated promising performance, further work is needed to improve the system.

Additional sensing is needed to improve the picking efficiency and system robustness. The current fruit-detaching command operates in an open-loop scheme, which cannot determine if or when the target fruit has been detached from the tree. Installing a pressure sensor on the end-effector would provide feedback on whether the fruit is held by the end-effector and has been separated from the tree after the predetermined movement, which is conducive to closed-loop fruit detaching. In addition, the perception system should be extended to detect other objects, such as branches, to provide a more comprehensive perception of the environment.

Since many fruits are located deep in the canopy, a path planning algorithm needs to be developed and integrated with the motion controller to ensure that the manipulator approaches target apples without colliding with or damaging tree branches or other objects in its path to the target fruit.

The current vacuum-based end-effector is made of an aluminum tube covered with a thin layer of soft foam at its entrance. While this simple design generated sufficient vacuum pressure to detach about 80% of apples from trees in our 2018 and 2019 field tests, improvements to the end-effector are needed to achieve at least a 95% detaching rate. Different vacuum cup designs with different soft materials should be considered, so that the end-effector can easily conform to the variable contours and sizes of apples and allow fast build-up of sufficient vacuum pressure to effectively detach fruits from trees.

V Conclusion

The mechatronic design and motion control of a robotic apple harvesting prototype were presented in this paper. The prototype integrates a vision-based perception system, a 3 DOF manipulator, and a vacuum-based end-effector to execute apple picking. A control scheme was designed to achieve accurate and agile manipulation motion. Laboratory studies over 60 picking tests demonstrated that the manipulator reached the desired apple positions with an overall error of less than 2 cm, which is acceptable for the vacuum-based end-effector to detach fruits from trees. The developed prototype met the primary harvesting functionality requirements, laying a solid foundation for future advancements. Future work will focus on automated sensing of the fruit holding and detaching process, optimal path planning for efficient apple picking that minimizes potential damage to tree canopies by the manipulator, and improved end-effector designs for fast, firm holding and detaching of apples from trees.

Appendix A Stability Analysis of the Velocity Controller

Theorem 1

The velocity controller developed in (7) ensures that the end-effector position along the y-axis and z-axis, i.e., [y, z]^T, converges to the reference trajectory r(t) asymptotically.

Proof:

To prove Theorem 1, a Lyapunov function is defined as

(9)

where the error signals are those defined in (6). From (5) and (6), it can be obtained that

(10)

Taking the time derivative of (9) and utilizing (7) and (10), it can be further derived that

(11)

According to (9) and (11), Lyapunov's stability theorem [21] can be invoked to conclude that the error signals are asymptotically stable, i.e., the end-effector position converges to the reference trajectory asymptotically.
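The structure of this Lyapunov-type argument can be illustrated numerically. The sketch below is not the controller of (7) (whose exact form is not reproduced here); it assumes a generic kinematic model ṗ = u driven by a proportional velocity command u = ṗ_ref − K·e with tracking error e = p − p_ref, and checks that the quadratic Lyapunov function V = ½‖e‖² decays toward zero along the closed-loop trajectory. All names (`simulate`, `K`, `p_ref`) are illustrative.

```python
import numpy as np

def simulate(p0, p_ref, dp_ref, K=2.0, dt=1e-3, T=5.0):
    """Integrate the kinematic model p_dot = u under the proportional
    velocity command u = dp_ref(t) - K * (p - p_ref(t))."""
    p = np.array(p0, dtype=float)
    V_hist = []
    for k in range(int(T / dt)):
        t = k * dt
        e = p - p_ref(t)                  # tracking error
        V_hist.append(0.5 * float(e @ e)) # Lyapunov function V = ||e||^2 / 2
        u = dp_ref(t) - K * e             # velocity command
        p = p + u * dt                    # forward-Euler step of p_dot = u
    return p, V_hist

# Reference: a slow circular trajectory in a horizontal plane.
p_ref = lambda t: np.array([np.cos(0.5 * t), np.sin(0.5 * t)])
dp_ref = lambda t: np.array([-0.5 * np.sin(0.5 * t), 0.5 * np.cos(0.5 * t)])

p_final, V = simulate([1.5, -0.8], p_ref, dp_ref)
print(round(V[0], 3), V[-1] < 1e-6)  # initial vs. (near-zero) final Lyapunov value
```

In continuous time the closed loop gives ė = −K e, so V̇ = −2K V < 0 for e ≠ 0, which is the same negative-definiteness condition the proof establishes for (11); the simulation shows the corresponding decay of V under discretization.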

References

  • [1] J. Baeten, K. Donné, S. Boedrij, W. Beckers, and E. Claesen (2008) Autonomous fruit picking machine: a robotic apple harvester. In Field and Service Robotics, pp. 531–539.
  • [2] A. Bamotra, P. Walia, A. V. Prituja, and H. Ren (2018) Fabrication and characterization of novel soft compliant robotic end-effectors with negative pressure and mechanical advantages. In Proc. Int. Conf. Adv. Robot. Mechatronics, Singapore, pp. 369–374.
  • [3] R. Barea, L. M. Bergasa, E. Romera, E. López-Guillén, O. Perez, M. Tradacete, and J. López (2019) Integrating state-of-the-art CNNs for multi-sensor 3D vehicle detection in real autonomous driving environments. In Proc. IEEE Intell. Transp. Syst. Conf., Auckland, New Zealand, pp. 1425–1431.
  • [4] J. Bouguet (2013) Camera calibration toolbox for MATLAB.
  • [5] P. Chu, Z. Li, R. Lu, and X. Liu (under review, 2020) DeepApple: deep learning-based apple detection using a suppression Mask R-CNN. Pattern Recognit. Lett.
  • [6] J. R. Davidson, A. Silwal, C. J. Hohimer, M. Karkee, C. Mo, and Q. Zhang (2016) Proof-of-concept of a robotic apple harvester. In Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Daejeon, Korea, pp. 634–639.
  • [7] J. R. Davidson, C. J. Hohimer, and C. Mo (2016) Preliminary design of a robotic system for catching and storing fresh market apples. IFAC-PapersOnLine 49 (16), pp. 149–154.
  • [8] M. E. De Kleine and M. Karkee (2015) A semi-automated harvesting prototype for shaking fruit tree limbs. Trans. ASABE 58 (6), pp. 1461–1470.
  • [9] Z. De-An, L. Jidong, J. Wei, Z. Ying, and C. Yu (2011) Design and control of an apple harvesting robot. Biosyst. Eng. 110 (2), pp. 112–122.
  • [10] F. A. Fathallah (2010) Musculoskeletal disorders in labor-intensive agriculture. Appl. Ergon. 41 (6), pp. 738–743.
  • [11] R. K. Gallardo and S. P. Galinato (2012) 2012 cost estimates of establishing, producing, and packing Red Delicious apples in Washington. Washington State University Extension.
  • [12] P. Ganesh, K. Volle, T. F. Burks, and S. S. Mehta (2019) Deep Orange: Mask R-CNN based orange detection and segmentation. IFAC-PapersOnLine 52 (30), pp. 70–75.
  • [13]
  • [14] A. Gongal, S. Amatya, M. Karkee, Q. Zhang, and K. Lewis (2015) Sensors and systems for fruit detection and localization: a review. Comput. Electron. Agric. 116, pp. 8–19.
  • [15] R. Hartley and A. Zisserman (2003) Multiple View Geometry in Computer Vision. Cambridge University Press.
  • [16] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask R-CNN. In Proc. IEEE Int. Conf. Comput. Vision, Venice, Italy, pp. 2961–2969.
  • [17] L. He, H. Fu, D. Sun, M. Karkee, and Q. Zhang (2017) Shake-and-catch harvesting for fresh market apples in trellis-trained trees. Trans. ASABE 60 (2), pp. 353–360.
  • [18] L. He, X. Zhang, Y. Ye, M. Karkee, and Q. Zhang (2019) Effect of shaking location and duration on mechanical harvesting of fresh market apples. Appl. Eng. Agric. 35 (2), pp. 175–183.
  • [19] J. Hemming, B. Tuijl, W. Gauchel, and E. Wais (2016) Field test of different end-effectors for robotic harvesting of sweet-pepper. Acta Hortic., pp. 567–574.
  • [20] (2020) Intel® RealSense™ camera D400 series product family datasheet.
  • [21] H. K. Khalil (2002) Nonlinear Systems. Prentice-Hall, Upper Saddle River, NJ.
  • [22] N. Kondo, K. Yata, M. Iida, T. Shiigi, M. Monta, M. Kurita, and H. Omori (2010) Development of an end-effector for a tomato cluster harvesting robot. Eng. Agric., Env. Food 3 (1), pp. 20–24.
  • [23] Y. Lee, C. Lan, C. Chu, C. Lai, and Y. Chen (2013) A pan–tilt orienting mechanism with parallel axes of flexural actuation. IEEE/ASME Trans. Mechatronics 18 (3), pp. 1100–1112.
  • [24]
  • [25]
  • [26] R. Lu, Z. Zhang, and A. K. Pothula (2017) Innovative technology for apple harvest and in-field sorting. Fruit Qtly. 25 (2), pp. 11–14.
  • [27] R. M. Murray, Z. Li, and S. S. Sastry (1994) A Mathematical Introduction to Robotic Manipulation. CRC Press.
  • [28] D. L. Peterson, B. S. Bennedsen, W. C. Anger, and S. D. Wolford (1999) A systems approach to robotic bulk harvesting of apples. Trans. ASAE 42 (4), pp. 871–876.
  • [29] A. Plebe and G. Grasso (2001) Localization of spherical fruits for robotic harvesting. Mach. Vision Appl. 13 (2), pp. 70–79.
  • [30] I. Sa, Z. Ge, F. Dayoub, B. Upcroft, T. Perez, and C. McCool (2016) DeepFruits: a fruit detection system using deep neural networks. Sensors 16 (8), pp. 12–22.
  • [31] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo (2010) Robotics: Modelling, Planning and Control. Springer-Verlag, London.
  • [32] A. Silwal, J. R. Davidson, M. Karkee, C. Mo, Q. Zhang, and K. Lewis (2017) Design, integration, and field evaluation of a robotic apple harvester. J. Field Robot. 34 (6), pp. 1140–1159.
  • [33] T. Sugihara (2011) Solvability-unconcerned inverse kinematics by the Levenberg–Marquardt method. IEEE Trans. Robot. 27 (5), pp. 984–991.
  • [34] A. O. Vuola, S. U. Akram, and J. Kannala (2019) Mask-RCNN and U-net ensembled for nuclei segmentation. In Proc. IEEE Int. Symp. Biomed. Imaging, Venice, Italy, pp. 208–212.
  • [35] L. Wang and C. Chen (1991) A combined optimization method for solving the inverse kinematics problems of mechanical manipulators. IEEE Trans. Robot. Autom. 7 (4), pp. 489–499.