Flower Interaction Subsystem for a Precision Pollination Robot

06/21/2019
by   Jared Strader, et al.

Robotic pollinators not only can aid farmers by providing more cost effective and stable methods for pollinating plants, but also benefit crop production in environments not suitable for bees, such as greenhouses, growth chambers, and outer space. Robotic pollination requires a high degree of precision and autonomy, but few systems have addressed both of these aspects in practice. In this paper, a fully autonomous robot is presented, capable of precise pollination of individual small flowers. Experimental results show that the proposed system achieves a 93.1% detection accuracy and a 76.9% pollination success rate when tested with high-fidelity artificial flowers.


I Introduction

Farmers are increasingly relying on technology to compensate for labor shortages and meet the growing demand for food. As a result, agricultural robotics is rapidly gaining interest in both the research community and the agriculture industry. To meet the increasing demands of a growing human population, global food production must nearly double in the next few decades [1], which will require rethinking current agricultural practices. The tasks involved in agriculture are often lengthy and repetitive, making them well suited for robots.

In the past, automation in precision agriculture focused primarily on large-scale applications. However, attention is also needed on precision tasks involving sensing and manipulation of individual plants for improved crop management and productivity. The new generation of agricultural robots focuses on individual plant parts (e.g., fruits, leaves, or flowers), which is necessary for automating tasks such as fruit and vegetable picking [2, 3, 4, 5], phenotyping [6], pollination [7, 8, 9], and weed control [10, 11], to name a few. These applications require a high degree of precision and autonomy, but few systems have addressed both aspects in practice.

Fig. 1: Experimental setup featuring the robotic arm with attached end-effector and depth-camera in front of an artificial bramble plant.

One urgent challenge facing the agriculture industry is the decline of natural pollinators, which threatens the future of food production. As a result, many farmers cannot rely on wild pollinators and instead depend on renting bee colonies at a high cost. Moreover, introduced bee colonies may threaten wild pollinators through competition for resources [12]. While there is no major pollination crisis yet, there is evidence of localized limitation of crop yield as a result of inadequate pollination [13]. Robotic pollinators additionally benefit agriculture in environments not fit for natural pollinators, such as greenhouses, growth chambers, and outer space (e.g., in a Mars colony). As a result, robotic pollinators can aid farmers by providing a more cost effective and stable method for pollinating plants as well as reduce the stress placed on rented bee colonies.

The idea of using robots to aid pollination has been considered for more than a decade [14]; however, research in this area remains quite limited: beyond conceptual designs [15, 16, 17], only a few systems have been demonstrated in practice [18, 19, 20], and even fewer with autonomy [8, 9]. The systems developed in [8, 9] use sprayers for pollinating kiwifruit and tomato flowers, respectively, instead of physically touching each flower as bees do.

In this work, we aim to fill this research gap by presenting a fully autonomous system capable of precise pollination of individual small flowers. The introduced system is developed as a subsystem for a ground vehicle such as the one presented in our previous work, BrambleBee [7]. BrambleBee is a fully autonomous robot developed for pollinating bramble plants (i.e., blackberry and raspberry) in a greenhouse environment. In our previous work, the pollination procedure was tested using ArUco markers [21] instead of actual flowers, which is addressed in this work. The system is tested with high-fidelity artificial flowers, and further experiments will be performed when real flowers bloom in the near future.

The remainder of the paper is structured as follows. The problem specifications are discussed in Section II. The general concept of the robot and software design are presented in Section III. A detailed description of the methods employed is presented for identifying flowers and estimating flower pose in Section IV, mapping of the flowers and obstacles in Section V, planning and control of the robotic arm in Section VI, and the details behind the custom end-effector design in Section VII. The experimental results are provided in Section VIII, and the conclusion and future work are discussed in Section IX.

II Problem Description

A brief overview of the BrambleBee robot system and the assumptions are provided here to help the reader more easily understand the underlying system and the motivation of this work.

BrambleBee, as shown in Fig. 1, is a ground vehicle designed for autonomously pollinating flowers in a greenhouse environment [7]. Built upon a Clearpath Robotics Husky platform, the vehicle is equipped with a robotic arm (KINOVA JACO 2) mounted to the front edge of the vehicle. Attached to the robotic arm are two components: a custom-designed end-effector used for pollinating flowers and a depth-camera (Intel RealSense D435) used for mapping the local environment (i.e., the workspace).

BrambleBee operates in a greenhouse environment with plants arranged in rows, so the robot is able to examine plants on each side. Initially, BrambleBee explores the greenhouse to inspect the plants and construct a map of the environment. After a map is created, BrambleBee visits plants with flowers and executes the pollination procedure. Specifically, this paper presents the detailed procedure for robotic pollination, along with experimental results using high-fidelity, artificial flowers.

The case is considered where BrambleBee is parked in front of the plants. The goal of the proposed subsystem is to pollinate all flowers reachable by the end-effector attached to the robotic arm.

Fig. 2: Diagram of the overall concept of operations developed for automating the system and integrating the separate software components. The manipulation subsystem is activated by BrambleBee's full system and signals the full system when the pollination procedure is complete.

III System Overview

As depicted in Fig. 2, the proposed system is activated by BrambleBee after the robot is positioned in front of the plants. Once activated, the system starts by mapping the flowers and obstacles in the workspace. This is achieved by maneuvering the end-effector through a set of poses that cover the workspace. At each end-effector pose, image processing algorithms are applied to identify flowers and estimate the corresponding flower poses. Concurrently, the depth information is used to map the obstacles in the workspace (e.g., other plant parts or structures). The resulting obstacle map is used to avoid collisions that could damage the plant or robot. After mapping the workspace, a trajectory is planned for the end-effector through a set of vantage points in front of each flower.

At each vantage point, the pose is refined before pollinating the target flower by collecting and fusing additional pose estimates using a factor graph based framework [22]. Using the refined pose, the end-effector aligns itself to the flower and activates the visual servoing procedure to guide the end-effector towards the flower until contact is made. Once reached, the precision pollination procedure is executed. This operation actuates the end-effector to perform a motion that allows the pollen to be released from the anthers of the flower. Note that bramble flowers can be pollinated using pollen from the same or other bramble flowers. This process is repeated until all flowers in the workspace are pollinated.

Fig. 3: An overview of the software architecture developed for the system. The software is managed through a finite-state machine and separated into four components: image processing, mapping, manipulation, and planning and control.

The primary software modules and their relationships are highlighted in Fig. 3 and are managed through a finite-state machine. The software architecture of the system can be divided into four primary modules: 1) image processing, 2) mapping, 3) manipulation, and 4) planning and control. The image processing module is responsible for identifying flowers in the environment and estimating the corresponding flower poses. The mapping module maintains the obstacle and flower maps of the workspace. The manipulation module generates the precise movements of the end-effector required to pollinate each flower. The planning and control module is responsible for motion planning and control of the robotic arm. A detailed discussion of the employed algorithms is provided in the following sections.
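As a rough illustration of how a finite-state machine might sequence these modules, the following Python sketch uses hypothetical state names and transitions; it is not the exact state machine running on BrambleBee.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()           # waiting for activation from BrambleBee
    MAP_WORKSPACE = auto()  # sweep the arm, build flower and obstacle maps
    PLAN_TOUR = auto()      # order the vantage points for all mapped flowers
    SERVO = auto()          # visual servoing toward the current flower
    POLLINATE = auto()      # actuate the end-effector against the flower
    DONE = auto()           # signal completion back to the full system

def step(state, flowers_remaining, contact_made):
    """Advance the (hypothetical) pollination state machine by one step."""
    if state == State.IDLE:
        return State.MAP_WORKSPACE
    if state == State.MAP_WORKSPACE:
        return State.PLAN_TOUR
    if state == State.PLAN_TOUR:
        return State.SERVO if flowers_remaining else State.DONE
    if state == State.SERVO:
        return State.POLLINATE if contact_made else State.SERVO
    if state == State.POLLINATE:
        return State.SERVO if flowers_remaining else State.DONE
    return State.DONE
```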

IV Image Processing

The robotic pollination system must accurately identify and estimate the pose of each flower. The proposed system achieves this through a two-stage framework consisting of a segmentation step followed by a classification step. The segmentation step extracts patches from the images acquired from the depth-camera based on color in order to reduce the search space for the classification algorithm. This step not only reduces the required computation of the entire pipeline but also improves the classification accuracy. The classification step is used to distinguish between flower and non-flower patches as well as estimate the pose of the identified flowers.

IV-A Naive Bayes' Pixel-Level Segmentation

The segmentation step is used to classify each pixel based on color as belonging or not belonging to part of a flower. A naive Bayes' classifier [23, 24] is chosen for this step for several reasons. First, naive Bayes' classification provides a direct prediction of the posterior probabilities of the class labels, avoiding manual parametrization. Second, naive Bayes' classification is robust to missing information, and as a result, the feature space is well represented by a modest number of diverse training images. Therefore, a naive Bayes' classifier is applied to segment the image before applying the transfer learning based classifier discussed in the following sections.

In general, the naive Bayes' classifier is a family of conditional probability models based on applying Bayes' theorem with the assumption of conditional independence among features. In this case, the pixel intensities are considered as features; therefore, after applying Bayes' theorem with the independence assumption, the joint model can be expressed as

$$p(c \mid \mathbf{x}) = \frac{p(c)\prod_{i} p(x_i \mid c)}{p(\mathbf{x})} \qquad (1)$$

where $p(c \mid \mathbf{x})$ can equivalently be written as $p(c \mid r, g, b)$, where $c$ is the class label and $r$, $g$, and $b$ are the intensities of the red, green, and blue channels, respectively. Therefore, $p(c \mid r, g, b) \propto p(c)\,p(r \mid c)\,p(g \mid c)\,p(b \mid c)$, and the classification rule for a given pixel is then given by

$$\hat{c} = \operatorname*{arg\,max}_{c}\; p(c)\,p(r \mid c)\,p(g \mid c)\,p(b \mid c) \qquad (2)$$

where $\hat{c}$ is the Maximum A Posteriori (MAP) estimate of the class label for a given pixel assuming conditional independence between pixel intensities. The priors $p(c)$ and the likelihoods $p(r \mid c)$, $p(g \mid c)$, and $p(b \mid c)$ can be determined by calculating the relative frequency of the pixels in the training images.

To reduce the required computation per image, a lookup table is computed using all possible values for a pixel (e.g., 24 bits for most color images). The lookup table can then be accessed using the raw pixel values to efficiently segment the image. Therefore, using naive Bayes' classification, the lookup table is given by

$$T(r, g, b) = \operatorname*{arg\,max}_{c}\; p(c)\,p(r \mid c)\,p(g \mid c)\,p(b \mid c) \qquad (3)$$

for all $(r, g, b) \in \{0, \ldots, 2^{n/3} - 1\}^{3}$, where $n$ is the number of bits for a single pixel. To prevent a bias towards the training images with higher resolution, the relative frequencies are normalized for each training image.
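A minimal sketch of this lookup-table segmentation is given below, assuming 8-bit RGB channels and per-class relative-frequency likelihoods; the variable names and table layout are illustrative rather than the exact implementation.

```python
import numpy as np

def build_lookup_table(flower_pixels, background_pixels):
    """Build a 256x256x256 boolean table: True where the MAP class is 'flower'.

    flower_pixels, background_pixels: (N, 3) uint8 arrays of training pixels.
    Likelihoods are per-channel relative frequencies (naive Bayes assumption).
    """
    def channel_hists(pixels):
        # p(r|c), p(g|c), p(b|c) as normalized 256-bin histograms
        return [np.bincount(pixels[:, ch], minlength=256) / len(pixels)
                for ch in range(3)]

    pf = channel_hists(flower_pixels)
    pb = channel_hists(background_pixels)
    prior_f = len(flower_pixels) / (len(flower_pixels) + len(background_pixels))
    prior_b = 1.0 - prior_f

    # Evaluate the MAP rule for every possible (r, g, b) value at once
    # (builds a few ~134 MB temporaries; acceptable for a one-time offline step).
    r, g, b = np.ix_(np.arange(256), np.arange(256), np.arange(256))
    score_f = prior_f * pf[0][r] * pf[1][g] * pf[2][b]
    score_b = prior_b * pb[0][r] * pb[1][g] * pb[2][b]
    return score_f > score_b

def segment(image, table):
    """Label each pixel of an (H, W, 3) uint8 image using the lookup table."""
    return table[image[..., 0], image[..., 1], image[..., 2]]
```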

IV-B Refinement of Segmentation using Convolutional Neural Networks (CNNs)

The segmentation step produces a set of patches for each image consisting of flowers and non-flowers. Thus, a machine learning method is proposed to separate the true positives (flowers) from the false positives (non-flowers) extracted in the segmentation step. Inception-v3 [25] is used for refining the segmentation. It computes the probability of each label $k$:

$$p(k \mid x) = \frac{\exp(z_k)}{\sum_{i}\exp(z_i)} \qquad (4)$$

where $x$ is a training example, $z_k$ is the logit or unnormalized log probability of class $k$ [25], and $k$ is either flower or non-flower in this context. The loss function is defined as

$$\ell = -\sum_{k} q(k)\log p(k \mid x) \qquad (5)$$

where $q(k)$ is the ground-truth distribution. The above cross entropy loss function is differentiable with respect to the logits $z_k$, which allows the use of gradient descent for training the neural network. The gradient is bounded between -1 and 1 and has the following form:

$$\frac{\partial \ell}{\partial z_k} = p(k \mid x) - q(k) \qquad (6)$$

In our approach, a transfer learning technique was adopted by taking advantage of the body of Inception-v3, which provides rich features. The softmax layer was replaced and the network was retrained to perform binary classification. To train the network, positive and negative patches were obtained by comparing initial segmentation results against manually labeled images, yielding 13,395 positive and 15,066 negative patches in total. The training took around 35 minutes using an Intel i7-4790K CPU and an NVIDIA Titan X GPU in TensorFlow. Results for a set of patches not included in the training data are presented in Fig. 4.

Fig. 4: Examples of classification applied to image patches extracted from the segmentation algorithm. The patches in the top row are classified as non-flower with probabilities 99.8%, 61.2%, and 61.2%, respectively. The patches in the bottom row are identified as flower with probabilities 91.1%, 97%, and 84.3%, respectively.
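A minimal transfer-learning sketch in the spirit of this retraining step is shown below, using the Keras InceptionV3 application; the layer sizes, optimizer, and data-handling details are assumptions rather than the exact configuration used here.

```python
import tensorflow as tf

def build_flower_classifier(num_classes=2, input_shape=(299, 299, 3)):
    """Reuse ImageNet-pretrained Inception-v3 features; retrain only a new top layer."""
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet",
        input_shape=input_shape, pooling="avg")
    base.trainable = False  # keep the pretrained "body" fixed
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",  # cross entropy as in (5)
                  metrics=["accuracy"])
    return model

# model.fit(patch_dataset, epochs=...) would then retrain the new softmax layer
# on the positive (flower) and negative (non-flower) patches.
```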

IV-C Pose of Flowers

For the end-effector to accurately reach the center of each flower, the pose is estimated for each identified flower. The position of each flower is extracted from the pixel coordinates and corresponding depth using back-projection given the intrinsic camera parameters. In contrast, the orientation is not observed directly; thus, a learning approach is implemented to approximate the orientation of each flower. In general, the center of a flower may point in any arbitrary direction; however, the end-effector (discussed in Section VII) is designed to allow for error in the flower orientation. Therefore, we simplify the flower orientation into three classes: the center points towards the center of the camera (c1), towards the left of the camera (c2), or towards the right of the camera (c3), as shown in Fig. 5.
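For reference, back-projecting a detected flower's pixel coordinates and depth into a 3-D position with the pinhole camera model can be sketched as follows; the intrinsic values in the example are placeholders, not the calibrated D435 parameters.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Return the 3-D point (camera frame) for pixel (u, v) at the given depth.

    (fx, fy) are focal lengths in pixels, (cx, cy) is the principal point,
    and depth is the range along the optical axis in meters.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Example with placeholder intrinsics (not the calibrated camera values)
flower_position = backproject(u=412, v=230, depth=0.35,
                              fx=615.0, fy=615.0, cx=320.0, cy=240.0)
```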

This allows us to formulate the problem of determining the orientation of each flower as a multi-class classification problem, which can be solved using CNNs. Similar to the method applied for refining the segmentation, the orientation is determined by training an Inception-v3 network with three classes. Our experiments with real flower patches show that the classifier reaches approximately 70% precision and recall for orientation. A summary of the performance is given in Table I. In the future, additional data will be collected to improve the accuracy of both the flower and pose classifiers. To reduce estimation errors, multiple observations are fused, which improves the pose estimates of each flower. This is discussed further in the following sections.

(a) c1
(b) c2
(c) c3
Fig. 5: Examples of orientation classes where the center of the flower is pointing at the center of the camera (c1), towards the left of the camera (c2), and towards the right of the camera (c3).
Class             Training   Testing   Precision   Recall
Flower       Pos  13,395     2,102     78.6%       90%
             Neg  15,066     2,124     88.5%       75.8%
Orientation  C1   796        60        79.3%       83.3%
             C2   920        88        74.3%       59.1%
             C3   771        72        59.5%       61.1%
TABLE I: Flower and Orientation Classification Results

V Mapping

V-A Obstacle Map

The obstacle map is used for motion planning to avoid collisions with the plant or other objects in the environment. The map is represented as a 3D occupancy grid in which each voxel stores the probability that it is occupied by an object. In this work, we use the octree-based mapping framework [26], in which the voxels are managed as a tree, allowing for a compact memory representation and multiple query resolutions.

To map the workspace, the mapping procedure is performed by moving the arm through a predefined set of poses such that the sensor (i.e., depth-camera) observations will cover the space reachable by the end-effector. As the arm moves through the set of poses, the obstacle map is continuously updated as measurements are acquired by the depth-camera mounted on the robotic arm. An example of the obstacle map estimated using a set of 10 predefined poses is presented in Fig. 6.

Fig. 6: Example of an occupancy map estimated during the mapping procedure of a single plant with models for the robotic arm and base of BrambleBee. The left image illustrates the identified flowers in the occupancy map during the experiment displayed in the right image.
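For illustration, a simplified dense log-odds occupancy update is sketched below; the actual system uses the octree-based OctoMap framework [26], and the grid size and inverse-sensor-model values here are placeholders.

```python
import numpy as np

class OccupancyGrid:
    """Simplified dense log-odds occupancy grid (the real system uses OctoMap [26])."""

    def __init__(self, shape=(100, 100, 100), resolution=0.01):
        self.log_odds = np.zeros(shape)       # 0 log-odds == 0.5 occupancy probability
        self.resolution = resolution          # meters per voxel
        self.l_hit, self.l_miss = 0.85, -0.4  # illustrative inverse-sensor-model values

    def update(self, hit_voxels, miss_voxels):
        """hit/miss_voxels: (N, 3) integer voxel indices from ray casting a depth image."""
        for i, j, k in hit_voxels:
            self.log_odds[i, j, k] += self.l_hit
        for i, j, k in miss_voxels:
            self.log_odds[i, j, k] += self.l_miss

    def probability(self):
        """Convert log-odds back to occupancy probabilities."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))
```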

V-B Flower Map

When performing robotic pollination, a flower map is maintained that contains the pose of each flower observed using the perception algorithms described in Section IV. Specifically, a factor graph representation is utilized, as depicted in (7), to partition the posterior distribution over the workspace states $X$ given the measurements $Z$ into three subsets: the prior information about the workspace, the dynamic information about the workspace, and the likelihood constraints from the observations,

$$p(X \mid Z) \propto \underbrace{p(x_0)}_{\text{prior}}\;\underbrace{\prod_{t} p(x_t \mid x_{t-1})}_{\text{dynamics}}\;\underbrace{\prod_{t} p(z_t \mid x_t)}_{\text{likelihoods}}. \qquad (7)$$

The optimization problem represented in (7) can be further simplified when it is assumed that the system dynamics and the collected measurements are only corrupted by additive Gaussian noise. When this assumption holds, the optimization problem simplifies to a Non-Linear Least Squares (NLLS) problem, as presented in (8),

$$\hat{X} = \operatorname*{arg\,min}_{X}\; \lVert x_0 - \mu_0 \rVert^{2}_{\Sigma_0} + \sum_{t}\lVert x_t - f(x_{t-1}) \rVert^{2}_{\Lambda_t} + \sum_{t}\lVert z_t - h(x_t) \rVert^{2}_{\Xi_t} \qquad (8)$$

where $\mu_0$ is the prior information about the workspace, $f(\cdot)$ incorporates the knowledge of workspace dynamics, and $h(\cdot)$ is the observation mapping function. Additionally, $\Sigma_0$ incorporates the uncertainty about the prior information, $\Lambda_t$ incorporates the uncertainty about the system dynamics, and $\Xi_t$ incorporates the uncertainty about the measurements, respectively.

To adapt the generic formulation presented in (8) to the problem of flower pose filtering, we first assume a static motion model; however, additional information about the growth cycle could be incorporated later. Additionally, the set of observations is provided by the previously described pose classifier. Utilizing the specified models and observations, the cost function provided in (8) is optimized using the Levenberg-Marquardt algorithm [27] to provide a filtered pose estimate of each flower in the workspace.
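As an illustration of this filtering step, the sketch below fuses several noisy position observations of a single flower under a static motion model using a Levenberg-Marquardt solver; the noise values are placeholders, and the full system estimates the complete pose rather than position only.

```python
import numpy as np
from scipy.optimize import least_squares

def fuse_flower_position(observations, prior, prior_sigma=0.05, obs_sigma=0.01):
    """Estimate a static 3-D flower position from noisy observations (NLLS form of (8)).

    observations: (N, 3) array of measured positions; prior: (3,) initial estimate.
    Residuals are whitened by the assumed standard deviations.
    """
    observations = np.asarray(observations)

    def residuals(x):
        r_prior = (x - prior) / prior_sigma                # prior term
        r_obs = ((observations - x) / obs_sigma).ravel()   # measurement terms
        return np.concatenate([r_prior, r_obs])

    result = least_squares(residuals, x0=prior, method="lm")  # Levenberg-Marquardt [27]
    return result.x

# Example: three noisy sightings of one flower from different vantage points
obs = [[0.52, 0.11, 0.33], [0.53, 0.10, 0.34], [0.51, 0.12, 0.32]]
estimate = fuse_flower_position(obs, prior=np.array([0.5, 0.1, 0.3]))
```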

Fig. 7: (Left) Section view of the end-effector showing two of the three linear servos inside the 3D printed housing mounted to the base of the end-effector. (Middle) Outer view of the end-effector showing the flexible plate connected to the linear actuators. (Right) Outer view of the end-effector performing the pollination procedure during experiments.

VI Planning and Control

VI-A Motion Planning

After estimating the flower and obstacle maps, a trajectory is planned for the end-effector to visit and pollinate each flower. The goal is to find a trajectory that minimizes the motion required to visit each flower under the kinematic constraints of the arm, given the pose of each flower and the obstacle map. As described in Section III, a vantage point is defined in front of each flower to refine the pose of the flower before pollination. The set of vantage points is denoted $V = \{v_1, \ldots, v_n\}$, where $v_i$ is the pose of the end-effector at the $i$th vantage point and $n$ is the number of flowers in the map. The path length (or cost) for the end-effector to travel between a pair of vantage points is given by

$$c(v_i, v_j) = \int_{0}^{1} \left\lVert \dot{\gamma}_{ij}(t) \right\rVert \, dt \qquad (9)$$

where $\gamma_{ij}(t)$ is a continuous-time function defining the trajectory of the end-effector between vantage points $v_i$ and $v_j$, determined by a point-to-point planner (i.e., a planner for the trajectory between a pair of vantage points). In this work, the Open Motion Planning Library (OMPL) [28] is utilized for point-to-point planning.

The problem of finding the shortest end-effector path through all vantage points is a form of the Traveling Salesman Problem (TSP), with the corresponding planning objective defined as

$$C(\sigma) = \sum_{k=1}^{n-1} c\!\left(v_{\sigma(k)}, v_{\sigma(k+1)}\right) \qquad (10)$$

where $\sigma \in \Sigma$ is some ordering of the vantage points, $\Sigma$ is the set of all permutations of vantage points, and $C(\sigma)$ is the corresponding cost for visiting each vantage point in the order $\sigma$. From the previous sections, we know the pose of each flower as well as the space occupied by obstacles. Using this information, each vantage point is placed at a constant offset from its flower, so that the flower is in view of the camera from the corresponding vantage point. Therefore, the following optimization problem is solved:

$$\sigma^{*} = \operatorname*{arg\,min}_{\sigma \in \Sigma} C(\sigma) \qquad (11)$$
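A brute-force sketch of the tour optimization in (10)-(11) is shown below; the pairwise costs are assumed to come from the point-to-point planner, and exhaustive enumeration is only practical because a workspace contains a handful of flowers.

```python
from itertools import permutations

def plan_tour(cost):
    """Find the vantage-point ordering minimizing the summed path cost, as in (11).

    cost: dict mapping (i, j) vantage-point index pairs to the path length (9)
    returned by the point-to-point planner.
    """
    n = max(i for i, _ in cost) + 1
    best_order, best_cost = None, float("inf")
    for order in permutations(range(n)):  # exhaustive search over all orderings
        total = sum(cost[(order[k], order[k + 1])] for k in range(n - 1))
        if total < best_cost:
            best_order, best_cost = order, total
    return best_order, best_cost

# Example with three vantage points and symmetric placeholder costs
c = {(0, 1): 0.8, (1, 0): 0.8, (0, 2): 1.1, (2, 0): 1.1, (1, 2): 0.5, (2, 1): 0.5}
order, total = plan_tour(c)
```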

Several software packages were used in implementing the proposed methods. The inverse kinematics of the arm are solved using TRAC-IK [29]. To check for collisions during planning, the Flexible Collision Library (FCL) [30] is utilized, which incorporates the model of the arm and the generated obstacle map. These libraries are accessed through MoveIt! [31], which was used in the software developed for motion planning.
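For reference, a point-to-point motion request through the MoveIt! Python interface might look as follows; the planning group name and reference frame are assumptions, not the exact BrambleBee configuration, and a running ROS/MoveIt! setup for the arm is required.

```python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("vantage_point_planner")
arm = moveit_commander.MoveGroupCommander("arm")  # hypothetical planning group name

target = PoseStamped()
target.header.frame_id = "base_link"  # assumed planning frame
target.pose.position.x = 0.45
target.pose.position.y = 0.10
target.pose.position.z = 0.30
target.pose.orientation.w = 1.0

arm.set_pose_target(target)
success = arm.go(wait=True)  # plan (e.g., with OMPL) and execute if a plan is found
arm.stop()
arm.clear_pose_targets()
```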

VI-B Visual Servoing

Once the end-effector is positioned in front of a flower, visual servoing is used to steer the end-effector towards the flower by controlling the trajectory in terms of desired end-effector positions. The procedure is comprised of two main steps: 1) the axis of the end-effector is aligned with the center of the flower by moving in the plane parallel to the face of the flower; 2) the end-effector moves along the axis orthogonal to the flower until making contact. To execute this procedure, the velocities of the individual joints $\dot{q}$ are determined such that the end-effector reaches the desired pose. The joint velocities are computed from a vector of end-effector translational and angular velocities (i.e., $v$ and $\omega$, respectively), denoted by $\xi = [v^{\top}, \omega^{\top}]^{\top}$. The relationship between $\xi$ and $\dot{q}$ is defined by

$$\xi = J(q)\,\dot{q} \qquad (12)$$

where $J(q)$ is the robot Jacobian, which is obtained using TRAC-IK [29].

To achieve parallel servoing, the offset $d_{\parallel}$ is computed in the plane parallel to the face of the flower that would align the end-effector with the flower. This offset is used to set the direction of the end-effector velocity such that $v = \lambda\, d_{\parallel}$, where $\lambda$ is a scalar representing the velocity scale. Since the proper orientation is assumed at the start of visual servoing, the angular velocities are set to zero (i.e., $\omega = 0$), except in the case where $J$ is ill-conditioned, which is discussed later. The individual joint velocities are then determined using

$$\dot{q} = J(q)^{-1}\,\xi \qquad (13)$$

During visual servoing, the norm of the joint velocities $\lVert\dot{q}\rVert$ is set to a constant value, which determines the value of $\lambda$. This scaling is always performed before applying $\dot{q}$ to the joints to ensure safe and consistent performance of the arm, although it causes some variance in the end-effector velocity. When $\lVert d_{\parallel}\rVert$ is close to zero, the end-effector and the center of the flower are nearly collinear (i.e., the end-effector is pointing almost directly at the center of the flower). The procedure then transitions to orthogonal servoing, which moves the end-effector towards the flower along the line orthogonal to the face of the flower. The vector $d_{\perp}$ from the tip of the end-effector to the center of the flower, expressed in the global frame, is used to set the direction of the velocity by setting $v = \lambda\, d_{\perp}$, where (13) is again used to determine $\dot{q}$.

Occasionally, the arm reaches singularity conditions in which the end-effector cannot move in the desired translational direction while maintaining a fixed orientation. Therefore, a check is used to determine if $J$ is ill-conditioned. If this is the case, translation-only servoing is performed to bypass the singularity condition. The Jacobian is reduced to $J_v$, equal to the first 3 rows of the original $J$, and $\dot{q}$ is calculated using the Moore-Penrose right pseudo-inverse [32]:

$$\dot{q} = J_v^{\top}\left(J_v J_v^{\top}\right)^{-1} v \qquad (14)$$

This solution minimizes the effort $\lVert\dot{q}\rVert$ while still satisfying $v = J_v\,\dot{q}$.
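A compact sketch of the servoing switch described above is given below; the Jacobian is assumed to come from the kinematics solver, the condition-number threshold and speed constant are placeholder values, and a pseudo-inverse is used in place of the exact inverse in (13) for robustness.

```python
import numpy as np

JOINT_SPEED_NORM = 0.2   # desired constant joint-velocity norm (placeholder value)
ALIGN_TOLERANCE = 0.005  # meters; switch to orthogonal servoing below this

def servo_step(J, d_parallel, d_orthogonal):
    """Compute one visual-servoing increment of joint velocities q_dot.

    J: 6xN robot Jacobian from the kinematics solver.
    d_parallel: offset in the plane parallel to the flower face.
    d_orthogonal: vector from the end-effector tip to the flower center.
    """
    # Choose the translational direction: align first, then approach.
    v = np.asarray(d_parallel if np.linalg.norm(d_parallel) > ALIGN_TOLERANCE
                   else d_orthogonal)

    if np.linalg.cond(J) < 1e4:                       # well-conditioned: hold orientation
        xi = np.concatenate([v, np.zeros(3)])         # zero angular velocity, as in (13)
        q_dot = np.linalg.pinv(J) @ xi
    else:                                             # near-singular: translation only
        Jv = J[:3, :]                                 # first three rows of J
        q_dot = Jv.T @ np.linalg.inv(Jv @ Jv.T) @ v   # right pseudo-inverse, as in (14)

    # Rescale so the joint-velocity norm stays constant (this sets the scale lambda).
    return JOINT_SPEED_NORM * q_dot / np.linalg.norm(q_dot)
```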

This process is incrementally repeated until contact is assumed to be made with the flower. Due to the current design of the end-effector, the depth-camera eventually loses sight of the flower while approaching it. Therefore, there is a short motion where the manipulator blindly operates using the most recent flower pose estimate. In the future, an endoscope camera will be centrally placed in the end-effector to allow for continuous tracking of the flower until it is reached.

Scenario 1 2 3 4 5 6 7 8
# Trials 5 5 6 6 5 7 7 6
# Reachable 3 3 2 2 2 4 4 4
# Avg. Seen 3 2.6 2.8 1.8 2 3.7 3.4 3.8
% Touched 100% 100% 70.6% 100% 100% 100% 62.5% 91.3%
% Pollinated 80% 76.9% 52.9% 81.8% 90% 92.3% 62.5% 73.9%
% Missed 0% 0% 29.4% 0% 0% 0% 37.5% 8.7%
TABLE II: Experimental results, where '# Trials' is the number of trials run for each scenario, '# Reachable' is the number of reachable flowers (i.e., flowers within 0.7 m of the base of the manipulator), '# Avg. Seen' is the average number of flowers seen in the workspace, '% Touched' is the percentage of flowers touched by the end-effector, '% Pollinated' is the percentage of flowers pollinated (i.e., the end-effector touched the flower and its anthers), and '% Missed' is the percentage of flowers the end-effector was not able to touch.

VII Manipulation

VII-A Mechanical Design

The design of the end-effector was inspired by a mixture of natural pollinators and human pollination methods. The end-effector must be capable of reaching a desired pose with millimeter accuracy without damaging the plant or flowers. Several key constraints were considered while designing the end-effector such as the range of actuation, size, and material. Due to the size of the bramble flowers, the diameter of the tip of the end-effector is limited to no more than 4 cm. The tip must also be flexible to enable an increased range of motion, which allows for the precise alignment of the end-effector to each flower.

To achieve this, we use three miniature linear servos (Actuonix L16-R) inside a 3D printed enclosure acting as a parallel robot. A flexible plate is attached to the linear servos to allow for off-axis flexibility. The material used for the plate is TPU-95, which is flexible and allows a wide range of motion. It is then coated in cotton padding for transferring pollen. In the future, alternative materials will be investigated for attachment to the end-effector.

VII-B Inverse Kinematics

Due to the flexible nature of the tip of the end-effector, a lookup table was employed to approximate the inverse kinematics and enable precise pose control. To create the lookup table, the end-effector was run through all permutations of actuator commands, and the pose of the flexible plate (in the camera reference frame) and the pose of the joints (in the arm's reference frame) were recorded for each permutation. To record the end-effector pose, an ArUco marker was attached to the flexible plate, and the Intel RealSense was used to extract the pose. Using the recorded poses of the end-effector (in the camera reference frame) and the joints of the robotic arm (in the arm's reference frame), standard hand-eye calibration methods [33] were used to estimate the transformation between the end-effector and the robotic arm for each permutation of actuator commands. The resulting transformations were stored in a lookup table that can be queried to find the actuator commands closest to a desired end-effector pose.
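A sketch of building and querying such a lookup table is shown below; it simplifies the calibration to a recorded tip position per command permutation, and the function names and command grid are illustrative.

```python
import numpy as np
from itertools import product

def build_ik_table(command_values, measure_tip_position):
    """Record the tip position reached by each permutation of actuator commands.

    command_values: iterable of positions each linear servo can be commanded to.
    measure_tip_position: callable (c1, c2, c3) -> measured 3-D tip position,
    e.g. obtained from the hand-eye-calibrated marker pose during calibration.
    """
    commands, poses = [], []
    for cmd in product(command_values, repeat=3):  # all three-servo command permutations
        commands.append(cmd)
        poses.append(measure_tip_position(*cmd))
    return np.array(commands), np.array(poses)

def lookup_commands(table, desired_tip_position):
    """Return the actuator commands whose recorded pose is closest to the target."""
    commands, poses = table
    idx = np.argmin(np.linalg.norm(poses - desired_tip_position, axis=1))
    return tuple(commands[idx])
```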

VIII Experimental Results

We performed a series of experiments to evaluate the performance of the described system. Since these experiments were conducted during the winter months, when bramble flowers were not in bloom, an artificial plant resembling a real bramble bush, with high-fidelity artificial bramble flowers, was used instead. The artificial plant was divided into 8 separate sections (or scenarios), each containing a varying number of isolated flowers. The experimental setup was illustrated earlier in Fig. 1. For each scenario, at least 5 trials were performed, giving a total of 47 experiments. The results in Table II summarize, for each scenario, the number of trials, the number of reachable flowers (i.e., flowers within 0.7 m of the base of the manipulator), the average number of flowers seen in the workspace, the percentage of flowers touched, the percentage of flowers 'pollinated' (i.e., the end-effector touched the flower and its anthers after extending the linear actuators), and the percentage of flowers missed (i.e., the end-effector did not touch the flower after extending the linear actuators).

Out of the 144 total flowers in all trials, 134 were accurately identified by the image processing algorithms, with only two false positives, yielding a 93.1% detection accuracy. The pollination success rate is 76.9%. In the failed attempts, most flowers were either facing away from BrambleBee, in difficult-to-reach areas, or occluded by the plant's leaves. In these failure cases, the tip of the end-effector would miss the center of the flower by no more than 2 cm. The main causes of these errors are: 1) errors in the estimated orientation of individual flowers, and 2) the 'blind driving' while approaching a flower, since the depth-camera loses sight of the flower. Thus, improving the algorithms for estimating the pose of flowers will be a focus of future research. Also, as stated in Section VI-B, errors due to 'blind driving' towards a flower will be mitigated by an endoscope camera placed centrally in the end-effector, which will allow for continuous flower tracking until contact.

IX Conclusion and Future Work

This paper presented a fully autonomous system capable of precision pollination of small flowers. The proposed pollination system was developed as a subsystem for the autonomous ground vehicle BrambleBee. Technologies in perception, planning and control, and autonomy were integrated to enable precise interactions with flowers. The proposed system has the potential to be leveraged for other meticulous tasks such as harvesting and monitoring of crops. The experiments show that the robot is capable of operating with high precision, achieving a 93.1% detection accuracy and a 76.9% pollination success rate on average. The capabilities of the developed system are demonstrated in this video: https://youtu.be/ZbgtP9CHycA. To our knowledge, this system is the first to demonstrate both precision and autonomy for pollinating small flowers.

A brief summary of the future work discussed throughout the paper is provided here. Currently, our pollination system works well for sparsely distributed artificial flowers; however, the system will be verified in the near future through experiments on real plants once flowers are blooming. The primary failure mode of the system was missing contact with the anthers of a flower, due to errors in the pose estimates of each flower (particularly the orientation). Thus, further work is needed on estimating the pose of flowers, which would significantly increase the accuracy of the proposed system.

References

  • [1] H. Valin, R. D. Sands, D. Van der Mensbrugghe, G. C. Nelson, H. Ahammad, E. Blanc, B. Bodirsky, S. Fujimori, T. Hasegawa, P. Havlik et al., “The future of food demand: understanding differences in global economic models,” Agricultural Economics, vol. 45, no. 1, pp. 51–67, 2014.
  • [2] E. J. Van Henten, J. Hemming, B. Van Tuijl, J. Kornet, J. Meuleman, J. Bontsema, and E. Van Os, “An autonomous robot for harvesting cucumbers in greenhouses,” Autonomous Robots, vol. 13, no. 3, pp. 241–258, 2002.
  • [3] J. Baeten, K. Donné, S. Boedrij, W. Beckers, and E. Claesen, “Autonomous fruit picking machine: A robotic apple harvester,” in Field and service robotics.   Springer, 2008, pp. 531–539.
  • [4] A. J. Scarfe, R. C. Flemmer, H. Bakker, and C. L. Flemmer, “Development of an autonomous kiwifruit picking robot,” in 2009 4th International Conference on Autonomous Robots and Agents.   IEEE, 2009, pp. 380–384.
  • [5] C. Lehnert, A. English, C. McCool, A. W. Tow, and T. Perez, “Autonomous sweet pepper harvesting for protected cropping systems,” IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 872–879, 2017.
  • [6] T. Mueller-Sim, M. Jenkins, J. Abel, and G. Kantor, “The robotanist: a ground-based agricultural robot for high-throughput crop phenotyping,” in 2017 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2017, pp. 3634–3639.
  • [7] N. Ohi, K. Lassak, R. Watson, J. Strader, Y. Du, C. Yang, G. Hedrick, J. Nguyen, S. Harper, D. Reynolds et al., “Design of an autonomous precision pollination robot,” in International Conference on Intelligent Robots and Systems.   IEEE, 2018.
  • [8] H. Williams, M. Nejati, S. Hussein, N. Penhall, J. Y. Lim, M. H. Jones, J. Bell, H. S. Ahn, S. Bradley, P. Schaare et al., “Autonomous pollination of individual kiwifruit flowers: Toward a robotic kiwifruit pollinator,” Journal of Field Robotics.
  • [9] T. Yuan, S. Zhang, X. Sheng, D. Wang, Y. Gong, and W. Li, “An autonomous pollination robot for hormone treatment of tomato flower in greenhouse,” in 2016 3rd International Conference on Systems and Informatics (ICSAI).   IEEE, 2016, pp. 108–113.
  • [10] D. Slaughter, D. Giles, and D. Downey, “Autonomous robotic weed control systems: A review,” Computers and electronics in agriculture, vol. 61, no. 1, pp. 63–78, 2008.
  • [11] B. Åstrand and A.-J. Baerveldt, “An agricultural mobile robot with vision-based perception for mechanical weed control,” Autonomous robots, vol. 13, no. 1, pp. 21–35, 2002.
  • [12] R. E. Mallinger, H. R. Gaines-Day, and C. Gratton, “Do managed bees have negative effects on wild bees?: A systematic review of the literature,” PloS one, vol. 12, no. 12, p. e0189268, 2017.
  • [13] D. Goulson, E. Nicholls, C. Botías, and E. L. Rotheray, “Bee declines driven by combined stress from parasites, pesticides, and lack of flowers,” Science, vol. 347, no. 6229, p. 1255957, 2015.
  • [14] C. Binns, “Robotic insects could pollinate flowers and find disaster victims,” Popular Science, 2009.
  • [15] S. Berman, V. Kumar, and R. Nagpal, “Design of control policies for spatially inhomogeneous robot swarms with application to commercial pollination,” in 2011 IEEE International Conference on Robotics and Automation.   IEEE, 2011, pp. 378–385.
  • [16] S. Berman, R. Nagpal, and A. Halász, “Optimization of stochastic strategies for spatially inhomogeneous robot swarms: a case study in commercial pollination,” in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems.   IEEE, 2011, pp. 3923–3930.
  • [17] R. N. Abutalipov, Y. V. Bolgov, and H. M. Senov, “Flowering plants pollination robotic system for greenhouses by means of nano copter (drone aircraft),” in 2016 IEEE Conference on Quality Management, Transport and Information Security, Information Technologies (IT&MQ&IS).   IEEE, 2016, pp. 7–9.
  • [18] T. Shaneyfelt, M. M. Jamshidi, and S. Agaian, “A vision feedback robotic docking crane system with application to vanilla pollination,” International Journal of Automation and Control, vol. 7, no. 1-2, pp. 62–82, 2013.
  • [19] S. Gan-Mor, Y. Grinshpon, Y. Glik, B. Ronen, L. Rozenfeld et al., “Stabilization of a mobile robotic arm for precise spraying and pollinating in tall trees.” in Proceedings of the International Conference of Agricultural Engineering, 2008.
  • [20] G. J. Amador and D. L. Hu, “Sticky solution provides grip for the first robotic pollinator,” Chem, vol. 2, no. 2, pp. 162–164, 2017.
  • [21] S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez, “Automatic generation and detection of highly reliable fiducial markers under occlusion,” Pattern Recognition, vol. 47, no. 6, pp. 2280–2292, 2014.
  • [22] F. Dellaert, M. Kaess et al., “Factor graphs for robot perception,” Foundations and Trends® in Robotics, vol. 6, no. 1-2, pp. 1–139, 2017.
  • [23] N. Friedman, D. Geiger, and M. Goldszmidt, “Bayesian network classifiers,” Machine Learning, vol. 29, no. 2-3, pp. 131–163, 1997.
  • [24] C. Sammut and G. I. Webb, Encyclopedia of machine learning.   Springer Science & Business Media, 2011.
  • [25] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818–2826.
  • [26] A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard, “OctoMap: An efficient probabilistic 3D mapping framework based on octrees,” Autonomous Robots, vol. 34, no. 3, pp. 189–206, 2013.
  • [27] J. J. Moré, “The Levenberg-Marquardt algorithm: implementation and theory,” in Numerical Analysis.   Springer, 1978, pp. 105–116.
  • [28] I. A. Sucan, M. Moll, and L. E. Kavraki, “The Open Motion Planning Library,” IEEE Robotics & Automation Magazine, vol. 19, no. 4, pp. 72–82, 2012.
  • [29] P. Beeson and B. Ames, “TRAC-IK: An open-source library for improved solving of generic inverse kinematics,” in 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids).   IEEE, 2015, pp. 928–935.
  • [30] J. Pan, S. Chitta, and D. Manocha, “FCL: A general purpose library for collision and proximity queries,” in 2012 IEEE International Conference on Robotics and Automation.   IEEE, 2012, pp. 3859–3866.
  • [31] S. Chitta, I. Sucan, and S. Cousins, “MoveIt! [ROS Topics],” IEEE Robotics & Automation Magazine, vol. 19, no. 1, pp. 18–19, 2012.
  • [32] A. Albert, Regression and the Moore-Penrose pseudoinverse.   Elsevier, 1972.
  • [33] R. Y. Tsai and R. K. Lenz, “A new technique for fully autonomous and efficient 3d robotics hand/eye calibration,” IEEE Transactions on robotics and automation, vol. 5, no. 3, pp. 345–358, 1989.