I. Introduction
Mobile manipulators combine mobile platforms and robot arms. Unlike fixed-base industrial arms, mobile manipulators have a larger workspace and can deal with more flexible and complex tasks such as housekeeping, construction, and airplane assembly [3]. Current research on mobile manipulators mainly focuses on techniques in slow-motion scenarios, such as kinematic redundancy resolution [4], optimal path planning [3] and control [5]. However, mobile manipulators are capable of high-speed motions, and thus have the potential to be used in applications such as industrial high-speed pick-and-place, or other challenging tasks such as table tennis [6]. In this paper we focus on high-speed trajectory generation and accurate tracking for mobile manipulators, with an emphasis on not compromising accuracy at higher speeds.
We focus on a benchmark task for accurate high-speed motions: object catching, specifically, ball catching. Within a fraction of a second, the robot needs to (i) predict the ball's trajectory, (ii) generate and update a trajectory for graceful catching, and (iii) accurately track it with the mobile manipulator. There are numerous papers [1, 7, 8, 9] that address the ball catching problem with a fixed-base manipulator. However, using a mobile manipulator introduces additional challenges, due to the interplay between the following three characteristics:
Controlling wheel-driven bases: Mobile bases driven by Mecanum wheels [10], as used in this paper, usually have lower position accuracy than manipulators; compare, for example, the reported position accuracy of the Mecanum-wheel-driven KUKA KMR Quantec [11] with that of the UR10 arm [12]. Thus extra effort is needed to control the base accurately.
High precision in position and time: Precise end effector trajectory tracking, both spatial (in position and orientation) and temporal, is a prerequisite for successful catching. Tight tolerances in position and in time must be met in this paper; the exact values depend on the gripper [2].
Real-time computation: On average, the duration of the ball's free flight trajectory is one second, which puts a hard constraint on the computation time of ball prediction, motion planning and control. Motion planning is very challenging since complex constraints such as the nonlinear robot dynamics and forward kinematics must be considered.
Given the three challenges above, accurate high-speed motion generation for mobile manipulator ball catching has not been widely explored, with a few very notable exceptions [2]. In this paper we introduce: (i) a bi-level motion planning scheme that generates smooth trajectories in real time despite nonlinear constraints, and (ii) a learning-based controller that improves tracking accuracy and enables the mobile manipulator to follow trajectories sufficiently accurately. Our proposed bi-level optimization scheme for motion planning uses sequential quadratic programming (SQP) to select the catching pose, and solves a low-level quadratic program (QP) that generates the trajectory to the catching pose. Our proposed learning-based controller learns the inverse dynamics of the system [13]
to improve the control accuracy of the mobile manipulator. We use a modified Kalman filter for improved ball state estimation, which incorporates the influence of aerodynamic drag in the state prediction step.
We present a thorough analysis and evaluation of the major components of the system. Experimental data shows that our bi-level optimization scheme runs in real time on average, and that the inverse dynamics learning method substantially reduces tracking error. The success rate achieved on the real-robot ball catching task is higher than the success rates reported in [1, 2]. A video with experiments showing accurate high-speed motions can be found at http://tiny.cc/ball_catch.
The contributions of this paper are twofold. First, we propose an algorithmic framework to generate and accurately follow high-speed trajectories with a mobile manipulator. The proposed framework has a bi-level optimization scheme for motion planning, which can generate smooth joint trajectories under complex constraints in real time. Additionally, the proposed framework has a learning-based controller to improve the tracking performance of the mobile manipulator. Our second contribution is the extensive evaluation of the performance of the proposed framework on the mobile manipulator, which includes a thorough error analysis of the major components. The framework is tested in real robot catching experiments and achieves a high success rate. This success requires a tight interplay between different system components and hardware constraints. Implementation details of this complex system are presented, and should serve as a resource for similar system designs.
II. Related Work
A large body of work has been devoted to ball catching with a fixed-base manipulator [1, 7, 8, 9]
, with special focus on motion planning. Early pioneering work in this field used heuristic methods for selecting the catching point and generated the joint trajectory via interpolation
[7]. Such methods are simple, as they restrict catching points to a certain area, but may not find the optimal robot motion for catching. Recent work uses nonlinear optimization methods for selecting a catching pose, and, for simplicity, parameterization methods like trapezoidal velocity profiles and third-order polynomials for path generation. In [1], a ball catching system consisting of a fixed-base arm with several Degrees of Freedom (DoF) and a four-finger hand is presented. It uses SQP for high-level goal selection, and restricts joint trajectories to trapezoidal velocity profiles. In contrast, in our framework, we do not parameterize the trajectory. Instead, we use a QP to generate intermediate waypoints that define smooth and flexible trajectories. In the realm of high-speed manipulation, the work in [8] proposes a control law based on dynamical systems that enables the system to catch fast-moving objects. However, only fixed-base manipulators are used. The work most similar to ours is the mobile humanoid system presented in [2], which consists of two lightweight arms and an omnidirectional mobile base. In their work, the mobile base is restricted to motion along a single axis. Further, they use customized hardware capable of high-speed motions. In contrast, in our work, we achieve high-speed motions of the mobile base in the full plane, which brings the motion capacity of omnidirectional platforms into full play. Moreover, we use an off-the-shelf mobile manipulator and a learning-based controller for improved tracking accuracy, which makes our approach easy to generalize to other robotic platforms without the need for customized hardware and low-level access.
There are numerous works that try to improve the tracking performance of high-speed and aggressive trajectories. Non-learning-based methods [5, 14] rely on accurate analytical system dynamics models, which are hard to obtain. Learning-based methods mainly focus on learning the system's dynamics or inverse dynamics models [15, 16]. In [16]
, a Deep Neural Network (DNN) is used for direct inverse control of a quadrotor. However, guaranteeing stability with a DNN in the control loop can be very difficult. In contrast, the inverse dynamics learning method from
[13] uses a DNN as an add-on module outside the original closed-loop system, which is safer in terms of stability. This method's ability to improve impromptu trajectory tracking has been demonstrated on quadrotors [13, 17], but not on mobile manipulators.

III. System Overview and Problem Statement
This section presents an overview of the mobile manipulator system and the flying ball model, followed by a formal statement of the ball catching problem.
III-A System Overview
The mobile manipulator system considered in this paper consists of a 6-Degree-of-Freedom (DoF) manipulator rigidly mounted on a 3-DoF omnidirectional mobile base. The yaw rotation of the mobile base is not used in this work since it is too slow for the application at hand. Thus the system has 8 DoF in total, which we refer to as robot joints for brevity. The joint configuration vector of the robot is q = (q_a, q_b), where q_a is a vector containing the six arm joint angles and q_b represents the planar Cartesian position of the mobile base. The forward kinematics of the mobile manipulator shown in Fig. 2 are assumed to be known [18].
The flying ball’s motion is modeled as a point mass under the influence of aerodynamic drag and gravity. The aerodynamic drag is proportional to the square of the speed [19]. The ball’s equation of motion can be written as follows:
\ddot{p} = g - K_D \, \|\dot{p}\| \, \dot{p}    (1)
where p is the ball's Cartesian position in the world frame and \dot{p}, \ddot{p} are the corresponding velocity and acceleration, g = (0, 0, -9.81) m/s^2 is the gravitational acceleration, K_D is the aerodynamic drag coefficient, and \|\cdot\| is the Euclidean norm.
III-B Problem Statement
Given an initial robot configuration q_0, ball state (p_0, \dot{p}_0), and time t_0, the goal is to catch the ball with the robot's end effector at a certain catch time t_c. To achieve this, two conditions need to be satisfied: 1) at catch time t_c, the origin of the end effector frame should coincide with the ball's position; and 2) the axis of the end effector frame should be aligned with the ball's velocity vector. Formally, these constraints can be written as:
p_{ee}(q_c) = p(t_c), \qquad \angle(z_{ee}(q_c), \dot{p}(t_c)) = \pi    (2)
where q_c is the robot joint configuration at catch time t_c, p_{ee}(q_c) is the position of the end effector frame's origin in the world frame, z_{ee}(q_c) is the unit vector of the end effector frame's axis in the world frame, and \angle(\cdot, \cdot) refers to the angle between two vectors.
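The angle constraint above is a simple geometric check. As a minimal illustration (the function name is ours, not the paper's), the angle between two vectors can be computed robustly as follows:

```python
import numpy as np

def angle_between(a, b):
    """Angle in [0, pi] between two 3-vectors, clipped for numerical safety."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

A catch-pose candidate would then be checked by evaluating `angle_between` on the end effector axis and the predicted ball velocity at the candidate catch time.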
IV. Methodology
This section presents the framework we propose, which includes three main components: (i) ball estimation and prediction, (ii) bi-level motion planning, and (iii) low-level control with inverse dynamics learning, as shown in Fig. 3.
IV-A Ball Estimation and Prediction
The first component estimates the ball state and uses it to generate trajectory predictions within a finite horizon. To estimate the ball state (p, \dot{p}), we use a modified discrete-time Kalman filter that considers the influence of air drag in the state prediction step, as presented in [19]. In the state prediction step at discrete time index k, we predict the ball state at time step k+1 given measurements up to time k, according to the estimated state at the previous time step and the motion model in (1), as follows:
\hat{p}_{k+1|k} = \hat{p}_{k|k} + \delta t \, \hat{\dot{p}}_{k|k}, \qquad \hat{\dot{p}}_{k+1|k} = \hat{\dot{p}}_{k|k} + \delta t \left( g - K_D \| \hat{\dot{p}}_{k|k} \| \, \hat{\dot{p}}_{k|k} \right)    (3)
where \delta t is the time interval between time indices k and k+1. The update step and the variance propagation and update are the same as in the vanilla Kalman filter [21]. The aerodynamic drag coefficient K_D is estimated offline via a recursive least-squares estimator [19].

For the trajectory prediction, given the estimated ball state at time index k, future ball trajectory predictions are made by numerically integrating (1) over a finite horizon. Hence, ball trajectory predictions change whenever a new state measurement arrives, and a new motion plan should be made. This requires the motion planner to produce new plans in a very short time [9], so that the latest ball trajectory prediction can be used in the latest robot motion plan.
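The finite-horizon prediction can be sketched in a few lines. The following is a minimal illustration (function and parameter names are ours, not the paper's) that forward-Euler integrates the drag model (1):

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravitational acceleration in m/s^2

def predict_ball(p, v, k_d, dt, steps):
    """Predict the ball trajectory by forward-Euler integration of (1):
    acceleration = g - k_d * ||v|| * v  (point mass with quadratic drag)."""
    p = np.asarray(p, dtype=float).copy()
    v = np.asarray(v, dtype=float).copy()
    traj = [(p.copy(), v.copy())]
    for _ in range(steps):
        a = G - k_d * np.linalg.norm(v) * v  # drag opposes velocity
        p = p + dt * v
        v = v + dt * a
        traj.append((p.copy(), v.copy()))
    return traj
```

In the actual system the integration would be re-run from the latest filtered state each time a new measurement arrives.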
IV-B Motion Planning
In order to achieve better performance, the robot's trajectory needs to be updated whenever a new ball trajectory prediction is available. The motion planner is bi-level: 1) the high-level planner calculates a feasible and optimal catching configuration q_c and catch time t_c that enable the end effector to intercept the ball's trajectory at time t_c, and 2) the low-level planner generates a smooth joint trajectory that takes the robot from its current configuration to the desired catch configuration. Moreover, the motion planner must take into account kinematic redundancy, collision avoidance, and robot kinematic and dynamic constraints (e.g., position, velocity and torque constraints). This is a nonlinear and nonconvex optimization problem [9]; solving it even offline is still an open question in the field [1, 9].
To tackle this planning problem in an online fashion, we make the following three simplifications. Similar simplifications are also made in the ball catching literature and work well in experiments [1, 2, 9, 22].
Simplification 1 (Arm-base collision)
We assume there are no nearby obstacles and do not check for arm self-collisions; hence, no obstacle or self-collision check is needed. The only collision that may happen is between the arm end effector and the mobile base.
This simplification is reasonable because, during robot catching experiments, the arm usually extends outward to reach for the ball. In such a scenario, arm self-collision is rare, but collisions between the arm and the mobile base are possible. Specifically, to prevent such a collision, we require the arm end effector to stay inside a semi-cylinder centered on the arm's base:
(p^a_x)^2 + (p^a_y)^2 \le r^2, \qquad 0 \le p^a_z \le h    (4)
where p^a = (p^a_x, p^a_y, p^a_z) is the end effector's position in the arm frame, which can be calculated via the forward kinematics given the robot configuration q, r is the cylinder's radius and h is the cylinder's height.
Through experiments, we noticed that as long as the end effector is inside this semi-cylinder at catch time, the whole generated end effector trajectory usually stays in that space. Thus the constraints in (4) are only checked at the catch time t_c instead of at every intermediate time step.
Simplification 2 (Kinematic planning)
We use kinematic models to describe the joint motion, and specify objective functions and constraints on joint accelerations instead of torques.
This simplification works in experiments as long as the maximum joint acceleration values are chosen conservatively and the low-level joint controllers work properly. We use a double integrator as the kinematic model of a single joint and take the joint acceleration as the control input:
p^i_{k+1} = p^i_k + \delta t \, v^i_k + \tfrac{1}{2} \delta t^2 \, u^i_k, \qquad v^i_{k+1} = v^i_k + \delta t \, u^i_k    (5)
where the index i represents the i-th joint, \delta t is the step size^1, and u^i_k is the control input. To ensure trajectory feasibility, we have the following box constraints:

^1 The step size used for the joint motion equations is not necessarily the same as the step size used for the Kalman filter.
q^i_{min} \le p^i_k \le q^i_{max}, \qquad \dot{q}^i_{min} \le v^i_k \le \dot{q}^i_{max}, \qquad u^i_{min} \le u^i_k \le u^i_{max}    (6)
where q^i_{min}, q^i_{max}, \dot{q}^i_{min}, \dot{q}^i_{max}, u^i_{min}, u^i_{max} are the minimum and maximum values of the joint position, velocity and acceleration, respectively.
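The double-integrator model (5) and box constraints (6) can be sketched directly. Below is a minimal illustration (function names and the tuple layout of the limits are ours) that propagates one joint through (5) and checks feasibility against (6):

```python
import numpy as np

def rollout(p0, v0, u_seq, dt):
    """Propagate one joint through the double integrator (5):
    p_{k+1} = p_k + dt*v_k + 0.5*dt^2*u_k,  v_{k+1} = v_k + dt*u_k."""
    p, v = [p0], [v0]
    for u in u_seq:
        p.append(p[-1] + dt * v[-1] + 0.5 * dt * dt * u)
        v.append(v[-1] + dt * u)
    return np.array(p), np.array(v)

def feasible(p, v, u_seq, lims):
    """Check the box constraints (6) on positions, velocities, accelerations."""
    p_min, p_max, v_min, v_max, u_min, u_max = lims
    u = np.asarray(u_seq)
    return (np.all((p_min <= p) & (p <= p_max)) and
            np.all((v_min <= v) & (v <= v_max)) and
            np.all((u_min <= u) & (u <= u_max)))
```

Because constant acceleration is represented exactly by (5), a constant input reproduces the textbook kinematics.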
Simplification 3 (Quadratic cost function)
The cost functions in the high- and low-level planners are quadratic. We use quadratic cost functions because they are widely used in the optimization field and are usually easy to deal with.
With these three simplifications, we solve the motion planning problem in a bi-level fashion. The high-level planner selects the final catching configuration q_c and catch time t_c, subject to the nonlinear nonconvex constraints in (2) and (4). Given the solution (q_c, t_c), the low-level path planner determines the intermediate trajectory waypoints, restricted only by the linear constraints in (5) and (6). This is a quadratic program that can be solved very efficiently. To ensure that the optimal values calculated by the high-level planner are feasible for the low-level planning problem, a box constraint on q_c, which depends on t_c and the initial robot conditions, is defined.
Compared to the parameterization methods used in the literature for the low-level planning problem, like trapezoidal velocity profiles [1, 2] and third-order polynomials [6], the QP technique we use allows more flexible trajectories due to the increased number of decision variables. Furthermore, via the kinematic model constraints in the quadratic program formulation, the generated trajectories are smoother and should be closer to the robot's actual behavior than trapezoidal velocity profiles and third-order polynomials. Details on the high- and low-level planners follow.
IV-B1 High-level goal planner
Given the initial robot configuration q_0 and velocity \dot{q}_0, the ball position and velocity trajectory predictions p(t), \dot{p}(t), and the initial time t_0, the high-level planner aims to find the optimal final catching configuration q_c and catch time t_c. Formally, this can be written as the following optimization problem:
\min_{q_c, t_c} \; h(q_c, t_c)    (7)
s.t.  constraints (2) and (4),  q_c \in \mathcal{B}(q_0, \dot{q}_0, t_c - t_0)
where h is a user-defined quadratic cost function for the high-level planner, and \mathcal{B} is the box constraint on q_c given the initial joint positions, velocities and the duration time.
Accurately computing \mathcal{B} would require calculating the optimal intermediate waypoints between q_0 and q_c under the kinematic constraints in (5) and (6) for every catch time guess. This is too time-consuming, so an approximation is needed. In this paper, we use the following approximation:
|q^i_c - q^i_0| \le \gamma \left( \dot{q}^i_{max} \, T - \frac{(\dot{q}^i_{max})^2}{2 \, u^i_{max}} \right), \qquad T = t_c - t_0    (8)
where T is the duration time, \dot{q}^i_{max} and u^i_{max} are the maximum velocity and acceleration of the i-th joint, and \gamma is a proportion parameter that needs to be tuned. The underlying assumption is that the maximum joint movement is proportional to the distance the joint can traverse under a trapezoidal velocity profile [1], namely when the joint first accelerates to its maximum speed and then holds that speed until the end. We choose this approximation because of its simplicity and the few corner cases that need to be considered in implementation. Other trajectory profiles, like third-order polynomials, can also be used for approximation [6]. The SQP algorithm is used to solve the problem specified in (7).
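The trapezoidal-profile distance bound in (8) is easy to compute in closed form. A small sketch (the function name and the short-duration branch, where the joint never reaches its maximum speed, are ours):

```python
def max_displacement(T, v_max, a_max, gamma=1.0):
    """Upper bound on the distance a joint can travel in time T under a
    trapezoidal profile: accelerate at a_max up to v_max, then hold v_max.
    gamma is the tuned proportion parameter from (8)."""
    t_acc = v_max / a_max            # time needed to reach maximum speed
    if T <= t_acc:                   # never reaches v_max: pure acceleration
        return gamma * 0.5 * a_max * T * T
    return gamma * (v_max * T - v_max * v_max / (2.0 * a_max))
```

This bound is evaluated per joint for each catch-time guess inside the SQP iterations, so its cost must stay negligible.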
IV-B2 Low-level path planner
Given the desired configuration q_c and catch time t_c from the high-level goal planner, the task of the low-level path planner is to generate a sequence of intermediate trajectory waypoints for each joint.
For the i-th joint, this problem can be formulated as:
\min_{u^i_0, \dots, u^i_{N-1}} \; l(p^i, u^i)    (9)
s.t.  (5), (6),  p^i_0 = q^i_0, \; v^i_0 = \dot{q}^i_0, \; p^i_N = q^i_c, \; v^i_N = \dot{q}^i_c
where l is a user-defined quadratic cost function for the low-level planner, N = \lfloor (t_c - t_0)/\delta t \rfloor is the finite horizon, with \lfloor x \rfloor returning the greatest integer less than or equal to x, and p^i, u^i are the position and control input sequences of the i-th joint.
This is an optimal control problem consisting of quadratic cost functions and linear constraints. Standard quadratic programming methods can be used to solve it efficiently [23]. With the optimal control input sequence u^i and the initial conditions, the optimal position sequence p^i can be constructed via the kinematic model in (5).
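To make the structure of this QP concrete, the sketch below solves a stripped-down single-joint version: it minimizes the change in acceleration between consecutive steps subject only to the endpoint equality constraints, via the KKT linear system of the equality-constrained QP. The inequality box constraints (6) are omitted for brevity (a full QP solver would add them), and all names are ours:

```python
import numpy as np

def min_accel_change_traj(p0, v0, pf, vf, T, N):
    """Choose accelerations u_0..u_{N-1} minimizing sum_k (u_{k+1}-u_k)^2
    subject to reaching (pf, vf) at time T under the double integrator (5)."""
    dt = T / N
    # Smoothness cost u^T H u with H = D^T D, D the first-difference matrix.
    D = np.diff(np.eye(N), axis=0)
    H = D.T @ D
    # Endpoint equality constraints A u = b, derived by unrolling (5):
    #   v_N = v0 + dt * sum_k u_k
    #   p_N = p0 + N*dt*v0 + dt^2 * sum_k (N - k - 0.5) * u_k
    a_v = np.full(N, dt)
    a_p = dt * dt * (N - np.arange(N) - 0.5)
    A = np.vstack([a_p, a_v])
    b = np.array([pf - p0 - N * dt * v0, vf - v0])
    # Solve the KKT system [[2H, A^T], [A, 0]] [u; lambda] = [0; b].
    K = np.block([[2 * H, A.T], [A, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(N), b])
    return np.linalg.solve(K, rhs)[:N]
```

Rolling the returned accelerations through (5) reproduces the requested endpoint, which is how the optimal position sequence p^i is reconstructed in the paper's formulation.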
IV-C Low-level Joint Control
Given the desired position trajectories generated by the motion planners, the task of the low-level joint controllers is to track the trajectories accurately. Accurate trajectory tracking is challenging due to the high-speed joint trajectories required for successful ball catching. Moreover, joint trajectory replanning may introduce discontinuities into the desired trajectories, which can be harder to track. To overcome these problems, we use the inverse dynamics learning technique developed in [13], shown in Fig. 4. It uses DNNs as an add-on block to the baseline feedback control system to achieve a unity map between desired and actual outputs.
This method learns an inverse dynamics model of the baseline closed-loop system, mapping the system's discrete state at time index k and the desired output r steps ahead to the reference signal for the baseline control system, where r is the relative degree [13]. For discrete-time systems, the vector relative degree can be intuitively understood as the number of time steps before a change in the input is reflected in the output, and it can easily be determined experimentally from the system's step response [13].
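Determining the relative degree from a recorded step response amounts to counting the time steps until the output first moves. A minimal sketch (function name and tolerance handling are ours):

```python
def relative_degree(step_response, tol=1e-6):
    """Estimate the relative degree from a step response recorded after a
    step input is applied at k = 0: the index of the first sample where
    the output deviates from its initial value by more than tol."""
    y0 = step_response[0]
    for k, y in enumerate(step_response):
        if abs(y - y0) > tol:
            return k
    return None  # output never moved within the recording
```

In practice one would average over several step experiments per joint, since measurement noise can trigger the threshold early.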
Specifically, for a single robot joint of the mobile manipulator, the DNN’s inputs and outputs are:
inputs: (p^i_k, \; v^i_k, \; p^{i,d}_{k+r}), \qquad output: u^i_k    (10)
where p^{i,d}_{k+r} is the reference signal sampled from the desired trajectory for the i-th joint at time index k + r, and u^i_k is the DNN's modified reference signal. As mentioned in [13], subtracting the current position p^i_k from both the inputs and outputs of the neural network can improve the DNN's generalization ability, since only reference signal offsets instead of the reference signal itself are learned:
inputs: (v^i_k, \; p^{i,d}_{k+r} - p^i_k), \qquad output: u^i_k - p^i_k    (11)
The modified position reference signal u^i_k can then be sent to the baseline control system to actuate the robot joint.
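At inference time, the add-on block in (11) reduces to a feature subtraction, a network evaluation, and adding the offset back. The sketch below illustrates this wiring; `dnn` stands in for any trained network mapping the feature vector to a scalar offset, and the exact feature layout here is our assumption, not the paper's specification:

```python
import numpy as np

def modified_reference(dnn, p_k, v_k, p_des_future):
    """Add-on inverse dynamics module (sketch of (11)): the DNN predicts the
    reference offset relative to the current position p_k, so only offsets
    are learned; the offset is added back to produce the modified reference.
    `dnn` is any callable from the feature vector to a scalar offset."""
    x = np.concatenate([[v_k], p_des_future - p_k])  # position-relative inputs
    return p_k + dnn(x)                              # add the offset back

# Hypothetical stand-in for a trained network: returning the last feature
# (the desired-position offset) passes the desired reference through
# unmodified, i.e. the add-on block reduces to the baseline system.
passthrough = lambda x: x[-1]
```

A trained network would instead output the offset that makes the closed-loop system actually reach p^{i,d}_{k+r} after r steps.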
V. Framework Implementation and Evaluation
This section presents implementation details and evaluation of each of the three framework components.
V-A Ball Estimation and Trajectory Prediction
First, we describe the implementation details of the ball state estimation and trajectory prediction. The modified Kalman filter described in Section IV-A is used to reduce noise in VICON position and velocity measurements. In our experiments, the VICON position measurements are very precise; therefore, the ball state estimate consists of the position measurements from the VICON system and the velocity estimates from the Kalman filter. Trajectory prediction is done through numerical integration with a fixed step size.
V-B Motion Planning
We first describe the implementation details of the high-level and low-level motion planners. The goal of the high-level planner is to find a catch configuration q_c and catch time t_c while minimizing the robot movement:
h(q_c, t_c) = w_a \, \|q^a_c - q^a_0\|^2 + w_b \, \|q^b_c - q^b_0\|^2    (12)
where w_a and w_b are weight parameters that penalize movement of the arm joints and the mobile base, respectively. As mentioned in Section II, the control accuracy of the mobile base is usually lower than that of the arm [11, 12]; therefore, we select w_b > w_a to reduce the mobile base motion. The proportion parameter \gamma used in the high-level planner (8) is tuned empirically. The high-level planner uses the SQP algorithm, which we implemented using the NLopt C++ library [24]. The initial guesses for the optimization variables are the robot's current configuration and a hard-coded catch time.
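The cost (12) splits the configuration into arm and base blocks with separate weights. A minimal sketch (function name and the 6+2 joint layout follow the system description; names are ours):

```python
import numpy as np

def catch_cost(q_c, q_0, w_arm, w_base):
    """Quadratic high-level cost (12): penalize arm and base displacement
    separately; choosing w_base > w_arm keeps the less accurate base still."""
    dq = np.asarray(q_c, dtype=float) - np.asarray(q_0, dtype=float)
    return w_arm * np.sum(dq[:6] ** 2) + w_base * np.sum(dq[6:] ** 2)
```

With a larger base weight, the SQP solution prefers catch configurations that stretch the arm before it commands the base to move.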
The goal of the low-level planner is to generate a smooth trajectory from the current configuration q_0 to the catch configuration q_c. This is achieved by penalizing the change in joint acceleration from one time step to the next along the trajectory, which can be written as follows for the i-th joint:
l(p^i, u^i) = \sum_{k=0}^{N-2} \left( u^i_{k+1} - u^i_k \right)^2    (13)
where N is calculated according to Section IV-B2, and \delta t is the fixed discretization step size. We solve the QP in the low-level planner with the qpOASES C++ library [25].
We use numerical experiments to evaluate the motion planner's performance, since we cannot control the initial ball state in real experiments when the ball is thrown by a human operator. We generated initial ball state samples whose positions are uniformly distributed on a circle around the robot. To ensure that the resulting ball trajectories are feasible given the robot's motion constraints, we first note the end effector's reachable range, as described in Section VI-B, and restrict the initial ball states such that the ball is predicted, via numerical integration of (1), to pass through a sphere centered around the end effector of the given robot configuration. The proposed motion planner finds a trajectory for nearly all initial ball samples. The average computation time is split between the high-level nonlinear planner and the low-level quadratic program, measured on an Intel Xeon CPU i7-8850H 2.60GHz computer. No sophisticated techniques such as multi-threaded programming are used to speed up computation.
V-C Low-level Control
We first describe the implementation of the learning-based low-level controller. We begin by estimating experimentally the relative degree of each joint; the arm joints and the mobile base joints have different relative degrees.
To learn the mapping in (11), we train one DNN per joint. Each DNN consists of two hidden layers, and the ReLU function is used as the activation function. To create the training and testing dataset, we use the low-level path planner (described in Section IV-B2) to generate trajectories similar to those used in real ball catching experiments. We characterize these trajectories by bounding the maximum joint displacement within a given amount of time, representing trajectories of different speeds; the first six joint displacements correspond to the arm and are specified in radians, and the last two correspond to the mobile base and are specified in meters.
The trajectories are executed using the robot's baseline control system, which consists of offboard PD controllers that generate desired joint velocities, and the onboard controllers of the UR10 arm and the mobile base that turn desired velocities into joint torques (UR10 arm) or wheel rotation velocities (mobile base). In this way we collect a dataset of input-output pairs for each joint. The dataset is split, with one part used for training and the rest for validation. We use the Adam optimizer implemented in TensorFlow to train the neural networks.
To test the influence of inverse dynamics learning on tracking performance, we generate testing trajectories with displacement bounds different from those of the training and validation trajectories in the dataset. The testing trajectories are executed ten times each with the learning technique and ten times each without it. The trajectory tracking root-mean-squared error (RMSE) over the ten trajectories is shown in Table I. It can be seen that the inverse dynamics learning method reduces the tracking error of every joint, with the highest error reduction corresponding to the mobile base.
TABLE I: Trajectory tracking RMSE with and without inverse dynamics learning.

Joint   | With DNN | Without DNN | Reduction
arm 1   |          |             |
arm 2   |          |             |
arm 3   |          |             |
arm 4   |          |             |
arm 5   |          |             |
arm 6   |          |             |
base    |          |             |
VI. Experiments
This section presents the experimental setup and ball catching experimental results. A video demonstrating the real robot experiments can be found at http://tiny.cc/ball_catch.
VI-A Experimental Setup
The mobile manipulator used in these experiments consists of a 6-DoF UR10 arm and a 3-DoF Ridgeback omnidirectional mobile base driven by four Mecanum wheels. We use symmetric joint motion bounds, i.e., the minimum values are the negatives of the maximum values. The six arm joints share the same maximum joint position and maximum acceleration, while their maximum joint velocities differ from joint to joint. The maximum positions, velocities and accelerations of the mobile base are the same in the x and y axes.
The three-finger end effector grasps a metal cup that is used to catch the ball, as shown in Fig. 5. The proposed framework is implemented via the Robot Operating System (ROS) and runs on a ThinkPad P52 laptop. The ball used in these experiments is a standard tennis ball, wrapped with retro-reflective tape in order to be visible to the VICON motion capture system, which provides the position and velocity measurements used in the ball state estimation module.
VI-B Ball Catching Experiment
We conducted experiments to test three main characteristics of the proposed framework: (i) the ball catching success rate, (ii) the impact of the learning-based controller, and (iii) the impact of the QP-based low-level planner.
VI-B1 Experiment A: Ball Catching Success Rate
In this experiment, we use the framework proposed in Section IV and throw balls from different locations while the robot starts from different initial configurations. Three initial robot configurations are chosen. For each initial robot configuration, we throw the ball from five different locations, with five throws per location, for a total of 75 ball catching experiments. All ball trajectories are shown in Fig. 6. The majority of the 75 thrown balls are caught successfully by the robot. The end effector position trajectory tracking RMSE over the experiments is computed per axis. Among the successful catches, the catch with the maximum end effector displacement also required a large mobile base displacement; the corresponding arm and base position trajectories for this catch are shown in Fig. 6(a) and 6(b). It can be seen that our mobile base is able to move accurately in both the x and y directions, while the humanoid in [2] can only move in one direction.
VI-B2 Experiment B: Impact of the Learning-based Controller
In this experiment, we are interested in showing that the learning-based controller improves tracking performance and, therefore, the success rate of the overall framework. It is hard to repeat a ball trajectory twice because a human throws the ball. For a fair comparison, in this experiment we throw balls from the same location while the robot starts from the same initial configuration, as shown in Fig. 1. We make an equal number of throws with and without inverse dynamics learning. For this particular configuration, the success rate drops by ten percent when inverse dynamics learning is not used, which shows its effectiveness. Note that the success rate for this particular configuration is lower than the success rate in Experiment A. Visual inspection of the data and video shows that the geometrical setup, i.e., the ball throwing location and robot initial configuration, used in this experiment requires more robot maneuvering than the other geometrical setups used in Experiment A.
VI-B3 Experiment C: Impact of the QP-based Low-level Planner
In this experiment, we are interested in showing the impact of the QP-based low-level planner on the ball catching success rate. To this end, we implemented the trapezoidal velocity profile method of [1] for the low-level planner, and tested it by throwing balls with the same geometrical setup as in Experiment B. Note that inverse dynamics learning is still used. The robot achieves a success rate similar to that achieved with the QP method.
The QP method does not have a significant impact on the ball catching success rate. One possible reason is that the way the robot accelerates and decelerates does not have a large impact on ball catching success due to the short duration of the trajectory [1]. However, the QP method does allow for more flexible and smooth trajectories, which may improve performance in tasks of longer duration. Finally, the similar success rates with the two different low-level planners may be due to the high accuracy of the learning-based controller when tracking arbitrary trajectories.
VI-C Failure analysis
The success of the ball catching task requires tight integration of every system component; a malfunction of any component results in a failure. Among all throws in Experiments A, B and C, failures occurred for the following reasons: inaccurate ball trajectory prediction, large tracking errors, and failure to find a feasible solution of the high-level planning problem. Large tracking errors occur when maneuvers are too aggressive and overshooting happens; we could improve this by penalizing aggressive trajectories.
VII. Conclusion
We presented a framework for accurate high-speed motion generation and execution on mobile manipulators, with ball catching as the application scenario. A modified Kalman filter that considers aerodynamic drag is used for estimating the ball's velocity, and ball trajectory prediction is done via numerical integration. A bi-level optimization scheme is proposed to handle complex trajectory requirements in real time. The high-level goal planning problem is formulated as a nonlinear optimization problem and solved by SQP; the low-level path planning problem is solved by QP for more flexible and smooth trajectories. The joint control is driven by an inverse dynamics learning method. This learning-based controller reduces joint tracking errors, which translates into an increase in success rate compared to a non-learning-based joint controller. The proposed framework is validated via extensive real-robot catching experiments under different configurations and achieves a high success rate, setting a new state of the art. In future work we aim to focus on dynamic planning instead of kinematic planning, which should improve the feasibility of the joint trajectories.
References
 [1] B. Bäuml, T. Wimböck, and G. Hirzinger, "Kinematically optimal catching a flying ball with a hand-arm-system," in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2010, pp. 2592–2599.
 [2] B. Bäuml, O. Birbach, T. Wimböck, U. Frese, A. Dietrich, and G. Hirzinger, "Catching flying balls with a mobile humanoid: System overview and design considerations," in Proc. of the IEEE-RAS International Conference on Humanoid Robots. IEEE, 2011, pp. 513–520.
 [3] D. M. Bodily, T. F. Allen, and M. D. Killpack, “Motion planning for mobile robots using inverse kinematics branching,” in 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017, pp. 5043–5050.
 [4] A. De Luca, G. Oriolo, and P. R. Giordano, “Kinematic modeling and redundancy resolution for nonholonomic mobile manipulators,” in Proc. of the IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2006, pp. 1867–1873.
 [5] L.-P. Ellekilde and H. I. Christensen, “Control of mobile manipulator using the dynamical systems approach,” in Proc. of the IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2009, pp. 1370–1376.
 [6] O. Koç, G. Maeda, and J. Peters, “Online optimal trajectory generation for robot table tennis,” Robotics and Autonomous Systems, vol. 105, pp. 121–137, 2018.
 [7] U. Frese, B. Bäuml, S. Haidacher, G. Schreiber, I. Schäfer, M. Hähnle, and G. Hirzinger, “Off-the-shelf vision for a robotic ball catcher,” in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 3, 2001, pp. 1623–1629.
 [8] S. S. Mirrazavi Salehian, M. Khoramshahi, and A. Billard, “A dynamical system approach for softly catching a flying object: Theory and experiment,” IEEE Transactions on Robotics, 2016.
 [9] R. Lampariello, D. Nguyen-Tuong, C. Castellini, G. Hirzinger, and J. Peters, “Trajectory planning for optimal robot catching in real-time,” in Proc. of the IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2011, pp. 3719–3726.
 [10] P. A. Flores Álvarez, “Modelling, simulation and experimental verification of a wheeled-locomotion system based on omnidirectional wheels,” Master’s thesis, Pontificia Universidad Católica del Perú, Ilmenau, Germany, 2016.
 [11] KUKA, “Mobile robotics KMR QUANTEC,” https://www.kuka.com/enca/products/mobility/mobilerobots/kmrquantec, [Online; accessed 20-February-2020].
 [12] Universal Robots, “Technical specifications UR10,” https://www.universalrobots.com/media/50880/ur10–˙˝bz.pdf, [Online; accessed 20-February-2020].
 [13] S. Zhou, M. K. Helwa, and A. P. Schoellig, “Design of deep neural networks as add-on blocks for improving impromptu trajectory tracking,” in Proc. of the IEEE Conference on Decision and Control (CDC), 2017, pp. 5201–5207.
 [14] H. Seraji, “A unified approach to motion control of mobile manipulators,” The International Journal of Robotics Research, vol. 17, no. 2, pp. 107–118, 1998.

 [15] A. Punjani and P. Abbeel, “Deep learning helicopter dynamics models,” in Proc. of the IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015, pp. 3223–3230.
 [16] M. T. Frye and R. S. Provence, “Direct inverse control using an artificial neural network for the autonomous hover of a helicopter,” in Proc. of the IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2014, pp. 4121–4122.
 [17] Q. Li, J. Qian, Z. Zhu, X. Bao, M. K. Helwa, and A. P. Schoellig, “Deep neural networks for improved, impromptu trajectory tracking of quadrotors,” in Proc. of the IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017, pp. 5183–5189.
 [18] K. P. Hawkins, “Analytic inverse kinematics for the universal robots ur5/ur10 arms,” Georgia Institute of Technology, Tech. Rep., 2013.
 [19] M. Müller, S. Lupashin, and R. D’Andrea, “Quadrocopter ball juggling,” in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2011, pp. 5113–5120.
 [20] M. Jakob, “Position and force control with redundancy resolution for mobile manipulators,” Master’s thesis, University of Stuttgart, Stuttgart, Germany, 2018.
 [21] T. D. Barfoot, State estimation for robotics. Cambridge University Press, 2017.
 [22] S. Kim, A. Shukla, and A. Billard, “Catching objects in flight,” IEEE Transactions on Robotics, vol. 30, no. 5, pp. 1049–1065, Oct 2014.
 [23] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
 [24] S. G. Johnson, “The NLopt nonlinear-optimization package,” http://github.com/stevengj/nlopt, [Online; accessed 20-February-2020].
 [25] H. Ferreau, C. Kirches, A. Potschka, H. Bock, and M. Diehl, “qpOASES: A parametric activeset algorithm for quadratic programming,” Mathematical Programming Computation, vol. 6, no. 4, pp. 327–363, 2014.