Mobile manipulators combine mobile platforms and robot arms. Unlike fixed-base industrial arms, mobile manipulators have a larger workspace and can deal with more flexible and complex tasks such as housekeeping, construction, and airplane assembly. Current research on mobile manipulators mainly focuses on techniques in slow-motion scenarios, such as kinematic redundancy resolution, optimal path planning, and control. However, mobile manipulators are capable of high-speed motions and thus have the potential to be used in applications such as industrial high-speed pick-and-place, or other challenging tasks such as table tennis. In this paper, we focus on high-speed trajectory generation and accurate tracking for mobile manipulators, with an emphasis on not compromising accuracy at higher speeds.
We focus on a benchmark task for accurate high-speed motions: object catching, specifically, ball catching. Within a fraction of a second, the robot needs to (i) predict the ball’s trajectory, (ii) generate and update a trajectory for graceful catching, and (iii) accurately track it with the mobile manipulator. There are numerous papers [1, 7, 8, 9] that address the ball catching problem with a fixed-base manipulator. However, using a mobile manipulator introduces additional challenges, due to the interplay between the following three characteristics:
Controlling wheel-driven bases: Mobile bases driven by Mecanum wheels, as used in this paper, usually have lower position accuracy than manipulators. One example is the position accuracy of the Mecanum-wheel-driven KUKA KMR Quantec compared to that of the UR10 arm [11, 12]. Thus, extra effort is needed to accurately control the base.
High precision in position and time: Precise end effector trajectory tracking, both spatially (in position and orientation) and temporally, is a prerequisite for successful catching. Tight precision requirements in both position and time must be met in this paper. These parameters vary depending on the gripper.
Real-time computation: On average, the duration of the ball’s free flight trajectory is one second, which puts a hard constraint on the computation time of ball prediction, motion planning and control. Motion planning is very challenging since complex constraints such as the nonlinear robot dynamics and forward kinematics must be considered.
Given the three challenges above, accurate high-speed motion generation for mobile manipulator ball catching has not been widely explored, with a few very notable exceptions. In this paper we introduce: (i) a bi-level motion planning scheme that generates smooth trajectories in real time despite nonlinear constraints, and (ii) a learning-based controller that improves tracking accuracy and enables the mobile manipulator to follow the planned trajectories sufficiently closely. Our proposed bi-level optimization scheme for motion planning uses sequential quadratic programming (SQP) to select the catching pose, and solves a low-level quadratic program (QP) that generates the trajectory to the catching pose. Our proposed learning-based controller learns the inverse dynamics of the system
to improve the control accuracy of the mobile manipulator. We use a modified Kalman Filter for improved ball state estimation, which incorporates the influence of aerodynamic drag in the state prediction step.
We present a thorough analysis and evaluation of the major components of the system. Experimental data shows that our bi-level optimization scheme achieves a short average run time, and that the inverse dynamics learning method substantially reduces tracking error. The success rate achieved on the real-robot ball catching task is higher than the success rates reported in [1, 2]. A video with experiments showing accurate high-speed motions can be found at http://tiny.cc/ball_catch.
The contributions of this paper are two-fold. First, we propose an algorithmic framework to generate and accurately follow high-speed trajectories with a mobile manipulator. The proposed framework has a bi-level optimization scheme for motion planning, which can generate smooth joint trajectories under complex constraints in real time. Additionally, the proposed framework has a learning-based controller to improve the tracking performance of the mobile manipulator. Our second contribution is the extensive evaluation of the performance of the proposed framework on the mobile manipulator, which includes a thorough error analysis of the major components. The framework is tested in real robot catching experiments and achieves a high success rate. This success requires a tight interplay between different system components and hardware constraints. Implementation details of such a complex system are presented, and should serve as a resource for similar system designs.
II Related Work
The work most similar to ours is the mobile humanoid system presented in [2], which consists of two light-weight arms and an omnidirectional mobile base. In their work, the mobile base is restricted to motion along a single axis. Further, they use customized hardware capable of high-speed motions. In contrast, in our work, we achieve high-speed motions in the horizontal plane with the mobile base, which brings the motion capacity of omnidirectional platforms into full play. Moreover, we use an off-the-shelf mobile manipulator and a learning-based controller for improved tracking accuracy, which makes our approach easy to generalize to other robotic platforms without the need for customized hardware and low-level access.
There are numerous works that try to improve the tracking performance of high-speed and aggressive trajectories. Non-learning-based methods [5, 14] rely on accurate analytical system dynamics models, which are hard to obtain. Learning-based methods mainly focus on learning the system’s dynamics or inverse dynamics models [15, 16]. In 
, a Deep Neural Network (DNN) is used for direct inverse control of a quadrotor. However, guaranteeing stability with a DNN in the control loop can be very difficult. In contrast, the inverse dynamics learning method from [13] uses a DNN as an add-on module outside the original closed-loop system, which is safer in terms of stability. This method's ability to achieve improved, impromptu trajectory tracking has been demonstrated on quadrotors [13, 17], but not on mobile manipulators.
III System Overview and Problem Statement
This section presents an overview of the mobile manipulator system and the flying ball model. Then, a formal statement of the ball catching problem is given.
III-A System Overview
The mobile manipulator system considered in this paper consists of a 6-DoF manipulator rigidly mounted on a 3-DoF omnidirectional mobile base. The yaw rotation of the mobile base is not used in this work since it is too slow for the application at hand. Thus the system has 8 DoF in total. We refer to these DoF as robot joints, for brevity. The joint configuration vector of the robot is , where is a vector containing the arm joint angles and represents the Cartesian position of the mobile base. The forward kinematics equations of the mobile manipulator shown in Fig. 2 are assumed to be known .
The flying ball's motion is modeled as a point mass under the influence of aerodynamic drag and gravity. The aerodynamic drag is proportional to the square of the speed. The ball's equation of motion can be written as follows:

$$\ddot{\mathbf{p}} = \mathbf{g} - K_D \, \|\dot{\mathbf{p}}\| \, \dot{\mathbf{p}}, \tag{1}$$

where $\mathbf{p}$ is the ball's Cartesian position in the world frame and $\dot{\mathbf{p}}$, $\ddot{\mathbf{p}}$ are the corresponding velocity and acceleration, $\mathbf{g}$ is the gravitational acceleration, $K_D$ is the aerodynamic drag coefficient, and $\|\cdot\|$ is the Euclidean norm.
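For concreteness, the point-mass model above can be forward-integrated numerically. The sketch below uses simple Euler integration; the drag coefficient, step size, and function names are illustrative assumptions, not the paper's implementation.

```python
import math

G = 9.81    # gravitational acceleration magnitude (m/s^2)
K_D = 0.01  # aerodynamic drag coefficient (illustrative value only)

def ball_acceleration(vel):
    """Point-mass model: gravity plus quadratic drag opposing the velocity."""
    speed = math.sqrt(sum(v * v for v in vel))
    acc = [-K_D * speed * v for v in vel]
    acc[2] -= G  # gravity acts along -z in the world frame
    return acc

def predict_trajectory(pos, vel, dt=0.005, horizon=1.0):
    """Forward Euler integration of the model over a finite horizon."""
    traj = [list(pos)]
    for _ in range(int(round(horizon / dt))):
        acc = ball_acceleration(vel)
        pos = [p + dt * v for p, v in zip(pos, vel)]
        vel = [v + dt * a for v, a in zip(vel, acc)]
        traj.append(list(pos))
    return traj
```

The same integration loop is the basis of the trajectory prediction described in Section IV-A, where the drag coefficient would come from the off-line estimate.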
III-B Problem Statement
Given initial robot configuration , ball state , and time , the goal is to catch the ball with the robot’s end effector at a certain catch time . To achieve this, two conditions need to be satisfied: 1) at catch time , the origin of the end effector frame should coincide with the ball’s position; and 2) the axis of the end effector frame should be aligned with the ball’s velocity vector. Formally, these constraints can be written as:
$$\mathbf{p}^{\mathrm{ee}}(\mathbf{q}_c) = \mathbf{p}(t_c), \qquad \angle\left(\mathbf{a}^{\mathrm{ee}}(\mathbf{q}_c),\, \dot{\mathbf{p}}(t_c)\right) = 0, \tag{2}$$

where $\mathbf{q}_c$ is the robot joint configuration at catch time $t_c$, $\mathbf{p}^{\mathrm{ee}}$ is the position of the end effector frame's origin in the world frame, $\mathbf{a}^{\mathrm{ee}}$ is the vector of the catching axis of the end effector in the world frame, and $\angle(\cdot,\cdot)$ refers to the angle between two vectors.
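The two catching conditions can be checked numerically. The sketch below is a hypothetical validity check; the position and angle tolerances are illustrative assumptions (the paper notes the actual values depend on the gripper).

```python
import math

def angle_between(u, v):
    """Angle in radians between two 3-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (norm_u * norm_v))))

def is_valid_catch(ee_pos, ee_axis, ball_pos, ball_vel,
                   pos_tol=0.02, ang_tol=math.radians(10.0)):
    """Check conditions 1) and 2): the end effector frame origin coincides
    with the ball position, and the end effector axis is aligned with the
    ball's velocity. Tolerances here are illustrative, not the paper's."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(ee_pos, ball_pos)))
    return dist <= pos_tol and angle_between(ee_axis, ball_vel) <= ang_tol
```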
This section presents the framework we propose, which includes three main components: (i) ball estimation and prediction, (ii) bi-level motion planning, and (iii) low-level control with inverse dynamics learning, as shown in Fig. 3.
IV-A Ball Estimation and Prediction
The first component estimates the ball state and uses it to generate trajectory predictions within a finite horizon. To estimate the ball state, we use a modified discrete-time Kalman filter that considers the influence of air drag in the state prediction step. In the state prediction step, we predict the ball state at the current time step given measurements up to the previous time step, by propagating the previous state estimate through the motion model in (1) as follows:
where the step size is the time interval between consecutive time indices. The measurement update step and the variance propagation and update are the same as in the vanilla Kalman filter. The aerodynamic drag coefficient is estimated off-line via a recursive least-squares estimator.
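The off-line recursive least-squares fit of the drag coefficient can be sketched as a scalar RLS on the relation between measured drag deceleration and squared speed. The regressor construction and initial values below are assumptions for illustration, not the paper's exact setup.

```python
def rls_update(theta, p, phi, y, lam=1.0):
    """One scalar recursive least-squares update.
    theta: current parameter estimate (the drag coefficient),
    p: estimate covariance, phi: regressor, y: measurement,
    lam: forgetting factor (1.0 = no forgetting)."""
    gain = p * phi / (lam + phi * p * phi)
    theta = theta + gain * (y - phi * theta)
    p = (1.0 - gain * phi) * p / lam
    return theta, p

def estimate_drag_coefficient(speeds, drag_decels):
    """Fit K_D in drag_decel = K_D * speed^2 from logged flight data,
    assuming gravity has already been subtracted from the accelerations."""
    theta, p = 0.0, 1e3   # illustrative initial estimate and covariance
    for s, d in zip(speeds, drag_decels):
        theta, p = rls_update(theta, p, s * s, d)
    return theta
```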
For trajectory prediction, given the estimated ball state, future ball trajectory predictions are made by numerically integrating (1) over a finite horizon. Hence, the ball trajectory prediction changes whenever a new state measurement arrives, and a new motion plan should be made. This requires that the motion planner can produce new plans in a very short time, so that the latest ball trajectory prediction can be used in the latest robot motion plan.
IV-B Motion Planning
In order to achieve better performance, the mobile manipulator's trajectory needs to be updated whenever a new ball trajectory prediction is available. The motion planner is bi-level: 1) the high-level planner calculates a feasible and optimal catching configuration and catch time that enable the end effector to intercept the ball's trajectory, and 2) the low-level planner generates a smooth joint trajectory that takes the robot from its current configuration to the desired catch configuration. Moreover, the motion planner must take into account kinematic redundancy, collision avoidance, and the robot's kinematic and dynamic constraints (e.g. position, velocity and torque constraints). This is a nonlinear and non-convex optimization problem. Solving such an optimization problem even off-line is still an open question in the field [1, 9].
To tackle this planning problem in an on-line fashion, we need to make the following three simplifications. Similar simplifications are also made in the ball catching literature and work well in experiments [1, 2, 9, 22].
Simplification 1 (Arm-base collision)
There are no nearby obstacles, and arm self-collisions do not occur; hence, no obstacle or self-collision check is needed. The only collision that may happen is a collision between the arm end effector and the mobile base.
This simplification is reasonable because during the catching experiments the arm usually extends to reach out for a catch. In such a scenario, arm self-collision is rare, but collisions between the arm and the mobile base are possible. Specifically, to prevent such a collision, we require the arm end effector to stay inside a semi-cylinder centered on the arm's base:
where is the end effector’s position in the arm frame which can be calculated via forward kinematics functions given the robot configuration , is the cylinder’s radius and is the cylinder’s height.
Through experiments, we noticed that as long as the end effector is inside this semi-cylinder at the catch time, the whole generated end effector trajectory usually stays in that space. Thus, the constraint in (4) is only checked at the catch time instead of at every time step of the trajectory.
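The semi-cylinder membership test is cheap to evaluate, which is what makes checking it only at the catch time inexpensive. A minimal sketch follows; the radius and height values are illustrative, and the "semi" half of the cylinder is assumed to be the half-space in front of the arm base.

```python
def inside_semi_cylinder(ee_pos_arm, radius=0.9, height=1.2):
    """Check that the end effector position (expressed in the arm base
    frame) lies inside the semi-cylinder of constraint (4).
    Assumptions: illustrative radius/height, and the allowed half of the
    cylinder is the x >= 0 half-space in front of the arm base."""
    x, y, z = ee_pos_arm
    in_half = x >= 0.0                        # assumed half-space
    in_disc = x * x + y * y <= radius * radius  # within the cylinder radius
    in_band = 0.0 <= z <= height              # within the cylinder height
    return in_half and in_disc and in_band
```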
Simplification 2 (Kinematic planning)
We use kinematic models to describe the joint motion, and specify objective functions and constraints on the joint accelerations instead of on torques.
This simplification works in experiments as long as maximum joint acceleration values are chosen conservatively and the low-level joint controls work properly. We use a double integrator as the kinematic model of a single joint and take the joint acceleration as the control input:

$$q_i[k+1] = q_i[k] + \Delta t \, \dot{q}_i[k] + \tfrac{1}{2} \Delta t^2 \, u_i[k], \qquad \dot{q}_i[k+1] = \dot{q}_i[k] + \Delta t \, u_i[k], \tag{5}$$

where the index $i$ represents the $i$-th joint, $\Delta t$ is the step size (the step size used for the joint motion equations is not necessarily the same as the step size used for the Kalman filter), and $u_i$ is the control input. To ensure trajectory feasibility, we have the following box constraints:

$$q_{i,\min} \le q_i[k] \le q_{i,\max}, \quad \dot{q}_{i,\min} \le \dot{q}_i[k] \le \dot{q}_{i,\max}, \quad u_{i,\min} \le u_i[k] \le u_{i,\max}, \tag{6}$$

where $q_{i,\min}$, $q_{i,\max}$, $\dot{q}_{i,\min}$, $\dot{q}_{i,\max}$, $u_{i,\min}$, $u_{i,\max}$ are the minimum and maximum values of the joint positions, velocities and accelerations, respectively.
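The double-integrator joint model and the box constraints can be sketched as follows. The zero-order-hold discretization shown is one common choice and an assumption about the exact form used; the bounds in the feasibility check mirror the symmetric limits used later in the experiments.

```python
def rollout(q0, v0, u_seq, dt):
    """Roll out a single-joint double integrator under inputs u_seq,
    using an exact zero-order-hold discretization (one common choice)."""
    qs, vs = [q0], [v0]
    for u in u_seq:
        qs.append(qs[-1] + dt * vs[-1] + 0.5 * dt * dt * u)
        vs.append(vs[-1] + dt * u)
    return qs, vs

def satisfies_box(qs, vs, u_seq, q_max, v_max, u_max):
    """Check symmetric box constraints on positions, velocities and
    accelerations along the whole trajectory."""
    return (all(abs(q) <= q_max for q in qs)
            and all(abs(v) <= v_max for v in vs)
            and all(abs(u) <= u_max for u in u_seq))
```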
Simplification 3 (Quadratic cost function)
The cost functions in the high- and low-level planners are quadratic. We use quadratic cost functions because the resulting problems are well studied in the optimization literature and can be solved efficiently.
With these three simplifications, we solve the motion planning problem in a bi-level fashion. The high-level planner selects the final catching configuration and catch time, while handling the nonlinear non-convex constraints in (2) and (4). Given this solution, the low-level path planner determines the intermediate trajectory waypoints, restricted only by the linear constraints in (5) and (6). The low-level problem is a quadratic program that can be solved very efficiently. To ensure that the optimal values calculated by the high-level planner are feasible for the low-level planning problem, a box constraint on the catch configuration, which depends on the catch time and the initial robot conditions, is imposed.
Compared to the parameterization methods used in the literature for the low-level planning problem, such as trapezoidal velocity profiles [1, 2] and third-order polynomials, the QP technique we use allows more flexible trajectories due to the increased number of decision variables. Furthermore, through the kinematic model constraints in the quadratic program formulation, the generated trajectories are smoother and should be closer to the robot's actual behavior when compared to trapezoidal velocity profiles and third-order polynomials. Details on the high- and low-level planners follow.
IV-B1 High-level goal planner
Given the initial robot configuration and velocity , ball position and velocity trajectory prediction and initial time , the high-level planner aims to find the optimal final catching configuration and catch time . Formally, this can be written as the following optimization problem:
where is a user-defined quadratic cost function for the high-level planner, and is the box constraint on given initial joint position, velocity and duration time.
Accurately evaluating this box constraint would require calculating the optimal intermediate waypoints between the initial and catch configurations under the kinematic constraints in (7) for every catch time guess. This is too time-consuming, and an approximation is needed. In this paper, we use the following approximation:
where the parameters are the duration time, the maximum velocity and acceleration of each joint, and a proportion parameter that needs to be tuned. The underlying assumption is that the maximum joint movement is proportional to the distance that the joint can traverse under a trapezoidal velocity profile, namely when the joint first accelerates to its maximum speed and then holds that speed until the end. We choose this approximation because of its simplicity and the few corner cases that need to be considered in the implementation. Other trajectory profiles, such as third-order polynomials, can also be used for the approximation. The SQP algorithm is used to solve the problem specified in (7).
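The trapezoidal-profile reachability approximation can be written out explicitly. In the sketch below, the proportion parameter and its default value are illustrative assumptions; the shape of the bound (accelerate at the limit, then cruise at maximum speed) follows the description above.

```python
def max_displacement(T, v_max, a_max, eta=0.8):
    """Approximate the maximum distance a joint can travel in time T under
    a trapezoidal velocity profile: accelerate at a_max up to v_max, then
    cruise. eta is the tuned proportion parameter (0.8 is illustrative)."""
    t_acc = v_max / a_max          # time needed to reach maximum speed
    if T <= t_acc:                 # never reaches maximum speed
        d = 0.5 * a_max * T * T
    else:                          # accelerate, then cruise until time T
        d = 0.5 * a_max * t_acc * t_acc + v_max * (T - t_acc)
    return eta * d
```

In the high-level planner this bound would be evaluated per joint for each catch time guess, turning the expensive feasibility check into a closed-form box constraint.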
IV-B2 Low-level path planner
Given the desired configuration and catch time from the high-level goal planner, the task of the low-level path planner is to generate a sequence of intermediate trajectory waypoints for each joint.
For the -th joint, this problem can be formulated as:
where is a user-defined quadratic cost function for the low-level planner, is the finite horizon and returns the greatest integer less than or equal to , and are the position and control input sequence of the -th joint.
This is an optimal control problem that consists of quadratic cost functions and linear constraints. Standard quadratic programming methods can be used to solve this problem efficiently . With the solution of optimal control input sequence and initial conditions , the optimal position sequence can be constructed via the kinematic model in (5).
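To make the structure of the low-level problem concrete, the sketch below solves a deliberately simplified version for a single joint: a minimum-acceleration-effort QP with only terminal equality constraints, solved through its KKT system. The cost, the omission of the box constraints in (6), and all names are simplifying assumptions for illustration, not the paper's planner.

```python
def solve_linear(A, b):
    """Solve the square system A x = b by Gaussian elimination with
    partial pivoting (adequate for the small KKT systems here)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def plan_joint_trajectory(q0, v0, q_goal, v_goal, n_steps, dt):
    """Minimal equality-constrained QP for one joint: minimize the sum of
    squared accelerations subject to reaching (q_goal, v_goal) at the end.
    Uses the zero-order-hold double integrator, so the terminal state is
    linear in the inputs u_0..u_{n-1}:
      v_n = v0 + dt * sum(u_k)
      q_n = q0 + n*dt*v0 + dt^2 * sum((n - k - 0.5) * u_k)."""
    n = n_steps
    A = [[dt] * n,
         [dt * dt * (n - k - 0.5) for k in range(n)]]
    b = [v_goal - v0, q_goal - q0 - n * dt * v0]
    # KKT system: [2I  A^T; A  0] [u; lam] = [0; b]
    dim = n + 2
    K = [[0.0] * dim for _ in range(dim)]
    rhs = [0.0] * dim
    for i in range(n):
        K[i][i] = 2.0
        for j in range(2):
            K[i][n + j] = A[j][i]
            K[n + j][i] = A[j][i]
    rhs[n], rhs[n + 1] = b[0], b[1]
    return solve_linear(K, rhs)[:n]   # optimal acceleration sequence
```

An off-the-shelf active-set QP solver such as qpOASES would additionally handle the inequality (box) constraints that this sketch leaves out.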
IV-C Low-level Joint Control
Given the desired position trajectories generated by the motion planner, the task of the low-level joint controllers is to accurately track them. Accurate trajectory tracking is challenging due to the high-speed joint trajectories required for successful ball catching. Moreover, joint trajectory replanning may introduce discontinuities into the desired trajectories, which makes them harder to track. To overcome these problems, we use the inverse dynamics learning technique developed in [13], shown in Fig. 4. It uses DNNs as an add-on block to a baseline feedback control system to achieve a unity map between the desired and actual outputs.
This method learns an inverse dynamics model of the baseline closed-loop system: , where is the system’s discrete state at time index , is the reference signal to the baseline control system, and is the relative degree . For discrete-time systems, the vector relative degree can be intuitively understood as the number of time steps before a change in the input is reflected in the output, which can be easily determined experimentally from the system’s step response .
Specifically, for a single robot joint of the mobile manipulator, the DNN’s inputs and outputs are:
where is the reference signal sampled from desired trajectories for the -th joint at time index , and is the DNN’s modified reference signal. As mentioned in , subtracting from both the inputs and outputs of the neural network can improve the DNN’s generalization ability since only the reference signal offsets instead of the reference signal itself are learned:
The modified position reference signal can then be sent to the baseline control system to actuate the robot joint.
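The reference-modification loop can be sketched generically. The function signatures below are illustrative stand-ins, not the paper's API: the DNN is abstracted as an offset function of the state and the reference change over the relative degree, and the baseline closed-loop system as a one-step transition function.

```python
def track_with_addon(ref, r, offset_fn, plant_step, state):
    """Run a desired trajectory through a reference-modification add-on.
    ref: desired output at each time index; r: estimated relative degree;
    offset_fn(state, delta): stand-in for the trained DNN offset in (11),
    where delta = ref[k + r] - ref[k]; plant_step(state, u): baseline
    closed-loop system, returning (new_state, output)."""
    outputs = []
    n = len(ref)
    for k in range(n):
        delta = ref[min(k + r, n - 1)] - ref[k]
        u = ref[k] + offset_fn(state, delta)   # modified reference signal
        state, y = plant_step(state, u)
        outputs.append(y)
    return outputs
```

With `offset_fn` returning zero this reduces to the baseline system; a well-trained offset compensates the r-step delay so the output follows the desired trajectory more closely.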
V Framework Implementation and Evaluation
This section presents implementation details and evaluation of each of the three framework components.
V-A Ball Estimation and Trajectory Prediction
First, we describe the implementation details of the ball state estimation and trajectory prediction. The modified Kalman filter described in Section IV-A is used to reduce noise in the VICON position and velocity measurements. In our experiments, the VICON position measurements are very precise. Therefore, the ball state estimate consists of the position measurements from the VICON system and the velocity estimates from the Kalman filter. Trajectory prediction is done through numerical integration with a fixed step size.
V-B Motion Planning
We first describe the implementation details of the high-level and low-level motion planner. The goal of the high-level planner is to find a catch configuration and catch time while minimizing the robot movement:
where the weight parameters penalize movement of the arm joints and the mobile base, respectively. As mentioned in Section II, the control accuracy of the mobile base is usually lower than that of the arm [11, 12]. Therefore, we select the weights to reduce the mobile base motion. The proportion parameter used in the high-level planner (8) is tuned empirically. The high-level planner uses the SQP algorithm, which we implemented using the NLopt C++ library. The initial guesses for the optimization variables are the robot's current configuration and a hard-coded catch time.
The goal of the low-level planner is to generate a smooth trajectory from current configuration to catch configuration . This is achieved by penalizing joint acceleration change from one time step to the next during the trajectory, which can be written as follows for the -th joint:
We use numerical experiments to evaluate the motion planner's performance, since we cannot control the ball's initial state in real experiments when the ball is thrown by a human operator. We generated initial ball state samples whose positions are uniformly distributed on a circle around the robot. In order to ensure that the resulting ball trajectories are feasible given the robot's motion constraints, we first note the range in which the end effector can move, as described in Section VI-B. We then restrict the initial ball states such that the ball is predicted, via numerical integration of (1), to pass inside a sphere centered around the end effector of the given robot configuration.
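A sampling routine of the kind described above might look like the following sketch. The circle radius, throwing speed, release height, and the strategy of aiming the horizontal velocity at the robot are all illustrative assumptions; in practice, samples whose predicted trajectories miss the reachable sphere would be rejected.

```python
import math
import random

def sample_initial_ball_state(robot_xy, r_circle=3.0, speed=6.0, z0=1.5):
    """Sample one initial ball state: position uniform on a circle of
    radius r_circle around the robot, horizontal velocity aimed at the
    robot, with an upward velocity component. All values illustrative."""
    ang = random.uniform(0.0, 2.0 * math.pi)
    x = robot_xy[0] + r_circle * math.cos(ang)
    y = robot_xy[1] + r_circle * math.sin(ang)
    # aim the horizontal velocity back toward the robot
    vx = speed * (robot_xy[0] - x) / r_circle
    vy = speed * (robot_xy[1] - y) / r_circle
    vz = random.uniform(2.0, 4.0)   # upward component of the throw
    return [x, y, z0], [vx, vy, vz]
```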
The proposed motion planner finds a trajectory for the large majority of the initial ball samples. The average computation time is short, split between the high-level nonlinear planner and the low-level (quadratic program) planner, on an Intel Core i7-8850H 2.60 GHz computer. No sophisticated techniques such as multi-threaded programming are used to speed up the computation.
V-C Low-level Control
We first describe the implementation of the learning-based low-level controller. The relative degree is estimated experimentally, separately for the arm joints and for the mobile base joints.
To learn the mapping function in (11), we train one DNN for each joint. Each DNN consists of two hidden layers with the ReLU activation function. To create the training and testing dataset, we use the low-level path planner (described in Section IV-B2) to generate trajectories similar to those used in the real ball catching experiments. We characterize these trajectories by bounding the maximum joint displacement within a given amount of time. In our experiments, we use several displacement bounds to represent trajectories with different speeds, where the first six joint displacements correspond to the arm and are specified in radians, and the last two correspond to the mobile base and are specified in meters.
The trajectories are executed using the robot's baseline control system, which consists of off-board PD controllers that generate desired joint velocities, and the on-board controllers of the UR10 arm and the mobile base that turn the desired velocities into joint torques (UR10 arm) or wheel rotation velocities (mobile base). In this way, we collect a dataset of input-output pairs for each joint. The dataset is divided such that one portion is used for training and the rest for validation. We use the Adam optimizer implemented in TensorFlow to train the neural networks.
To test the influence of inverse dynamics learning on tracking performance, we generate testing trajectories with maximum displacements and durations different from those of the training and validation trajectories in the dataset. Each testing trajectory is executed ten times with the learning technique and ten times without it. The trajectory tracking root-mean-square error (RMSE) over the ten runs is shown in Table I. It can be seen that the inverse dynamics learning method reduces the tracking error of every joint, with the highest error reduction corresponding to the mobile base.
|Joint||With DNN ||Without DNN ||Reduction|
This section presents the experimental setup and ball catching experimental results. A video demonstrating the real robot experiments can be found at http://tiny.cc/ball_catch.
VI-A Experimental Setup
The mobile manipulator used in this experiment consists of a 6-DoF UR10 arm and a 3-DoF Ridgeback omnidirectional mobile base driven by four Mecanum wheels. We use symmetric joint motion boundary values. The six arm joints share the same maximum joint positions and accelerations, while their maximum joint velocities are specified per joint. The maximum positions, velocities and accelerations of the mobile base are set for the two planar axes.
The three-finger end effector grasps a metal cup that is used to catch the ball, as shown in Fig. 5. The proposed framework is implemented using the Robot Operating System (ROS) and runs on a ThinkPad P52 laptop. The ball used in these experiments is a standard tennis ball. The ball is wrapped with retro-reflective tape in order to be visible to the VICON motion capture system, which provides the position and velocity measurements used in the ball state estimation module.
VI-B Ball Catching Experiment
We conducted experiments to test three main characteristics of the proposed framework: (i) ball catching success rate, (ii) impact of the learning-based controller, and (iii) impact of QP-based low-level planner.
VI-B1 Experiment A: Ball Catching Success Rate
In this experiment, we use the framework proposed in Section IV and throw balls from different locations while the robot starts from different initial configurations. Three initial robot configurations are chosen. For each initial robot configuration, we throw the ball from five different locations, five throws per location, for a total of 75 ball catching experiments. All ball trajectories are shown in Fig. 6. The robot catches the large majority of the 75 throws. The end effector position trajectory tracking RMSE is evaluated over the experiments for each axis. Among the successful catches, the catch with the maximum end effector displacement also involves a significant mobile base displacement; the corresponding arm and base position trajectories are shown in Fig. 6(a) and 6(b). It can be seen that our mobile base is able to move accurately in both planar directions, while the humanoid in [2] can only move in one direction.
VI-B2 Experiment B: Impact of Learning-based Controller
In this experiment, we are interested in showing that the learning-based controller improves tracking performance and therefore the success rate of the overall framework. It is hard to repeat a ball trajectory twice because a human throws the ball. For a fair comparison, in this experiment we throw balls from the same location while the robot starts from the same initial configuration, as shown in Fig. 1. We make one set of throws when the robot uses inverse dynamics learning and one set when it does not. The success rate for this particular configuration is ten percent lower without inverse dynamics learning, which shows the effectiveness of the method. Note that the success rate for this particular configuration is lower than the success rate in Experiment A. Visual inspection of the data and video shows that the geometrical setup, i.e. the ball throwing location and robot initial configuration, used in this experiment requires more robot maneuvers than the other geometrical setups used in Experiment A.
VI-B3 Experiment C: Impact of QP-based Low-level Planner
In this experiment, we are interested in the impact of the QP-based low-level planner on the ball catching success rate. To study this, we implemented a trapezoidal velocity profile method [1, 2] for the low-level planner and tested it by throwing balls with the same geometrical setup as in Experiment B. Note that inverse dynamics learning is still used. The robot achieves a success rate similar to that obtained with the QP method.
The QP method does not have a significant impact on the ball catching success rate. One possible reason is that, due to the short duration of the trajectories, the way the robot accelerates and decelerates does not have a large impact on catching success. However, the QP method does allow for more flexible and smooth trajectories, which may improve performance in tasks of longer duration. Finally, the similar success rates of the two low-level planners may also be due to the high accuracy of the learning-based controller when tracking arbitrary trajectories.
VI-C Failure analysis
The success of the ball catching task requires a tight integration of every system component; the malfunction of any component results in a failure. Among all throws in Experiments A, B and C, failures occurred for the following reasons: inaccurate ball trajectory prediction, large tracking errors, and failure to find a feasible solution of the high-level planning problem. Large tracking errors occur when maneuvers are too aggressive and overshooting happens. We could improve this by penalizing aggressive trajectories.
We presented a framework for accurate high-speed motion generation and execution on mobile manipulators, with ball catching as the application scenario. A modified Kalman filter that accounts for aerodynamic drag is used to estimate the ball's velocity, and ball trajectory prediction is done via numerical integration. A bi-level optimization scheme is proposed to handle complex trajectory requirements in real time with a short average run time. The high-level goal planning problem is formulated as a nonlinear optimization problem and solved by SQP. The low-level path planning problem is solved by QP, yielding more flexible and smooth trajectories. The joint control is driven by an inverse dynamics learning method. This learning-based controller substantially reduces joint tracking errors, which translates into an increase in success rate compared to a non-learning-based joint controller. The proposed framework is validated via extensive real robot catching experiments under different configurations and achieves a high success rate, setting a new state of the art. In future work, we aim to focus on dynamic planning instead of kinematic planning, which should improve the feasibility of the joint trajectories.
-  B. Bäuml, T. Wimböck, and G. Hirzinger, “Kinematically optimal catching a flying ball with a hand-arm-system,” in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2010, pp. 2592–2599.
-  B. Bäuml, O. Birbach, T. Wimböck, U. Frese, A. Dietrich, and G. Hirzinger, “Catching flying balls with a mobile humanoid: System overview and design considerations,” in Proc. of the IEEE-RAS International Conference on Humanoid Robots. IEEE, 2011, pp. 513–520.
-  D. M. Bodily, T. F. Allen, and M. D. Killpack, “Motion planning for mobile robots using inverse kinematics branching,” in 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017, pp. 5043–5050.
-  A. De Luca, G. Oriolo, and P. R. Giordano, “Kinematic modeling and redundancy resolution for nonholonomic mobile manipulators,” in Proc. of the IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2006, pp. 1867–1873.
-  L.-P. Ellekilde and H. I. Christensen, “Control of mobile manipulator using the dynamical systems approach,” in Proc. of the IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2009, pp. 1370–1376.
-  O. Koç, G. Maeda, and J. Peters, “Online optimal trajectory generation for robot table tennis,” Robotics and Autonomous Systems, vol. 105, pp. 121–137, 2018.
-  U. Frese, B. Bauml, S. Haidacher, G. Schreiber, I. Schäfer, M. Hahnle, and G. Hirzinger, “Off-the-shelf vision for a robotic ball catcher,” in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 3, 2001, pp. 1623–1629.
-  S. S. Mirrazavi Salehian, M. Khoramshahi, and A. Billard, “A dynamical system approach for catching softly a flying object: Theory and experiment,” IEEE Transactions on Robotics, 2016.
-  R. Lampariello, D. Nguyen-Tuong, C. Castellini, G. Hirzinger, and J. Peters, “Trajectory planning for optimal robot catching in real-time,” in Proc. of the IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2011, pp. 3719–3726.
-  P. A. Flores Álvarez, “Modelling, simulation and experimental verification of a wheeled-locomotion system based on omnidirectional wheels,” Master’s thesis, Pontificia Universidad Católica del Perú, Ilmenau, Germany, 2016.
-  KUKA, “Mobile robotics kmr quantec,” https://www.kuka.com/en-ca/products/mobility/mobile-robots/kmr-quantec, [Online; accessed 20-February-2020].
-  U. Robots, “Technical specifications ur10,” https://www.universal-robots.com/media/50880/ur10_bz.pdf, [Online; accessed 20-February-2020].
-  S. Zhou, M. K. Helwa, and A. P. Schoellig, “Design of deep neural networks as add-on blocks for improving impromptu trajectory tracking,” in Proc. of the IEEE Conference on Decision and Control (CDC), 2017, pp. 5201–5207.
-  H. Seraji, “A unified approach to motion control of mobile manipulators,” The International Journal of Robotics Research, vol. 17, no. 2, pp. 107–118, 1998.
-  A. Punjani and P. Abbeel, “Deep learning helicopter dynamics models,” in Proc. of the IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015, pp. 3223–3230.
-  M. T. Frye and R. S. Provence, “Direct inverse control using an artificial neural network for the autonomous hover of a helicopter,” in Proc. of the IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2014, pp. 4121–4122.
-  Q. Li, J. Qian, Z. Zhu, X. Bao, M. K. Helwa, and A. P. Schoellig, “Deep neural networks for improved, impromptu trajectory tracking of quadrotors,” in Proc. of the IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017, pp. 5183–5189.
-  K. P. Hawkins, “Analytic inverse kinematics for the universal robots ur-5/ur-10 arms,” Georgia Institute of Technology, Tech. Rep., 2013.
-  M. Müller, S. Lupashin, and R. D’Andrea, “Quadrocopter ball juggling,” in Proc. of the IEEE/RSJ international conference on Intelligent Robots and Systems. IEEE, 2011, pp. 5113–5120.
-  M. Jakob, “Position and force control with redundancy resolution for mobile manipulators,” Master’s thesis, University of Stuttgart, Stuttgart, Germany, 2018.
-  T. D. Barfoot, State estimation for robotics. Cambridge University Press, 2017.
-  S. Kim, A. Shukla, and A. Billard, “Catching objects in flight,” IEEE Transactions on Robotics, vol. 30, no. 5, pp. 1049–1065, Oct 2014.
-  S. Boyd, S. P. Boyd, and L. Vandenberghe, Convex optimization. Cambridge university press, 2004.
-  S. G. Johnson, “The nlopt nonlinear-optimization package,” http://github.com/stevengj/nlopt, [Online; accessed 20-February-2020].
-  H. Ferreau, C. Kirches, A. Potschka, H. Bock, and M. Diehl, “qpOASES: A parametric active-set algorithm for quadratic programming,” Mathematical Programming Computation, vol. 6, no. 4, pp. 327–363, 2014.