I Introduction
In order to control a robot, a model describing the relation between the actuator inputs, the interactions with the world and the body accelerations is required. This model is called the dynamics model of the robot. A dynamics model can be obtained from first principles in mechanics, using the techniques of rigid body dynamics (RBD) [1], resulting in a parametric model in which the values of physically meaningful parameters must be provided to complete the fixed structure of the model. Alternatively, the dynamics model can be obtained from experimental data using Machine Learning techniques, resulting in a nonparametric model.
Traditional parametric dynamics methods rely on several assumptions, such as the rigidity of the links or a simple analytical form for friction, which may not hold in real systems. On the other hand, nonparametric methods based on algorithms such as Kernel Ridge Regression (KRR)
[2, 3, 4], Kernel Regularized Least Squares (KRLS) [5] or Gaussian Processes [6] can model the dynamics by extrapolating the input-output relationship directly from the available data (note that KRR and KRLS have very similar formulations, and that both are equivalent to the techniques derived from Gaussian Processes, as explained for instance in Chapter 6 of [4]). If a suitable kernel function is chosen, the nonparametric model is a universal approximator which can account for dynamics effects that are not captured by the parametric model. Still, nonparametric models have no prior knowledge about the target function to be approximated; they therefore need a sufficient amount of training examples in order to produce accurate predictions on the entire input space. If the learning phase has been performed offline, both approaches are sensitive to the variation of the mechanical properties over long time spans, mainly caused by temperature shifts and wear. Even the inertial parameters can change over time: for example, if the robot grasps a heavy object, the resulting change in dynamics can be described by a change of the inertial parameters of the hand. A solution to this problem is to address the variations of the identified system properties by learning incrementally, continuously updating the model as new data becomes available. In this paper we propose a novel technique that joins parametric and nonparametric model learning in an incremental fashion.

Author, Year | Parametric | Nonparametric

Nguyen-Tuong, 2010 [7] | Batch | Batch
Gijsberts, 2011 [8] | - | Incremental
Tingfan Wu, 2012 [9] | Batch | Batch
De La Cruz, 2012 [10] | CAD | Incremental
Camoriano, 2015 | Incremental | Incremental
In [10] the parametric part is used only for initializing the nonparametric model.
Classical methods for physics-based dynamics modeling can be found in [1]. These methods require the identification of the mechanical parameters of the rigid bodies composing the robot [11, 12, 13, 14], which can then be employed in model-based control and state estimation schemes.
In [7] the authors present a learning technique which combines prior knowledge about the physical structure of the mechanical system and learning from available data with Gaussian Process Regression (GPR) [6]. A similar approach is presented in [9]. Both techniques require an offline training phase and are not incremental, limiting them to scenarios in which the properties of the system do not change significantly over time.
In [10] an incremental semiparametric robot dynamics learning scheme based on Locally Weighted Projection Regression (LWPR) [15] is presented, initialized using a linearized parametric model. However, this approach uses a fixed parametric model that is not updated as new data becomes available. Moreover, LWPR has been shown to underperform with respect to other methods (e.g. [8]).
In [8], a fully nonparametric incremental approach for inverse dynamics learning with constant update complexity is presented, based on kernel methods [16] (in particular KRR) and random features [17]. The incremental nature of this approach allows for adaptation to changing conditions in time. The authors also show that the proposed algorithm outperforms other methods such as LWPR, GPR and Local Gaussian Processes (LGP) [18], both in terms of accuracy and prediction time. Nevertheless, the fully nonparametric nature of this approach undermines the interpretability of the inverse dynamics model.
In this work we propose a method that is incremental with fixed update complexity (as [8]) and semiparametric (as [7] and [9]). The fixed update complexity and prediction time are key properties of our method, enabling real-time performance. Both the parametric and nonparametric parts can be updated, as opposed to [10], in which only the nonparametric part is. A comparison between the existing literature and our incremental method is reported in Table I. We validate the proposed method with experiments performed on an arm of the iCub humanoid robot [19].
The article is organized as follows. Section II introduces the existing techniques for parametric and nonparametric robot dynamics learning. In Section III, a complete description of the proposed semiparametric incremental learning technique is introduced. Section IV presents the validation of our approach on the iCub humanoid robotic platform. Finally, Section V summarizes the content of our work.
II Background
II-A Notation
The following notation is used throughout the paper.

- The set of real numbers is denoted by $\mathbb{R}$. Let $u$ and $v$ be two $n$-dimensional column vectors of real numbers (unless specified otherwise), i.e. $u, v \in \mathbb{R}^n$; their inner product is denoted by $u^\top v$, with "$\top$" the transpose operator.
- The Frobenius norm of either a vector or a matrix of real numbers is denoted by $\| \cdot \|$.
- $I_n$ denotes the identity matrix of dimension $n$; $0_n$ denotes the zero column vector of dimension $n$; $0_{n \times m}$ denotes the zero matrix of dimension $n \times m$.
II-B Parametric Models of Robot Dynamics
Robot dynamics parametric models are used to represent the relation connecting the geometric and inertial parameters with some dynamic quantities that depend uniquely on the robot model. A typical example is obtained by writing the robot inverse dynamics equation in linear form with respect to the robot inertial parameters $\pi$:

$\tau = \Phi(q, \dot{q}, \ddot{q})\, \pi$   (1)

where: $q$ is the vector of joint positions, $\tau$ is the vector of joint torques, $\pi$ is the vector of the identifiable (base) inertial parameters [1], and $\Phi(q, \dot{q}, \ddot{q})$ is the "regressor", i.e. a matrix that depends only on the robot kinematic parameters. In the rest of the paper we indicate with $x$ the triple given by $(q, \dot{q}, \ddot{q})$. Other parametric models write different measurable quantities as a product of a regressor and a vector of parameters, for example the total energy of the robot [20], the instantaneous power provided to the robot [21], the sum of all external forces acting on the robot [22] or the center of pressure of the ground reaction forces [23]. Regardless of the choice of the measured variable $y$, the structure of the regressor is similar:

$y = \Phi(x)\, \pi$   (2)

where $y$ is the measured quantity.
The vector $\pi$ is composed of certain linear combinations of the inertial parameters of the links, the base inertial parameters [24]. In particular, the inertial parameters of a single body are its mass, its first moment of mass expressed in a body-fixed frame, and its inertia matrix expressed in the orientation of the body-fixed frame and with respect to its origin.
In parametric modeling of robot dynamics, the regressor structure depends on the kinematic parameters of the robot, which are obtained from CAD models of the robot through kinematic calibration techniques. Similarly, the inertial parameters $\pi$ can also be obtained from CAD models of the robot; however, these models may be unavailable, for example because the manufacturer of the robot does not provide them. In this case the usual approach is to estimate $\pi$ from experimental data [14]. To do so, given $n$ measures of the quantity $y$, stacking (2) for the $n$ samples it is possible to write:
$\bar{y} = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} \Phi(x_1) \\ \vdots \\ \Phi(x_n) \end{bmatrix} \pi = \bar{\Phi}\, \pi$   (3)

This equation can then be solved in the least squares (LS) sense to find an estimate $\hat{\pi}$ of the base inertial parameters. Given the training trajectories, it is possible that not all parameters in $\pi$ can be estimated well, since the problem in (3) can be ill-posed; hence this equation is usually solved as a Regularized Least Squares (RLS) problem. Defining $\bar{y}$ and $\bar{\Phi}$ as above, the RLS problem solved for the parametric identification is:

$\hat{\pi} = \arg\min_{\pi} \| \bar{y} - \bar{\Phi}\, \pi \|^2 + \lambda \| \pi \|^2$   (4)
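As a minimal numerical sketch of (3)-(4), the base parameters can be recovered by RLS on synthetic data. The regressor below is a random stand-in for the RBD regressor (not a real robot model), and all sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: p base inertial parameters, n samples, each sample
# yielding a 6-dimensional measurement (e.g. an F/T reading).
p, n, d_out = 10, 200, 6
pi_true = rng.normal(size=p)                     # synthetic "true" parameters

# Stacked regressor matrix of eq. (3); random here, standing in for Phi-bar.
Phi_bar = rng.normal(size=(n * d_out, p))
y_bar = Phi_bar @ pi_true + 0.01 * rng.normal(size=n * d_out)

# RLS solution of (4): pi_hat = argmin ||y - Phi pi||^2 + lam ||pi||^2
lam = 1e-6
pi_hat = np.linalg.solve(Phi_bar.T @ Phi_bar + lam * np.eye(p),
                         Phi_bar.T @ y_bar)

print(np.linalg.norm(pi_hat - pi_true))          # small on well-excited data
```

On a poorly excited trajectory some directions of the parameter space are unobservable, which is exactly the ill-posedness that the regularization term in (4) mitigates.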
II-C Nonparametric Modeling with Kernel Methods
Consider a probability distribution $\rho$ over the probability space $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X}$ is the input space (the space of the measured attributes) and $\mathcal{Y}$ is the output space (the space of the outputs to be predicted). In a nonparametric modeling setting, the goal is to find a function $\hat{f}$, belonging to a set of measurable functions $\mathcal{H}$ called the hypothesis space, such that

$\hat{f} = \arg\min_{f \in \mathcal{H}} \mathcal{E}(f), \qquad \mathcal{E}(f) = \int_{\mathcal{X} \times \mathcal{Y}} \ell(f(x), y)\, d\rho(x, y)$   (5)

where $x \in \mathcal{X}$ and $y \in \mathcal{Y}$ are row vectors, $\mathcal{E}(f)$ is called the expected risk and $\ell$ is the loss function. In the rest of this work, we consider the squared loss $\ell(f(x), y) = \| y - f(x) \|^2$.
Note that the distribution $\rho$ is unknown, and that we assume access to a discrete and finite set of $n$ measured data points $\{(x_i, y_i)\}_{i=1}^n$, drawn independently and identically distributed (i.i.d.) according to $\rho$.
In the context of kernel methods [16], $\mathcal{H}$ is a reproducing kernel Hilbert space (RKHS). An RKHS is a Hilbert space of functions for which there exists a function $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, called a reproducing kernel, such that the following properties hold for all $x \in \mathcal{X}$ and $f \in \mathcal{H}$:

1. $k(x, \cdot) \in \mathcal{H}$;
2. $f(x) = \langle f, k(x, \cdot) \rangle_{\mathcal{H}}$ (reproducing property),

where $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ indicates the inner product in $\mathcal{H}$. The reproducing kernel can be shown to be symmetric positive definite (SPD). We also define the kernel matrix $K \in \mathbb{R}^{n \times n}$, with $K_{ij} = k(x_i, x_j)$, which is symmetric and positive semidefinite (SPSD).
The optimization problem outlined in (5) can be approached empirically by means of many different algorithms, among which one of the most widely used is Kernel Regularized Least Squares (KRLS) [3, 5]. In KRLS, a regularized solution $\hat{f}$ is found by solving

$\hat{f} = \arg\min_{f \in \mathcal{H}} \sum_{i=1}^{n} \| y_i - f(x_i) \|^2 + \lambda \| f \|_{\mathcal{H}}^2$   (6)

where $\lambda > 0$ is called the regularization parameter. The solution to (6) exists and is unique. Following the representer theorem [16], the solution can be conveniently expressed as

$\hat{f}(x) = \sum_{i=1}^{n} \alpha_i\, k(x_i, x)$   (7)

with $\alpha = (K + \lambda I_n)^{-1} Y$, $\alpha_i$ the $i$-th row of $\alpha$ and $Y$ the matrix of stacked training outputs. It is therefore necessary to invert and store the kernel matrix $K$, which implies $O(n^3)$ time and $O(n^2)$ memory complexities, respectively. Such complexities render the above-mentioned KRLS approach prohibitive in settings where $n$ is large, including the one treated in this work. This limitation can be dealt with by resorting to approximated methods such as random features, which will now be described.
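As an illustrative sketch (not the paper's implementation), the KRLS estimator with a Gaussian kernel fits in a few lines of NumPy; the exact scaling of the regularization parameter is a convention assumed here, and the 1-D regression problem is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(A, B, sigma=1.0):
    # k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)), computed pairwise
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

# Synthetic 1-D regression problem, purely illustrative.
n = 100
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=n)

# KRLS coefficients: alpha = (K + lam I)^{-1} y
lam = 0.1
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(n), y)

# Prediction via the representer form (7): f(x) = sum_i alpha_i k(x_i, x)
X_test = np.linspace(-3, 3, 50)[:, None]
y_pred = gaussian_kernel(X_test, X) @ alpha
```

Note that both training cost and model size grow with n here, which is precisely the scalability problem that random features address.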
II-C1 Random Feature Maps for Kernel Approximation
The random features approach was first introduced in [17], and since then it has been widely applied in the field of large-scale Machine Learning. This approach leverages the fact that the kernel function can be expressed as

$k(x, x') = \langle \phi(x), \phi(x') \rangle$   (8)

where $\phi$ is a feature map associated with the kernel, which maps the input points from the input space to a feature space whose dimensionality depends on the chosen kernel. When this dimensionality is very large, directly computing the kernel as in (8) enables the computation of the solution, as we have seen for KRLS. However, the kernel matrix can become too cumbersome to invert and store as $n$ grows. A random feature map $\tilde{\phi} : \mathcal{X} \to \mathbb{R}^D$ directly approximates the feature map $\phi$, so that
$k(x, x') \approx \tilde{\phi}(x)\, \tilde{\phi}(x')^\top$   (9)

$D$ can be chosen according to the desired approximation accuracy, as guaranteed by the convergence bounds reported in [17, 25]. In particular, we will use random Fourier features for approximating the Gaussian kernel

$k(x, x') = e^{-\frac{\| x - x' \|^2}{2\sigma^2}}$   (10)
The approximated feature map in this case is $\tilde{\phi}(x) = [\tilde{\phi}_1(x), \dots, \tilde{\phi}_D(x)]$, where

$\tilde{\phi}_i(x) = \cos(x\, \omega_i + \beta_i)$   (11)

with $\omega_i$ a column vector. The fundamental theoretical result on which the random Fourier features approximation relies is Bochner's Theorem [26]. The latter states that a shift-invariant kernel $k(x - x')$ on $\mathbb{R}^d$ is positive definite if and only if its Fourier transform $p(\omega)$ is a non-negative measure. If this holds, by the definition of the Fourier transform we can write

$k(x - x') = \int_{\mathbb{R}^d} p(\omega)\, e^{i \omega^\top (x - x')^\top}\, d\omega$   (12)
which can be approximated by performing an empirical average over $D$ samples $\omega_i \sim p(\omega)$, $\beta_i \sim U[0, 2\pi)$, as follows:

$k(x - x') \approx \frac{1}{D} \sum_{i=1}^{D} 2 \cos(x\, \omega_i + \beta_i) \cos(x'\, \omega_i + \beta_i)$   (13)
Therefore, it is possible to map the input data as $\tilde{x} = \tilde{\phi}(x) \in \mathbb{R}^D$, a row vector, to obtain a nonlinear and nonparametric model of the form

$\hat{f}(x) = \tilde{\phi}(x)\, \tilde{W}$   (14)

approximating the exact kernelized solution (7), with $\tilde{W} \in \mathbb{R}^{D \times T}$. Note that the approximated model is nonlinear in the input space, but linear in the random features space. We can therefore introduce the regularized linear regression problem in the random features space as follows:

$\tilde{W} = \arg\min_{W \in \mathbb{R}^{D \times T}} \| Y - \tilde{X} W \|^2 + \lambda \| W \|^2$   (15)

where $\tilde{X} \in \mathbb{R}^{n \times D}$ is the matrix of the training inputs in which each row has been mapped by $\tilde{\phi}$. The main advantage of performing a random feature mapping is that it allows us to obtain a nonlinear model by applying linear regression methods. For instance, Regularized Least Squares (RLS) can compute the solution of (15) with $O(nD^2)$ time and $O(D^2)$ memory complexity. Once $\tilde{W}$ is known, the prediction $\hat{y}$ for a mapped sample $\tilde{x}$ can be computed as $\hat{y} = \tilde{x}\, \tilde{W}$.
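The quality of the random Fourier approximation (13) of the Gaussian kernel (10) can be checked numerically. In this sketch the averaging factor of (13) is folded into the feature map via a sqrt(2/D) scaling, a common convention assumed here:

```python
import numpy as np

rng = np.random.default_rng(0)

d, D, sigma = 5, 5000, 1.0

# Bochner sampling for the Gaussian kernel: omega ~ N(0, sigma^{-2} I),
# beta ~ U[0, 2pi).
Omega = rng.normal(scale=1.0 / sigma, size=(D, d))
beta = rng.uniform(0.0, 2.0 * np.pi, size=D)

def phi_tilde(x):
    # Random Fourier feature map, cf. (11), with the empirical-average
    # scaling of (13) folded in.
    return np.sqrt(2.0 / D) * np.cos(Omega @ x + beta)

x, x2 = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.linalg.norm(x - x2) ** 2 / (2.0 * sigma ** 2))
approx = float(phi_tilde(x) @ phi_tilde(x2))
print(abs(exact - approx))   # shrinks as O(1/sqrt(D)) with growing D
```

Increasing D tightens the approximation at the cost of a larger linear model, which is the accuracy/complexity trade-off mentioned above.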
II-D Regularized Least Squares
Let $X \in \mathbb{R}^{n \times d}$ and $Y \in \mathbb{R}^{n \times T}$ be two matrices of real numbers. The Regularized Least Squares (RLS) algorithm computes a regularized solution $W$ of the potentially ill-posed problem $Y = XW$, enforcing its numerical stability. Considering the widely used Tikhonov regularization scheme, $W$ is the solution to the following problem:

$W = \arg\min_{W \in \mathbb{R}^{d \times T}} \| Y - XW \|^2 + \lambda \| W \|^2$   (16)

where $\lambda > 0$ is the regularization parameter. By taking the gradient of (16) with respect to $W$ and equating it to zero, the minimizing solution can be written as

$W = (X^\top X + \lambda I_d)^{-1} X^\top Y$   (17)
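A quick numerical check of (17): at the closed-form solution, the gradient of the Tikhonov objective (16) vanishes. Sizes below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 50, 8, 3                     # illustrative sizes
X = rng.normal(size=(n, d))
Y = rng.normal(size=(n, T))
lam = 0.1

# Closed-form Tikhonov solution (17)
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Gradient of ||Y - XW||_F^2 + lam ||W||_F^2 must vanish at the minimizer
grad = 2.0 * (X.T @ (X @ W - Y) + lam * W)
print(np.max(np.abs(grad)))            # numerically zero
```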
II-E Recursive Regularized Least Squares (RRLS) with Cholesky Update
In scenarios in which supervised samples become available sequentially, a very useful extension of the RLS algorithm is an update rule allowing the model to be trained incrementally, increasing its adaptivity to changes of the system properties through time. This algorithm is called Recursive Regularized Least Squares (RRLS). We consider RRLS with the Cholesky update rule [27], which is numerically more stable than others (e.g. the Sherman-Morrison-Woodbury update rule). In adaptive filtering, this update rule is known as the QR algorithm [28].
Let us define $A = X^\top X + \lambda I_d$ and $b = X^\top Y$. Our goal is to update the model (fully described by $A$ and $b$) with a new supervised sample $(x_{k+1}, y_{k+1})$, with $x_{k+1}$ and $y_{k+1}$ row vectors.
Consider the Cholesky decomposition $A = R^\top R$. It can always be obtained, since $A$ is positive definite for $\lambda > 0$. Thus, we can express the update problem at step $k+1$ as:

$A_{k+1} = R_{k+1}^\top R_{k+1} = A_k + x_{k+1}^\top x_{k+1}$   (18)

where $R_{k+1}$ is full rank and unique, with $A_0 = \lambda I_d$.
By defining

$\tilde{R}_{k+1} = \begin{bmatrix} R_k \\ x_{k+1} \end{bmatrix}$   (19)

we can write $A_{k+1} = \tilde{R}_{k+1}^\top \tilde{R}_{k+1}$. However, in order to compute $R_{k+1}$ from the obtained $\tilde{R}_{k+1}$ it would be necessary to recompute its Cholesky decomposition, requiring $O(d^3)$ computational time. There exists a procedure, based on Givens rotations, which can be used to compute $R_{k+1}$ from $\tilde{R}_{k+1}$ with $O(d^2)$ time complexity. A recursive expression can be obtained also for $b_{k+1}$ as follows:

$b_{k+1} = b_k + x_{k+1}^\top y_{k+1}$   (20)
Once $R_{k+1}$ and $b_{k+1}$ are known, the updated weights matrix $W_{k+1}$ can be obtained via back and forward substitution as

$W_{k+1} = R_{k+1}^{-1} \left( R_{k+1}^{-\top} b_{k+1} \right)$   (21)

The time complexity for updating $W$ is $O(d^2)$.
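A minimal sketch of the Givens-based rank-one Cholesky update behind (18)-(19); this is a standard "cholupdate" routine written for illustration, not taken from the paper's code, and it is verified here against a full refactorization:

```python
import numpy as np

def chol_update(R, x):
    """Rank-one update of an upper-triangular Cholesky factor:
    given A = R^T R, return R' with R'^T R' = A + x^T x,
    via Givens-style rotations in O(d^2) time (cf. (18)-(19))."""
    R = R.copy()
    x = np.asarray(x, dtype=float).copy()
    d = x.size
    for k in range(d):
        r = np.hypot(R[k, k], x[k])
        c, s = r / R[k, k], x[k] / R[k, k]
        R[k, k] = r
        if k + 1 < d:
            R[k, k + 1:] = (R[k, k + 1:] + s * x[k + 1:]) / c
            x[k + 1:] = c * x[k + 1:] - s * R[k, k + 1:]
    return R

rng = np.random.default_rng(0)
d = 6
X = rng.normal(size=(20, d))
A = X.T @ X + 0.1 * np.eye(d)          # A_k = X^T X + lam I, positive definite
R = np.linalg.cholesky(A).T            # upper-triangular factor, A = R^T R

x_new = rng.normal(size=d)             # newly arrived (mapped) sample
R_up = chol_update(R, x_new)

# Must coincide with a full O(d^3) refactorization of A + x_new^T x_new
R_ref = np.linalg.cholesky(A + np.outer(x_new, x_new)).T
print(np.allclose(R_up, R_ref))        # -> True
```

The updated weights of (21) then follow from two triangular solves against the new factor, which is what keeps the per-sample update cost quadratic in d.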
As for RLS, the RRLS incremental solution can be applied both to the parametric problem (4) and to the nonparametric problem with random features (15). In particular, RRLS can be applied to the parametric case by noting that the arrival of a new sample adds several rows to $\bar{\Phi}$ and $\bar{y}$, one per output dimension. Consequently, the update must be decomposed into as many update steps using (20), each considering a single row of the regressor matrix together with the corresponding entry of the measured output.
For the nonparametric random features case, RRLS can be simply applied with the mapped pair $(\tilde{\phi}(x_{k+1}), y_{k+1})$, the supervised sample which becomes available at step $k+1$.
III Semiparametric Incremental Dynamics Learning
We propose a semiparametric incremental inverse dynamics estimator, designed to have better generalization properties with respect to fully parametric and nonparametric ones, both in terms of accuracy and convergence rates. The estimator, whose functioning is illustrated by the block diagram in Figure 2, is composed of two main parts. The first one is an incremental parametric estimator taking as input the rigid body dynamics regressors and computing two quantities at each step:

- an estimate of the output quantities of interest;
- an estimate of the base inertial parameters of the links composing the rigid body structure.

The employed learning algorithm is RRLS. Since it is supervised, during the model update step the measured output is used by the learning algorithm as ground truth.
The parametric estimation is performed in the first place, and it is independent of the nonparametric part.
This property is desirable in order to give priority to the identification of the inertial parameters. Moreover, since the estimator is incremental, the estimated inertial parameters adapt to changes in the inertial properties of the links, which can occur, for instance, if the end-effector is holding a heavy object.
Still, this adaptation cannot address changes in nonlinear effects which do not respect the rigid body assumptions.
The second estimator is also RRLS-based, fully nonparametric and incremental. It leverages the approximation of the kernel function via random Fourier features, as outlined in Section II-C1, to obtain a nonlinear model which can be updated incrementally with constant update complexity, quadratic in the dimensionality of the random feature space (see Section II-E).
This estimator receives its inputs normalized and mapped to the random features space, which approximates the infinite-dimensional feature space induced by the Gaussian kernel.
The supervised output is the residual between the measured output and the parametric prediction.
The nonparametric estimator provides as output an estimate of this residual, which is then added to the parametric prediction to obtain the semiparametric estimate.
Similarly to the parametric part, in the nonparametric one the estimator’s internal nonlinear model can be updated during operation, which constitutes an advantage in the case in which the robot has to explore a previously unseen area of the state space, or when the mechanical conditions change (e.g. due to wear, tear or temperature shifts).
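To make the semiparametric scheme concrete, here is a toy batch sketch: a linear "parametric" model plus a random-feature model trained on its residual. The 1-D regressor, the synthetic non-rigid effect and all sizes are illustrative assumptions; the actual method updates both parts recursively via RRLS rather than in batch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a linear term (stand-in for the rigid body part) plus a
# sinusoidal "non-rigid" effect the parametric model cannot capture.
n, D, lam = 500, 300, 1e-2
q = rng.uniform(-2, 2, size=(n, 1))
y = 3.0 * q[:, 0] + 0.5 * np.sin(4.0 * q[:, 0])

# 1) Parametric estimate: RLS on the raw regressor (here simply q)
pi_hat = np.linalg.solve(q.T @ q + lam * np.eye(1), q.T @ y)
y_p = q @ pi_hat                      # parametric prediction

# 2) Nonparametric part: RLS on random Fourier features of q, trained on
#    the residual y - y_p (batch here for clarity)
Omega = rng.normal(scale=4.0, size=(D, 1))
beta = rng.uniform(0.0, 2.0 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(q @ Omega.T + beta)
w = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ (y - y_p))

# Semiparametric prediction: parametric part plus estimated residual
y_sp = y_p + Z @ w
print(np.mean((y - y_sp) ** 2), np.mean((y - y_p) ** 2))
```

The residual model only needs to capture what the parametric part misses, which is why the combined estimator can outperform each part in isolation.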
IV Experimental Results
IV-A Software
For implementing the proposed algorithm we used two existing open source libraries. For the RRLS learning part we used GURLS [29], a regression and classification library based on the Regularized Least Squares (RLS) algorithm, available for Matlab and C++. For the computation of the regressors we used iDynTree (https://github.com/robotology/idyntree), a C++ dynamics library designed for free-floating robots. Through SWIG [30], iDynTree supports calling its algorithms from several programming languages, such as Python, Lua and Matlab. For producing the presented results, we used the Matlab interfaces of iDynTree and GURLS.
IV-B Robotic Platform
iCub is a full-body humanoid with 53 degrees of freedom [19]. For validating the presented approach, we learn the dynamics of the right arm of the iCub as measured by the proximal six-axis force/torque (F/T) sensor embedded in the arm. The considered output is the reading of the F/T sensor, and the inertial parameters are the base parameters of the arm [31]. Since the F/T reading is not an input variable for the system, the output of the dynamic model is not directly usable for control; it is, however, a proper benchmark for the dynamics learning problem, as also shown in [8]. Nevertheless, the joint torques could be computed seamlessly from the F/T sensor readings if needed for control purposes, by applying the method presented in [32].

IV-C Validation
The aim of this section is to present the results of the experimental validation of the proposed semiparametric model. The model includes a parametric part based on physical modeling. This part is expected to provide acceptable prediction accuracy for the force components in the whole workspace of the robot, since it is based on prior knowledge about the structure of the robot itself, which does not abruptly change as the trajectory changes. On the other hand, the nonparametric part can provide higher prediction accuracy in specific areas of the input space for a given trajectory, since it also models non-rigid body dynamics effects by learning directly from data. In order to provide empirical foundations to the above insights, a validation experiment has been set up using the right arm of the iCub humanoid robot, considering as input the positions, velocities and accelerations of the 3 shoulder joints and of the elbow joint, and as outputs the 3 force and 3 torque components measured by the six-axis F/T sensor embedded in the upper arm. We employ two datasets for this experiment, collected as the end-effector tracks (using the Cartesian controller presented in [33]) circumferences on the transverse and sagittal planes (for more information on the iCub reference frames, see http://eris.liralab.it/wiki/ICubForwardKinematics). Each dataset contains 10000 points, sampled at 10 Hz and thus corresponding to approximately 1000 seconds of continuous operation. The steps of the validation experiment for the three models are the following:

1. Initialize the recursive parametric, nonparametric and semiparametric models to zero. The inertial parameters are also initialized to zero.
2. Train the models on the whole dataset (10000 points).
3. Split the dataset in 10 sequential parts of 1000 samples each. Each part corresponds to 100 seconds of continuous operation.
4. Test and update the models independently on the 10 split datasets, one sample at a time.
In Figure 4 we present the means and standard deviations of the average root mean squared error (RMSE) of the predicted force and torque components on the 10 different test sets for the three models, averaged over a 3-second sliding window. The time axis is reported in log scale to facilitate the comparison of the predictive performance of the different approaches in the initial transient phase. We observe similar behaviors for the force and torque RMSEs. After a few seconds, the nonparametric (NP) and semiparametric (SP) models provide more accurate predictions than the parametric (P) model, with statistical significance. At regime, the force prediction error of the P model is approximately two times larger than that of the SP and NP models, and the torque prediction error of SP and NP is considerably lower than the average RMSE of the P model. It shall also be noted that the mean average RMSE of the SP model is lower than that of the NP model, both for forces and torques. However, this slight difference is not very significant, since it is relatively small with respect to the standard deviation. Given these experimental results, we can conclude that in terms of predictive accuracy the proposed incremental semiparametric method outperforms the incremental parametric one and matches the fully nonparametric one. The SP method also shows a smaller standard deviation of the error with respect to the competing methods. Considering the previous results and observations, the proposed method has been shown to combine the main advantages of parametric modeling (i.e. interpretability) with those of nonparametric modeling (i.e. the capacity to model non-rigid body dynamics phenomena). The incremental nature of the algorithm, in both its P and NP parts, allows for adaptation to changing conditions of the robot itself and of the surrounding environment.

V Conclusions
We presented a novel incremental semiparametric modeling approach for inverse dynamics learning, joining the advantages of parametric modeling derived from rigid body dynamics equations with those of nonparametric Machine Learning methods. A distinctive trait of the proposed approach lies in its incremental nature, encompassing both the parametric and nonparametric parts and allowing for the prioritized update of both the identified base inertial parameters and the nonparametric weights. This feature is key to enabling robotic systems to adapt to mutable conditions of the environment and of their own mechanical properties over extended periods of time. We validated our approach on the iCub humanoid robot by analyzing the performance of a semiparametric inverse dynamics model of its right arm, comparing it with the performance obtained by state-of-the-art fully nonparametric and fully parametric approaches.
Acknowledgment
This paper was supported by the FP7 EU projects CoDyCo (No. 600716 ICT2011.2.1  Cognitive Systems and Robotics), Koroibot (No. 611909 ICT2013.2.1  Cognitive Systems and Robotics), WYSIWYD (No. 612139 ICT2013.2.1  Robotics, Cognitive Systems & Smart Spaces, Symbiotic Interaction), and Xperience (No. 270273 ICT2009.2.1  Cognitive Systems and Robotics).
References
 [1] R. Featherstone and D. E. Orin, “Dynamics.” in Springer Handbook of Robotics, B. Siciliano and O. Khatib, Eds. Springer, 2008, pp. 35–65.
 [2] A. E. Hoerl and R. W. Kennard, "Ridge Regression: Biased Estimation for Nonorthogonal Problems," Technometrics, vol. 12, no. 1, pp. 55–67, 1970.
 [3] C. Saunders, A. Gammerman, and V. Vovk, “Ridge Regression Learning Algorithm in Dual Variables.” in ICML, J. W. Shavlik, Ed. Morgan Kaufmann, 1998, pp. 515–521.

 [4] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press, 2000.
 [5] R. Rifkin, G. Yeo, and T. Poggio, "Regularized least-squares classification," Nato Science Series Sub Series III Computer and Systems Sciences, no. 190, pp. 131–154, 2003.
 [6] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning. MIT Press, 2006. [Online]. Available: http://www.gaussianprocess.org/gpml
 [7] D. Nguyen-Tuong and J. Peters, "Using model knowledge for learning inverse dynamics," in ICRA. IEEE, 2010, pp. 2677–2682.
 [8] A. Gijsberts and G. Metta, “Incremental learning of robot dynamics using random features.” in ICRA. IEEE, 2011, pp. 951–956.
 [9] T. Wu and J. Movellan, “Semiparametric Gaussian process for robot system identification,” in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, Oct 2012, pp. 725–731.
 [10] J. Sun de la Cruz, D. Kulic, W. Owen, E. Calisgan, and E. Croft, “OnLine Dynamic Model Learning for Manipulator Control,” in IFAC Symposium on Robot Control, vol. 10, no. 1, 2012, pp. 869–874.
 [11] K. Yamane, “Practical kinematic and dynamic calibration methods for forcecontrolled humanoid robots.” in Humanoids. IEEE, 2011, pp. 269–275.
 [12] S. Traversaro, A. D. Prete, R. Muradore, L. Natale, and F. Nori, “Inertial parameter identification including friction and motor dynamics.” in Humanoids. IEEE, 2013, pp. 68–73.
 [13] Y. Ogawa, G. Venture, and C. Ott, “Dynamic parameters identification of a humanoid robot using joint torque sensors and/or contact forces.” in Humanoids. IEEE, 2014, pp. 457–462.
 [14] J. Hollerbach, W. Khalil, and M. Gautier, “Model identification,” in Springer Handbook of Robotics. Springer, 2008, pp. 321–344.
 [15] S. Vijayakumar and S. Schaal, “Locally Weighted Projection Regression: Incremental Real Time Learning in High Dimensional Space.” in ICML, P. Langley, Ed. Morgan Kaufmann, 2000, pp. 1079–1086.
 [16] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond (Adaptive Computation and Machine Learning). MIT Press, 2002.
 [17] A. Rahimi and B. Recht, “Random Features for LargeScale Kernel Machines,” in NIPS. Curran Associates, Inc., 2007, pp. 1177–1184.
 [18] D. Nguyen-Tuong, M. Seeger, and J. Peters, "Model Learning with Local Gaussian Process Regression," Advanced Robotics, vol. 23, no. 15, pp. 2015–2034, 2009. [Online]. Available: http://dx.doi.org/10.1163/016918609X12529286896877
 [19] G. Metta, L. Natale, F. Nori, G. Sandini, D. Vernon, L. Fadiga, C. von Hofsten, K. Rosander, M. Lopes, J. SantosVictor, A. Bernardino, and L. Montesano, “The iCub Humanoid Robot: An Opensystems Platform for Research in Cognitive Development,” Neural Netw., vol. 23, no. 89, pp. 1125–1134, Oct. 2010.
 [20] M. Gautier and W. Khalil, “On the identification of the inertial parameters of robots,” in Decision and Control, 1988., Proceedings of the 27th IEEE Conference on. IEEE, 1988, pp. 2264–2269.
 [21] M. Gautier, “Dynamic identification of robots with power model,” in Robotics and Automation, 1997. Proceedings., 1997 IEEE International Conference on, vol. 3. IEEE, 1997, pp. 1922–1927.
 [22] K. Ayusawa, G. Venture, and Y. Nakamura, “Identifiability and identification of inertial parameters using the underactuated baselink dynamics for legged multibody systems,” The International Journal of Robotics Research, vol. 33, no. 3, pp. 446–468, 2014.
 [23] J. Baelemans, P. van Zutven, and H. Nijmeijer, “Model parameter estimation of humanoid robots using static contact force measurements,” in Safety, Security, and Rescue Robotics (SSRR), 2013 IEEE International Symposium on, Oct 2013, pp. 1–6.
 [24] W. Khalil and E. Dombre, Modeling, identification and control of robots. ButterworthHeinemann, 2004.
 [25] A. Rahimi and B. Recht, “Uniform approximation of functions with random bases,” in Communication, Control, and Computing, 2008 46th Annual Allerton Conference on, Sept 2008, pp. 555–561.
 [26] W. Rudin, Fourier Analysis on Groups, ser. A Wileyinterscience publication. Wiley, 1990.
 [27] Å. Björck, Numerical Methods for Least Squares Problems. Siam Philadelphia, 1996.
 [28] A. H. Sayed, Adaptive Filters. WileyIEEE Press, 2008.

 [29] A. Tacchetti, P. K. Mallapragada, M. Santoro, and L. Rosasco, "GURLS: a least squares library for supervised learning," The Journal of Machine Learning Research, vol. 14, no. 1, pp. 3201–3205, 2013.
 [30] D. M. Beazley et al., "SWIG: An easy to use tool for integrating scripting languages with C and C++," in Proceedings of the 4th USENIX Tcl/Tk workshop, 1996, pp. 129–139.
 [31] S. Traversaro, A. Del Prete, S. Ivaldi, and F. Nori, “Inertial parameters identification and joint torques estimation with proximal force/torque sensing,” in 2015 IEEE International Conference on Robotics and Automation (ICRA 2015).
 [32] S. Ivaldi, M. Fumagalli, M. Randazzo, F. Nori, G. Metta, and G. Sandini, “Computing robot internal/external wrenches by means of inertial, tactile and F/T sensors: Theory and implementation on the iCub,” in Humanoid Robots (Humanoids), 2011 11th IEEERAS International Conference on, Oct 2011, pp. 521–528.
 [33] U. Pattacini, F. Nori, L. Natale, G. Metta, and G. Sandini, “An experimental evaluation of a novel minimumjerk cartesian controller for humanoid robots,” in Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on, Oct 2010, pp. 1668–1674.