Experimental Force-Torque Dataset for Robot Learning of Multi-Shape Insertion

07/18/2018
by   Giovanni De Magistris, et al.

Most real-world systems are complex and hard to model accurately. Machine learning has been used to model complex dynamical systems (e.g. articulated robot structures, cable stretch) or, coupled with reinforcement learning, to learn new tasks based on vision and position sensors (e.g. grasping, reaching). To solve complex tasks with machine learning techniques, the availability of a suitable dataset is an important factor. The robotic community still lacks public datasets, especially for problems that are complex to model, such as contact tasks, where it is difficult to obtain a precise model of the physical interaction between two objects. In this paper, we provide a public dataset for the insertion of convex-shaped pegs into holes and analyze the nature of the task. Using the data, we demonstrate how a robot learns to insert polyhedral pegs into holes using only a 6-axis force/torque sensor. The dataset can also be used to learn other contact tasks such as shape recognition.


1 Introduction

Robot manufacturers are focused on making robots simpler to program in order to speed up the configuration of new assembly lines. Owing to recent advances in deep learning and machine learning, robots are becoming more flexible: instead of manual programming, modern artificial intelligence allows robots to learn new tasks from demonstrations or by active learning without explicit teaching. Recent works have already shown the potential to learn the robot dynamics [1] or the contact dynamics during a peg-in-hole task [2].

Data is key to the success of machine learning for solving complex tasks. The emergence of large datasets has played a prominent role in the research communities where deep learning has provided state-of-the-art results, e.g. natural language processing [3] and image and scene understanding [4, 5]. The robotic community still lacks public datasets, especially for problems that are complex to model, such as contact tasks, where it is still difficult to obtain a precise model of the physical interaction between two objects [6]. Therefore, we believe that the availability of more datasets collected on real robots is crucial. Towards this ambitious goal, the work of Yu et al. [7] is one of the first to provide a large dataset on a robot contact task, with force information during a pushing task.

In this paper, we choose one of the most common industrial tasks: the peg-in-hole task. We provide a dataset of force/torque (F/T) data from peg-in-hole operations with polyhedral pegs and holes. If the robot has precise position control and the hole pose is estimated with enough accuracy, the problem can be solved using position commands alone. However, due to the uncertainty typical of robotic assembly, the task usually becomes unsolvable by positioning alone; the sources of uncertainty include object positioning errors, hole pose estimation inaccuracy and grasping inaccuracy. Hence, in this paper, we put the emphasis on the F/T data of the task. The F/T dataset presented here allows researchers to assess the feasibility of novel techniques before investing further effort in physical experiments, and to pre-train neural networks for insertion tasks or shape recognition.

2 Force-Based Insertion Dataset

In this paper, we choose a strategy to solve the peg-in-hole task [2]: i) position the peg at a predefined height above the hole, ii) push the peg down with a downward force, iii) bring the peg center within the clearance region of the hole center by applying force/torque actions (search phase), and iv) push the peg down with a downward force (insertion phase).
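
As a reading aid, the control flow of this strategy can be sketched as follows; every callable is a hypothetical stand-in for the robot interface, not the implementation used in the paper.

    # Schematic control flow of the four-phase strategy; all callables are
    # hypothetical stand-ins for the real robot interface.
    def peg_in_hole(move_above_hole, push_down, read_wrench, pick_action,
                    execute, within_clearance):
        move_above_hole()               # i)  predefined height above the hole
        push_down()                     # ii) downward force to establish contact
        while not within_clearance():   # iii) search phase: F/T-driven actions
            execute(pick_action(read_wrench()))
        push_down()                     # iv) insertion phase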

2.1 Data collection

The dataset records object positions and interaction forces for a set of polyhedral pegs in contact with holes (see Fig. 1). The face of the polyhedron in contact with the environment is a regular convex n-gon, with n ∈ {3, 4, 5, 6, 200} (see Fig. 1).

Figure 1: Pegs and holes used to acquire the data

The data was collected by sending a sequence of robot commands: i) pick the peg, ii) rotate it in the given direction, iii) move to the center of the hole with a predefined offset in the x and y directions, and iv) push the peg against the hole plate with a constant downward force for a fixed duration. Pushing against the plate for a long time helps the controller pass from the transient to the steady state. The force control is executed and recorded at 100 Hz. The force/torque sensor values and the end-effector position are recorded at each point and stored as vectors: x, y, z are the peg positions with respect to the hole; θ is the peg angle with respect to the hole angle; f_x, f_y, f_z, m_x, m_y, m_z are the forces and moments in the force-sensor frame; t is the time, and an index counts the datapoints shown in Fig. 2.
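
For concreteness, one entry can be represented as below; the field order and units are assumptions on our part (the authoritative layout is documented inside the dataset folders).

    # A hedged sketch of one dataset entry; field order and units are
    # assumptions (check the documentation shipped inside the dataset folders).
    from dataclasses import dataclass

    @dataclass
    class Datapoint:
        x: float      # peg x position w.r.t. the hole [m]
        y: float      # peg y position w.r.t. the hole [m]
        z: float      # peg z position w.r.t. the hole [m]
        theta: float  # peg angle w.r.t. the hole angle [rad]
        fx: float     # forces in the force-sensor frame [N]
        fy: float
        fz: float
        mx: float     # moments in the force-sensor frame [Nm]
        my: float
        mz: float
        t: float      # timestamp; the force control runs at 100 Hz

    def parse_row(line: str) -> Datapoint:
        # Assumes one whitespace-separated record per line, in the order above.
        return Datapoint(*map(float, line.split()))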

Figure 2: Datapoints.

As we are using a precisely calibrated table with a grid of screw holes, we know the exact position of the hole. To ensure that the relative position of the peg and the hole is known accurately, we start each data collection by manually inserting the peg into the hole at the correct orientation, aligning both the position and the angle of the peg with the hole. In this way, grasp errors have no effect on the experiments. The position of the force sensor with respect to the peg is shown in Fig. 3. The peg is moved by fixed increments in the x and y directions within a given range around the center of the hole (see Fig. 2).
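
As an illustration, the grid of starting offsets can be generated as follows; the step and range values are placeholders, not the increments used for the released dataset.

    # Illustrative generation of the search grid of Fig. 2; the 1 mm step
    # within +/-5 mm is a placeholder, not the dataset's actual increment.
    import numpy as np

    axis = np.linspace(-0.005, 0.005, 11)       # assumed 11 points per axis [m]
    offsets = [(dx, dy) for dx in axis for dy in axis]
    print(len(offsets))                         # 121 starting positions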

Figure 3: Setup of the experiments.

Along with the dataset we also release the 3D models of all the objects used for collecting data. Details about the structure of the dataset are given inside the folders.

The dataset is available at
http://ibm.biz/multishapeinsertion

2.2 Hardware

Fig. 3 shows the setup for collecting the data.

  • Robot: the system uses a UR5 industrial robotic arm with 6 DOF to precisely control the position of its tool center point (TCP). The robot has a pose repeatability of ±0.1 mm.

  • Gripper: a Robotiq 2-finger 85 gripper is used to collect the dataset.

  • F/T sensor: a Robotiq FT 150 force/torque sensor measures the three forces and three moments; its effective resolution and signal noise are given in the manufacturer's datasheet. To remove long-term drift, we recalibrate the force sensor.

  • Objects: the objects (pegs and holes) are printed on an Ultimaker 2+ 3D printer using PLA filament with an infill density of 20%.

2.3 Software

The UR5 robot has many components available in the Robot Operating System (ROS) framework, and we use these ROS nodes to collect our data. The captured F/T data are published as ROS topics and recorded at 100 Hz. The object position with respect to the hole is only given before the peg comes into contact with the hole.
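
A minimal recording node could look like the following; the topic name /robotiq_ft_wrench is an assumption based on the Robotiq ROS driver, not necessarily the exact topic we recorded.

    # Minimal sketch of an F/T recording node with rospy; the topic name is an
    # assumption (the Robotiq driver publishes geometry_msgs/WrenchStamped).
    import rospy
    from geometry_msgs.msg import WrenchStamped

    samples = []

    def on_wrench(msg):
        f, m = msg.wrench.force, msg.wrench.torque
        samples.append((msg.header.stamp.to_sec(),
                        f.x, f.y, f.z, m.x, m.y, m.z))

    rospy.init_node("ft_recorder")
    rospy.Subscriber("/robotiq_ft_wrench", WrenchStamped, on_wrench)
    rospy.spin()  # messages arrive at the sensor's 100 Hz publication rate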

As we are in contact with the environment during the search and alignment phases, we adopt a standard admittance controller to stabilize the interaction between the robot and the environment. This kind of controller is common for industrial manipulators driven by a position controller [8].
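
A minimal one-axis sketch of such an admittance law is given below, assuming a virtual mass-damper that converts the force error into position increments for the position controller; the gains and time step are illustrative, not the values used on the UR5.

    # One-axis admittance law: m*dv/dt + d*v = (f_meas - f_des); the resulting
    # virtual velocity is integrated into position increments for the
    # position controller. Gains and time step are illustrative values.
    class Admittance1D:
        def __init__(self, mass=1.0, damping=50.0, dt=0.01):  # dt = 1/100 Hz
            self.m, self.d, self.dt = mass, damping, dt
            self.v = 0.0  # virtual velocity along the axis

        def step(self, f_meas, f_des):
            dv = ((f_meas - f_des) - self.d * self.v) / self.m  # explicit Euler
            self.v += dv * self.dt
            return self.v * self.dt  # position increment to command

    ctrl = Admittance1D()
    dz = ctrl.step(f_meas=-12.0, f_des=-10.0)  # yield slightly to extra force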

3 A Method for Labeling the Data for Multi-Shape Insertions

Here we illustrate a method to label each entry of the dataset for the peg-in-hole task. In Sec. 4.7, we show how each entry can instead be labeled for the shape recognition task.

During the execution of the peg-in-hole task, the position and orientation of the hole are known only inaccurately because of different uncertainties. Using the accurate positions and orientations stored in our dataset, we can compute for each entry the best action to perform. This action is the label, and the input is the forces and moments; the position and orientation are used only to label the data.

Using only the peg positions (x, y), we label each entry of the dataset with one of 4 actions: move left, move down, move right and move up (see Fig. 4). These actions reduce the position error of the peg with respect to the hole during the search phase.

Figure 4: Labels based on peg position and rotation for search phase.

To reduce the orientation error of the peg with respect to the hole, we use the following labels:

rotate the peg right if θ > ε
rotate the peg left if θ < −ε
don't rotate if |θ| ≤ ε

where θ is the peg orientation with respect to the hole and ε is manually defined as a function of the clearance between the peg and hole.
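
A sketch of these labeling rules follows; the mapping from offset signs to left/right/up/down and the rotation sign convention are our assumptions, while ε depends on the clearance as stated above.

    # Labeling rules of Sec. 3; the sign conventions are assumptions, and
    # epsilon is set manually as a function of the peg-hole clearance.
    def position_label(x, y):
        # Move along the axis with the larger error, toward the hole center.
        if abs(x) >= abs(y):
            return "move left" if x > 0 else "move right"
        return "move down" if y > 0 else "move up"

    def rotation_label(theta, epsilon):
        if theta > epsilon:
            return "rotate right"
        if theta < -epsilon:
            return "rotate left"
        return "don't rotate"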

4 Analysis and Experiments

4.1 Analysis of Force Data

Fig. 5 shows the forces and moments of the dataset for a peg with n = 4 at the top-left position in Fig. 2.

In Fig. 5, we can clearly distinguish three phases:

  A. Non-contact: the peg is not in contact with the environment. This situation is not interesting for analyzing the contact and the insertion.

  B. Transient: the forces and moments keep changing with time. In this period, we can analyze the response of the robot to the interaction with the environment.

  C. Steady: the forces and moments of interaction remain almost unchanged in time.

Figure 5: Force and moment at the top-left point in Fig. 2 using a peg with n = 4.

In the next sections, we first analyze situations B and C separately; we then analyze the combination of the data from both situations.

4.2 Comparison of Classifiers

We train multiple classifiers using different methods with a cross-entropy cost function. Each classifier aims to find the best action for the given input (forces and moments). Note that the accuracy of the classifier is the accuracy of providing the correct action given the forces and moments as inputs; it is not the probability of entering the hole.

To compare the classifiers obtained with different machine learning (ML) methods, we prepare the training data for each method by the following procedure. First, we sample raw data from a time window starting at index i_s and ending at index i_e. We separate the data into frames, each of length l, which results in (i_e − i_s)/l frames. In each frame, we calculate the average of the raw data and use the obtained averages as training data.
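
This procedure can be written as follows, assuming the raw window is stored as a (time, channels) array; the assertion reproduces the 8-frame example of Sec. 4.3.

    # Frame averaging of Sec. 4.2: average the window raw[i_s:i_e] over frames
    # of length l, yielding (i_e - i_s) // l averaged frames.
    import numpy as np

    def frame_average(raw, l, i_s, i_e):
        n_frames = (i_e - i_s) // l
        window = raw[i_s:i_s + n_frames * l]
        return window.reshape(n_frames, l, -1).mean(axis=1)

    raw = np.random.randn(1000, 6)  # dummy 100 Hz recording, 6 channels fx..mz
    assert frame_average(raw, l=50, i_s=200, i_e=600).shape == (8, 6)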

We compare the accuracy of the ML methods on two tasks using the labels explained in Sec. 3: (T1) reduce the position error between the peg and hole positions using the 4 force actions, and (T2) reduce the orientation error between the peg and the hole using the 3 moment actions.

In Table 1, we compare the classifiers using only the square peg and the following 4 inputs:

(f_x, f_y, m_x, m_y)    (1)

averaged over frames of length l within the window [i_s, i_e]. We made the comparison using the following techniques: SVM is a support vector machine classifier with a linear kernel, DT is a decision tree, RNDF is the random forest method, ADA is an AdaBoost classifier, GAUS is the Gaussian naive Bayes method, LDA is linear discriminant analysis, QDA is quadratic discriminant analysis and MLP is a multi-layer perceptron.

In Sec. 4.4, we also compare the results of adding the remaining 2 F/T inputs, f_z and m_z.

Technique Acc [%] - T1 Acc [%] - T2 Average
SVM
DT
RNDF
ADA
GAUS
LDA
QDA
MLP
Table 1: Comparison of the different machine learning techniques for labels based on peg position with 4 actions (T1) and for labels based on peg orientation with 3 actions (T2).

From Table 1, MLP is the best choice in both tests. The MLP network is composed of 2 hidden layers of sizes [100, 50], the optimizer is lbfgs and the activation function is the rectified linear unit. In the next sections, we will only use this MLP network.
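
One way to instantiate this network is with scikit-learn's MLPClassifier; the library choice is our assumption (the paper only fixes the architecture, solver and activation), and the data below is dummy.

    # MLP with 2 hidden layers [100, 50], lbfgs solver and ReLU activation, as
    # in the paper; scikit-learn is our assumption, the data here is dummy.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    X = np.random.randn(500, 4 * 8)   # e.g. 8 averaged frames of (fx, fy, mx, my)
    y = np.random.randint(0, 4, 500)  # 4 actions of task T1 (dummy labels)

    clf = MLPClassifier(hidden_layer_sizes=(100, 50), solver="lbfgs",
                        activation="relu", max_iter=1000)
    clf.fit(X, y)
    print(clf.score(X, y))            # training accuracy on the dummy data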

4.3 Study of the different contact situations

In Table 2, we compare the accuracy for the transient and steady state.

     l   i_s    i_e   Acc [%] - T1   Acc [%] - T2   Situation
a)  400   200    600      69.62          56.33      Transient
b)  400   600   1000      67.09          46.52      Steady
c)  800   200   1000      68.67          53.16      Both
d)   50   200    600      70.57          61.39      Transient
e)   50   600   1000      65.51          45.25      Steady
f)   50   200   1000      72.47          54.75      Both
Table 2: Comparison of the different contact situations using the MLP; l is the frame length and [i_s, i_e] is the data window.

From Table 2, we notice that, for task T2, taking the average of the data points only during the transient situation improves the accuracy from 53.16% (row c) to 56.33% (row a). On the other hand, taking the average of only the steady situation decreases the result to 46.52% (row b). From this result, we can suppose that the information about task T2 lies mostly in the transient situation.

Another important result for T2, coming from the analysis of the dataset during the transient situation, is that with the parameters (l = 50, i_s = 200, i_e = 600) the accuracy increases to 61.39% (row d). The input is then a sequence of 8 data points, i.e. (600 − 200)/50 = 8.

For T1, the accuracy considering only the transient situation increases to 69.62% (row a), while using only the steady situation it decreases to 67.09% (row b). Using a sequence of 8 points as input during the transient situation, the accuracy increases to 70.57% (row d). As Table 2 shows for (l = 50, i_s = 200, i_e = 1000), including the steady contact situation matters for T1, and the accuracy increases to 72.47% (row f).

During the steady situation, with the parameters (l = 50, i_s = 600, i_e = 1000), the accuracy decreases to 65.51% for T1 and to 45.25% for T2 (row e).

Analyzing these results, we can affirm that the dynamics during the impact between the peg and the environment are very important for understanding the insertion task in the search and alignment phases.

We can conclude that, in our MLP analysis, the main information for T2 lies in the transient, while for T1 it lies in the whole contact phase. The parameters (l = 50, i_s = 200, i_e = 1000) in Table 2 are a good compromise, and we choose them to analyze the results for the different shapes.

4.4 Study of different inputs

Another important analysis of the dataset is to understand which inputs are the most important. Adding f_z as an input decreases the accuracy, and adding m_z as well does not improve it. Therefore, the main information is in f_x, f_y, m_x and m_y.

4.5 Study of different shapes

Table 3 shows the results for the different shapes. From the table, we can clearly see that T1 is easier than T2. In particular, while the accuracy for T1 increases with the number of sides, the accuracy for T2 is similar for all shapes.

n sides   Acc [%] - T1   Acc [%] - T2
   3          67.09          67.72
   4          72.47          54.75
   5          72.78          65.19
   6          75.00          65.51
 200          81.01          62.66
Table 3: Comparison of the different shapes using (l = 50, i_s = 200, i_e = 1000)

4.6 Robot Experiments

The model learned using the dataset is used to perform the task on the UR5 robot. As input, we use the input of Eq. (1) with (l = 50, i_s = 200, i_e = 1000). We use the 4 translation actions of T1 (move left, move right, move up, move down) and the 3 rotation actions of T2 (rotate right, rotate left, don't rotate), executed as force and moment commands of fixed amplitude; an illustrative mapping is sketched below. With these parameters, the robot performs the insertion task with a 100% success rate when started from an offset from the hole; the average number of actions required depends on the amplitude of the force and moment commands.
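
The label-to-command mapping could look as follows; the amplitudes and axis conventions are placeholders, not the values tuned on the UR5.

    # Illustrative label-to-command table; amplitudes F [N] and M [Nm] and the
    # axis conventions are placeholders, not the values tuned for the UR5.
    F, M = 5.0, 0.5

    ACTION_TO_COMMAND = {          # (f_x, f_y, m_z) sent to the admittance loop
        "move left":    (-F, 0.0, 0.0),
        "move right":   (+F, 0.0, 0.0),
        "move up":      (0.0, +F, 0.0),
        "move down":    (0.0, -F, 0.0),
        "rotate right": (0.0, 0.0, -M),
        "rotate left":  (0.0, 0.0, +M),
        "don't rotate": (0.0, 0.0, 0.0),
    }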

The video is available at
https://youtu.be/6rLc9fAtzAQ

In the video, the robot uses the learned model to perform the insertion for all shapes.

4.7 Shape Recognition

We use the forces and moments during contact to recognize the shape of the peg and the hole; in our dataset, the peg and the hole have the same shape. We label the data using 5 classes (one per shape) and train the MLP on these labels with the same (l, i_s, i_e) parameters.

The result shows that, using our dataset, the robot can also recognize the shape of the peg. If the robot has low confidence that it is holding the correct peg, it can raise an error reporting the reason for the failure, as in the sketch below.
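
Such a check could be implemented as follows, assuming the 5-class classifier exposes per-class probabilities (as scikit-learn's MLP does); the threshold is a placeholder.

    # Low-confidence check for the shape classifier; the 0.8 threshold is a
    # placeholder. Class indices are looked up via clf.classes_.
    import numpy as np

    def check_shape(clf, features, expected_class, threshold=0.8):
        proba = clf.predict_proba(np.asarray(features).reshape(1, -1))[0]
        confidence = proba[list(clf.classes_).index(expected_class)]
        if confidence < threshold:
            raise RuntimeError(f"possibly wrong peg: p = {confidence:.2f}")
        return confidence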

5 Conclusions

In this paper, we presented a dataset for multi-shape peg-in-hole insertion. Using this dataset, we conducted several analyses and trained an MLP network able to select the right action based on forces and moments. The learned behavior was tested on the UR5 robot.

In the near future, we would like to work with deeper holes. Moreover, the current dataset does not consider angular alignment errors other than the rotation about the peg axis; we will investigate further in this direction. Another interesting area for future work is transfer learning, where models are learned in simulation and fine-tuned on the real robot, or where the insertion is learned on plastic pegs and holes and used with metal objects.

References

  • [1] T.-H. Pham, G. De Magistris, and R. Tachibana, “OptLayer - Practical Constrained Optimization for Deep Reinforcement Learning in the Real World,” in IEEE International Conference on Robotics and Automation, 2018.
  • [2] T. Inoue, G. De Magistris, A. Munawar, T. Yokoya, and R. Tachibana, “Deep reinforcement learning for high precision assembly tasks,” in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, 2017.
  • [3] D. Hewlett, A. Lacoste, L. Jones, I. Polosukhin, A. Fandrianto, J. Han, M. Kelcey, and D. Berthelot, “Wikireading: A novel large-scale language understanding task over wikipedia,” CoRR, vol. abs/1608.03542, 2016.
  • [4]

    M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The cityscapes dataset for semantic urban scene understanding,” in

    Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

    , 2016.
  • [5] A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang, “Ntu rgb+d: A large scale dataset for 3d human activity analysis,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
  • [6] C. Bouchard, M. Nesme, M. Tournier, B. Wang, F. Faure, and P. G. Kry, “6d frictional contact for rigid bodies,” in Proceedings of the 41st Graphics Interface Conference, ser. GI ’15. Toronto, Ontario, Canada: Canadian Information Processing Society, 2015, pp. 105–114.
  • [7] K. T. Yu, M. Bauza, N. Fazeli, and A. Rodriguez, “More than a million ways to be pushed. a high-fidelity experimental dataset of planar pushing,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 2016, pp. 30–37.
  • [8] E. Freund and J. Pesara, “High-bandwidth force and impedance control for industrial robots,” Robotica, vol. 16, no. 1, p. 75–87, 1998.