DogTouch: CNN-based Recognition of Surface Textures by Quadruped Robot with High Density Tactile Sensors

06/09/2022
by Nipun Dhananjaya Weerakkodi Mudalige, et al.
Skoltech

The ability to perform locomotion in various terrains is critical for legged robots. However, the robot has to have a good understanding of the surface it is walking on to perform robust locomotion on different terrains. Animals and humans are able to recognize surfaces with the help of the tactile sensation on their feet; however, foot tactile sensation for legged robots has received little attention. This paper presents research on the novel quadruped robot DogTouch with tactile sensing feet (TSF). TSF allows the recognition of different surface textures using a tactile sensor and a convolutional neural network (CNN). The experimental results show a sufficient validation accuracy of 74.37% for our trained CNN-based model, with the highest recognition rate of 90% for line patterns. In the future, we plan to improve the prediction model by presenting surface samples with various pattern depths and by applying advanced deep learning and shallow learning models for surface recognition. Additionally, we propose a novel approach to the navigation of quadruped and legged robots: arranging tactile paving textured surfaces (similar to those used for blind or visually impaired people). DogTouch will thus be capable of locomotion in unknown environments by recognizing specific tactile patterns that indicate a straight path, a left or right turn, a pedestrian crossing, a road, etc. This will allow robust navigation regardless of lighting conditions. Future quadruped robots equipped with visual and tactile perception systems will be able to safely and intelligently navigate and interact in unstructured indoor and outdoor environments.


I Introduction

Tactile perception plays a crucial role in modern robotics, opening new frontiers in human-robot interaction and significantly increasing the environmental awareness of autonomous robots. In addition to visual estimation, humans and animals actively use tactile sensing in their skin and muscles to maintain balance and perform various agile motions [1, 2]. However, most attention in the field of legged robot locomotion has been paid to visual feedback systems, for instance, the laser range finder applied for surface adaptation by Plagemann et al. [3], the stereo-vision system proposed by Sabe et al. [4], or the infrared (IR) camera combined with ultrasound sensors proposed by Chen et al. [5].

Fig. 1: (a) Robot dog contacting the textured surface sample via the tactile sensor array embedded in its foot. (b) VR concept scenario with the recognized surface texture displayed to the remote operator.

Several works estimate the surface for legged robot locomotion through evaluation of joint positions [6]. Camurri et al. [7] developed the Pronto state estimator for legged robots, which can integrate pose corrections from RGB camera, LIDAR, and odometry feedback. Sarkisov et al. [8] introduced a novel landing gear that allows surface profile estimation based on foot-pad IMU orientation and joint angles. Zhang et al. [9] explored vision-based estimation of tactile patterns by designing a robotic skin with a painted inner surface and installing a camera inside the robot leg. Smith et al. [10] suggested coupling data from foot contact sensors and an Inertial Measurement Unit (IMU) to teach quadruped robots locomotion skills via Reinforcement Learning. A hybrid tactile sensor-based system, used in hexapod-legged robots to overcome obstacles, was proposed by Luneckas et al. [11]. The sensor combines a limit switch with flexible polypropylene material connected to the foot by silicone, allowing the robot to detect solid ground obstacles. Legged robots currently use direct feedback from the environment, such as sonar, vision, LIDAR, and force feedback from joint actuators. Tactile sensors have recently been applied to expand the awareness of collaborative robots of their environment through feedback from a skin-like surface. In the case of legged robots, such sensors may be placed beneath the robot's feet to estimate the properties of the surface. Adding tactile sensing to the robot's feet can be beneficial for walking on challenging terrains, in the same way that haptic sensing plays an important role in animal locomotion in nature.

In this paper, we present the Touch Sensitive Foot (TSF), which is able to recognize the texture of the surface the robot walks on with the help of a trained CNN model. This research opens an efficient way to achieve environmental awareness for autonomous robots: the robot gait can be selected according to the recognized surface, allowing the robot to walk on unknown terrains.

II Related Works

The concept of haptic perception in robotic systems has been extensively applied in prototyping manipulators, mobile robots, underwater robots [12], and drones. Several approaches to surface estimation have been proposed, including force and tactile sensors integrated in the joints [13], surfaces, and inner structures [14] of robotic limbs. For example, Tsetserukou et al. [15] introduced a whole-sensitive robotic arm with optical torque sensors embedded in its joints. Contact force detection and control for a robotic arm by joint torque sensors were investigated by Dong et al. [16]. The proposed methods allow robots to efficiently estimate contact with surfaces; however, joint sensors are unable to measure fine details of surface texture.

To estimate the distribution of forces during contact with the environment, a higher number of sensors should be embedded in robotic limbs. A pressure-sensitive skin that can be adapted to complex geometries was introduced by Fritzsche et al. [17] for safe human-robot interaction. This concept was further explored by Cheng et al. [18], who presented a humanoid robot with a sensor array on its surface. The approach was extended by Guadarrama-Olvera et al. [19], who placed a low-resolution robot skin on the soles of a bipedal robot to reconstruct the support polygon and the pressure footprint online. A two-layer design of artificial robotic skin was suggested by Klimaszewski et al. [20], which allows measuring the location, magnitude, and direction of pressure from external forces. Liu et al. [21] developed a large-scale artificial sensitive skin for robots based on electrical impedance tomography. Dilibal et al. [22] proposed soft sensors for robotic grippers, fabricated by a screen printing process with flexible material and ionic liquids.

The aforementioned approaches make it possible to cover large areas with sensors; however, most of them lack the resolution necessary for placement on the feet of a quadruped robot. There are, however, a few sensors developed to provide high-resolution tactile data to robot limbs. For example, a small thumb-sized vision-based sensor developed by Sun et al. [23] provides data with a spatial resolution of 0.4 and can be applied for dexterous manipulation. To obtain a high-resolution tilt estimation of a mobile charging robot, Okunevich et al. [24] proposed to evaluate tactile patterns. The developed vision-tactile perception system allows precise positioning of the charger end-effector and ensures a reliable connection between the electrodes of the two robots.

This paper presents a novel perception system for quadruped robots with CNN-based texture recognition from the data of a high-resolution tactile sensor array embedded in the sole of the robotic foot. We evaluated system performance with eight 3D-printed surface samples. The proposed approach aims at improving the navigation of quadruped robots and their environmental awareness through special patterns placed on the surface, and potentially at teaching robots adaptive locomotion.

III DogTouch System Overview

All components of the system can be divided into three main modules, as shown in Fig. 2: the Touch Sensitive Foot (TSF) with a tactile sensor array, an ESP32 microcontroller, and a CNN model running on an NVIDIA Jetson Nano to classify textures.

Fig. 2: Textured surface recognition with the TSF. While the foot is in contact with the surface, the ESP32 microcontrollers process the data from the tactile sensor arrays. The data matrices are then transmitted to the NVIDIA Jetson Nano, which carries out the CNN-based texture recognition.

The system works as follows: the ESP32 reads the tactile sensor arrays to detect whether or not the sole has touched the ground. When contact occurs, the ESP32 obtains the data matrix from the tactile sensor array and sends it to the CNN model running on the Jetson Nano computer. The CNN model has been trained to recognize the surface textures. Once the robot is aware of the texture type, it can localize itself (provided that the patterns are placed a priori in a specific configuration on the floor or pavement) and optimize its gait to avoid slippage. The resulting control flow is summarized in Algorithm 1 and sketched in code after it.

while robot is walking do
    read tactile sensor array data;
    if foot touched the ground then
        send tactile sensor array data to CNN;
        estimate ground surface texture;
        if current gait is suitable for predicted ground surface then
            walk with current gait;
        else
            select the gait assigned to the surface pattern ID;
            walk with the selected gait;
        end
    end
end
Algorithm 1: Adaptive gait algorithm
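
As a minimal illustration, the loop of Algorithm 1 could be sketched in Python as below. The DogTouch firmware interface is not published, so every function here is a hypothetical stub; the 10x10 frame shape is an assumption consistent with the sensor's 100 points per frame, and the gait assignments are placeholders.

import numpy as np

def read_tactile_frame() -> np.ndarray:
    # One pressure frame from the sensor array (assumed 10x10 grid).
    return np.zeros((10, 10), dtype=np.float32)

def foot_in_contact(frame: np.ndarray, threshold: float = 0.1) -> bool:
    # Contact is assumed when any taxel exceeds a pressure threshold.
    return bool(frame.max() > threshold)

def classify_texture(frame: np.ndarray) -> int:
    # Stand-in for the CNN inference running on the Jetson Nano.
    return 0

def walk_with(gait: str) -> None:
    # Stand-in for sending a gait command to the locomotion controller.
    pass

# Illustrative mapping from surface pattern ID to a gait; the actual
# assignments are not specified in the paper.
GAIT_FOR_SURFACE = {pattern_id: f"gait_{pattern_id}" for pattern_id in range(8)}

current_gait = GAIT_FOR_SURFACE[0]
for _ in range(1000):  # while robot is walking
    frame = read_tactile_frame()
    if foot_in_contact(frame):
        surface_id = classify_texture(frame)
        if current_gait != GAIT_FOR_SURFACE[surface_id]:
            current_gait = GAIT_FOR_SURFACE[surface_id]
    walk_with(current_gait)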

III-A Leg Design of Quadruped Robot

We have developed a unique customizable leg for a quadruped robot. The leg was designed to minimize inertia, which is critical for stable and efficient locomotion. 3D-printed and carbon fiber parts were used for the fabrication of the robotic legs. The manufactured legs have not only a lightweight structure but also high strength. Each leg has 3 degrees of freedom: the hip joint, the upper leg joint, and the lower leg joint. The joints are actuated by RDS5160 SSG high-torque digital servo motors with 7 N·m maximum torque. Each servo motor is driven at 8.4 V with a maximum current of 2.5 A. The TSF was 3D printed from TPU (thermoplastic polyurethane) material, which is flexible yet strong enough for walking on harsh terrains. The tactile sensor (see III-B) was installed in the sole as shown in Fig. 3.

Fig. 3: Touch Sensitive Foot (TSF) design with the embedded tactile sensor array.

Once the foot touches the ground, the tactile sensor data is used to recognize surface texture with the CNN model.

III-B Embedded Tactile Sensor Array

The TSF relies on the high-resolution tactile sensor array proposed by Yem et al. [25]. The sensor is integrated into the soft sole of the robotic leg to provide high-resolution perception of the surface texture, allowing the quadruped robot to collect detailed data on the textured surface. It is capable of sensing a maximum contact area of 5.8 with a resolution of 100 points per frame, at a sensing frequency of 120 frames per second. The sensors allow the system to precisely detect the pressure on small surface protrusions. The force detection range of the sensors is from 1 to 9.

III-C CNN Model for Tactile Perception

The CNN model consists of two convolutional layers with a 3x3 kernel and 3 fully connected linear layers with Rectified Linear Unit (ReLU) nonlinear activation functions, and batch normalization (see Fig. 4). Batch normalization acts as a regularizer and speeds up the training of classification models. In addition, batch normalization results in more predictive and well-behaved gradients during training, which eliminates major weight fluctuations and enables faster and more effective optimization.

The model receives the tactile sensor data as a three-dimensional matrix. The resolution of the data is relatively low in comparison with high-resolution camera frames or point cloud datasets. Therefore, our architecture does not include max pooling layers or strided convolutions, in order to preserve the information. Such additions to the neural network architecture could be considered in the future with a higher number of tactile sensors or larger areas covered by tactile arrays. After the convolutional layers, the data was flattened to one-dimensional vectors with 12800 elements.

Fig. 4: CNN model for tactile perception system.

Finally, after three linear layers with output dimensions of 256, 128, and 8 (the number of texture types), the model output was received as a matrix with predictions for each class for all inputs in the batch.
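
A PyTorch sketch consistent with this description is given below. Only the 3x3 kernels, batch normalization, ReLU activations, the absence of pooling or strided convolutions, the 12800-element flattened vector, and the 256/128/8 linear layers come from the text; the input grid size (10x10, matching the sensor's 100 points per frame) and the convolutional channel widths (64 and 128, chosen so that 128 x 10 x 10 = 12800) are assumptions.

import torch
import torch.nn as nn

class TactileCNN(nn.Module):
    # Two 3x3 convolutions with batch normalization and ReLU, no pooling or
    # striding (to preserve the low-resolution input), followed by three
    # linear layers with output sizes 256, 128, and 8 (the texture classes).
    def __init__(self, num_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, padding=1),   # channel widths are assumptions
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),            # 128 channels * 10 * 10 = 12800 elements
            nn.Linear(12800, 256),
            nn.ReLU(),
            nn.Linear(256, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 10, 10) tactile frames; returns per-class scores.
        return self.classifier(self.features(x))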

IV Texture Recognition Experiment

The experiment was conducted with eight different textured patterns shown in Fig. 5.

Fig. 5: 3D-printed ground surface textures along with the corresponding tactile patterns detected by the sensor array.

The following textured patterns were selected: diagonal lines with 1 mm width and 5 mm interval (Fig. 5a), dots of 1 mm diameter with 1 mm interval (Fig. 5b), vertical lines with 1 mm width and 5 mm interval (Fig. 5c), dots of 3 mm diameter with 1 mm interval (Fig. 5d), dots of 5 mm diameter with 1 mm interval (Fig. 5e), a grid with 5 mm interval (Fig. 5f), dots of 1 mm diameter with 5 mm interval (Fig. 5g), and cylinders of 3 mm diameter with 1 mm interval (Fig. 5h). The sample of each ground surface pattern was 3D-printed from PLA material with a size of 50x50 mm. The selected patterns vary in profile and texture resolution. In this research, we hypothesized that the proposed sensor arrays would allow us to obtain noticeable differences in the sensor readings with a minimal change of 2 mm in the size of a texture element.

IV-A Dataset Collection

For training and validation of the CNN-based classification model, we collected a dataset of 800 data arrays from the tactile sensor (100 data arrays for each of the 8 textured patterns). The dataset was divided into a training part (90%) and a validation part (10%). The TSF stepped on each textured plate 100 times, each time at a different angle. Whenever there was contact between the robotic leg and the pattern, the system recorded the array in the dataset.
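
As a sketch, the dataset and the 90/10 split described above could be loaded as follows; the file name and on-disk format (a tensor of 800 frames with integer labels) are hypothetical, and the 10x10 frame shape is an assumption.

import torch
from torch.utils.data import TensorDataset, random_split

# 800 tactile frames (100 per texture class); assumed stored as a tensor of
# shape (800, 1, 10, 10) together with integer labels 0..7.
frames, labels = torch.load("tactile_dataset.pt")
dataset = TensorDataset(frames, labels)

# 90% training / 10% validation, as described above.
n_train = int(0.9 * len(dataset))
train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])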

IV-B Experimental Results

The results of the CNN training are shown in Fig. 6.

Fig. 6: (a) Training loss. (b) Training results and test validation of the developed CNN model. The accuracy does not increase after 12 epochs.

The training was conducted on a computer with an NVIDIA Tesla V100 GPU. The validation accuracy of our trained CNN-based model equals 74.37%. After 12 epochs of training, the accuracy does not change. The learning time was 12.6 for the CNN-based model. From the experiment, we conclude that the prediction for larger spheres is the same as for the cylindrical pattern. Line patterns demonstrated a high prediction rate of 90%.
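
A minimal training loop consistent with this setup (8-class cross-entropy classification, stopping after 12 epochs) is sketched below, reusing TactileCNN and train_set from the earlier sketches; the optimizer, learning rate, and batch size are assumptions, as the paper does not specify them.

import torch
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TactileCNN().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)    # assumed settings
criterion = torch.nn.CrossEntropyLoss()
loader = DataLoader(train_set, batch_size=32, shuffle=True)  # assumed batch size

model.train()
for epoch in range(12):  # accuracy plateaued after 12 epochs in the paper
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()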

V Conclusions and Future Work

A novel quadruped robot, DogTouch, was developed to leverage the tactile sensing of the robotic leg for surface detection. The proposed CNN-driven tactile perception system, using data from tactile pressure sensors, recognizes different textured patterns under the foot of the quadruped robot in 74.37% of cases on average. The highest prediction rate of 90% is achieved for the line texture pattern. The neural network has sufficient accuracy in texture recognition. Each sample introduced into the network is a part of a particular surface type with a certain texture. Cases where the sample contains several types of texture were not taken into account, but they are possible during a continuous walk over an area with a mixture of textures. With some modifications to the neural network presented in the paper (e.g., separating samples with texture mixes into additional classes, or segmenting texture by type and adding dropout layers to reduce overfitting), higher accuracy of texture prediction could be achieved in the future.

The proposed DogTouch technology can potentially improve the robustness of legged robot navigation considerably, regardless of lighting conditions. Leveraging the sense of touch, such robots can navigate in unknown environments by reading information from tactile paving textured surfaces. Additionally, robots will be capable of adapting their gait to the detected type of surface to avoid slippage.

References