"Touching to See" and "Seeing to Feel": Robotic Cross-modal Sensory Data Generation for Visual-Tactile Perception

02/17/2019
by Jet-Tsyn Lee, et al.

The integration of visual and tactile stimuli is common when humans perform daily tasks. In contrast, unimodal visual or tactile perception limits the perceivable dimensionality of a subject. However, integrating visual and tactile perception to facilitate robotic tasks remains a challenge. In this paper, we propose a novel framework for cross-modal sensory data generation for visual and tactile perception. Taking texture perception as an example, we apply conditional generative adversarial networks (GANs) to generate pseudo visual images or tactile outputs from data of the other modality. Extensive experiments on the ViTac dataset of cloth textures show that the proposed method can produce realistic outputs from inputs of the other sensory modality. We adopt the structural similarity (SSIM) index to measure the similarity between the generated outputs and real data, and the results confirm that realistic data have been generated. A classification evaluation further shows that including the generated data improves perception performance. The proposed framework has the potential to expand datasets for classification tasks, to generate sensory outputs that are not easy to access, and to advance integrated visual-tactile perception.
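The abstract evaluates generated images against real data with the structural similarity (SSIM) index. As an illustration of what that metric measures, here is a minimal sketch of a global (single-window) SSIM in NumPy; the paper's exact evaluation pipeline and window settings are not given here, and the image shapes and constants below are illustrative assumptions.

```python
import numpy as np

def ssim(x, y, data_range=1.0):
    """Simplified global SSIM: computed over the whole image
    rather than over sliding windows (illustrative sketch)."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM definition
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Toy comparison: an identical copy scores ~1.0, a degraded copy scores lower.
rng = np.random.default_rng(0)
real = rng.random((64, 64))
identical = real.copy()
noisy = np.clip(real + 0.2 * rng.standard_normal((64, 64)), 0.0, 1.0)

print(ssim(real, identical))  # close to 1.0 for identical images
print(ssim(real, noisy))      # noticeably lower for the noisy version
```

In practice SSIM is computed over local windows (e.g. an 11x11 Gaussian window) and averaged; library implementations such as scikit-image's `structural_similarity` handle this, but the global form above conveys the same luminance/contrast/structure comparison.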


Related research

07/12/2021: Visual-Tactile Cross-Modal Data Generation using Residue-Fusion GAN with Feature-Matching and Perceptual Losses
Existing psychophysical studies have revealed that the cross-modal visua...

06/14/2019: Connecting Touch and Vision via Cross-Modal Prediction
Humans perceive the world using multi-modal sensory inputs such as visio...

09/18/2022: VisTaNet: Attention Guided Deep Fusion for Surface Roughness Classification
Human texture perception is a weighted average of multi-sensory inputs: ...

12/28/2021: Multimodal perception for dexterous manipulation
Humans usually perceive the world in a multimodal way that vision, touch...

01/17/2023: Vis2Hap: Vision-based Haptic Rendering by Cross-modal Generation
To assist robots in teleoperation tasks, haptic rendering which allows h...

04/19/2019: Listen to the Image
Visual-to-auditory sensory substitution devices can assist the blind in ...

05/04/2023: Controllable Visual-Tactile Synthesis
Deep generative models have various content creation applications such a...
