Multi-Modal Data Fusion in Enhancing Human-Machine Interaction for Robotic Applications: A Survey
Human-machine interaction has been studied for several decades, with new applications emerging every day. One major goal that remains to be achieved is designing interaction that resembles how humans interact with one another. There is therefore a need for interactive systems that enable more realistic and natural human-machine interaction, and developers and researchers need to be aware of the state-of-the-art methodologies used to pursue this goal. This survey provides researchers with an overview of state-of-the-art data fusion technologies that combine multiple inputs to accomplish tasks in the robotic application domain. Input data modalities are broadly classified into uni-modal and multi-modal systems, and their applications in a range of industries are discussed, including health care, where multi-modal interfaces can help professionals examine patients using different modalities. Multi-modal systems are distinguished by the combination of inputs they use, e.g., gestures, voice, sensors, and haptic feedback; these inputs may or may not be fused, which provides a further classification of multi-modal systems. The survey concludes with a summary of the technologies in use for multi-modal systems.
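To make the fused/unfused distinction concrete, the sketch below contrasts two common fusion strategies: early fusion (concatenating per-modality feature vectors before classification) and late fusion (combining per-modality classifier scores). All function names, feature dimensions, and weights are illustrative assumptions, not taken from the survey.

```python
import numpy as np

# Illustrative early fusion: concatenate per-modality feature vectors
# into one joint representation (dimensions are hypothetical).
def early_fusion(gesture_feat, voice_feat):
    """Fuse two modalities into a single feature vector by concatenation."""
    return np.concatenate([gesture_feat, voice_feat])

# Illustrative late fusion: weighted average of per-modality
# class-probability scores, renormalized to a distribution.
def late_fusion(gesture_probs, voice_probs, weights=(0.5, 0.5)):
    """Combine per-modality classifier outputs by weighted averaging."""
    fused = weights[0] * gesture_probs + weights[1] * voice_probs
    return fused / fused.sum()

gesture_feat = np.array([0.2, 0.8, 0.1])   # e.g., a hand-pose descriptor
voice_feat   = np.array([0.5, 0.3])        # e.g., a speech-command embedding
print(early_fusion(gesture_feat, voice_feat))  # joint 5-dim feature vector

gesture_probs = np.array([0.7, 0.2, 0.1])  # per-class scores, gesture model
voice_probs   = np.array([0.6, 0.3, 0.1])  # per-class scores, voice model
print(late_fusion(gesture_probs, voice_probs))
```

An unfused multi-modal system, by contrast, would act on each modality's decision independently (e.g., voice for commands, gesture for pointing) without combining their representations or scores.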