A methodology for multisensory product experience design using cross-modal effect: A case of SLR camera

07/07/2019
by   Takuma Maki, et al.

Throughout a product experience, a user employs multiple senses, including vision, hearing, and touch. Previous cross-modal studies have shown that these senses interact with each other and alter perception. In this paper, we propose a methodology for designing multisensory product experiences by applying cross-modal effects to simultaneous stimuli. In this methodology, we first obtain a model of the comprehensive cognitive structure of the user's multisensory experience by applying Kansei modeling methodology, and we extract opportunities for cross-modal effects from that structure. Second, we conduct experiments on these cross-modal effects and formulate them by fitting a regression curve to the results. Finally, we derive solutions for improving the product's sensory experience from the regression models of the target cross-modal effects. We demonstrate the validity of the methodology in a case study of SLR cameras, a typical product involving multisensory perception.
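The "formulate them by fitting a regression curve" step above can be sketched as follows. This is a minimal illustration, not the paper's actual analysis: the stimulus levels, panel ratings, and the choice of a quadratic model are all hypothetical placeholders for whatever cross-modal effect and experimental design the methodology selects.

```python
import numpy as np

# Hypothetical experiment: a visual stimulus level (e.g., normalized
# intensity of some visual cue, 0-1) is varied while subjects rate a
# perceived auditory quality of the product on a 1-5 scale.
visual_level = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
mean_rating = np.array([3.1, 3.6, 4.2, 4.5, 4.7])  # panel means (fabricated)

# Fit a quadratic regression curve relating stimulus level to perception,
# analogous to formulating a cross-modal effect as a regression model.
coeffs = np.polyfit(visual_level, mean_rating, deg=2)
model = np.poly1d(coeffs)

# The fitted model can then be queried at untested stimulus levels to
# search for design solutions that improve the target perception.
predicted = float(model(0.6))
print(f"predicted rating at level 0.6: {predicted:.2f}")
```

A designer would then invert or optimize such a model (e.g., find the stimulus level maximizing the rating) to pick concrete design parameters.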

