Vibrotactile Feedback for Vertical 2D Space Exploration

03/31/2020
by Lancelot Dupont, et al.
IRIT

Visually impaired people encounter many challenges in their everyday life, especially when it comes to navigating and representing space. Shopping is mostly addressed at the level of navigation and product detection, but conveying cues about an object's position to the user is rarely implemented. This work presents a prototype vibrotactile wristband that uses spatiotemporal patterns to help visually impaired users reach an object in the 2D plane in front of them. A pilot study with twelve blindfolded sighted subjects showed that discretizing space into a seven-by-seven target matrix and conveying cues with a discrete pattern on the vertical axis and a continuous pattern on the horizontal axis is an intuitive and effective design.



1. Introduction

Visual impairment is a major obstacle to a person's autonomy in everyday life, especially in relation to space. Much work has been carried out, and more is underway, to help navigation in space (Aggravi et al., 2016; Gharani and Karimi, 2017; Johnson and Higgins, 2006), but the problem of nearby space has received less attention. A task like shopping, however, requires both aspects: first the user needs to navigate through the shop, and then precisely locate a specific object. That kind of task is perceived as particularly difficult by people whose eyesight is severely impaired (Lamoureux et al., 2004).

According to Kostyra et al. (Kostyra et al., 2017), around a third of visually impaired participants indicated that they shop for basic necessities online. The majority travel to the store, and 72% said they do it alone. But the limited availability of store staff and the lack of braille information on packaging make this task frustrating and time-consuming. Locating one object precisely among others within reach can be facilitated by assistive devices, but few solutions exist to overcome these problems. Text-to-speech applications allow labels to be read, but this does not make the search for a product faster. Some devices already offer guidance through the shelves and transmit product information to the customer via audio feedback, for example BlindShopping (López-de-Ipiña et al., 2011). But guidance for the final reach toward the object is rarely provided, and when it is, it usually relies on audio, which masks the ambient sounds that visually impaired people need to understand their environment.

In this paper, we propose a wearable device using vibrotactile feedback that provides spatial information to precisely locate an object close to the user in a reliable, discreet, and rapid manner that does not mask ambient sound cues. In our study, we considered two main factors. First, the division of space into discrete circular positions, hereinafter called Target Density. Second, the method of encoding the information transmitted along the vertical and horizontal axes, designated below as the "combination of vibration codes" or Encoding. A target position can be encoded in either a discrete or continuous manner.

Our results suggest that the best-performing encoding is a hybrid combination, relying on discrete encoding on the vertical axis and continuous encoding on the horizontal one. The contribution of this paper is two-fold: (1) a wearable device with four vibration motors that can encode the position of objects in space, and (2) the results of a user study in which we compare three Target Densities and four Encodings.

2. Related Work

2.1. Challenges for Visually Impaired People

A study by Papadopoulos et al. (Papadopoulos et al., 2011) on the concerns of visually impaired children and their parents showed that the area most affected by their disability is life skills, compared to communication and social skills. It also showed that navigation ability is an indicator of performance, and of developmental delay, in this area. In addition, autonomous navigation would allow children to participate in more social activities and improve their social skills. For these reasons, mobility has been at the center of research and development on assistive devices in recent decades. Actions that are simple for a sighted person, such as keeping a direction while walking, impose a high cognitive load in the absence of visual feedback.

2.2. Vibrotactile Guidance

Navigation in space is a well-studied problem among researchers working on visual impairment. Many navigation aids based on vibrotactile feedback have been developed, which suggests that this method is effective. Kammoun et al. (Kammoun et al., 2012) use two vibrating bracelets to help a blind person maintain a direction while walking. Results were encouraging despite some confusion due to the vibration encodings. Lim et al. (Lim et al., 2015) offer another example of guidance based purely on haptic feedback. Their study showed that it was possible to guide a sighted person in a shopping center using only vibration cues delivered via a smartwatch and a mobile phone. Dobbelstein et al. (Dobbelstein et al., 2016) sought to accomplish the same goal while transmitting less information: only the general direction of the destination was communicated through a vibrotactile wristband while the subject was walking in the streets. The study showed a reduced cognitive load for the user, at the cost of navigation problems, e.g. at pedestrian crossings without an audible signal or markings on the ground. Aggravi et al. (Aggravi et al., 2016) developed a guidance system for blind skiers, in which the person guiding the skier sends vibrotactile "left" or "right" instructions of different lengths using buttons in the handles of their poles.

2.3. Locating an Object

The goal of this research is to design a device that helps the visually impaired in an object-search task, e.g. shopping in a supermarket. Surveys have shown that visually impaired people regard shopping centers as the most difficult environments to navigate (Passini and Proulx, 1988). In addition, shopping is reported to be a major problem (Lamoureux et al., 2004) because it requires both information about nearby objects (e.g. reaching the desired object on a shelf) and about the environment as a whole (e.g. finding a particular department in the store). This type of task requires independent and secure mobility, a problem commonly explored in current research, as presented previously. The device presented in this article deals with the second task: reaching the object of your choice once you have arrived at its location. People with tunnel vision have a clear field of vision, but it is reduced to less than 10 degrees at the center of the retina, which deprives them of peripheral vision and therefore makes it difficult and time-consuming to find specific points of interest in a visual scene. The work of Appert et al. (Appert et al., 2015) studied vibrotactile interaction techniques communicating the relative position of a target in the visual frame of reference of the user. Their results show that encoding the distance using discrete vibrations in a Cartesian coordinate system is preferable to encoding with polar coordinates. Our work builds on Appert et al.'s and shows that a hybrid encoding, using both discrete and continuous vibrations in a Cartesian frame of reference, leads to the best results.

3. Proposed Solution

To help visually impaired users locate an object in a vertical 2D plane in front of them, we designed a wearable device that conveys vibrotactile feedback encoding the position of the object.

3.1. Hardware

We designed a wristband using Velcro, which allows us to adjust the device size as well as the locations of the vibration motors. We used four coin-type vibration motors with a diameter of 10 mm (Precision Microdrives, 310-103) operating at 130 Hz. The four motors are distributed equally around the user's wrist: top, left, right, and bottom. Figure 1 shows the wristband.

Figure 1. Hardware prototype used in the experiment. The wires going to the wristband are connected to the top and left vibration motors respectively.

The wristband is connected to an Arduino Uno micro-controller, which receives orders from the experimental software running on the experimenter's laptop. The Arduino Uno is connected to the laptop with a USB cable. We initially considered a wireless connection for the Arduino micro-controller, but the connection was not always stable, so we reverted to a wired solution (Figure 1).

The experimental software handles the workflow of the experiment and displays a view of the targets, allowing the experimenter to perform the target selection with a mouse. It also sends orders to the Arduino to encode specific positions.
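The paper does not document the serial protocol between the experimental software and the Arduino, so the sketch below is only one plausible way to send such orders from Python, assuming a hypothetical 3-byte frame (a start marker plus signed X and Y grid coordinates) and the pyserial package.

```python
# Hypothetical host-side sketch: send a target position to the Arduino.
# The frame layout (start marker + signed X/Y grid coordinates) is an
# assumption; the paper does not specify the actual protocol.
import serial
import struct

START_MARKER = 0x7E  # assumed frame delimiter

def send_target(port: serial.Serial, x: int, y: int) -> None:
    """Pack a target's grid coordinates and ship them to the Arduino."""
    port.write(struct.pack("Bbb", START_MARKER, x, y))

if __name__ == "__main__":
    # /dev/ttyACM0 and 9600 baud are typical Arduino Uno defaults.
    with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as arduino:
        send_target(arduino, x=2, y=-1)  # two columns right, one row down
```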

3.2. Vibrotactile Encoding

We encode the position of an object in a 2D frame using X and Y coordinates. The center of the 2D plane is located at the level of the user's sternum, and every target position is encoded relative to the origin of the frame (Figure 2). Each of the four motors codes a different direction: Up, Left, Right, Down. Following Appert et al. (Appert et al., 2015), we encoded the coordinates using either a discrete or a continuous method on both axes. For the discrete encoding, we used a varying number of pulses, where a low number of pulses represents a target closer to the origin. In that encoding, pulses last exactly 200 milliseconds, with a 120 millisecond pause between them. For the continuous encoding, we varied the duration of the vibration between 200 and 1300 milliseconds, with a shorter vibration indicating a position closer to the origin of the frame.
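To make the two codes concrete, the sketch below computes the pulse schedule for one axis using the timings above. Two details are our assumptions, not the paper's: the continuous code is taken to be a single pulse, and its duration is interpolated linearly between 200 and 1300 ms with the coordinate magnitude.

```python
# Per-axis encoding sketch. Timings come from the paper; the single-pulse
# and linear-interpolation choices for the continuous code are assumptions.

DISCRETE_PULSE_MS = 200
DISCRETE_PAUSE_MS = 120
CONTINUOUS_MIN_MS = 200
CONTINUOUS_MAX_MS = 1300

def discrete_schedule(magnitude: int) -> list[tuple[str, int]]:
    """One 200 ms pulse per grid step, separated by 120 ms pauses."""
    schedule = []
    for i in range(magnitude):
        schedule.append(("vibrate", DISCRETE_PULSE_MS))
        if i < magnitude - 1:
            schedule.append(("pause", DISCRETE_PAUSE_MS))
    return schedule

def continuous_schedule(magnitude: int, max_magnitude: int) -> list[tuple[str, int]]:
    """A single pulse whose duration grows with the distance from the origin."""
    span = CONTINUOUS_MAX_MS - CONTINUOUS_MIN_MS
    duration = CONTINUOUS_MIN_MS + round(span * magnitude / max_magnitude)
    return [("vibrate", duration)]

# Example: third column in a seven-by-seven grid (maximum offset of 3).
print(discrete_schedule(3))      # three 200 ms pulses with 120 ms gaps
print(continuous_schedule(3, 3)) # one 1300 ms pulse
```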

Figure 2. The three target densities used in the experiment.

The encoding is transmitted sequentially: the vertical encoding is sent first and the horizontal encoding afterwards, with an 800 millisecond pause between them. We avoided sending the encodings simultaneously, following guidelines from Carcedo et al. (Carcedo et al., 2016). In our experiment, we used an odd number of targets on both axes, resulting in some targets having an X or Y coordinate of 0 (e.g. the origin). For these special cases, the system sends a brief 120 ms pulse on each of the two motors encoding the axis, with 40 ms in between. The origin point is thus coded with a 120 ms pulse on the top motor, a 40 ms pause, a 120 ms pulse on the bottom motor, an 800 ms pause, a 120 ms pulse on the left motor, a 40 ms pause, and a 120 ms pulse on the right motor. The intensity of the vibration is always the same (100% PWM, i.e. the maximum intensity available).
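Putting these rules together, the following sketch builds the complete stimulus for a target: vertical axis first, an 800 ms pause, then the horizontal axis, with the double-pulse special case for zero coordinates. It is an illustration, not the authors' firmware, and uses the discrete code on both axes for brevity.

```python
# Full-target stimulus sketch (discrete code on both axes for brevity).
# Timings are taken from the paper; the event-list representation is ours.

INTER_AXIS_PAUSE_MS = 800
ZERO_PULSE_MS, ZERO_GAP_MS = 120, 40
PULSE_MS, GAP_MS = 200, 120

def zero_axis(first: str, second: str) -> list[tuple[str, int]]:
    """A coordinate of 0 is signalled by a brief pulse on both axis motors."""
    return [(first, ZERO_PULSE_MS), ("pause", ZERO_GAP_MS),
            (second, ZERO_PULSE_MS)]

def axis_events(coord: int, pos_motor: str, neg_motor: str) -> list[tuple[str, int]]:
    """Discrete code for a non-zero coordinate: |coord| pulses on one motor."""
    motor = pos_motor if coord > 0 else neg_motor
    events: list[tuple[str, int]] = []
    for i in range(abs(coord)):
        events.append((motor, PULSE_MS))
        if i < abs(coord) - 1:
            events.append(("pause", GAP_MS))
    return events

def target_events(x: int, y: int) -> list[tuple[str, int]]:
    """Vertical axis first, an 800 ms pause, then the horizontal axis."""
    vertical = zero_axis("top", "bottom") if y == 0 else axis_events(y, "top", "bottom")
    horizontal = zero_axis("left", "right") if x == 0 else axis_events(x, "right", "left")
    return vertical + [("pause", INTER_AXIS_PAUSE_MS)] + horizontal

print(target_events(0, 0))  # reproduces the origin sequence described above
```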

4. Experiment

We ran an experiment to understand which encoding allows blindfolded users to be the most accurate for different target densities.

4.1. Participants

Twelve participants (9 female, 11 right-handed), aged 19 to 33, were recruited from the university community and were given the equivalent of 4 euros for their participation. For this study we used blindfolded participants, as their behavior and spatial acuity are similar to those of late-blind users.

4.2. Task and Stimuli

At the start of each trial, blindfolded participants were instructed to put their dominant hand on the origin of the frame, which had a tactile mark to make it easy to find. The software would then send the encoding for the target they had to reach. The borders of the frame also had tactile marks, making it easier for participants to find the extremities. Participants would then move their hand towards the target and notify the experimenter when they thought the position of their hand was accurate, stating the expected position of the target aloud (e.g. "two up, one left"). This allowed the experimenter to classify potential errors as either an incorrect interpretation of the encoding or simply a positional error (i.e. the hand not reaching the exact position). Participants were allowed to receive the encoding a second time. As soon as the participant felt confident in their choice, the experimenter would stop the timer by clicking on the selected target in the experimental software.

4.3. Procedure

At the beginning of the experiment, participants answered a demographic questionnaire. The experimenter would then explain the purpose of the study, help the participant put on the wristband, and adjust the location of the motors. The experimenter would then send a pulse to each of the four motors and ask the participant to correctly identify the position of each motor in space. After that, the participant was blindfolded and placed in front of the frame, with the position of the frame adjusted to the sternum. Participants stood at a distance that allowed them to easily reach all four corners of the frame with their arm in a comfortable position.

Participants were then trained on each encoding scheme and proceeded through the three target densities. Within each target density, all 4 vibrotactile encodings were tested. There was a short 2 second break between trials. Each condition comprised one block of 9 trials. The 9 targets of each block were randomly selected from the following 9 areas: top left, top central column, top right, central row left, origin, central row right, bottom left, bottom central column, bottom right.

4.4. Design

We used a within-subjects design with two independent variables: Target Density (three levels) and Encoding (Vertical-Discrete/Horizontal-Discrete (VDHD), Vertical-Discrete/Horizontal-Continuous (VDHC), Vertical-Continuous/Horizontal-Discrete (VCHD), Vertical-Continuous/Horizontal-Continuous (VCHC)). The order of presentation of both independent variables was counterbalanced using a Latin square. The experiment was divided into 12 blocks (1 per condition), with 9 trials per block.
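The paper states only that presentation order was counterbalanced with a Latin square; one standard construction, a balanced Latin square for an even number of conditions, is sketched below and applied to the four Encodings as an illustration. Whether the authors used this exact construction is an assumption.

```python
# Balanced Latin square sketch for counterbalancing condition order.
# With an even n, each condition immediately follows every other exactly once.

def balanced_latin_square(n: int) -> list[list[int]]:
    """Rows are per-participant orders over n conditions (n must be even)."""
    def first_row(j: int) -> int:
        # Interleaved pattern 0, 1, n-1, 2, n-2, ... for the first row.
        return (j + 1) // 2 if j % 2 else (n - j // 2) % n
    return [[(first_row(j) + i) % n for j in range(n)] for i in range(n)]

ENCODINGS = ["VDHD", "VDHC", "VCHD", "VCHC"]
square = balanced_latin_square(len(ENCODINGS))
for participant in range(12):
    order = [ENCODINGS[k] for k in square[participant % len(ENCODINGS)]]
    print(f"P{participant + 1:02d}: {order}")
```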

We measured the time to acquire a target, as well as accuracy, as dependent variables. A trial was considered successful if the index finger of the participant was within the circle representing the target. Participants were encouraged to take breaks between blocks and completed the experiment in approximately 60 minutes. Our overall design was as follows: 12 participants × 3 Target Densities × 4 Encodings × 1 block per condition × 9 trials = 1,296 trials.

4.5. Results

We used a two-way ANOVA with repeated measures on all factors to test for main effects on time, and a Friedman test for accuracy. For post-hoc comparisons we used pairwise t-tests with Bonferroni correction for time, and pairwise Wilcoxon tests with Bonferroni-Holm correction for accuracy.
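As a sketch of how this pipeline could be reproduced, the snippet below assumes a long-format trial table with hypothetical column names (participant, density, encoding, time, correct); it is one possible implementation using pandas, scipy, and pingouin, not the authors' actual analysis script.

```python
# Hypothetical analysis sketch; the data layout and column names are assumptions.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import friedmanchisquare

df = pd.read_csv("trials.csv")  # columns: participant, density, encoding, time, correct
df["log_time"] = np.log(df["time"])  # log-transform the skewed time data

# Two-way repeated-measures ANOVA on log-transformed times.
print(pg.rm_anova(data=df, dv="log_time", within=["density", "encoding"],
                  subject="participant"))

# Post-hoc pairwise t-tests with Bonferroni correction for time.
print(pg.pairwise_tests(data=df, dv="log_time", within=["density", "encoding"],
                        subject="participant", padjust="bonf"))

# Friedman test on per-participant mean accuracy for the Encoding factor.
acc = df.groupby(["participant", "encoding"])["correct"].mean().unstack()
print(friedmanchisquare(*[acc[c] for c in acc.columns]))

# Post-hoc pairwise Wilcoxon tests with Holm correction for accuracy.
print(pg.pairwise_tests(data=df, dv="correct", within="encoding",
                        subject="participant", parametric=False, padjust="holm"))
```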

4.5.1. Time

Our time data did not follow a normal distribution, so we applied a logarithmic transformation before performing the ANOVA. After transforming the data, we found a significant main effect of Target Density on time, and post-hoc analysis showed significant pairwise differences between all three densities. We did not find any significant main effect of Encoding, with average time performance ranging from 4.19 s (VDHD) to 4.33 s (VDHC), nor any interaction. Time performance is summarized in Figure 3.

Figure 3. Average trial time for each Target Density and Encoding condition. Error bars show 95% confidence intervals.

4.5.2. Accuracy

Our participants were quite accurate overall, with an average accuracy of 87%. Of the 13% error rate, 6% was due to an incorrect understanding of the encoding (e.g. top vs. bottom or left vs. right confusions) and 7% to positional errors. We found a significant main effect of Target Density on accuracy: participants were significantly more accurate in the lowest density than in the two denser conditions. We also found a significant main effect of Encoding on accuracy: post-hoc analysis revealed that participants tended to be more accurate in the VDHC condition than in VCHC and VCHD. Accuracy performance is illustrated in Figure 4.

Figure 4. Average accuracy for each Target Density and Encoding condition. Error bars show 95% confidence intervals.

5. Discussion

Overall, our results suggest that our participants were able to accurately locate targets in space with a low error rate. In the highest density, participants located a circular target of 9 cm radius nearly 4 times out of 5 (78%) with limited training. Accuracy was even better in the two lower densities (96% and 85%), which correspond to target radii of 16 cm and 11.5 cm respectively. None of the encodings we designed seemed to allow participants to identify a target faster; however, the Vertical-Discrete/Horizontal-Continuous (VDHC) encoding led to significantly higher accuracy overall.

Discretizing vertical space is convenient in any scenario where an object is located on a shelf, as there is a natural mapping between a number of pulses and a "shelf number", which discretizes the physical space as well. For that reason, VDHC seems like the best choice for real-life usage. Our participants reported that they generally preferred techniques where the information was discrete as opposed to continuous, with their favorite technique being VDHD, as counting pulses is easier than mapping a time duration to a coordinate. This result is in line with Carcedo et al. (Carcedo et al., 2016).

6. Limitations and Future Work

This experiment is a first step toward a new generation of assistive devices that can accurately help visually impaired users locate objects in a 2D frame in front of them. In this experiment, we used blindfolded users, whose behavior can be compared to that of late-blind users, but future work should involve visually impaired users to confirm our results. In addition, we used a physical apparatus with tactile marks on the edges and center of the frame, which may be hard to replicate in real life. Finally, this study only investigates the best type of vibrotactile feedback to guide users; it provides neither a recovery mechanism for when a wrong target is selected nor an input solution for recognizing specific objects.

We believe that our results are encouraging and provide an encoding system that users can quickly learn in order to better understand the space in front of them.

7. Conclusion

In this paper, we developed a wristband prototype embedding four vibration motors. In a controlled experiment, we compared multiple vibrotactile encodings for locating targets in a 2D frame in front of the user. Our results suggest that blindfolded participants are able to accurately locate objects with an 11.5 to 16 cm radius in that space with more than 85% accuracy. More specifically, we showed that the most accurate encoding uses a discrete encoding for the vertical axis and a continuous encoding for the horizontal axis, which maps well onto scenarios where the user is trying to find objects on shelves.

References

  • M. Aggravi, G. Salvietti, and D. Prattichizzo (2016) Haptic assistive bracelets for blind skier guidance. In Proceedings of the 7th Augmented Human International Conference 2016, AH ’16, New York, NY, USA. External Links: ISBN 9781450336802, Link, Document Cited by: §1, §2.2.
  • D. Appert, D. Camors, J. Durand, and C. Jouffrais (2015) Tactile cues for improving target localization in subjects with tunnel vision. In Proceedings of the 27th Conference on l’Interaction Homme-Machine, IHM ’15, New York, NY, USA. External Links: ISBN 9781450338448, Link, Document Cited by: §2.3, §3.2.
  • M. G. Carcedo, S. H. Chua, S. Perrault, P. Wozniak, R. Joshi, M. Obaid, M. Fjeld, and S. Zhao (2016) HaptiColor: interpolating color information as haptic feedback to assist the colorblind. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, New York, NY, USA, pp. 3572–3583. External Links: ISBN 9781450333627, Link, Document Cited by: §3.2, §5.
  • D. Dobbelstein, P. Henzler, and E. Rukzio (2016) Unconstrained pedestrian navigation based on vibro-tactile feedback around the wristband of a smartwatch. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA ’16, New York, NY, USA, pp. 2439–2445. External Links: ISBN 9781450340823, Link, Document Cited by: §2.2.
  • P. Gharani and H. A. Karimi (2017) Context-aware obstacle detection for navigation by visually impaired. Image and Vision Computing 64, pp. 103 – 115. External Links: ISSN 0262-8856, Document, Link Cited by: §1.
  • L. A. Johnson and C. M. Higgins (2006) A navigation aid for the blind using tactile-visual sensory substitution. In 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, Vol. , pp. 6289–6292. External Links: Document, ISSN 1557-170X Cited by: §1.
  • S. Kammoun, C. Jouffrais, T. Guerreiro, H. Nicolau, and J. Jorge (2012) Guiding blind people with haptic feedback. Frontiers in Accessibility for Pervasive Computing (Pervasive 2012) 3. Cited by: §2.2.
  • E. Kostyra, S. Żakowska-Biemans, K. Śniegocka, and A. Piotrowska (2017) Food shopping, sensory determinants of food choice and meal preparation by visually impaired people. obstacles and expectations in daily food experiences. Appetite 113, pp. 14 – 22. External Links: ISSN 0195-6663, Document, Link Cited by: §1.
  • E. L. Lamoureux, J. B. Hassell, and J. E. Keeffe (2004) The determinants of participation in activities of daily living in people with impaired vision. American Journal of Ophthalmology 137 (2), pp. 265 – 270. External Links: ISSN 0002-9394, Document, Link Cited by: §1, §2.3.
  • H. Lim, Y. Cho, W. Rhee, and B. Suh (2015) Vi-bros: tactile feedback for indoor navigation with a smartphone and a smartwatch. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA ’15, New York, NY, USA, pp. 2115–2120. External Links: ISBN 9781450331463, Link, Document Cited by: §2.2.
  • D. López-de-Ipiña, T. Lorido, and U. López (2011) BlindShopping: enabling accessible shopping for visually impaired people through mobile technologies. In Toward Useful Services for Elderly and People with Disabilities, B. Abdulrazak, S. Giroux, B. Bouchard, H. Pigot, and M. Mokhtari (Eds.), Berlin, Heidelberg, pp. 266–270. External Links: ISBN 978-3-642-21535-3 Cited by: §1.
  • K. Papadopoulos, K. Metsiou, and I. Agaliotis (2011) Adaptive behavior of children and adolescents with visual impairments. Research in Developmental Disabilities 32 (3), pp. 1086 – 1096. External Links: ISSN 0891-4222, Document, Link Cited by: §2.1.
  • R. Passini and G. Proulx (1988) Wayfinding without vision: an experiment with congenitally totally blind people. Environment and Behavior 20 (2), pp. 227–252. External Links: Document, Link Cited by: §2.3.