Gaze-Driven Adaptive Interventions for Magazine-Style Narrative Visualizations

09/03/2019 ∙ by Sébastien Lallé, et al.

In this paper we investigate the value of gaze-driven adaptive interventions to support processing of textual documents with embedded visualizations, i.e., Magazine-Style Narrative Visualizations (MSNVs). These interventions are provided dynamically by highlighting relevant data points in the visualization when the user reads related sentences in the MSNV text, as detected by an eye-tracker. We conducted a user study in which participants read a set of MSNVs with our interventions, and compared their performance and experience with those of participants who received no interventions. Our work extends previous findings by showing that dynamic, gaze-driven interventions can be delivered based on reading behaviors in MSNVs, a widespread form of document that had not previously been considered for gaze-driven adaptation. We found that the interventions significantly improved the performance of users with low levels of visualization literacy, i.e., the users who need help the most due to their lower ability to process and understand data visualizations. High-literacy users, however, were not affected by the interventions, providing initial evidence that gaze-driven interventions can be further improved by personalizing them to users' levels of visualization literacy.
