How Mock Model Training Enhances User Perceptions of AI Systems

11/16/2021
by Amama Mahmood, et al.
Johns Hopkins University

Artificial Intelligence (AI) is an integral part of our daily technology use and will likely be a critical component of emerging technologies. However, negative user preconceptions may hinder adoption of AI-based decision making. Prior work has highlighted the potential of factors such as transparency and explainability in improving user perceptions of AI. We further contribute to work on improving user perceptions of AI by demonstrating that bringing the user into the loop through mock model training can improve their perceptions of an AI agent's capability and their comfort with the possibility of using technology employing the AI agent.


1 Motivation and Background

Fueled by increasingly available user data, growing computing power, and recent advances in machine learning, Artificial Intelligence (AI) technologies are transforming our society and daily lives. However, users’ negative preconceptions of AI may hinder adoption and continued use of AI technologies. Negative user preconceptions can affect user trust, which is a key factor in determining acceptance of technology (Venkatesh et al., 2003). Inadequate user trust can in turn lead to misuse (i.e., inappropriate reliance on technology) and disuse (i.e., underutilization of technology due to rejection of its capability) (Parasuraman and Riley, 1997; Lee and See, 2004). To enhance user perceptions of AI systems, previous research has investigated AI transparency, explainability, and interpretability (e.g., Adadi and Berrada, 2018; Arrieta et al., 2020), as modern machine learning methods are largely black boxes (Holzinger et al., 2018; Castelvecchi, 2016). For example, prior work has explored how visualization may aid user understanding of how machine learning models work (e.g., Samek et al., 2019; Choo and Liu, 2018). Explanations of these models and justifications for decisions made by intelligent machines help users understand their inner workings once they begin interacting with the AI technologies. In this work, we explore how to improve users’ existing preconceptions of AI agents prior to any interactions with the agents.

Simulated setups, such as mock trials, mock interviews, and drills, have been used as low-cost, hands-on tools in early training phases to help people become accustomed to unfamiliar practices and processes before engaging in them. Similarly, we explore whether mock interactions in which users label training data for AI models can modulate users’ confidence in AI agents’ capabilities and their comfort with the possibility of using technologies that employ those agents, before any real interactions with the agents take place. We contextualize our exploration within the scenario of training AI agents for use in autonomous vehicles, a safety-critical domain that is likely to involve interactions with everyday users. Our findings indicate that users’ perceptions of AI agents improved through participation in mock model training, especially when they were able to precisely label objects that they perceived to be important.

Figure 1: We explore how mock model training involving various data labeling strategies may affect users’ perceptions of AI agents posed as driving assistants.

2 Methods

2.1 Experimental Design, Task, and Conditions

We conducted a within-subjects study that consisted of four experimental conditions (Figure 1). The study was contextualized within the scenario of labeling images to train four AI agents to perform driving-related object identification:

  • A1: To train this agent, the participant was presented with a grid of images that included five positive examples for each of six item categories commonly encountered during driving: stop sign, speed limit sign, traffic light, car, bicyclist, and pedestrian. This labeling process is similar to image selection tasks commonly used in web security checks. It represents low labeling precision (i.e., the user did not localize the object within the image) and passive labeling (i.e., the user only labeled items for the requested categories). It is analogous to binary object detection (i.e., indicating whether or not a specified item is present).

  • A2: The participant followed a labeling process similar to that of A1, with the additional task of drawing bounding boxes around the target item in all images. This process represents high labeling precision and is analogous to binary object recognition.

  • A3: In training this agent, the participant was provided with a set of individual images for labeling. For each image, the user was prompted to list, via free-text entry, all items within the image that they considered relevant, and was free to specify as many item categories as they wanted. This process represents low labeling precision and active labeling (i.e., the user freely chose which items to label) and is analogous to multiple object detection.

  • A4: Similarly to the training task for A3, the participant was prompted to draw bounding boxes around all items that they considered to be relevant and to specify the associated labels via text within each image in the set. This process represents high labeling precision and is analogous to multiple object recognition.

We also presented a baseline pre-trained agent to the participant at the beginning of the study. The participant was able to review the images used to train this agent. We used this baseline condition as a reference for measuring users’ preconceptions of an AI agent without mock training. A data-structure sketch of the label formats produced under each condition follows.
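To make the four conditions concrete, the following Python sketch illustrates one way the label formats produced under A1–A4 could be represented. The class and field names are hypothetical and are not taken from the study's actual data schema.

```python
# A hypothetical sketch of the label formats produced under the four mock-training
# conditions (A1-A4). Names and fields are illustrative, not the study's schema.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# (x_min, y_min, x_max, y_max) in pixel coordinates
BoundingBox = Tuple[int, int, int, int]

@dataclass
class A1Label:
    """Passive, low-precision: per-image yes/no for one requested category."""
    image_id: str
    category: str          # e.g., "stop sign"
    present: bool          # analogous to binary object detection

@dataclass
class A2Label:
    """Passive, high-precision: a bounding box around the requested item."""
    image_id: str
    category: str
    box: BoundingBox       # analogous to binary object recognition

@dataclass
class A3Label:
    """Active, low-precision: free-text list of items the user deems relevant."""
    image_id: str
    categories: List[str]  # analogous to multiple object detection

@dataclass
class A4Label:
    """Active, high-precision: a box plus a free-text label for every relevant item."""
    image_id: str
    annotations: Dict[str, List[BoundingBox]] = field(default_factory=dict)
    # analogous to multiple object recognition

# Example: one participant's labels for a single (hypothetical) image under each condition.
if __name__ == "__main__":
    print(A1Label("img_042", "stop sign", True))
    print(A2Label("img_042", "stop sign", (112, 80, 198, 170)))
    print(A3Label("img_042", ["stop sign", "pedestrian", "car"]))
    print(A4Label("img_042", {"stop sign": [(112, 80, 198, 170)],
                              "pedestrian": [(300, 95, 340, 210)]}))
```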

2.2 Measures

We used a range of metrics to measure user perceptions that may affect user trust in and adoption of AI technologies. For each trained agent, we computed the difference in comfort, projected capability, and task confidence relative to the baseline, pre-trained agent (i.e., positive values indicate an improvement in user perceptions relative to the baseline). We normalized the data from all questionnaire responses to the [0, 1] range before computing the differences (a minimal sketch of this computation follows the list of measures below).

  • Trustworthiness. Trust was measured through a single question asking which AI agent the participant would trust the most if it were employed in an autonomous vehicle.

  • Comfort. Comfort was measured through a custom scale consisting of six statements (Cronbach’s α) prompting users to rate how comfortable they felt towards a self-driving car employing the trained agent (Appendix A.1).

  • Projected Capability. Projected capability was measured through a custom scale consisting of four statements (Cronbach’s α) prompting participants to rate how capable they felt the self-driving car employing the trained agent to be (Appendix A.2).

  • Task Confidence. To quantify their perception of the AI agent’s performance, we asked participants to rate their confidence (0–100%) in its ability to identify specified items (e.g., stop sign) for a set of 14 images. This set included two images for each of the six item categories (12 images) and two images representing “unseen” items (e.g., no-left-turn sign and pedestrian-crossing sign) that were not included in the object categories used for training agents A1 and A2.
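As a minimal sketch of the normalization and difference-score computation described above (assuming, for illustration, 7-point rating items and simple per-item averaging, neither of which is specified in the paper), the calculation could look like the following:

```python
# A minimal sketch of the normalization and difference-score computation
# described in Section 2.2. The 7-point scale range and per-item averaging
# are assumptions for illustration; the paper does not specify them.
import numpy as np

def scale_score(item_responses: np.ndarray,
                scale_min: float = 1.0,
                scale_max: float = 7.0) -> float:
    """Mean of a participant's item responses after mapping them to [0, 1]."""
    normalized = (item_responses - scale_min) / (scale_max - scale_min)
    return float(normalized.mean())

def improvement_over_baseline(agent_items: np.ndarray,
                              baseline_items: np.ndarray) -> float:
    """Difference score; positive values indicate improved perceptions
    relative to the baseline, pre-trained agent."""
    return scale_score(agent_items) - scale_score(baseline_items)

# Example with made-up comfort ratings (six items each) for one participant.
a4_comfort = np.array([6, 5, 6, 7, 6, 5], dtype=float)
baseline_comfort = np.array([4, 4, 5, 4, 3, 4], dtype=float)
print(improvement_over_baseline(a4_comfort, baseline_comfort))  # > 0 -> improvement
```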

2.3 Procedure

The study consisted of five phases: (1) Introduction and consent. Upon opening the website, participants were briefed about the study and informed that they would be training AI agents to become driving assistants by providing examples of things (e.g., stop signs and pedestrians) that the agents may encounter on the road. (2) Reference. Participants reviewed the images used to train the baseline, pre-trained agent and completed the confidence assessment and perception survey. (3) Labeling training examples for AI agents A1–A4. Participants labeled training data for the four experimental conditions, which were counterbalanced using a Latin square design. (4) Confidence assessment and perception survey. Participants completed the task confidence assessment and the perception survey for the agent they had just trained. They then continued to the next condition, repeating phases 3 and 4. (5) Post-study questionnaire. At the end, participants filled out a post-study questionnaire, which asked which agent they trusted the most and collected demographic information. The study was approved by our institutional review board and took approximately 45 minutes to complete. Participants were compensated with $10 USD upon completion of the study.
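The counterbalancing step in phase (3) could be implemented with a standard balanced Latin square construction for an even number of conditions. The paper states only that a Latin square design was used, so the specific construction below is an assumption for illustration.

```python
# A sketch of how the Latin-square counterbalancing in phase (3) could be
# implemented. The balanced construction below (valid for an even number of
# conditions) is an assumption; the paper specifies only "Latin square design".
from typing import List

CONDITIONS = ["A1", "A2", "A3", "A4"]

def balanced_latin_square(n: int) -> List[List[int]]:
    """Return n orderings in which every condition occupies each position once
    and immediately precedes every other condition equally often (n even)."""
    # First-row offsets follow the pattern 0, 1, n-1, 2, n-2, ...
    offsets = [0]
    low, high = 1, n - 1
    take_low = True
    while len(offsets) < n:
        if take_low:
            offsets.append(low)
            low += 1
        else:
            offsets.append(high)
            high -= 1
        take_low = not take_low
    return [[(row + o) % n for o in offsets] for row in range(n)]

# Assign each participant the ordering indexed by (participant_id % 4).
orders = balanced_latin_square(len(CONDITIONS))
for participant_id in range(8):
    order = orders[participant_id % len(CONDITIONS)]
    print(participant_id, [CONDITIONS[i] for i in order])
```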

Figure 2: One-way repeated measures ANOVAs were conducted to test the effects of experimental condition on comfort, projected capability, and task confidence for seen and unseen cases. Error bars represent 95% confidence intervals; only statistically significant comparisons are highlighted.

3 Results

A total of 35 participants (17 females, 17 males, 1 non-binary) were recruited for this online study via convenience sampling. The participants were aged between 18 and 35 and came from a variety of educational backgrounds, including computer science, engineering and technology, social work, healthcare, life sciences, business, law, media, public policy, and education. The participants reported having minimal experience with self-driving cars and moderate experience with AI products and with training AI or machine learning models, using 6-point rating scales (1 = no experience, 6 = lots of experience). Figure 2 summarizes our main findings. For all statistical tests reported below, effects were considered significant at a conventional alpha level, and we followed Cohen’s (1988) guidelines for interpreting small, medium, and large effect sizes.

A chi-square goodness-of-fit test showed that users did not perceive the AI agents, including the baseline agent, as equally trustworthy. In particular, A4 (active labeling with high precision) was considered the most trustworthy agent by the largest share of participants (51%). A one-way repeated measures analysis of variance (ANOVA) yielded a significant main effect of experimental condition on comfort. Post-hoc pairwise comparisons with a Bonferroni correction revealed that comfort increased more with active labeling with precision (A4) than with active labeling without precision (A3). Moreover, a one-way repeated measures ANOVA yielded a significant main effect of experimental condition on projected capability; post-hoc pairwise comparisons with a Bonferroni correction revealed that active labeling with precision (A4) produced a larger improvement in projected capability than active labeling without precision (A3).

A one-way repeated measures ANOVA yielded a significant main effect of experimental condition on task confidence for unseen cases. Post-hoc pairwise comparisons with a Bonferroni adjustment revealed that active labeling with precision (A4) produced a larger improvement in task confidence than passive labeling with precision (A2). While a one-way repeated measures ANOVA also yielded a significant main effect of experimental condition on task confidence for seen cases, we did not observe any significant differences in the pairwise comparisons.
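For readers who wish to reproduce this style of analysis, the following hedged sketch shows how the repeated measures ANOVAs, Bonferroni-corrected pairwise comparisons, and chi-square goodness-of-fit test could be run with common Python tooling (pandas, SciPy, statsmodels). The long-format layout, column names, and demo values are assumptions, not the authors' actual pipeline or data.

```python
# A hedged sketch of the analyses reported above, using common Python tooling.
# The data layout, column names, and demo values are illustrative assumptions.
from itertools import combinations

import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

def rm_anova_with_bonferroni(df: pd.DataFrame, dv: str) -> None:
    """One-way repeated measures ANOVA on `dv` across conditions, followed by
    Bonferroni-corrected pairwise paired t-tests."""
    result = AnovaRM(df, depvar=dv, subject="participant", within=["condition"]).fit()
    print(result.anova_table)
    pairs = list(combinations(sorted(df["condition"].unique()), 2))
    for c1, c2 in pairs:
        a = df[df["condition"] == c1].sort_values("participant")[dv].to_numpy()
        b = df[df["condition"] == c2].sort_values("participant")[dv].to_numpy()
        t, p = stats.ttest_rel(a, b)
        p_adj = min(p * len(pairs), 1.0)  # Bonferroni correction
        print(f"{c1} vs {c2}: t = {t:.2f}, Bonferroni-adjusted p = {p_adj:.3f}")

def trust_choice_chi_square(counts: list) -> None:
    """Chi-square goodness-of-fit test of 'most trusted agent' choices against
    a uniform distribution over the five agents (baseline, A1-A4)."""
    chi2, p = stats.chisquare(counts)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = pd.DataFrame([
        {"participant": pid, "condition": cond,
         "comfort_delta": rng.normal(0.1, 0.2)}
        for pid in range(12) for cond in ["A1", "A2", "A3", "A4"]
    ])
    rm_anova_with_bonferroni(demo, dv="comfort_delta")
    trust_choice_chi_square([4, 3, 4, 6, 18])  # hypothetical choice counts
```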

4 Discussion

In this study, we observed that users associated higher levels of comfort and projected capability with the agents for which they labeled training data with precision. Moreover, for unseen cases, users perceived the agent for which they were able to freely label objects of interest to be more capable. Our results suggest that everyday users can appreciate the importance of high-precision training data representative of diverse scenarios in determining AI task performance. Therefore, involving users in mock training exercises in which they gain hands-on experience with training data may help them develop accurate mental models of how an AI agent operates and maintain appropriate levels of trust in the AI agent’s performance before working with or using the AI technology. Furthermore, our study suggests that greater levels of user involvement (e.g., precise labeling using bounding boxes) may help users feel more comfortable with using an AI agent, even in a more safety-critical scenario. Overall, our study suggests that mock training setups can help establish appropriate user understanding of, and improved preconceptions about, how an AI agent will operate prior to real interaction with the agent.

One limitation of this study is that we measured user trust in AI through a single questionnaire item, rather than relying on behavioral (e.g., Yu et al. (2019)) or physiological (e.g., Hergeth et al. (2016)) measures. As a result, we may have failed to accurately or fully capture actual user trust in AI systems. In future studies, we would like to investigate alternative methods for measuring trust so that we can better understand the range of factors that contribute to user trust in human-AI interaction. We would also like to expand our study of mock model training to encompass new types of interactions in different domains. In this work, we contextualized our study within the scenario of training AI agents for self-driving cars, a safety-critical domain that many users may not have direct experience with. Therefore, we would like to further investigate how our findings apply to more general, commonplace scenarios that may involve lower stakes, such as speech-based interactions with AI agents in smart speakers. Furthermore, we investigated the effects of mock model training as explicit participation, but users may participate in other forms and in other phases of machine learning, such as algorithm design or error correction. Modern machine learning systems may also involve users without their knowledge or explicit consent, as in recommender systems used in online services. Future work should investigate whether user participation still positively influences perceptions of AI when users are engaged outside of AI training or implicitly, without their awareness.

Acknowledgements

This work was supported by the National Science Foundation award #1840088, the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1746891, and the Nursing/Engineering joint fellowship from the Johns Hopkins University.

References

  • A. Adadi and M. Berrada (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, pp. 52138–52160. Cited by: §1.
  • A. B. Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-López, D. Molina, R. Benjamins, et al. (2020) Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58, pp. 82–115. Cited by: §1.
  • D. Castelvecchi (2016) Can we open the black box of AI? Nature News 538 (7623), pp. 20. Cited by: §1.
  • J. Choo and S. Liu (2018) Visual analytics for explainable deep learning. IEEE Computer Graphics and Applications 38 (4), pp. 84–92. Cited by: §1.
  • J. Cohen (1988) Statistical power analysis for the behavioral sciences. England: Routledge. Cited by: §3.
  • J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) ImageNet: a large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. Cited by: §A.3.
  • S. Hergeth, L. Lorenz, R. Vilimek, and J. F. Krems (2016) Keep your scanners peeled: gaze behavior as a measure of automation trust during highly automated driving. Human factors 58 (3), pp. 509–519. Cited by: §4.
  • A. Holzinger, P. Kieseberg, E. Weippl, and A. M. Tjoa (2018) Current advances, trends and challenges of machine learning and knowledge extraction: from machine learning to explainable AI. In International Cross-Domain Conference for Machine Learning and Knowledge Extraction, pp. 1–8. Cited by: §1.
  • J. D. Lee and K. A. See (2004) Trust in automation: designing for appropriate reliance. Human factors 46 (1), pp. 50–80. Cited by: §1.
  • Z. Luo, F. Branchaud-Charron, C. Lemaire, J. Konrad, S. Li, A. Mishra, A. Achkar, J. Eichel, and P. Jodoin (2018) MIO-TCD: a new benchmark dataset for vehicle classification and localization. IEEE Transactions on Image Processing 27 (10), pp. 5129–5141. Cited by: §A.3.
  • A. Mogelmose, M. M. Trivedi, and T. B. Moeslund (2012) Vision-based traffic sign detection and analysis for intelligent driver assistance systems: perspectives and survey. IEEE Transactions on Intelligent Transportation Systems 13 (4), pp. 1484–1497. Cited by: §A.3.
  • R. Parasuraman and V. Riley (1997) Humans and automation: use, misuse, disuse, abuse. Human factors 39 (2), pp. 230–253. Cited by: §1.
  • W. Samek, G. Montavon, A. Vedaldi, L. K. Hansen, and K. Müller (2019) Explainable AI: interpreting, explaining and visualizing deep learning. Vol. 11700, Springer Nature. Cited by: §1.
  • V. Venkatesh, M. G. Morris, G. B. Davis, and F. D. Davis (2003) User acceptance of information technology: toward a unified view. MIS quarterly, pp. 425–478. Cited by: §1.
  • L. Wang, J. Shi, G. Song, and I. Shen (2007) Object detection combining recognition and segmentation. In Asian conference on computer vision, pp. 189–199. Cited by: §A.3.
  • K. Yu, S. Berkovsky, R. Taib, J. Zhou, and F. Chen (2019) Do i trust my machine teammate? an investigation from perception to decision. In Proceedings of the 24th International Conference on Intelligent User Interfaces, pp. 460–468. Cited by: §4.

Appendix A

A.1 Comfort (Cronbach’s α)

Please rate the following regarding the self-driving car that has employed Driving Assistant X:

  • I would be wary of the self-driving car (reverse-scaled item)

  • I would be afraid that the self-driving car would be harmful (reverse-scaled item)

  • I would be confident riding in the self-driving car

  • I would be comfortable riding in the self-driving car

  • I would be relaxed while riding in the self-driving car

  • I would be agitated while riding in the self-driving car (reverse-scaled item)

A.2 Projected Capability (Cronbach’s α)

Please rate the following regarding the self-driving car that has employed Driving Assistant X:

  • I believe that the self-driving car would NOT be dependable (reverse-scaled item)

  • I believe that the self-driving car would be reliable

  • I would trust the self-driving car to identify pedestrians, signs and signals, and obstacles correctly

  • I am confident that the self-driving car would comply with traffic rules
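As a minimal sketch of how such scales are typically scored (assuming 7-point items and made-up ratings, neither of which is specified in the paper), reverse-scaled items are flipped before computing Cronbach's α:

```python
# A minimal sketch of how the comfort and projected-capability scales could be
# scored: reverse-scaled items are flipped, then Cronbach's alpha is computed.
# The 7-point scale and the example ratings are assumptions for illustration.
import numpy as np

def reverse_score(item: np.ndarray, scale_min: int = 1, scale_max: int = 7) -> np.ndarray:
    """Flip a reverse-scaled item, e.g., 'I would be wary of the self-driving car'."""
    return scale_max + scale_min - item

def cronbach_alpha(items: np.ndarray) -> float:
    """items: participants x items matrix of (already reverse-scored) ratings."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Made-up ratings for the six comfort items (columns) from five participants (rows).
ratings = np.array([
    [2, 3, 6, 6, 5, 2],
    [3, 2, 5, 6, 6, 3],
    [1, 2, 7, 7, 6, 1],
    [4, 3, 5, 5, 5, 4],
    [2, 1, 6, 7, 7, 2],
], dtype=float)
# Columns 0, 1, and 5 correspond to the reverse-scaled comfort items above.
for idx in (0, 1, 5):
    ratings[:, idx] = reverse_score(ratings[:, idx])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```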

a.3 Image Sources

The images that we used in the study included public domain images from the web and images from various datasets, including the Penn-Fudan Database for Pedestrian Detection and Segmentation (Wang et al., 2007), the MIO-TCD Dataset (Luo et al., 2018), the LISA Traffic Sign Dataset (Mogelmose et al., 2012), and ImageNet (Deng et al., 2009). Figure 1 shows examples of the images used for our study tasks.