Monitoring Trust in Human-Machine Interactions for Public Sector Applications

10/16/2020
by Farhana Faruqe, et al.

The work reported here addresses the capacity of psychophysiological sensors and measures, using Electroencephalogram (EEG) and Galvanic Skin Response (GSR), to detect levels of trust in humans engaged in AI-supported Human-Machine Interaction (HMI). Improvements to the analysis of EEG and GSR data may yield models that perform as well as, or better than, traditional tools. A challenge in analyzing EEG and GSR data is the large amount of training data required, owing to the large number of variables in the measurements. Researchers have routinely used standard machine-learning classifiers such as artificial neural networks (ANN), support vector machines (SVM), and K-nearest neighbors (KNN). Traditionally, these have provided few insights into which features of the EEG and GSR data yield the most and least accurate predictions, making it harder to improve the HMI and the human-machine trust relationship. A key ingredient in applying trust-sensor research to practical situations, and in monitoring trust in work environments, is understanding which features contribute most to trust, so that the amount of data needed for practical applications can be reduced. We used Local Interpretable Model-agnostic Explanations (LIME) as a process to reduce the volume of data required to monitor and enhance trust in HMI systems, a technology that could be valuable for governmental and public sector applications. Explainable AI can make HMI systems transparent and promote trust. From customer service in government agencies and community-level non-profit public service organizations to national military and cybersecurity institutions, many public sector organizations are increasingly concerned with ensuring effective and ethical HMI, with services that are trustworthy, unbiased, and free of unintended negative consequences.
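
The abstract gives no implementation details, but a minimal sketch of the workflow it describes, training a standard classifier on sensor features and then using LIME to rank which features drive individual trust predictions, might look like the following. The feature names, synthetic data, and SVM configuration here are illustrative assumptions, not the authors' actual dataset or pipeline; only the pairing of LIME with a standard classifier follows the abstract.

```python
# Hypothetical sketch: LIME-based feature ranking for a trust classifier.
# All feature names and data below are synthetic stand-ins for EEG/GSR measures.
import numpy as np
from sklearn.svm import SVC
from lime.lime_tabular import LimeTabularExplainer

# Assumed psychophysiological features (illustrative, not the paper's set):
# EEG band powers plus simple GSR statistics.
feature_names = [
    "eeg_alpha_power", "eeg_beta_power", "eeg_theta_power",
    "gsr_mean", "gsr_peak_count",
]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(feature_names)))  # synthetic sensor features
# Synthetic trust labels (1 = trust) driven by two of the features.
y = (X[:, 1] - X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# One of the standard classifiers the abstract mentions (SVM);
# probability=True lets LIME query class probabilities.
clf = SVC(probability=True).fit(X, y)

# LIME fits a local linear surrogate around one instance and reports
# which features most influenced that single prediction.
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["distrust", "trust"],
    mode="classification",
)
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=3)
for feature, weight in exp.as_list():
    print(f"{feature:30s} {weight:+.3f}")
```

On this synthetic data the ranked weights recover the two constructed signal features; on real EEG/GSR recordings, the same ranking would indicate which channels or measures to retain, which is the data-reduction step the abstract describes.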
