Towards a Policy-as-a-Service Framework to Enable Compliant, Trustworthy AI and HRI Systems in the Wild

10/06/2020
by Alexis Morris, et al.

Building trustworthy autonomous systems is challenging for many reasons beyond simply trying to engineer agents that 'always do the right thing.' There is a broader context that is often not considered within AI and HRI: the problem of trustworthiness is inherently socio-technical and ultimately involves a broad set of complex human factors and multidimensional relationships that can arise between agents, humans, organizations, and even governments and legal institutions, each with their own understanding and definitions of trust. This complexity presents a significant barrier to the development of trustworthy AI and HRI systems: while systems developers may desire to have their systems 'always do the right thing,' they generally lack the practical tools and expertise in law, regulation, policy, and ethics to ensure this outcome. In this paper, we emphasize the "fuzzy" socio-technical aspects of trustworthiness and the need for their careful consideration during both design and deployment. We hope to contribute to the discussion of trustworthy engineering in AI and HRI by i) describing the policy landscape that must be considered when addressing trustworthy computing and the need for usable trust models, ii) highlighting an opportunity for trustworthy-by-design intervention within the systems engineering process, and iii) introducing the concept of a "policy-as-a-service" (PaaS) framework that can be readily applied by AI systems engineers to address the fuzzy problem of trust during the development and (eventually) runtime process. We envision that the PaaS approach, which offloads the development of policy design parameters and the maintenance of policy standards to policy experts, will enable runtime trust capabilities in intelligent systems in the wild.
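To make the division of labor concrete, the PaaS idea can be sketched as a service maintained by policy experts that an AI or HRI system queries at runtime before acting. The paper does not prescribe an implementation; everything below (the `PolicyService` class, rule names, and thresholds) is a hypothetical illustration of the pattern, not the authors' design.

```python
# Hypothetical sketch of policy-as-a-service (PaaS): policy experts register
# machine-readable rules with a service; the systems engineer only queries it.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Verdict:
    allowed: bool
    reasons: List[str] = field(default_factory=list)

class PolicyService:
    """Rule registry curated by policy/legal experts, not the developer."""
    def __init__(self) -> None:
        self._policies: Dict[str, Callable[[dict], Optional[str]]] = {}

    def register(self, name: str, rule: Callable[[dict], Optional[str]]) -> None:
        # A rule returns None when satisfied, or a violation message otherwise.
        self._policies[name] = rule

    def evaluate(self, action: dict) -> Verdict:
        # Collect the messages from every rule the proposed action violates.
        reasons = [msg for rule in self._policies.values()
                   if (msg := rule(action)) is not None]
        return Verdict(allowed=not reasons, reasons=reasons)

# Policy experts encode regulation as rules (illustrative thresholds only).
service = PolicyService()
service.register("data_consent",
                 lambda a: None if a.get("user_consent")
                 else "personal data use requires consent")
service.register("speed_limit",
                 lambda a: None if a.get("speed_mps", 0) <= 1.5
                 else "robot speed exceeds safe limit near humans")

# The deployed system consults the service at runtime before acting.
print(service.evaluate({"user_consent": True, "speed_mps": 1.0}).allowed)   # True
print(service.evaluate({"user_consent": False, "speed_mps": 2.0}).reasons)
```

In a full PaaS deployment the registry would live behind a remote endpoint so that policy updates propagate without redeploying the agent; the in-process version above only shows the contract between the two roles.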

Related research

06/15/2022 · Legal Provocations for HCI in the Design and Development of Trustworthy Autonomous Systems
We consider a series of legal provocations emerging from the proposed Eu...

04/18/2023 · A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective
User trust in Artificial Intelligence (AI) enabled systems has been incr...

12/23/2019 · Defining AI in Policy versus Practice
Recent concern about harms of information technologies motivate consider...

08/26/2019 · A Legal Definition of AI
When policy makers want to regulate AI, they must first define what AI i...

04/13/2021 · Trust and Safety
Robotics in Australia have a long history of conforming with safety stan...

09/28/2020 · The Development of Visualization Psychology Analysis Tools to Account for Trust
Defining trust is an important endeavor given its applicability to asses...

02/21/2018 · Artificial Intelligence and Legal Liability
A recent issue of a popular computing journal asked which laws would app...
