Towards a Policy-as-a-Service Framework to Enable Compliant, Trustworthy AI and HRI Systems in the Wild
Building trustworthy autonomous systems is challenging for many reasons beyond simply trying to engineer agents that 'always do the right thing.' There is a broader context, often overlooked within AI and HRI, that the problem of trustworthiness is inherently socio-technical: it ultimately involves a broad set of complex human factors and multidimensional relationships that can arise between agents, humans, organizations, and even governments and legal institutions, each with its own understanding and definition of trust. This complexity presents a significant barrier to the development of trustworthy AI and HRI systems: while systems developers may want their systems to 'always do the right thing,' they generally lack the practical tools and the expertise in law, regulation, policy, and ethics needed to ensure this outcome. In this paper, we emphasize the "fuzzy" socio-technical aspects of trustworthiness and the need for their careful consideration during both design and deployment. We hope to contribute to the discussion of trustworthy systems engineering in AI and HRI by i) describing the policy landscape that must be considered when addressing trustworthy computing and the need for usable trust models, ii) highlighting an opportunity for trustworthy-by-design intervention within the systems engineering process, and iii) introducing the concept of a "policy-as-a-service" (PaaS) framework that AI systems engineers can readily apply to address the fuzzy problem of trust during development and (eventually) at runtime. We envision that the PaaS approach, which offloads the development of policy design parameters and the maintenance of policy standards to policy experts, will enable runtime trust capabilities for intelligent systems in the wild.
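To make the PaaS idea concrete, below is a minimal Python sketch of how an agent's control loop might delegate a runtime policy check to an externally maintained policy service. Everything in it (the PolicyService interface, the PolicyVerdict type, and the sample rule) is a hypothetical illustration of the architectural pattern, not an API from this paper or any existing library.

from dataclasses import dataclass

# Hypothetical verdict type returned by an external policy service.
@dataclass
class PolicyVerdict:
    permitted: bool
    rationale: str

class PolicyService:
    """Hypothetical client for a remotely maintained policy service.

    Policy experts author and update the rules behind this boundary;
    the agent only consumes verdicts and never encodes the law itself.
    """

    def __init__(self, rules):
        # In a real deployment this would query a remote endpoint kept
        # current by policy experts; here we stub it with local rules.
        self._rules = rules

    def check(self, action: str, context: dict) -> PolicyVerdict:
        # Return the first applicable verdict, else default-permit.
        for rule in self._rules:
            verdict = rule(action, context)
            if verdict is not None:
                return verdict
        return PolicyVerdict(True, "no applicable policy; default-permit")

# Example rule, authored by a (hypothetical) policy expert rather than
# the systems engineer: forbid recording in privacy-sensitive locations.
def no_recording_in_private_spaces(action, context):
    if action == "record_video" and context.get("location") == "private":
        return PolicyVerdict(False, "recording prohibited in private spaces")
    return None

if __name__ == "__main__":
    service = PolicyService([no_recording_in_private_spaces])
    # The agent consults the policy service before acting.
    verdict = service.check("record_video", {"location": "private"})
    if verdict.permitted:
        print("proceed with action")
    else:
        print(f"action blocked: {verdict.rationale}")

The design point the sketch illustrates is the separation of concerns the abstract describes: policy experts maintain the rules behind the service boundary, while the systems engineer only integrates the policy check into the agent's action loop.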