Information Governance as a Socio-Technical Process in the Development of Trustworthy Healthcare AI
To develop trustworthy healthcare artificial intelligence (AI), prospective and ergonomics studies that consider the complexity of real-world applications of AI systems are needed. To achieve this, technology developers and deploying organisations need to form collaborative partnerships. This entails access to healthcare data, which frequently includes potentially identifiable data, such as audio recordings of calls made to an ambulance service call centre. Information Governance (IG) processes have been put in place to govern the use of personal confidential data. However, navigating IG processes in the formative stages of AI development and pre-deployment can be challenging, because the legal basis for data sharing is explicit only for the purpose of delivering patient care, i.e., once a system is put into service. In this paper we describe our experiences of managing IG for the assurance of healthcare AI, using the example of an out-of-hospital cardiac arrest recognition software within the context of the Welsh Ambulance Service. We frame IG as a socio-technical process. IG processes for the development of trustworthy healthcare AI rely on information governance work, which entails dialogue, negotiation, and trade-offs around the legal basis for data sharing, data requirements, and data control. Information governance work should start early in the design life cycle and will likely continue throughout. This includes a focus on establishing and building relationships, as well as a focus on organisational readiness and a deeper understanding of both AI technologies and their safety assurance requirements.