
Towards Operationalising Responsible AI: An Empirical Study

05/09/2022
by   Conrad Sanderson, et al.

While artificial intelligence (AI) has great potential to transform many industries, there are concerns about its ability to make decisions in a responsible way. Many AI ethics guidelines and principles have been recently proposed by governments and various organisations, covering areas such as privacy, accountability, safety, reliability, transparency, explainability, contestability, and fairness. However, such principles are typically high-level and do not provide tangible guidance on how to design and develop responsible AI systems. To address this shortcoming, we present an empirical study involving interviews with 21 scientists and engineers, designed to gain insight into practitioners' perceptions of AI ethics principles, their possible implementation, and the trade-offs between the principles. The salient findings cover four aspects of AI system development: (i) overall development process, (ii) requirements engineering, (iii) design and implementation, (iv) deployment and operation.

I Introduction

Artificial intelligence (AI), which includes machine learning (ML), has great potential to transform many industries (especially data-rich domains) and thereby greatly impact society. The global AI market was valued at approximately USD 62 billion in 2020 and is expected to grow at an annual rate of 40% from 2021 to 2028 [9]. While AI can help solve many real-world challenges, there are concerns about its ability to make decisions in a responsible way, including decisions being made without regard to fairness, transparency, explainability, contestability, privacy, security, reliability, safety, and other ethically important factors [5, 6, 16, 23].

AI ethics belongs to the broader field of computer ethics, which seeks to describe and understand moral behaviour in creating and using computing technology. There is considerable overlap between the two: non-AI software may also contain biases, infringe individual privacy, and be used for harmful purposes [8, 12]. What distinguishes AI ethics from the regular ethical issues associated with software development and use are the decision making capabilities of AI systems, and the ability of some AI systems to learn from input data.

In a related development, responsible software/technology as well as human values in software have recently become an important field of study [21]. Within this context, responsible/ethical AI can be considered a sub-field of responsible software/technology. However, compared with traditional software, the development of AI systems must also consider requirements concerning models, training data, and oversight of system autonomy, and may place greater emphasis on certain ethical requirements due to AI-based autonomous behaviour and decision making, which can be opaque.

Many high-level AI ethics principles and guidelines for responsible AI have been recently issued by governments, inter-governmental organisations, and the private sector [11, 7]. As an example, the Australian government recently proposed a set of eight high-level voluntary principles [1], summarised in Fig. 1. Across the various sets of principles proposed throughout the world, a degree of consensus has been achieved [7]. Specifically, an analysis of 36 sets of AI ethics principles identified eight key themes: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, promotion of human values, and professional responsibility (which also covers responsible design, multi-stakeholder collaboration, and long-term impacts) [7].

A common limitation of AI ethics principles and guidelines is the lack of empirically proven methods to robustly translate the principles into practice [14]. A review of publicly available tools for implementing AI ethics principles concluded that the tools reviewed overemphasised AI ‘explicability’ (which is necessary but not sufficient for transparency and explainability), focused on the effects on individuals rather than on society or groups, and were difficult to use [15].

With a view towards addressing the gap between high-level AI ethics principles and tangible implementations of the principles, we seek to identify the current state and potential challenges that developers face in dealing with responsible AI issues during the development of AI systems. To that end, we have conducted an empirical study involving semi-structured interviews with 21 scientists and engineers from various backgrounds, all working on AI/ML related projects at CSIRO (Australia’s national science agency) [18]. We asked the interviewees what ethical issues they have considered in their AI/ML work, and how they addressed, or envisioned addressing, these issues. The AI ethics principles proposed by the Australian government [1] were treated as representative of the many similar principles from around the world [11, 7], and were used as a framing structure for the interviews, analysis and discussion.

The remainder of the paper is organised as follows. Section II briefly covers the AI system development process. Section III overviews the methods used in the empirical study. Section IV analyses and discusses the interviewees’ responses. Section V briefly discusses the threats to validity of this study. The main findings are summarised in Section VI.

  1. Privacy Protection & Security. AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  2. Reliability & Safety. AI systems should reliably operate in accordance with their intended purpose during their lifecycle.
  3. Transparency & Explainability. Transparency: there should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them. Explainability: people should be able to find out what the AI system is doing and why, such as the system’s processes and input data.
  4. Fairness. AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  5. Contestability. When an AI system significantly impacts a person, community, group or environment, there should be a timely process that allows challenging the use or output of the system.
  6. Accountability. Those responsible for the various phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the system, and human oversight of AI systems should be enabled.
  7. Human, Social & Environmental (HSE) Wellbeing. AI systems should benefit individuals, society, and the environment.
  8. Human-centred Values. AI systems should respect human rights, diversity, and the autonomy of individuals.

Fig. 1: An adapted summary of voluntary high-level AI ethics principles promulgated by the Australian Government [1].

II Development Process for AI Systems

An overview of the AI development process is given in Fig. 2. The development process begins with requirement analysis, where requirements and constraints placed by stakeholders are identified. The development process is then split into two distinct parts: (i) the non-AI part, and (ii) the AI-focused part. The non-AI part follows traditional software development methods, and includes design, implementation, and testing of non-AI components. In the AI-focused part the goal is model production, which covers data engineering, feature engineering, model training, model evaluation, and model updates. The non-AI and AI-focused parts are combined for the deployment and operation of the overall AI system.
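
To make the AI-focused part of Fig. 2 concrete, the following is a minimal sketch of a model production pipeline (data engineering, feature engineering, model training, and model evaluation). The stage functions, the synthetic data, and the scikit-learn model choice are illustrative assumptions, not something prescribed by the development process described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def data_engineering(raw):
    """Clean raw records, e.g. drop rows containing missing values."""
    return raw[~np.isnan(raw).any(axis=1)]

def feature_engineering(clean):
    """Split cleaned records into a feature matrix and a label vector."""
    return clean[:, :-1], clean[:, -1]

def train_and_evaluate(X, y):
    """Train the model and report a held-out evaluation metric."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return model, accuracy_score(y_te, model.predict(X_te))

# Synthetic data standing in for a project's real dataset.
raw = np.random.default_rng(0).normal(size=(200, 5))
raw[:, -1] = (raw[:, 0] > 0).astype(float)   # last column acts as the label
model, acc = train_and_evaluate(*feature_engineering(data_engineering(raw)))
print(f"held-out accuracy: {acc:.2f}")
```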

Compared to traditional software, the key differences in the deployment and operation of AI systems often include: (i) continual learning of AI components based on new data, (ii) higher degree of uncertainty and risks associated with the autonomy of the AI component, (iii) validation of outcomes (ie. did the system provide the intended benefits and behave appropriately given the situation?), rather than just outputs (eg. precision, accuracy and recall) [2].
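
As a small illustration of the output/outcome distinction (not taken from the paper), output metrics such as precision, recall and accuracy can be computed mechanically from predictions, whereas outcome validation asks whether the deployed system actually delivered the intended benefit and requires post-deployment evidence. The values below are made up.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # made-up ground truth
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # made-up model outputs

# Output metrics: easy to compute, but they say nothing about real-world benefit.
print("precision:", precision_score(y_true, y_pred))   # 3/4 = 0.75
print("recall:   ", recall_score(y_true, y_pred))      # 3/4 = 0.75
print("accuracy: ", accuracy_score(y_true, y_pred))    # 6/8 = 0.75
```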

Fig. 2: Conceptual view of the overall AI system development process.

III Methods

The salient findings in this work are derived from semi-structured interviews with research scientists and engineers, as well as literature on AI ethics, machine learning, and software engineering for AI/ML.

The interviewees were sought via “call for participation” emails as well as via follow-up recommendations given by the interviewees, until we deemed that a saturation of perspectives was reached [10]. In total 21 interviews were conducted from February to April 2021. The interviewees are from various backgrounds, with a large variation in the interviewees’ degree of experience and responsibility. 10 interviewees worked primarily in computer science, 6 worked in the health & biosecurity area, and 5 worked in the land & water area. The job positions of the interviewees included: group leader (4), team leader (8), principal research scientist (2), principal engineer (1), senior research scientist (4), research scientist (1), postgraduate student (1). The gender split was approximately 76% males and 24% females.

The interviews were conducted by three project team members with distinct research backgrounds (machine learning, software engineering, and computer ethics, respectively), in a face-to-face setting and/or via video teleconferencing. Prior to each interview, the interviewees were given a summary of the Australian AI ethics principles [1] (as shown in Fig. 1), to ensure that all interviewees were aware of the principles. The interviews ranged from approximately 22 to 59 minutes in length, with a median length of approximately 37 minutes.

In each interview, the questions and prompts within the protocol first aimed to elicit a subset of the high-level principles that was most relevant to each interviewee, as experienced via their research and/or development work. The top 3-4 principles, as selected by the interviewee, were then explored via questions such as: (i) how each selected principle manifested itself in their work, (ii) how each selected principle was addressed (using tools and/or processes), (iii) what tools/processes would be useful in addressing each selected principle.

Follow-up questions were posed by the team members in a conversational setting, aiming to cover the intersection of the following areas: machine learning, software development, and ethics in AI. A further set of questions and prompts aimed to elicit other ethical considerations or dilemmas that were not covered by the high-level principles, but were encountered and possibly addressed in the interviewee’s work.

The three project team members individually used either thematic analysis [4] or open card sorting [17] to identify categories in the interview transcripts. The resulting categories were then compared and analysed by the team members as a whole. The thematic analysis used a theoretical approach to coding the interviews by using the eight AI ethics principles as themes. Concepts identified in discussions of specific principles were recorded as subthemes related to each principle. The analysis was performed at a semantic level, meaning that the analysis focused on describing and interpreting patterns identified in the interviews, rather than explicitly searching for any underlying assumptions or concepts.

Table I presents the occurrence of themes related to AI ethics principles across the interviews. The top three principles covered in the interviews are Reliability & Safety, Transparency & Explainability, and Privacy Protection & Security. Principles which were covered in about half the interviews are Fairness and HSE Wellbeing. The Human-Centred Values principle was covered the least in the interviews.

Principle Occurrence
Privacy Protection & Security 17 / 21 (81%)
Reliability & Safety 19 / 21 (90%)
Transparency & Explainability 18 / 21 (86%)
Accountability 13 / 21 (62%)
Contestability  8 / 21 (38%)
Fairness 10 / 21 (48%)
HSE Wellbeing 11 / 21 (52%)
Human-Centred Values  3 / 21 (14%)
TABLE I: Occurrence of themes related to AI ethics principles.

IV Analysis and Discussion

In this section, we report our findings for categories that were identified as being relevant to the overall AI system development process (overviewed in Section II). For each category, we state our observations, and where appropriate, select the most relevant interviewees’ comments. Quotations and paraphrased sentences are attributed to specific interview participants via markers in the form of (P##), where ## denotes a two-digit interviewee identifier.

The findings are organised into four distinct parts, reflecting the process of software engineering and system deployment: (A) overall development process, (B) requirements engineering, (C) design and implementation, (D) deployment and operation. The discussion on each part is further divided into salient points. To help frame the discussion, refer to Fig. 2 for an overview of the overall AI system development process.

IV-A Overall Development Process

Ethical risk assessment. Throughout the interviews, various types of ethical issues were discussed. As an example, one interviewee noted the incomplete data problem for ensuring fairness: “sometimes you can be limited in what data [you have] available to use in the first place” (P01). However, the ethical issues were considered and checked in isolation, and were mostly around data and ML models. As such, there appears to be a lack of a comprehensive system-level ethical checklist that covers all the ethical aspects throughout the full lifecycle of AI systems. Understanding and managing risk is particularly important for AI systems as they may have emergent behaviour and may involve continual learning.

We further observed that some ethical risk assessment frameworks were used in practice. One interviewee noted: “there was a privacy impact assessment; we went through a lengthy process to understand the privacy concerns and built in provisions to enable privacy controls and people to highlight things that they didn’t want to be visible” (P10). However, such an approach amounts to a done-once-and-forget risk assessment, which may not be sufficient for AI systems that continually learn and adapt. Furthermore, practitioners approach risk differently. One interviewee suggested that fail-safe design should be considered, noting that “there’s only so much you can think ahead about what those failure modes might be” (P16). Another interviewee argued that “once I know that it works most of the time I don’t need explainability [and] transparency. It’s just temporary to establish the risk profile” (P11).

Trust vs. trustworthiness. Trustworthiness can be interpreted as the ability of an AI system to meet AI ethics principles, while trust can be interpreted as users’ subjective estimates of the trustworthiness of the AI system [24]. Even for trustworthy AI systems, gaining the trust of users is a challenge that must be addressed carefully in order for an AI system to be widely accepted. This is due to possible significant gaps between an AI system’s inherent trustworthiness and the users’ subjective estimates of the system’s trustworthiness. It is also possible that users may overestimate a system’s trustworthiness. Many interviewees acknowledged the importance of human trust in AI. One interviewee stated: “a lot of the work that we do trust comes as an important factor here, that a user or somebody who takes that information, wants to be able to trust it” (P09).

One of the obstacles for the development of AI systems is gaining and maintaining the trust from the providers of data that is used to train the AI system. One interviewee noted that “you build the trust with the data providers, so more people can give you data and increase your data representability” (P02). Another interviewee pointed out that contestability can contribute to trust: “it can be very hard to get people to trust an analytical system that is just telling them to do something and does not give them the choice to disagree with the system” (P15). One interviewee emphasised that evidence needs to be offered to drive trust: “Because you justifiably want to trust that system and not only ask people do you trust it? I mean they need some evidence. You can build this into your system to some degree. So that’s very important” (P12).

Ethics credentials. An ethical AI industry requires responsible AI components and products at each step of the value chain. AI system vendors often supply products by assembling commercial or open-source AI and non-AI components. Some interviewees agreed that credential schemes can enable responsible AI by attaching ethical credentials to AI components and products. One interviewee commented: “Getting those certificates, it always helps. As long as there is standardisation around it” (P13). Certificates already exist for the underlying hardware used by AI systems; one interviewee pointed out that “A lot of hardware is actually certified. I mean in […] full size aviation, you have at least a certification. So when you buy something you get some sort of guarantees” (P12).

Outcome-driven vs. requirement-driven development. We noticed that the interviewees mentioned two forms of development: outcome-driven and requirement-driven [3]. Among the ethics principles, privacy and security were the most frequently discussed as requirements. For example, one interviewee noted: “to protect those privacy and de-identification requirements, you’ll be aggregating so that people can’t be uniquely identified” (P01). Related to outcome-driven development, one interviewee emphasised that development is a continual process: “This is a continual and [iterative] process: humans need to continually evaluate the performance, identify [gaps] and provide insight into what’s missing. Then go back to connect data and refine the model” (P02).

End-to-end system-level development tools. An overall AI system consists of non-AI and AI-focused components that are interconnected and work collectively. Combining the two types of components may create new emergent behaviour and dynamics. Therefore, ethics need to be considered at the system-level, including the non-AI components, AI-focused components, and their connections. For example, the effect of actions decided by the AI model could be collected through the feedback component built into the overall system (see Fig. 2).

While most of the interviewees are research scientists/engineers who mainly worked on research projects and focused on model development, some of them did recognise the significance of system-level thinking in AI projects. One interviewee commented: “Well, it’s just that the design ways in which that AI was designed and deployed as an end-to-end solution, it wasn’t that AI sat in the middle, right? It actually had to sit within the system” (P14).

We also found that the management of AI ethics principles heavily relies on manual practice. One interviewee mentioned: “It’s not linked to anybody’s name or identity, but apparently addresses are classified as potentially personal information and therefore subject to privacy. So, we had to contact our privacy officer to […] confirm that’s the case. Then we had to escalate that to the client, to let them know of that potential issue” (P10). Another interviewee pointed out that “we had to go through a lot of data and make sure that there was not a single frame with a person in it” (P13). This issue of accidentally collecting sensitive data could be addressed automatically using AI-enabled human detection tools.
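
For instance, a pre-trained pedestrian detector could be used to flag frames that need review or removal before data is shared. The sketch below uses OpenCV's built-in HOG people detector; the video file name, sampling stride, and the choice of detector are illustrative assumptions, not the tooling used by the interviewees.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def frames_with_people(video_path, stride=10):
    """Return indices of sampled frames in which at least one person is detected."""
    flagged, idx = [], 0
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
            if len(rects) > 0:
                flagged.append(idx)
        idx += 1
    cap.release()
    return flagged

print(frames_with_people("survey_footage.mp4"))  # hypothetical file name
```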

IV-B Requirements Engineering

Ethics requirements. We observed that some ethics principles, such as HSE Wellbeing, were often stated only as a project objective rather than part of verifiable requirements and outcomes. One interviewee stated: “People are presented with a clear project objective upfront, and the project leader might frame the project with we’re working on improving [a grass species] yield forecasting using machine learning. You do feel good about working on projects that provide environmental benefit” (P09).

AI ethics requirements may need to be analysed, verified and validated by a wide range of people (such as hardware engineers, culture experts, end users), and not just the software developers. For example, one interviewee elaborated on safety requirements in AI systems: “it’s not just the AI data that has to be safe, it’s actually its application and use reliability and safety, throughout its whole lifecycle and its application into the real world become really critical questions” (P14). This requires experts in all associated disciplines to be involved. In current practice, AI system developers rely on domain experts to ascertain whether the AI system is correctly following existing legal rules in the application domain (P06).

Scope of responsibility. We observed that there were various meanings and interpretations of responsible AI. As an example, one interviewee challenged the meaning of responsibility in the context of autonomous aerial systems: “The question is what happens if [the] remote pilot is really there, flicks the switch [to disable the system] and the system doesn’t react? The remote pilot is not always in full control of [the drone] because of technical reasons [such as a failed radio link]” (P12). The many meanings and interpretations of the word “responsible” have received attention in the literature, such as the three varieties of responsibility introduced in [20]: the normative interpretation (ie. behaving in positive, desirable and socially acceptable ways), the possessive interpretation (ie. having a duty and obligation) and descriptive interpretation (ie. worthy of response/answerable). We found that interviewees touched on all of the above varieties and considered all of them as important. Moreover, timeliness may need to be considered as part of responsibility. One interviewee stated “whether the stuff works in 10 years, it’s not under our control […] and we shouldn’t really care about it” (P11).

A more fine-grained classification of responsibilities may be useful to enhance requirements engineering that takes AI ethics into account. For example, eight meanings of responsibility can be considered [13]: (i) obligation, (ii) task, (iii) authority, (iv) power, (v) answerability, (vi) cause, (vii) blame/praise, and (viii) liability.

IV-C Design and Implementation

AI in design. AI may involve complex underlying technology which can be difficult to explain, making detailed risk assessment challenging. One interviewee commented: “When do you have a complete assessment really? Especially with systems that change over time and based on sensory input. […] It’s very difficult” (P12). Whether to adopt AI can be considered as a major architectural design decision during the process of system design. A closely related design decision is whether users have the ability to make the final judgements throughout the lifecycle of a system, rather than purely relying on the AI component. This may involve allowing the AI component to be disabled during run-time, or changed from decision mode to suggestion mode. One interviewee provided a medical use case for overriding the recommended decisions: “there was actually a defined process where if a patient was not flagged as being high risk, […] clinicians were still allowed to include the patient into the next step clinical review” (P18).
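
A minimal sketch of this design decision is shown below: the AI component is wrapped so that, at run-time, it can act in decision mode, suggestion mode, or be disabled entirely, with a human making the final call in the latter two. The class and function names are hypothetical, not taken from the interviewees' systems.

```python
from enum import Enum

class Mode(Enum):
    DECISION = "decision"      # AI output is acted on directly
    SUGGESTION = "suggestion"  # AI output is shown as a hint; a human decides
    DISABLED = "disabled"      # AI component is bypassed entirely

class AIComponentWrapper:
    def __init__(self, model, mode=Mode.SUGGESTION):
        self.model = model
        self.mode = mode

    def handle(self, case, human_decide):
        if self.mode is Mode.DISABLED:
            return human_decide(case, ai_hint=None)
        prediction = self.model(case)
        if self.mode is Mode.SUGGESTION:
            return human_decide(case, ai_hint=prediction)
        return prediction  # Mode.DECISION

# Example: a clinician can still include a patient the model did not flag.
wrapper = AIComponentWrapper(model=lambda case: "low risk", mode=Mode.SUGGESTION)
decision = wrapper.handle({"patient_id": 42},
                          human_decide=lambda case, ai_hint: "include in clinical review")
print(decision)
```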

Trade-offs between ethics principles in design. Several interviewees noted that there are trade-offs between various ethics principles (eg. fairness vs reliability, privacy vs reliability/accountability). One interviewee noted: “if you’ve got other ways of protecting privacy that don’t involve aggregating, then you can be actually getting better distributional properties” (P01). Another interviewee mentioned fairness and reliability: “we are in the spot where by design we restrict the variance as much as possible to make it easier to find a signal” (P11). However, there was only sparse discussion of methods and approaches to adequately deal with the trade-offs. It appears that in the current dominant practice, developers follow one principle while overriding other principles, rather than building balanced trade-offs, with stakeholders making the ultimate value and risk calls [22].

The reliability of AI can greatly depend on the quantity and quality of the training data. One interviewee noted that “if you’re training a model without a lot of data, you can actually get some really weird results” (P09). Obtaining a sufficient number of samples can be challenging, as obtaining each sample can incur high financial and/or time costs, and can involve privacy issues in domains such as genomics (P03). There was a desire to use specific architecture styles to handle some AI ethics requirements. For example, federated learning was mentioned as a way to deal with privacy and security concerns in addition to the data hungriness issues: “[various] research institutions from around the world can collaborate, because they don’t have to give up their data; they don’t have to share their data” (P03).
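
A minimal sketch of the federated learning idea mentioned above: each institution performs local updates on its own data, and only model parameters are averaged by a coordinator, so raw data is never shared. The linear model, NumPy-only implementation, and hyper-parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few local gradient steps on a linear regression loss; data stays on site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three institutions, each holding private data drawn from the same true model.
true_w = np.array([1.0, -2.0, 0.5])
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    sites.append((X, y))

global_w = np.zeros(3)
for _round in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)   # the server averages parameters only

print("federated estimate:", np.round(global_w, 2))  # approaches [1.0, -2.0, 0.5]
```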

Design process for ethics. We observed that the reuse of AI/ML models and other AI pipeline components is desired since training models and building various components from scratch can be costly and/or time-consuming. Furthermore, there was also a desire to reuse and/or iteratively adapt the overall design and architecture of an existing AI system with a complex pipeline. However, this may lead to architecture degradation and accumulation of high technical debt over time [19]. The ethical consequences of the reuse/adaptation were not well understood. One interviewee highlighted “What we have gone beyond the project we hope to achieve is we’ll have the whole pipeline in place. Once we have different data from a different environment that’s not associated to that particular company that they labelled and they recorded. We already have something in place that we can train with different data. As long as it’s not the same data - it’s a different type of environment - that’s fine” (P13).

Design for explainability and interpretability. Explainability and interpretability are two emerging quality attributes for AI systems. We found some interviewees have considered explainability and interpretability in practice, and adopted human-centred approaches taking into account users’ background, culture, and preferences to improve human trust.

Explainability can be defined as the ability to come up with features in an interpretable domain that contribute to some explanation about how an outcome is achieved. Users are more likely to find recommendations made by AI systems useful if there are indicators and factors supporting a given prediction/recommendation. One interviewee noted that “there have been instances where we’ve chosen an explainable model which has slightly [lower] performance [than] a non-explainable model which has higher performance but would be harder to convey the reasoning behind the prediction” (P18).
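
A minimal sketch of this kind of trade-off, assuming a synthetic dataset and two off-the-shelf scikit-learn models: a logistic regression whose coefficients can be presented as the factors supporting a prediction, versus a higher-capacity model that is harder to explain. The dataset and models are illustrative, not those used by the interviewee.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

explainable = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
black_box = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

print("logistic regression accuracy:", explainable.score(X_te, y_te))
print("random forest accuracy:      ", black_box.score(X_te, y_te))
# The coefficients below can be shown to users as the factors behind a prediction.
print("per-feature coefficients:", explainable.coef_.round(2))
```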

Interpretability can be defined as the ability of an AI system to provide an understandable description of a stimulus (eg. model output) in terms familiar to stakeholders. One interviewee stated: “I’m really experimenting now with how we actually show the data so that it can be interpreted by people? So we’re playing around with data visualisation tools now to say how do we bring that data to bear and going out there and saying does this make sense to you? We designed all these reports which just show the data in different ways and part of that was - do you like the way this is going or is there [other] things you’d like to see?” (P14).

Most of the actions for explainability that were discussed by the interviewees were around the interface design of AI systems. One interviewee commented: “[…] nobody seems to ask about, what’s the predictive performance of the algorithm [in the initial stakeholder meeting]? [Instead] can I look at your interface and […] see a couple of patient risk profiles and then understand that” (P18).

It may be necessary to calibrate trust over time to match AI systems’ trustworthiness. One interviewee stated: “There is no need to explain anything if you know the risk and if you have a long enough time to look over it […]. So this explainability thing, it’s just a temporary requirement until the risk is known” (P14). Another interviewee had a similar opinion on explainability: “it’s just a temporary thing until people know it works” (P11).

IV-D Deployment and Operation

Continuous validation of AI ethics. There is a strong desire to continuously monitor and validate AI systems post-deployment to ensure adherence to ethics requirements. One interviewee commented: “it’s up to us to come with technology that makes it acceptable for them to implement measurements in that respect and being able to prove compliance or even signal a trend like you’re compliant now, but because we can see that your [values] are slowly going up and that’s your threshold, so you’re approaching it” (P07).
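
A minimal sketch of the kind of monitoring described in the quote: a metric is tracked over time and a warning is raised when its trend is projected to cross a compliance threshold, even though the current value is still compliant. The metric, threshold, window, and linear extrapolation are placeholder assumptions.

```python
import numpy as np

def trend_alert(history, threshold, horizon=5):
    """Fit a linear trend to recent values and warn if it is projected to
    cross the threshold within `horizon` future periods."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    projected = intercept + slope * (len(history) - 1 + horizon)
    return history[-1] >= threshold, projected >= threshold

weekly_metric = [0.61, 0.63, 0.66, 0.70, 0.74]   # e.g. share of high-risk decisions
violating_now, violating_soon = trend_alert(weekly_metric, threshold=0.80)
print("compliant now:", not violating_now, "| projected breach:", violating_soon)
```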

Awareness of potential mismatches between training data and real-world data is necessary to prevent the trained model from being unsuitable for its intended purpose (P04). Model updates and recalibration on new data were seen as important for the reliability of AI systems. The models may need to be retrained or recalibrated to properly take advantage of user feedback and of newer and/or more comprehensive data which was not considered during the initial deployment. One interviewee noted: “If you build a model on 10 year old data, then you’re not representing the current state of risks for certain disease. As a minimum, [recalibration] on new data would probably be more meaningful” (P18). In addition to reliability, continuous validation and improvement of other ethics principles may occur at run-time. System-level updates may be necessary to improve compliance or alignment with ethics principles.
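
One simple way to detect such mismatches, sketched below under the assumption of tabular features, is to compare the distribution of newly arriving data against the training data (here with a per-feature two-sample Kolmogorov-Smirnov test) and use the result as a trigger for recalibration; this is an illustrative choice rather than the interviewees' practice.

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(train, live, alpha=0.01):
    """Return indices of features whose live distribution differs from training."""
    return [j for j in range(train.shape[1])
            if ks_2samp(train[:, j], live[:, j]).pvalue < alpha]

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 4))
live = train.copy()          # identical data: no drift detected
live[:, 2] += 1.5            # simulate drift in one feature

print("features to recalibrate on:", drifted_features(train, live))  # -> [2]
```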

Traceability of artefacts. The interviewees often identified two main approaches related to traceability, provenance and reproducibility, which are useful for building trust in AI systems: (i) tracking the use of an AI system, and (ii) keeping track of information related to model provenance (eg. code and training data). Both aspects are also useful for improving explainability and accountability. One interviewee noted: “[the system] suggested doing one scenario, we chose to do another, this is the result we got […] did we do the job that we expected? Or did we do the job that the system expected?” (P15). Another interviewee stated: “[We] had very strict rules about the provenance. So basically, every piece of code and every output had to go somewhere and have metadata tagged with it, so that if anyone wanted to audit what we did they could” (P04). It was typically accepted that keeping logs and previous versions of models/systems is important for model provenance; one interviewee stated “When the system gets complex, you have to keep more evidence along the way. Version control, and the immutable log. You don’t want people to tamper this […] after things went wrong” (P02). We note this can also be useful for improving both the trust and trustworthiness of AI systems.

We observed that most of the interviewees used established software development management tools, such as Git repositories; one interviewee noted: “Any software we are developing is in Bitbucket, internal configuration management system” (P17). However, AI systems usually involve co-evolution of data, model, code, and configurations. As such, data/model/code/configuration co-versioning with model dependency specification is required to ensure data provenance and traceability. Any underlying domain knowledge models also need to be co-versioned with the AI models. There is currently a lack of tools that use traceability and provenance data to help address AI ethics concerns.
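
A minimal sketch of co-versioned provenance recording, under assumed file names and fields: hashes of the training data and configuration, together with the code revision, are written to an append-only log so that a given model version can later be audited.

```python
import hashlib, json, time
from pathlib import Path

def sha256_of(path):
    """Content hash of an artefact (dataset, config, etc.)."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def record_provenance(log_path, model_id, data_path, config_path, code_revision):
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_id": model_id,
        "data_sha256": sha256_of(data_path),
        "config_sha256": sha256_of(config_path),
        "code_revision": code_revision,        # e.g. a Git commit hash
    }
    with open(log_path, "a") as f:             # append-only provenance log
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical usage:
# record_provenance("provenance.log", "risk-model-v3",
#                   "training_data.csv", "train_config.yaml", "a1b2c3d")
```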

V Threats to Validity

V-A Internal Validity

The interviewees for this study were selected via “call for participation” emails and recommendations within one organisation. While selection bias is always a concern when interviewees are not randomly sampled, the employed procedure partially alleviates this threat since the interviewers had no contact with the interviewees beforehand. Moreover, given that our interviewees included practitioners with various backgrounds, roles, and genders, the threat has reduced influence.

We stopped seeking further participants when a saturation of findings was reached, after interviewing 21 persons. To reduce the risks of missing information and of interviewer subjectivity, each interview involved three interviewers with distinct research backgrounds. The three interviewers worked jointly to ask questions and take notes during the interviews. This helps reduce the likelihood of subjective bias in judging whether saturation of findings had been achieved, as well as maximising the capture of relevant data.

V-B External Validity

This study was conducted within one organisation, which may introduce a threat to external validity. While we recognise that having more organisations would be desirable, we believe our study can be relevant to many AI system development teams. All the interviewees are from a large national science agency with teams working on multiple areas, projects and products, serving various internal and external customers. However, we acknowledge that the opinions provided by our interviewees may not be representative of the whole community. To reduce this threat, we ensured that our interviewees had various roles and degrees of expertise, and worked on a wide variety of projects and research areas. We believe that their opinions and comments uncovered many insights into the challenges developers are facing in dealing with AI ethics issues during development and deployment.

VI Main Findings

AI ethics principles are typically high-level and do not provide tangible guidance to developers on how to develop AI systems responsibly. In this work, we presented an empirical study which aims to understand practitioners’ perceptions of AI ethics principles and their implementation. Based on the interview results, the main findings are as follows:

  1. The current practice of ethical risk assessment is often a done-once-and-forget approach, which may not be sufficient for AI systems that continually learn and adapt from new data.

  2. Implementation of AI ethics principles heavily relies on manual practice. There is a lack of end-to-end development tools to support continuous assurance of AI ethics.

  3. An AI model needs to be integrated within the overall system to perform the required functions. Combining AI-focused and non-AI components may create new emergent behaviour and dynamics, which require system-level ethical consideration.

  4. AI may involve complex underlying technology which can be difficult to explain, making detailed risk assessment challenging. Adopting AI can be considered as a major architectural design decision when designing a software system. Furthermore, the design should take into account whether the AI component can be flexibly disabled at run-time, or changed from decision mode to suggestion mode.

  5. The inherent trustworthiness of an AI system for various ethics principles and the perceived trust in the system are often conflated in practice. Even for trustworthy AI systems, gaining the trust of users is a challenge that must be addressed carefully for the AI system to be widely accepted. Process and product mechanisms can be leveraged to achieve trustworthiness for various ethics principles, whereas process and product evidence need to be offered to drive trust.

  6. Human trust in AI can be improved by attaching ethical credentials to AI components/products, as vendors often supply products by assembling commercial and/or open-source AI-focused and non-AI components.

  7. Developing AI systems may require the seamless integration of outcome-driven and requirement-driven development.

  8. Requirements engineering methods need to be extended with ethics aspects for AI systems. Currently, for some ethics principles, the requirements are either omitted or mostly stated as high-level objectives, and not specified explicitly in a verifiable way as expected system outputs (to be verified/validated) and outcomes (eg. benefits).

  9. Requirements engineering needs to take into account the various possible meanings and interpretations of the word “responsible”. At minimum, three varieties of responsibility need to be taken into account: normative, possessive, and descriptive [20].

  10. There are trade-offs between various AI ethics principles. The current dominant practice is that developers follow one principle while overriding other principles, rather than building balanced trade-offs, with stakeholders making the ultimate value and risk calls.

  11. Human-centred approaches have been adopted for explainability and interpretability, taking into account users’ background and preferences to improve user trust in AI.

  12. There is a strong desire to continuously monitor and validate AI systems post-deployment to ensure adherence to ethics requirements. However, current practices and tools provide limited guidance.

  13. AI systems usually involve co-evolution of data, model, code, and configurations. Data/model/code/configuration co-versioning with model dependency specification is required to ensure data provenance and traceability.

References

  • [1] Australian Government (Department of Industry, Science, Energy and Resources) (2020) Australia’s AI Ethics Principles. https://industry.gov.au/data-and-publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles (accessed 30 Apr 2022).
  • [2] H. Barmer, R. Dzombak, M. Gaston, V. Palat, F. Redner, C. Smith, and T. Smith (2021) Human-centered AI. Software Engineering Institute, Carnegie Mellon University.
  • [3] J. Bosch (2019) From efficiency to effectiveness: delivering business value through software. In Software Business, S. Hyrynsalmi, M. Suoranta, A. Nguyen-Duc, P. Tyrväinen, and P. Abrahamsson (Eds.), pp. 3–10.
  • [4] V. Braun and V. Clarke (2006) Using thematic analysis in psychology. Qualitative Research in Psychology 3 (2), pp. 77–101.
  • [5] M. Coeckelbergh (2020) AI ethics. The MIT Press.
  • [6] V. Eubanks (2019) Automating inequality: how high-tech tools profile, police, and punish the poor. Picador, New York, NY.
  • [7] J. Fjeld, N. Achten, H. Hilligoss, A. C. Nagy, and M. Srikumar (2020) Principled artificial intelligence: mapping consensus in ethical and rights-based approaches to principles for AI. Research Publication No. 2020-1, Berkman Klein Center for Internet & Society, Harvard University.
  • [8] B. Friedman and H. Nissenbaum (1996) Bias in computer systems. ACM Transactions on Computer Systems 14 (3), pp. 330–347.
  • [9] Grand View Research (2021) Artificial intelligence market size, share & trends analysis report.
  • [10] N. Hall, J. Lacey, S. Carr-Cornish, and A. Dowd (2015) Social licence to operate: understanding how a concept has been translated into practice in energy industries. Journal of Cleaner Production 86, pp. 301–310.
  • [11] A. Jobin, M. Ienca, and E. Vayena (2019) The global landscape of AI ethics guidelines. Nature Machine Intelligence 1 (9), pp. 389–399.
  • [12] D. G. Johnson (2009) Computer ethics. 4th edition, Pearson Education.
  • [13] G. Lima, N. Grgić-Hlača, and M. Cha (2021) Human perceptions on moral responsibility of AI: a case study in AI-assisted bail decision-making. In Conference on Human Factors in Computing Systems.
  • [14] B. Mittelstadt (2019) Principles alone cannot guarantee ethical AI. Nature Machine Intelligence 1 (11), pp. 501–507.
  • [15] J. Morley, L. Floridi, L. Kinsey, and A. Elhalal (2019) From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics 26 (4), pp. 2141–2168.
  • [16] C. O’Neil (2016) Weapons of math destruction: how big data increases inequality and threatens democracy. Allen Lane, London.
  • [17] C. L. Paul (2008) A modified delphi approach to a new card sorting methodology. Journal of Usability Studies 4 (1), pp. 7–30.
  • [18] C. Sanderson, D. Douglas, Q. Lu, E. Schleiger, J. Whittle, J. Lacey, G. Newnham, S. Hajkowicz, C. Robinson, and D. Hansen (2021) AI ethics principles in practice: perspectives of designers and developers. arXiv:2112.07467.
  • [19] D. Sculley, G. Holt, D. Golovin, E. Davydov, T. Phillips, D. Ebner, V. Chaudhary, M. Young, J. Crespo, and D. Dennison (2015) Hidden technical debt in machine learning systems. In Advances in Neural Information Processing Systems, Vol. 28, pp. 2503–2511.
  • [20] D. W. Tigard (2021) Responsible AI and moral responsibility: a common appreciation. AI and Ethics 1 (2), pp. 113–117.
  • [21] J. Whittle (2019) Is your software valueless? IEEE Software 36 (3), pp. 112–115.
  • [22] J. Whittlestone, R. Nyrup, A. Alexandrova, and S. Cave (2019) The role and limits of principles in AI ethics: towards a focus on tensions. In AAAI/ACM Conference on AI, Ethics, and Society.
  • [23] B. Zhang, M. Anderljung, L. Khan, N. Dreksler, M. C. Horowitz, and A. Dafoe (2021) Ethics and governance of artificial intelligence: evidence from a survey of machine learning researchers. Journal of Artificial Intelligence Research 71, pp. 591–666.
  • [24] L. Zhu, X. Xu, Q. Lu, G. Governatori, and J. Whittle (2021) AI and ethics - operationalizing responsible AI. In Humanity Driven AI, pp. 15–33.