The social dilemma in AI development and why we have to solve it

07/27/2021 ∙ by Inga Strumke, et al. ∙ NTNU

While the demand for ethical artificial intelligence (AI) systems increases, the number of unethical uses of AI keeps growing, even though there is no shortage of ethical guidelines. We argue that a main underlying cause for this is that AI developers face a social dilemma in AI development ethics, preventing the widespread adoption of ethical best practices. We define the social dilemma for AI development and describe why the current crisis in AI development ethics cannot be solved without relieving AI developers of their social dilemma. We argue that AI development must be professionalised to overcome the social dilemma, and discuss how medicine can be used as a template in this process.


1 Introduction

A professional should not have to choose between their job and doing the right thing. Still, artificial intelligence (AI) developers can be and are put in such a position. Take the example of a company that develops an AI tool to be used to guide hiring decisions: after the product has reached a certain stage, developers may identify ethical challenges, e.g. recognising that the tool discriminates against minorities. Avoiding this discrimination may require decreasing the performance of the product. What should the developers do to rectify the situation? They necessarily need to inform management about their concern, but their complaint can be met with indifference, and even a threat to replace them.¹

¹This example is a generalisation of numerous experiences the authors have had of being approached at relevant conferences by developers who perceive their work as unethical, e.g. as discriminatory against minorities. A common question in this context is whether they should risk losing their jobs for prioritising ethical considerations.

Situations such as these fall into the category of social dilemmas, and our goal in this paper is to highlight the impediment to ethical AI development posed by the social dilemma faced by AI developers. We argue that current approaches to ethical practices in AI development fail to account for the fact that developers must choose between doing the right thing and keeping their jobs.

A social dilemma exists when the best outcome for society would be achieved if everyone behaved in a certain way, but actually implementing this behaviour would lead to such drawbacks for an individual that they refrain from it. The problem we identify is that the current structures place the burden of refusing unethical development on the shoulders of developers, who cannot possibly carry it due to their social dilemma. Furthermore, this challenge will become increasingly relevant and prevalent, since AI is becoming one of the most impactful current technologies, with a huge demand for development Szczepański (2019); Bughin and Seong (2018).

Advances in the field of AI have led to unprecedented progress in data analysis and pattern recognition, with subsequent advances in industry. This progress is predominantly due to machine learning, which is a data-driven method. The data used are in most cases historical, and can thus reflect discriminatory practices and inequalities. Therefore, many machine learning models currently in use cement or even augment existing discriminatory practices and inequalities. Furthermore, AI technology does not have to be discriminatory for its development to be unethical. Mass surveillance based on e.g. facial recognition, smart policing and safe city systems is already in use in several countries Feldstein (2019), news feed models used by social media create echo chambers and foster extremism Cinelli et al. (2020), and autonomous weapon systems are in production Haner and Garcia (2019).

There has been rapid development in the field of AI ethics and sub-fields like machine learning fairness, see e.g. Mary et al. (2019); Verma and Rubin (2018); Goelz et al. (2019); Barocas et al. (2019). However, it is not clear that much progress is being made in implementing ethical practices in AI development, nor that developers are being empowered to refuse to engage in unethical AI development. Reports such as The AI Index 2021 Annual Report Zhang et al. (2021) stress the lack of coordination in AI development ethics. Specifically, one of the nine highlights in this report states that “AI ethics lacks benchmarks and consensus”.

The overwhelming majority of AI systems in use are developed by major corporations, also referred to as “Big Tech”. These corporations have reacted to academic and public pressure by publishing guidelines and principles for AI development ethics, and there has been what can be characterised as an inflation of such documents over the past years Jobin et al. (2019); Hagendorff (2020); Schiff et al. (2020). Although researchers and society view AI development ethics as important Ebell et al. (2021), the proliferation of ethical guidelines and principles has been met with criticism from ethics researchers and human rights practitioners who, e.g., oppose the imprecise usage of ethics-related terms Floridi and Cowls (2019); Rességuier and Rodrigues (2020). The critics also point out that the aforementioned principles are non-binding in most cases and, due to their vague and abstract nature, fail to be specific regarding their implementation. Finally, they do not give developers the power to refuse unethical AI development. The recent firings of accomplished AI ethics researchers Hao (2020b, a); Johnson (2021) for voicing concerns inconvenient to the business model of their employer demonstrate that top-down institutional guidelines are subject to executive decisions and can be overruled. While we acknowledge that we must be cautious when generalising from single cases, we are not alone in our concern that ethical principles might be merely ethics washing Metzinger (2019); Bietti (2020); Wagner (2018), i.e., that corporations only give the impression of ethical practices in order to avoid regulation. Thus, the need for implementing ethical principles in AI development remains, and a crucial factor for this to succeed is removing the social dilemma for AI developers.

Social dilemmas exist in most areas where individuals, employers and society form a relational triangle around decisions that affect society at large. AI is no exception; many fields encounter social dilemmas, and some have successfully implemented mitigating measures. A very prominent example is medicine. In this paper, we argue that medicine’s strong focus on professionalization and the development of binding professional ethical codes is a powerful way to protect medical professionals from social dilemmas, and we discuss how structures like those in medicine can serve as a blueprint for AI development, thus leading to a lasting impact on ethical AI development.

Before proceeding, we recognise that our analysis touches upon topics from other ethics sub-fields, namely business ethics, corporate ethics and research ethics. We do not adopt the viewpoint of any of these, since we believe that our analysis can inform them and would be hampered by too narrow a focus.

2 The social dilemma in AI development

A social dilemma, also referred to as a ‘collective action problem’, is a decision-making problem faced when the interests of the collective conflict with the interests of the individual making a decision. It was established in the early analysis of the cost of public goods by Olson (1971); Perrow and Olson (1973), who stated that “rational self-interested individuals will not act to achieve their common or group interests.” Well-known problems that can be considered instances of social dilemmas are the prisoner’s dilemma Luce and Raiffa (1957), the tragedy of the commons Lloyd (1980), the bystander problem Darley and Latané (1968), and fishing rights, among others. The best known of these is perhaps the tragedy of the commons, a situation in which individuals with open access to a shared resource selfishly deplete the resource, thus acting against the common good and hurting their own individual interests as a result. All collective action problems concern situations in which individuals fail to behave according to the interests of the collective, although this would ultimately benefit all individuals, or, as stated by Kollock (1998): “situations in which individual rationality leads to collective irrationality”. At the same time, all these examples are metaphors, which attests to the difficulty of formulating an exact definition of social dilemmas Allison et al. (1996).
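
To make Kollock’s phrase concrete, consider the canonical prisoner’s dilemma payoff matrix (a standard textbook illustration with the usual example values; the numbers are ours, not taken from the cited works):

% Canonical prisoner's dilemma payoffs, row player's payoff listed first.
% Standard illustrative values satisfying T > R > P > S, here 5 > 3 > 1 > 0.
\[
\begin{array}{c|cc}
 & \text{Cooperate} & \text{Defect} \\
\hline
\text{Cooperate} & (3,3) & (0,5) \\
\text{Defect} & (5,0) & (1,1)
\end{array}
\]

For each player, defecting strictly dominates cooperating (5 > 3 against a cooperator, 1 > 0 against a defector), so two individually rational players end up at (1,1), although mutual cooperation at (3,3) would leave both better off: individual rationality leads to collective irrationality.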

In the context of AI, the social dilemma has so far received little discussion, the exception being in relation to autonomous vehicles Bonnefon et al. (2016). Bonnefon et al. (2016) observe in their experiments that “people praise utilitarian, self-sacrificing AVs and welcome them on the road, without actually wanting to buy one for themselves”, and state that this has “…the classic signature of a social dilemma, in which everyone has a temptation to free-ride instead of adopting the behavior that would lead to the best global outcome”. This is, in fact, the tragedy of the commons Hardin (1968).

The social dilemma in AI development described in the introduction, however, does not fit the metaphor of the tragedy of the commons, or any of the other commonly used social dilemma metaphors. Consequently, we need to define the social dilemma in the context of AI development, and put forward the following definition: a social dilemma exists when the best outcome for society would be achieved if everyone behaved in a certain way, but actually implementing this behaviour would lead to such drawbacks for individuals that they refrain from the behaviour. In the social dilemma in AI development, we encounter three agents, each with their own, possibly conflicting, interests: society, a business corporation, and an AI developer who is a member of society and an employee of the business corporation. The interest of society is ethical AI development; the interest of the business corporation is profit and survival in the market; the interest of the developer is primarily maintaining their employment, and secondarily ethical AI development, because developers are also a part of society. The developer is thus put in a situation where they have to weigh their interest as a member of society against their interests as an employee of the corporation. This is the social dilemma we want the AI developer not to face.
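
One minimal way to formalise this weighing (an illustrative sketch of our own, not a model taken from the cited literature): let $c$ denote the expected personal cost to the developer of refusing unethical development (the risk of losing their job), $b$ the small share of the societal benefit of refusal that the developer captures as a member of society, and $B$ the aggregate benefit to society. The dilemma exists whenever

% b: developer's own share of the benefit of refusal; c: expected personal cost;
% B: aggregate societal benefit, with B much larger than b.
\[
b - c \;<\; 0 \;<\; B - c ,
\]

that is, each developer individually prefers to comply ($b < c$), even though refusal would be worthwhile for society even after accounting for the developer’s personal cost ($B > c$).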

An analysis by PricewaterhouseCoopers Rao and Verweij (2017) stated that AI has the potential to contribute trillions of dollars to the global economy by 2030. This puts business corporations in a competitive situation, especially regarding developing and deploying AI solutions fast. Fast development is potentially the opposite of what is needed for ethical development, which can require decreasing the development speed to implement necessary ethical analyses, or even deciding against deploying a system based on ethical considerations. This can create a direct conflict between the corporations’ motivation and the interest of society, which manifests in the work and considerations of the developers. The developers then find themselves in a situation where they might be replaced if they voice concerns or refuse to contribute to the development.

Expecting that AI developers will overcome this social dilemma without support is unrealistic. This stance is strengthened by observations from other areas where social dilemmas are evident, e.g. climate change, environmental destruction, and the reduction of meat consumption, where billions of people behave contrary to the agreed-upon common goal of sustainability because of their social dilemmas.

AI development ethics, however, is much more complex than, for example, the ethics of meat consumption. The ethical challenges in AI are often both novel and complicated, with unforeseeable effects. While different approaches to ethics may not provide the same answer to the question “What is ethical development?”, the process of analysing the ethical aspects of a development process or system yields important information regarding the risks that can be mitigated by the developer. Yet, analysing a system and its potential impact from an ethical standpoint requires ethical training and a methodology. For the AI developer untrained in ethics and facing a social dilemma, performing this task is unrealistic, especially at scale McLennan et al. (2020a).

We can also observe the potential for a social dilemma to occur on another level, this time for corporations: no single corporation or small group of corporations can take on the responsibility of solving AI development ethics, as this might put them at a disadvantage compared to other agents in the same market.² It is an interesting phenomenon that the social dilemma spirals upwards, in the sense that it can only be removed by solving it at the lowest level. If no corporation finds developers willing to engage in unethical development, no corporation can end up in the corporation-level social dilemma. Furthermore, imposing corporation-level regulations for ethical conduct would likely lead to a search for loopholes, especially since there would always be gray zones, context dependence and the need for interpretation. From this perspective, solving the social dilemma for developers is also the approach that would lead to the most stable solution.

²We acknowledge that there might be cases where the ethical development of a product can be considered a competitive business advantage. The existence of such cases does not preclude, however, that cases also exist where ethical development is a clear disadvantage.

3 Professional codes versus legislation

Ethical perspectives in AI development are important since unethical development of AI can have a profound, negative impact both on individuals and society at large. Motivated by recent efforts to propose a regulatory framework for AI European Commission (2021), one might be tempted to think that the challenge of ethical AI development could be solved solely by legislation. However, there are several reasons why legislation cannot fill this role: Legislation develops at a much slower pace than current technology, implying that legislation is likely to arrive after harm has already been done, or even worse, after customary practice has been established. Furthermore, legislating against anything that could potentially be unethical or misused would disproportionately hinder progress, which is both undesirable and would in practice affect small businesses more than large ones, reinforcing the already problematic power imbalance between users and providers.

We thus face the challenging situation where we have to entrust the corporations developing AI with ethical responsibility, although we recognise that they are not only motivated by the benefit of society. Neither can we rely on legislation to define hard legal boundaries. Thus, many individual developers will be hindered from pursuing ethical development due to their social dilemma. We now describe a possible solution to this problem, recognising that the described phenomenon is not novel from a societal point of view.

Historically, societies understood early that certain professions, while having the potential to be valuable for society, require stronger oversight than others due to their equally substantial potential for harm. While the necessity of a certain autonomy and freedom for professionals is acknowledged, it is important to simultaneously expect professionals to work for the benefit of society. As Frankel (1989) puts it: “Society’s granting of power and privilege to the professions is premised on their willingness and ability to contribute to social benefit and to conduct their affairs in a manner consistent with broader social values”. Camenisch (1983) even argues that the autonomy of a profession is a privilege granted by society, which in turn leads to a moral duty of the profession to work for societal benefit. Professional codes that are not in line with societal good will be rejected by society Jamal and Bowie (1995).

Professional codes have been used to promote agreed-upon professional values in areas where legislative solutions are inadequate. Members of a profession are tied together in a “major normative reference group whose norms, values, and definitions of appropriate [professional] conduct serve as guides by which the individual practitioner organizes and performs his own work” Pavalko (1988). Most importantly in the context of this work, professional codes are a natural remedy against social dilemmas encountered in professional settings. The individual is relieved of the potential consequences of criticising conduct, or of refusing to engage in behaviour, that violates their professional code, since it would be highly unlikely that another member of the same profession would be willing to perform the same acts in violation of the code. Furthermore, the public would have insight into the standard ethical conduct of the entire profession.

Naturally, professional codes do not develop in a void. They draw from ethical theories, the expectations of society, and the self-image of the professionals. Consequently, professional codes are never set in stone but are constantly revised in light of technical advancement, developments in societal norms and values, and regulatory restrictions. However, although they are dynamic, there is still at any given time a single valid version, protecting the individual professional from the social dilemma and maximising the benefit for society.

Of course, the development of professional codes is not an easy task. We argue that it is best to draw from a field that has succeeded at this task, as described in the next section.

4 Professional codes in medicine

As stated in the Introduction (section 1), we suggest using medicine as a template for a professional code for AI development ethics. Although other fields have also developed professional codes, we argue based on societal impact that medicine is the most suitable example to follow. Medicine, being primarily responsible for individual and public health, has a tremendous impact on society, at a level which few, if any, other professions share. AI has the potential for a similar or even more substantial impact, depending on future developments.

Medicine is an ancient profession, with the first written records dating back to Sumer around 2000 BC Biggs (1995). The Hippocratic oath, the first recorded document of medical professional codes, was introduced by the ancient Greeks, and its impact was so large that many laypeople incorrectly believe that it is still taken today Zwitter (2019). The British and American medical associations drafted their first codes of ethical conduct in the 19th century Backof and Martin (1991). Modern medicine has evolved considerably over the past 150 years, and milestones in the development of professional codes have been the declarations of the World Medical Association Gillon (1985), which names the promotion of ethical professional codes as one of its main goals. The two most prominent are the Declaration of Geneva [46], a response to the cruelties performed by medical professionals in Germany and Japan during World War II, and the Declaration of Helsinki [47] on the ethical conduct of medical research. These documents are continuously updated and have received further refinement especially after disastrously unethical events, e.g. the revelation of the Tuskegee syphilis study Chadwick (1997). Based on these documents, professional medical associations around the world have drafted professional ethical codes. Importantly, these codes are specifically designed not to depend on legislation, which can differ greatly between countries Gillon (1985).

Medical professionals are guided in their work by these ethical codes, and are protected from the social dilemma, as the publicly known ethos enables them to refuse unethical behaviour without fear of repercussions.³ Due to the similar level of expected impact on society, we view medicine as a suitable template for a professional code for AI development ethics. In the following section we outline how the field of AI needs to adapt in order to develop robust, impactful, and unified professional codes in analogy to medicine.

³We do not, of course, claim that this system is foolproof and can prevent the social dilemma fully. The professional codes in medicine are, however, arguably the ones that protect their professionals best, in an area with the highest societal impact.

5 Towards professional codes in AI

In this section we discuss the present issues with establishing ethical codes of conduct for AI developers and outline some possible paths forward.

5.1 Current issues

The topic of professional codes for AI has attracted considerable interest in recent years. The field is too broad to cite all relevant publications, but several works have pointed out the fluid nature of this field and its complexity, see e.g. Larsson (2020); Boddington (2017). Yet, we observe that there is little tangible practical impact, in the sense that there are no broadly accepted professional codes today. We believe that this can be attributed to two major reasons:

First, current analyses accept many boundary conditions as given, instead of suggesting how to change these conditions. We exemplify this as follows. It is true that even if large organizations, such as the IEEE, adopted professional codes for AI development, a plethora of challenges would remain. Who is an AI developer? What is the incentive for someone to join these organizations and abide by the codes? What keeps an organization from diverging from the code if the potential gain, or the cost avoided, is substantial? Even worse, several competing organizations might publish different codes, fragmenting the field and making it impossible for developers and the public to know the norms of the profession. Lastly, such efforts might even be hijacked by Big Tech: they could “support” certain organizations in publishing ethical codes, with significant influence on their content, essentially leading to a new form of ethics washing. We agree that under these boundary conditions, an implementation of professional codes for AI seems difficult or even impossible. However, we argue that we must distinguish between an analysis of a situation and practical suggestions regarding how conditions can and should be changed. In brief, analyzing a situation will not change it; only changing the relevant determining factors will.

Second, as long as current initiatives do not free developers from the social dilemma outlined earlier, any implementation of professional codes will inevitably fail. For example, recent work has addressed how embedding ethicists in development teams might support ethical AI development McLennan et al. (2020a, b). However, if these ethicists are themselves just other employees of the same corporation, the social dilemma applies to them as well.

5.2 Possible ways forward

We thus argue that in order to solve the crisis in AI development ethics, a process that addresses the two points in section 5.1 must be initiated. Our primary proposition is that AI development must become a unified profession, taking medicine as an example. And, as in medicine, it must become licensed. The licence must be mandatory for all developers of medium- to high-risk AI systems, following, e.g., the Proposal for a Regulation laying down harmonised rules on artificial intelligence by the European Union European Commission (2021). This would protect the individual developer in an unprecedented manner. The chances of being replaced by another professional would be very small, since employers would know that all AI developers abide by the same code. Thus, AI developers could refuse to perform unethical development without fear of the consequences of the social dilemma. The difference before and after introducing a professional ethos is depicted in fig. 1.

Secondly, national AI developer organisations maintaining registers of employed AI developers must be established, analogous to national medical societies. These organisations, including all their members, would serve as nuclei for the development of professional codes, and be responsible for maintaining, updating and refining them. With such a system in place, understanding and following the codes, the professional ethos, would replace the need for individual formal training in the methodology of ethics, as is the case in medicine. Lastly, unethical behaviour could lead to the loss of one’s licence, which is a strong incentive not to take part in unethical development, even if required by an employer. Note how legislation does not influence the content of the professional codes but facilitates them by creating the right boundary conditions.

We do not claim that unifying AI development into one profession is a simple task. On the contrary, we acknowledge all the challenges other authors, e.g. Mittelstadt (2019), have pointed out regarding defining who is an AI professional, and the complex interactions between all stakeholders in AI governance. The difference is that we do not focus on what hinders the process, but argue that establishing ethical AI development will otherwise fail: as long as professionals can be uncertain about whether they are AI developers, as long as corporations can claim that their employees are not AI developers, as long as we leave developers alone with their social dilemma, as long as there is no single international institution serving as a contact point for governments and corporations, and as long as there is no accountability for unethical AI development, no stable solution securing ethical future AI development will be found.

Although overcoming all obstacles to a unified AI developer profession will be a tedious endeavour, it will remove the social dilemma for developers. We argue that this is the only realistic way to ensure that AI development follows goals in alignment with societal benefit. Once a unified profession with professional codes exists, it will serve as a safeguard against unethical corporate and governmental interests. This is important as the role of corporations can be manifold, and a unified profession will help to steer their decisions in a direction aligned with society’s ethical expectations. Removing the need for internal guidelines would also remove the possibility of using AI ethics as merely a marketing narrative.

Figure 1: (a) Now: society’s need for ethical conduct and the employer’s need to develop products together put the developer into a dilemma. (b) After introducing the ethos: what was previously a dilemma for the developer is now a trade-off that society, together with the employer, has to handle using established methods.

6 Conclusion

AI technology has the potential for substantial advancements, but also for negative impacts on society, and thus requires assurance of ethical development. However, despite massive interest and effort, the implementation of ethical practice in AI development remains an unsolved challenge, which in our view makes it obvious that the current approach to AI development ethics fails to provide such assurance. Our position is that the current, guideline-based approach to AI development ethics fails to have an impact where it matters. We argue that the key to ethical AI development at this stage is solving the social dilemma for AI developers, and that this must be done by unifying AI development into a single profession. Furthermore, we argue, based on observations from the mature field of medicine, that a unified professional ethos is necessary to ensure a stable situation of ethical conduct that is beneficial to society.

We have discussed ethical considerations from the perspective of added cost, but would like to point out that ethical development also has intrinsic value. Awareness of ethical responsibilities, both inwards (towards the corporation and peers) and outwards (towards clients and society), leads directly to the protection of assets and reputation. Professional objectives in line with ethical values lead to increased dedication and a sense of ownership, resulting in higher-quality deliverables. Practice in ethical consideration and evaluation processes improves professionals’ decision-making and implementation abilities, making them more willing to adapt to changes required for sustainability. A focus on ethical considerations fosters a culture of openness, trust and integrity, which in turn decreases the risk of issues being downplayed. Outstanding professionals with the privilege of choosing among several employers are likely to consider not only the opportunity for professional growth, but also whether they can expect their future employer to treat them and their peers justly and ethically.

By focusing on the social dilemma, we have added further pressure to motivate the development of professional AI codes of ethics. Much remains to be done to operationalise the desired professional certification framework.

We can observe that the medical professional ethical code is built on a long-standing tradition of professional codes. In the field of AI, we do not have the benefit of such a historical and globally recognised entity. Thus, the first step will be to agree on the core values and principles that apply to any AI developer in any context. The next step will be to operationalise those values and principles on a national level by establishing a certification framework for AI developers. Governments do not need to be left on their own when developing these certification frameworks, as they can be based on the experience with the many national medical certification frameworks.

References

  • S. T. Allison, J. K. Beggan, and E. H. Midgley (1996) The quest for “similar instances” and “simultaneous possibilities”: metaphors in social dilemma research. Journal of Personality and Social Psychology 71 (3), pp. 479–497. External Links: Document Cited by: §2.
  • J. F. Backof and C. L. Martin (1991) Historical perspectives: development of the codes of ethics in the legal, medical and accounting professions. Journal of Business Ethics 10 (2), pp. 99–110 (en). External Links: ISSN 1573-0697, Link, Document Cited by: §4.
  • S. Barocas, M. Hardt, and A. Narayanan (2019) Fairness and machine learning. fairmlbook.org. Note: http://www.fairmlbook.org Cited by: §1.
  • E. Bietti (2020) From ethics washing to ethics bashing: a view on tech ethics from within moral philosophy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* ’20, New York, NY, USA, pp. 210–219. External Links: ISBN 9781450369367, Link, Document Cited by: §1.
  • R. D. Biggs (1995) Medicine, surgery, and public health in ancient Mesopotamia. Civilizations of the Ancient Near East, Vol. 3, pp. 1911 (English). Note: ISBN: 9780684197227 Cited by: §4.
  • P. Boddington (2017) Towards a Code of Ethics for Artificial Intelligence. Artificial Intelligence: Foundations, Theory, and Algorithms, Springer International Publishing, Cham (en). External Links: ISBN 978-3-319-60647-7 978-3-319-60648-4, Link, Document Cited by: §5.1.
  • J. Bonnefon, A. Shariff, and I. Rahwan (2016) The social dilemma of autonomous vehicles. Science 352 (6293), pp. 1573–1576. External Links: Document, https://science.sciencemag.org/content/352/6293/1573.full.pdf, ISSN 0036-8075, Link Cited by: §2.
  • J. Bughin and J. Seong (2018) Assessing the economic impact of artificial intelligence. Report, The International Telecommunication Union (ITU), McKinsey Global Institute. External Links: Link Cited by: §1.
  • P. F. Camenisch (1983) Grounding professional ethics in a pluralistic society. Haven Publications, New York, N.Y. External Links: ISBN 978-0-930586-11-9 Cited by: §3.
  • G. L. Chadwick (1997) Historical perspective: Nuremberg, Tuskegee, and the radiation experiments. Journal of the International Association of Physicians in AIDS Care 3 (1), pp. 27–28 (eng). External Links: ISSN 1081-454X Cited by: §4.
  • M. Cinelli, G. D. F. Morales, A. Galeazzi, W. Quattrociocchi, and M. Starnini (2020) Echo chambers on social media: a comparative analysis. External Links: 2004.09603 Cited by: §1.
  • J. M. Darley and B. Latané (1968) Bystander intervention in emergencies: diffusion of responsibility. Journal of Personality and Social Psychology 8 (4, Pt.1), pp. 377–383. External Links: Document Cited by: §2.
  • C. Ebell, R. Baeza-Yates, R. Benjamins, H. Cai, M. Coeckelbergh, T. Duarte, M. Hickok, A. Jacquet, A. Kim, J. Krijger, J. MacIntyre, P. Madhamshettiwar, L. Maffeo, J. Matthews, L. Medsker, P. Smith, and S. Thais (2021) Towards intellectual freedom in an AI ethics global community. AI and Ethics, pp. 1–8. External Links: ISSN 2730-5953, Link Cited by: §1.
  • European Commission (2021) Proposal for a regulation laying down harmonised rules on Artificial Intelligence. European Commission. External Links: Link Cited by: §3, §5.2.
  • S. Feldstein (2019) The global expansion of AI surveillance. Technical report Carnegie Endowment for International Peace. External Links: Link Cited by: §1.
  • L. Floridi and J. Cowls (2019) A unified framework of five principles for AI in society. Harvard Data Science Review 1 (1). Note: https://hdsr.mitpress.mit.edu/pub/l0jsh9d1 External Links: Document, Link Cited by: §1.
  • M. S. Frankel (1989) Professional codes: why, how, and with what impact?. Journal of Business Ethics 8 (2-3), pp. 109–115 (en). External Links: ISSN 0167-4544, 1573-0697, Link, Document Cited by: §3.
  • R. Gillon (1985) Medical oaths, declarations, and codes.. Br Med J (Clin Res Ed) 290 (6476), pp. 1194–1195 (en). Note: Publisher: British Medical Journal Publishing Group Section: Research Article External Links: ISSN 0267-0623, 1468-5833, Link, Document Cited by: §4.
  • P. Goelz, A. Kahng, and A. D. Procaccia (2019) Paradoxes in fair machine learning. In Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, and R. Garnett (Eds.), Vol. 32. External Links: Link Cited by: §1.
  • T. Hagendorff (2020) The ethics of AI ethics: an evaluation of guidelines. Minds and Machines 30 (1), pp. 99–120. External Links: ISSN 1572-8641, Link, Document Cited by: §1.
  • J. Haner and D. Garcia (2019) The artificial intelligence arms race: trends and world leaders in autonomous weapons development. Global Policy 10 (3), pp. 331–337. External Links: Document, Link, https://onlinelibrary.wiley.com/doi/pdf/10.1111/1758-5899.12713 Cited by: §1.
  • K. Hao (2020a) “I started crying”: inside Timnit Gebru’s last days at Google - and what happens next. MIT Technology Review. External Links: Link Cited by: §1.
  • K. Hao (2020b) We read the paper that forced Timnit Gebru out of Google. Here’s what it says. MIT Technology Review. External Links: Link Cited by: §1.
  • G. Hardin (1968) The tragedy of the commons. Science 162 (3859), pp. 1243–1248. External Links: Document, ISSN 0036-8075, Link, https://science.sciencemag.org/content/162/3859/1243.full.pdf Cited by: §2.
  • K. Jamal and N. E. Bowie (1995) Theoretical considerations for a meaningful code of professional ethics. Journal of Business Ethics 14 (9), pp. 703–714 (en). External Links: ISSN 1573-0697, Link, Document Cited by: §3.
  • A. Jobin, M. Ienca, and E. Vayena (2019) The global landscape of AI ethics guidelines. Nature Machine Intelligence 1. External Links: Document Cited by: §1.
  • K. Johnson (2021) Google targets AI ethics lead Margaret Mitchell after firing Timnit Gebru. VentureBeat. External Links: Link Cited by: §1.
  • P. Kollock (1998) Social dilemmas: the anatomy of cooperation. Annual Review of Sociology 24 (1), pp. 183–214. External Links: Document, Link, https://doi.org/10.1146/annurev.soc.24.1.183 Cited by: §2.
  • S. Larsson (2020) On the governance of artificial intelligence through ethics guidelines. Asian Journal of Law and Society 7 (3), pp. 437–451. External Links: Document Cited by: §5.1.
  • W. F. Lloyd (1980) W. F. Lloyd on the checks to population. Population and Development Review 6 (3), pp. 473–496. External Links: ISSN 00987921, 17284457, Link Cited by: §2.
  • R. D. Luce and H. Raiffa (1957) Games and decisions: introduction and critical survey. Wiley, Chicago. Cited by: §2.
  • J. Mary, C. Calauzènes, and N. E. Karoui (2019) Fairness-aware learning for continuous attributes and treatments. In Proceedings of the 36th International Conference on Machine Learning, K. Chaudhuri and R. Salakhutdinov (Eds.), Proceedings of Machine Learning Research, Vol. 97, pp. 4382–4391. External Links: Link Cited by: §1.
  • S. McLennan, A. Fiske, L. A. Celi, R. Müller, J. Harder, K. Ritt, S. Haddadin, and A. Buyx (2020a) An embedded ethics approach for AI development. Nature Machine Intelligence 2 (9), pp. 488–490 (en). External Links: ISSN 2522-5839, Link, Document Cited by: §2, §5.1.
  • S. McLennan, A. Fiske, L. A. Celi, R. Müller, J. Harder, K. Ritt, S. Haddadin, and A. Buyx (2020b) An embedded ethics approach for AI development. Nature Machine Intelligence 2 (9), pp. 488–490 (en). External Links: ISSN 2522-5839, Link, Document Cited by: §5.1.
  • T. Metzinger (2019) Ethics washing made in europe. Der Tagesspiegel. Note: Editorial External Links: Link Cited by: §1.
  • B. Mittelstadt (2019) Principles alone cannot guarantee ethical AI. Nature Machine Intelligence 1 (11), pp. 501–507 (en). External Links: ISSN 2522-5839, Link, Document Cited by: §5.2.
  • M. Olson (1971) The logic of collective action: public goods and the theory of groups, second printing with a new preface and appendix. Harvard University Press. External Links: ISBN 9780674537507, Link Cited by: §2.
  • R. M. Pavalko (1988) Sociology of occupations and professions. Itasca, Ill. : F.E. Peacock (eng). External Links: ISBN 978-0-87581-324-0, Link Cited by: §3.
  • C. B. Perrow and M. Olson (1973) Review: [untitled]. Social Forces 52 (1), pp. 123–125. External Links: ISSN 00377732, 15347605, Link Cited by: §2.
  • A. Rao and G. Verweij (2017) Sizing the prize. PwC. External Links: Link Cited by: §2.
  • A. Rességuier and R. Rodrigues (2020) AI ethics should not remain toothless! a call to bring back the teeth of ethics. Big Data & Society 7 (2), pp. 2053951720942541. External Links: Document, Link, https://doi.org/10.1177/2053951720942541 Cited by: §1.
  • D. Schiff, J. Biddle, J. Borenstein, and K. Laas (2020) What’s next for AI ethics, policy, and governance? a global overview. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, AIES ’20, New York, NY, USA, pp. 153–158. External Links: ISBN 9781450371100, Link, Document Cited by: §1.
  • M. Szczepański (2019) Economic impacts of artificial intelligence (AI). European Parliamentary Research Service, European Parliament Think Tank. Note: Briefing External Links: Link Cited by: §1.
  • S. Verma and J. Rubin (2018) Fairness definitions explained. In Proceedings of the International Workshop on Software Fairness, FairWare ’18, New York, NY, USA, pp. 1–7. External Links: ISBN 9781450357463, Link, Document Cited by: §1.
  • B. Wagner (2018) Ethics as an escape from regulations: from “ethics-washing” to ethics-shopping?. In Being profiled: Cogitas ergo sum: 10 Years of Profiling the European Citizen, pp. 84–89. External Links: Link Cited by: §1.
  • [46] World Medical Association: WMA Declaration of Geneva (en-US). External Links: Link Cited by: §4.
  • [47] World Medical Association: WMA Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects (en-US). External Links: Link Cited by: §4.
  • D. Zhang, S. Mishra, E. Brynjolfsson, J. Etchemendy, D. Ganguli, B. Grosz, T. Lyons, J. Manyika, J. C. Niebles, M. Sellitto, Y. Shoham, J. Clark, and R. Perrault (2021) The AI Index 2021 Annual Report. External Links: Link, 2103.06312 Cited by: §1.
  • M. Zwitter (2019) Ethical Codes and Declarations. In Medical Ethics in Clinical Practice, pp. 7–13. External Links: ISBN 978-3-030-00718-8 978-3-030-00719-5, Link, Document Cited by: §4.