
Atomist or Holist? A Diagnosis and Vision for More Productive Interdisciplinary AI Ethics Dialogue

08/19/2022
by   Travis Greene, et al.

In response to growing recognition of the social, legal, and ethical impacts of new AI-based technologies, major AI and ML conferences and journals now encourage or require submitted papers to include ethics impact statements and undergo ethics reviews. This move has sparked heated debate concerning the role of ethics in AI and data science research, at times devolving into counter-productive name-calling and threats of "cancellation." We diagnose this deep ideological conflict as one between atomists and holists. Among other things, atomists espouse the idea that facts are and should be kept separate from values, while holists believe facts and values are and should be inextricable from one another. With the goals of encouraging civil discourse across disciplines and reducing disciplinary polarization, we draw on a variety of historical sources ranging from philosophy and law, to social theory and humanistic psychology, to describe each ideology's beliefs and assumptions. Finally, we call on atomists and holists within the data science community to exhibit greater empathy during ethical disagreements and propose four targeted strategies to ensure data science research benefits society.


Introduction

In his 1917 lecture Science as a Vocation, Max Weber argues that questions of fact are separate from questions of value [weber1958science]. Given pre-specified ends, the task of science is to find the best means of achieving them. But asking which ends we ought to achieve is a question of values, answerable only by philosophy or religion. Weber was not alone on this point. Eminent mathematicians, physicists, and economists espouse similar views on the separation of science and values. But a growing number of data science researchers see things differently. They argue that values, particularly those related to the desirability of social and political goals, implicitly influence the foundations of data science practices [friedler2021possibility, green2021data, crawford2021atlas].

As public concern mounts over the use of AI-driven research to enable surveillance technologies, deep fakes, biased language models [weidinger2022taxonomy], misinformation, addictive behaviors, and the discriminatory use of facial recognition and emotion detection algorithms [Kate2022ProtocolEthics], data scientists appear divided along ideological lines about what to do. While some may view the inclusion of ethics impact statements in several AI conferences and journals as a sign that the value-neutral ideal of science is no longer tenable, a vocal cadre of data scientists has pushed back against this conclusion, asserting the importance of academic freedom and value-neutrality [DomingosACMOpenLetter]. Yet the conflict between these two ideological camps threatens to polarize the data science community, lowering the prospects that new AI-based technologies will contribute to our collective well-being.

This largely historical and conceptual paper is organized as follows. Section 1 lays out the conflicting ideological views—atomism and holism—underlying AI ethics debates. Section 2 contrasts the “two cultures” of atomism and holism by presenting a guiding metaphor for each that captures key aspects of each culture’s self-concept. Section 3 relates philosophical debate on facts and values to community concerns about the ethical evaluation of AI-based technologies. Section 4 considers how ethics impact statements embody a shift towards a more holist data science culture and discusses the social benefits of value-based discussions. Section 5, in response, identifies key sticking points of ethics reviews from the perspective of an atomist data scientist. Finally, Section 6 presents a broad vision for more empathetic discussion of value-related issues and proposes four targeted strategies for more civil discourse.

1 Atomism vs. holism: An overview of conflicting ideologies

An ideology is an all-encompassing world-view advancing a political and ethical vision of the good society [heywood2021political]. To better understand the nature of ethical disagreements in data science, we introduce a simple ideological taxonomy we call atomism and holism, inspired by philosophers and intellectual historians such as Popper [popper2012open], Bunge[bunge2000systemism], Sowell[sowell2002conflict], and Skinner[skinner2002visions]. The purpose of the taxonomy is to make each ideology’s implicit beliefs, assumptions, and historical foundations more explicit so they can be reflected on, refined, and more openly discussed within the data science community should ethical conflict arise [domingosCancelBlog] (see Table 1).

To delineate the scope of the arguments and examples, we use the label data scientist to cover both industry data scientists who design and deploy socially-impactful AI/ML-based systems, and data scientists—often in academia, but also in industry—whose job descriptions include publishing research articles in peer-reviewed conferences and journals. Our broad definition includes AI/ML researchers and AI/ML engineers. We use the terms data science and AI/ML interchangeably.

Dimension | Atomist | Holist
Guiding metaphor | Tool-maker | Social steward
Facts and values | Separate | Inseparable
Associated “isms” | (Neo)liberalism, libertarianism, logical positivism, modernism | Communitarianism, feminism, post-positivism, post-modernism
Social orientation | Centripetal/individualist | Centrifugal/collectivist
Self-concept | Autonomous | Relational
Means of social coordination | Incentives and markets | Shared moral values and dialogic exchange
Key moral concepts | Rights, duties, contracts, impartial justice | Empathy, caring, connection, responsivity to vulnerable others
Vision of the good life | Neutral and realistic | Distinct and idealistic
Scientific methodology | Data-driven, empiricist, neutral | Theory-laden, rationalist, perspectival
Extreme form leads to | Technocracy/nihilism/alienation | Totalitarianism/dogmatism/tribalism

Table 1: A taxonomy of two basic but often conflicting ideologies in the data science community along several key dimensions.

1.1 Conflicting visions of self, society, and politics

Our diagnosis starts from the observation that the conflict between atomist and holist data scientists echoes classic debates between traditional liberals on one side and communitarians and care ethicists on the other [slote2007ethics, held2006ethics, noddings2013caring]. The conflict centers on the nature of and relationship between the self and society [etzioni1996responsive], and whether a just society should advance any particular view of the good life [sandel1984procedural]. Atomists embody more of an individualist culture, holists more of a collectivist culture [triandis2018individualism]. The atomist is more inwardly focused on his or her individual needs and motivations, whereas the holist is more outwardly focused on the needs, interests, and goals of the social collective. One helpful way of summarizing the atomist and holist conceptions of the self is by analogy to centripetal and centrifugal forces, respectively [etzioni1996responsive]. Notably, both atomists and holists claim the opposing ideology—when taken to the extreme—leads to various social and individual pathologies.

Holist data scientists view their identities relationally, as derived from their being embedded in a larger society with a particular and unique history and culture. They are data scientists, but also concerned citizens. These social, political, and cultural relations entail various relationships of responsibility and caring for others [noddings2013caring, held2006ethics]. Holists stress that the self is embedded in and constituted through caring relations to particular others. As such, holists believe that a good life and society should foster attentive, caring relations and cultivate empathy for the experiences and suffering of vulnerable others [slote2007ethics, held2006ethics]. Holist data scientists recognize that their technical expertise allows them to understand AI technologies in a way that politicians and ordinary citizens might not. Their privileged knowledge creates an obligation to communicate these concerns publicly so that citizens and policy-makers can act appropriately [anderson2011democracy]. Holists thus see themselves as social stewards or fiduciaries acting on behalf of society. Limiting the freedom of other data scientists in order to prevent harm to vulnerable third parties, marginalized groups, and the environment is, for holists, a necessary evil in the quest towards the ideal society.

Atomists, as the name implies, tend to see the world reductively [bunge2000systemism] and believe that what separates us is prior to what connects us [held2006ethics]. In political matters, atomists emphasize our separateness and independence from others while focusing on abstract issues of impartial justice, duties and rights. They stress the importance of formal equality under the law and the freedom to enter into voluntary contracts of exchange in markets; they prefer economic systems in which social coordination is achieved through the pursuit of enlightened self-interest, rather than imposed by an external authority [hayek1980individualism]. Finally, atomists value personal autonomy and take violations of their personal integrity seriously [smiley2009moral]. Personal integrity refers to the general class of life projects to which a person is committed [smart1973utilitarianism]. Atomist data scientists feel the imposition of moral responsibility to others unfairly infringes on their freedom to pursue their life projects as they see fit, a view which reflects their centripetal view of self and society.

1.2 Conflicting visions of the role of science in society

Mirroring economic arguments for specialization and trade, atomists support a gap between fact-based inquiry and value-based inquiry. The job of the technically-trained scientist is to conduct fact-based inquiry and report experimental results. In contrast, the job of policy-makers is to advance the state of society by acting on these results according to political and ethical values [rudner1953scientist]. How others interpret the facts, and what they do on the basis of these interpretations, is out of the control of the scientist. The atomist thus defends the empiricist position that a judgment of fact entails nothing about value, and vice versa [elgin2013fact]. Factual statements, strictly speaking, do not motivate action or decision [hume2003treatise] (i.e., ought cannot be derived from is). Atomists argue that intellectual specialization and division of labor—i.e., a logical gap between value-neutral fact finding and value-based decision-making [popper2012open]—is a feature, not a bug, of the arrangement. Hence, atomists see the imposition of ethics impact statements as hampering scientific fact-finding, which is essential to innovation and economic progress.

Atomists are also concerned about how the injection of values can lead to—in the best case—wishful thinking and—in the worst case—to dogmatism and totalitarianism [popper2012open]. Atomists cite how Soviet science was notoriously influenced by its interpretation of Marxist philosophy, or how Galileo was forced to recant his support of Copernicanism under the inquisition of the Catholic church. The dogmatic and totalitarian tendencies of holists were recently summarized by one data science academic who condemned “the increasing use of repressive actions aimed at limiting the free and unfettered conduct of scientific research and debate" [DomingosACMOpenLetter]. By claiming that science is and ought to be value-free, or at least value-neutral [betz2013defence], atomists advocate a form of inductive empiricism, tracing back to Francis Bacon and Galileo [lacey2005science].

Meanwhile, holists advocate a post-positivist attitude to science, where facts and values mutually inform one another [anderson1995value]. They doubt there is a uniquely correct or absolute view of reality [rorty2009philosophy] and emphasize the role of interpretation and perspective [denzin2011sage]. Holists thus reject the value-free ideal of science and the atomist ban on deriving an ought from an is. Instead, holists believe facts discovered by social/data science ought to be put towards realizing a distinct vision of the good life and society [gorski2013beyond, dewey1998essential]. Free and open discussion of values allows for dialogue aimed at finding overlapping consensus about which values are and should be embodied in data science research and applications. Without a clear articulation of and commitment to shared moral values, new AI technologies may impede the achievement of the good life [floridi2018ai4people]. In particular, holists fear the creation of a faceless and unaccountable technocracy narrowly aimed at prediction and control of human behavior resulting from the “value-neutral,” instrumental application of science [habermas1985theory]. In short, holists view ethics impact statements as tools for clarifying shared action-guiding values, developing moral character, and inducing democratic deliberation on the responsibility of data scientists to society.

Figure 1: The data scientist community risks splitting into two opposing factions—atomists vs. holists—with conflicting ideologies and self-concepts. We propose several targeted recipes designed to help bridge these divisions and promote more empathetic, fruitful, and civil debate on AI ethics issues. (Images courtesy of daphnesembroidery.com)

2 The “two cultures” within the data science community

In 1959, at the height of the Cold War, and as the US military-industrial complex established itself, scientist and writer C.P. Snow worried about a growing divide between two academic cultures—those from the “hard sciences” and the “humanities”—whose specialization rendered them increasingly hostile and unmotivated to communicate with one another [snow2012two]. We suggest a similar dynamic may be stoking division within the larger data science community, and here sketch two guiding metaphors capturing key differences in the self-concept of atomists and holists (see Figure 1).

2.1 Atomists: Data scientists as value-neutral toolmakers

Echoing Max Weber’s position on the separation of facts and values, atomists see themselves as autonomous and rational optimizers of a more or less pre-specified objective function provided by their disciplinary paradigm. The paradigm supplies a constellation of accepted beliefs, values, and techniques with which the community members can solve the puzzles identified in the paradigm [kuhn1970structure]. Shared values, such as the preference for quantitative over qualitative predictions, are implicitly embedded within the paradigm, but largely left unquestioned so that community members can better attend to the facts. The resulting precision contributes to scientific progress, as consensus around clearly defined criteria [cole1994sociology] permits puzzle-formulation and solution. In AI and data science research, for instance, these shared standards might take the form of the 0.05 threshold for p-values in statistical hypothesis testing [rudner1953scientist], or the use of several standard benchmark datasets (e.g., ImageNet, CIFAR-10, Fashion MNIST, etc.) and performance metrics (e.g., precision, recall, or F1 score).
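To make the idea of paradigm-shared evaluation standards concrete, here is a minimal sketch (in Python, with made-up labels and predictions) of the kind of metric computation that functions as a shared, largely unquestioned yardstick within the community:

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    # Counts from the binary confusion matrix.
    tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy example: six ground-truth labels and six model predictions.
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 1])
print(precision_recall_f1(y_true, y_pred))  # (0.75, 0.75, 0.75)
```

The point is not the arithmetic but the social fact: which metric to report, and on which benchmark, is itself a value choice that the paradigm settles in advance.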

The atomist’s belief in the value-neutrality of science suggests the data scientist’s job consists in the search for more efficient means of reaching the shared ends of the paradigm, not in proposing new ends or critiquing the legitimacy of the paradigm (e.g., claiming the current paradigm may be overfitting to certain benchmark datasets [beyer2020we]). Larger questions of goals and purposes are external to the paradigm—and hence the paradigm remains silent on them, as it has no specialized tools to objectively settle such questions, let alone precisely formulate them. Just as an expert craftsman or toolmaker does not generally inquire into the final purposes of the tools he or she produces, the atomist data scientist focuses on what he or she knows best, as determined by the paradigm, and leaves larger questions of purpose, goals, and ends to the philosophers, theologians, and politicians. The atomist data scientist thus holds that the results and artifacts of AI research may later be used for good or for bad, but are in themselves value-neutral.

2.2 Holists: Data scientists as social stewards

In contrast, holists see themselves as social stewards or fiduciaries working on behalf of society. The conviction that data scientists ought to act as social stewards or fiduciaries is advocated by a growing number of data scientists [bak2021stewardship], particularly in high-stakes healthcare domains [eaneff2020case]. Holists believe that data scientists’ fiduciary responsibility to society derives from their capacity to “produce consequences that matter to others” [goodin1986protecting, pg. 110]. In particular, holists are concerned about unjust power differentials and coercive dependency relationships [held2006ethics] that may arise due to applications of AI-based technologies. The vocational role of steward or fiduciary thus aligns with the holist belief that the self is constituted through social relations.

A fiduciary relation is a special kind of social relation involving discretion, power, inequality, dependence, vulnerability, trust, and confidence [miller2013justifying]. The word fiduciary itself stems from the Latin fiducia, meaning “trust.” Fiduciaries have the power and authority to make decisions on behalf of the beneficiary. Examples of fiduciary relations include trustee-beneficiary, principal-agent, manager-shareholder, lawyer-client, doctor-patient, and parent-child relations.

The presumed fiduciary power of holists may be justified on the basis of their unequal access to knowledge of technical details and social applications of emerging AI-based technologies. As a result of this knowledge asymmetry, fiduciaries abide by a duty of loyalty to act in the best interests of the beneficiary and refrain from opportunism and conflicts of interest [miller2013justifying]. An open question, however, is whether holists can rightly claim to know what is in the best interests of society. Another concern is that fiduciary relationships are typically created by law or consented to by the beneficiary [miller2013justifying], but atomists might argue that holists have unilaterally imposed upon themselves the fiduciary duty to act on behalf of society.

3 The separation of facts and values

Atomists and holists disagree about how facts and values relate and their role in science. To better clarify the fact-value debate, this section examines the issues in more detail. At stake is whether values can be subjected to rational scrutiny and thereby provide objective grounds for ethical criticism of AI technologies.

3.1 The fact-value controversy: A brief historical interlude

The crux of the fact-value debate centers on whether it is possible to rationally criticize (i.e., by appeal to logic or empirical evidence) both the means used to achieve an end, and the end itself. Empiricists, following David Hume, tend to argue it is not possible to rationally critique ends; rationalists, following Immanuel Kant, disagree.

Hume’s empiricism was revived in the 20th century in the philosophy of science known as logical positivism or logical empiricism, which asserted that philosophical problems ultimately stemmed from ambiguities in ordinary language [ayer1952language]. This highly ambitious project aimed to unify all sciences using formal logic and derive experimentally-verifiable hypotheses from formalized versions of scientific theories [rorty1992linguistic]. Asserting empirical verification as the criterion for meaning was an important methodological move that paved the way for behaviorism in psychology, revealed preference theory in economics [lewin1996economics], and operationalism in psychometrics [borsboom2005measuring].

The focus on empirical verification of scientific statements challenged the idea of objectivity in ethics. What methods of proof or evidence could support ethical judgments? To overcome this problem, some philosophers suggested that ethical judgments were actually composed of two separate components: a descriptive or factual part, and a command-like prescriptive component [hare1991language]. Others concluded that values were emotive or attitudinal in nature and lacking in descriptive, factual content, analogous to cries of pain, groans, or shrieks [ayer1952language]. Thus to say “X is good” is nothing more than a fancy way of saying “hurrah for X!”, or “I like X; you should too" [stevenson1944ethics]. Logical positivists, in short, claimed moral language is simply not “about” anything at all, casting serious doubt on efforts to resolve moral disagreements.

Despite its onetime popularity, philosophers of science abandoned the logical positivist project as early as the 1960s [suppe2000understanding]. Most philosophers today accept that facts and values are not so cleanly separable [williams2006ethics]. Still, vestiges of the logical positivist project remain influential in other disciplines [putnam2004collapse], particularly those fond of axiomatic formalization such as economics and computer science.

3.2 Implications for moral expertise and objectivity

Evaluating AI-based technologies presumably requires moral expertise. But if the notion of a moral fact is incoherent, how can we make sense of the idea of moral expertise? Does moral expertise imply the existence of objective moral facts, such as "killing is wrong" [elgin2013fact]? Even if we are willing to grant the existence of such facts, must moral experts not only possess theoretical, but also practical moral knowledge—the real-world skills needed to correctly apply moral concepts in the right situations [weinstein1994possibility]? Although developing a procedure for testing the validity of moral judgments appears difficult, if not impossible, moral philosophers nevertheless routinely undertake this task [rawls1971theory, habermas1990moral].

Another problem is that in most scientific domains, we expect experts to reach consensus on key facts, as it indicates convergence on the truth [williams2006ethics]. Yet moral disagreement is common, and value statements seemingly do not achieve a high degree of consensus, even among experts [steinkamp2008debating]. Nearly all data scientists will agree that, when training a machine learning model, both bias and variance cannot be minimized simultaneously, but presumably fewer agree with “value-laden” statements such as “a predictive model’s accuracy is more important than its interpretability.” The lack of consensus around value statements, especially in an era of globalized data science, could be a sign that objective knowledge in the domain of AI ethics may not be possible. Without such objectivity, the legitimacy of publication decisions that include ethical evaluation may be disputed.
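For contrast, the uncontroversial “factual” claim above has a precise formal backing. A standard sketch of the bias-variance decomposition (assuming squared-error loss and additive noise, y = f(x) + ε with E[ε] = 0 and Var(ε) = σ²) is:

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\Big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\Big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
```

Because increasing model flexibility typically shrinks the bias term while inflating the variance term, both cannot in general be driven to zero at once, which is why the claim commands near-universal assent; no comparable derivation exists for the interpretability claim.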

4 Social Benefits of Ethics Impact Statements

Although not without controversy, one major step towards a more holist culture was the decision by the Neural Information Processing Systems (NeurIPS) conference in 2020 to introduce a new ethics review process. The process requires submitting authors to include a “broader impact” section, while a group of ethics experts reviews papers flagged as problematic by technical reviewers [ashurst2022ai]. Following the example of NeurIPS, impact statements and ethics reviews by experts are spreading to other conferences and journals [tmlrEthicsstatements]. The IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR) in 2022 adapted the NeurIPS ethics guidelines and “strongly encourages” authors to “discuss the ethical and societal consequences of their work in their papers in a concrete manner” [cvprConf2022Ethics]. But creating effective ethics impact statements remains challenging. Impact statements are plagued by a variety of issues, including their complexity, lack of guidance and best practices, lack of purpose, lack of procedural transparency, high opportunity costs, institutional pressure, and various social and cognitive biases affecting authors and referees [prunkl2021institutionalizing].

Nevertheless, data scientists can no longer presume their research will have a net positive impact on the world [hecht2021s]. The inclusion of impact statements could be a positive step towards recognition of the social responsibility of those who design, research, and deploy AI technologies. Below we identify several ways in which impact statements may benefit data science research and society in general.

Train “citizen” data scientists and cultivate moral and intellectual virtues

Writing and critiquing an ethics statement could become a core part of the moral education and training of future data scientists. Impact statements encourage moral reflection and the cultivation of civic and intellectual virtues, such as attentive observation, open-minded imagination, patient reflection, careful analysis, and fair-minded interpretation and assessment [baehr2011inquiring]. The development of moral and intellectual virtues could break down interdisciplinary academic barriers isolating researchers from public debates on AI-based technologies [greene2022barriers]. Hagendorff [hagendorff2020ai], for instance, suggests cultivating four basic virtues: justice, honesty, responsibility, and care, along with secondary virtues of prudence and fortitude (to speak truth to power). To develop these virtues, data science curricula might offer Embedded EthiCS [grosz2019embedded] courses to foster discussion on the ethical implications of AI-based technologies and provide training in writing ethics impact statements.

Promote disciplinary cross-pollination

The big-picture scope of ethics reviews and the diversity of the review groups can contribute to disciplinary cross-pollination. Novel dilemmas, formalizations, and technologies can be brought back and analyzed in different academic communities. These issues can then become part of the ethical training of data scientists. Data science research also confronts legal scholars and regulators with thorny problems of interpretation and application of existing laws, which could lead to better lawmaking. Lastly, cross-pollination can occur when data science researchers experimentally test the assumptions of traditional ethical theories [wallach2008moral], leading to theory refinement in ethics and moral psychology. For example, deep neural nets can make accurate predictions about the morality of various novel acts (e.g., “it’s rude to mow the lawn late at night”) as judged by human evaluators [jiang2021delphi, hendrycks2020aligning].

Spur new debates on the purpose and applications of corporate data science research

Technologists increasingly voice their concerns when corporate applications of technology conflict with their ethical values [davis2021corporate, mitchellFired]. Value-based discourse allows debate on the social and ethical impact of corporate-funded data science applications. One AI-driven application likely to lead to moral disagreement in the data science community is digital advertising [milano2021epistemic]. Jeff Hammerbacher, one of Facebook’s first data scientists, once said, “The best minds of my generation are thinking about how to make people click ads…That sucks” [hammerbacherFBads]. Indeed, Google and Facebook are two of the world’s largest technology companies employing data scientists and also two of the biggest players in digital advertising. Massive industry investment reflects the allocation of vast amounts of data scientist labor to solving problems related to digital advertising. But beyond the obvious economic benefits, what values justify this enormous outlay of human energy and attention?

Identify and inspire new forms of data science for social good projects

A more transparent debate about values can expand the scope of data scientists’ responsibility to new moral communities and stimulate new ideas for data science for social good. Ethics debates can identify new forms of harm before they become systematically entrenched in business practices and research pipelines. Notable ethical blind spots of AI concern how computer vision, natural language processing (NLP), and other technologies contribute to environmental degradation [crawford2021atlas], the exploitation of human data labelers—particularly in low-income countries [kshetri2021data] and in NLP crowdsourcing tasks [boaz2021beyond, santy2021use]—and the mistreatment of animals in factory farms [hagendorff2021blind]. Another angle involves developing algorithms to “boost” [hertwig2017nudging, lorenz2020behavioural] critical thinking about the quality and veracity of news stories shared on social media in an effort to foster democratic values.

Reveal the sociotechnical, political, and value-laden nature of applied data science

Ethics impact statements can illuminate the sociotechnical nature of data science. In real-world institutions such as courts or corrections facilities, legal terms and predictive relationships evolve, actors act adversarially, and numerous non-sampling errors can affect algorithmic performance [greene2020hidden]. Not only that, many decisions in machine learning pipelines involve “essentially contested” and value-laden constructs such as fairness and justice [mittelstadt2019principles]. Sociotechnically-oriented data science recognizes that fairness is more than just a property of an algorithm [selbst2019fairness]: AI can be used as part of a political agenda [green2021data]. One noteworthy example comes from Barabas et al. [barabas2020studying], who refused to build an algorithm due to moral concerns about its function in reinforcing structural injustices in the prison system.

5 Concerns of Data Scientists Regarding Ethics Statements

This section considers possible concerns and arguments arising from atomist data scientists.

Lack of technical credibility

Multiple cases exist, including in our own personal experience, where research is rejected or red-flagged for apparent ethical concerns. Sometimes, however, the rejection is unwarranted, as the contribution of the paper may be orthogonal to those concerns or even a mechanism to better understand AI model risks. For example, an explainable AI paper might be rejected because the model it explains (viz., a large language model) has fairness issues. But the model itself is not the contribution, and undesirable behavior uncovered through explanations is not a limitation, but rather a validation of the explanation method. Such a rejection decision showcases ethics reviewers’ inability to understand the work’s technical contribution. In larger contexts, AI tools and systems might fare similarly if not properly assessed.

Lack of intellectual freedom

Rightly or wrongly, many data scientists currently see ethics requirements as constraints, not just algorithmically but also intellectually. Many see their purpose as discouraging work that does not adhere to certain “ethical” standards. Such rhetoric can engender a defensive attitude towards those demanding these standards for fear of one’s ideas being shunned and suppressed. Fostering a defensive research culture could have a chilling effect on data science progress, and in some cases even lead to “black market” research being done away from ethicists’ eyes. Ideally, when practitioners encounter ethical dilemmas, they should feel comfortable reporting and debating them with the larger research community, not suppressing them for fear of ridicule or punishment.

Limiting progress for an application/problem

In 2004, a chess program called Fruit [fruit]

was open sourced, which had a much lower playing strength than top human grandmasters at the time. However, it led to a revolution where ideas encoded in this initial version were imbibed in other programs and improved upon leading to the current generation of programs such as Stockfish. These programs are nearly unbeatable by humans and are now commonly used by top players for training and in the discovery of new ideas. This was a clear case where imperfect versions can still lead to significant progress. Refinement can happen over time. Analogously in AI, good ideas could be killed early if too many ethical restrictions are imposed upfront.

Limiting progress across applications/problems

Likewise, ideas or tricks used to develop one system or application are often useful for different applications. For example, transformer architectures [transformer] were initially shown to be successful in natural language processing tasks; more recently, however, they have become preferred even in computer vision tasks [vt]. Hence, the organic transfer of ideas can have impacts far beyond the original application. Yet this intellectual transfer may be severely curtailed if strict yet subjective constraints are placed on what is ethically acceptable.

Subjectivity of ethics standards

Values can be highly subjective at an individual level and even at the level of regulatory institutions. For instance, the requirements of the General Data Protection Regulation (GDPR) [gdpr] in Europe differ from those outlined by NIST [nist] in the United States. Values are also hard to precisely specify and formalize. This can lead to frustration for persons in academia and industry alike. Publishing in respected venues is already challenging enough; a PhD student trying to meet publication requirements could grow frustrated with additional, vague requirements that resemble a moving target. Similar ambiguities could beset engineering teams in industry offering products and services.

Most technology is neutral

A car could be used to transport an ailing person to the hospital, saving their life, or to run someone over. Analogously in AI, a fairness algorithm could be made to output unfair results by misspecifying the protected attributes. The point is that technology can be used for good or for bad and may not be inherently harmful. Hence, putting restrictions on the technology itself might make little sense; rather, its application should be monitored and constrained.
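To illustrate the misspecification point, here is a minimal, hypothetical sketch (in Python; the data, attribute names, and the demographic-parity check are all invented for illustration): the same audit code reports “fair” or “unfair” depending entirely on which column is passed as the protected attribute.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data: 'group' is the true protected attribute;
# 'proxy' is an irrelevant column mistakenly passed to the audit.
group = rng.integers(0, 2, size=n)
proxy = rng.integers(0, 2, size=n)

# A model whose positive-prediction rate favors group == 1.
y_pred = (rng.random(n) < (0.3 + 0.4 * group)).astype(int)

def demographic_parity_gap(pred, attr):
    """Absolute gap in positive-prediction rates between the two groups."""
    return abs(pred[attr == 1].mean() - pred[attr == 0].mean())

print(demographic_parity_gap(y_pred, group))  # ~0.40: audit flags unfairness
print(demographic_parity_gap(y_pred, proxy))  # ~0.00: misspecified audit sees none
```

Nothing in the audit code itself is “unfair”; the harm, as the atomist would stress, enters through how the code is configured and applied.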

6 Targeted Recipes for Fruitful Interdisciplinary AI Ethics Discussions

Given the ideological conflicts between atomists and holists, we adapt a therapeutic technique designed to restore trust between conflicting parties. Guided by its basic theme of accuracy and empathy, we then present four basic recipes—broadly targeted at different disciplines—for fostering more productive dialogue on AI ethics issues to benefit society at large (see Table 2).

6.1 A therapeutic model for interdisciplinary accuracy and empathy

In 1951, Carl Rogers proposed a person-centered approach to psychotherapy useful for resolving interpersonal conflict [accemp]. Importantly, dialogue participants tend to emerge more satisfied and less resentful, regardless of the outcome. A similar dialogical process may be useful in improving the quality of ethics discussions in general. The core idea is the following:

Accuracy: Alice first expresses her worry. Bob then repeats it as accurately as possible. This repetition should not be an interpretation of what Bob thinks Alice meant, but as accurate a reproduction as possible. Afterwards, Bob asks Alice if he missed something, and if so, tries again until Alice is satisfied.

Empathy: Once Alice is satisfied with Bob’s reformulation, Bob empathizes with her concern. He imagines Alice’s perspective and follows her line of reasoning, although he may not agree with it. Then Bob makes his point, which now Alice must accurately repeat. The process continues until both parties are satisfied. A fictional conversation might proceed as follows:

  • Alice: I spent the last few weeks testing Schmoogle’s newest large language model for biases. I am now convinced it might be sentient.

  • Bob: You said you “spent the last few weeks testing Schmoogle’s newest large language model for biases and are now convinced it might be sentient.” Is that right?

  • Alice: Yes, that is right. We might want to ask for consent before we continue running A/B tests on it.

  • Bob: I understand why you would think such a complex language model might be sentient. If I spent as much time as you testing it, I would also be concerned. But the model is just finding statistical correlations in massive amounts of text from all over the internet.

  • Alice: You said, “The model is just finding statistical correlations in massive amounts of text from all over the internet.” Is that right?

  • Bob:

Ideological assumptions | Atomist | Holist
Implicit | Engineering, Physics, Computer Science, Statistics | Technology activism
Explicit | Economics, Quantitative Social Science | Humanities, Law, Qualitative Social Science

Table 2: Examples of data science ethics reviewer backgrounds and training and their position in the atomist-holist taxonomy. We distinguish the degree to which ideological assumptions are implicit or explicit, largely cutting across disciplinary divides.

6.2 Moral education for data scientists

Moral education is about the socialization and development of virtuous persons: persons with ethically admirable character traits and dispositions who empathize with and care for particular others in their community [noddings2013caring]. Today, however, most data scientists receive a highly technical education. Skills, not virtues—excellences that contribute to a flourishing human life and community—are the focus. Understandably, data science graduates may implicitly apply their disciplinary standards of objectivity to ethical issues, realize it cannot easily be done, and conclude that anything goes, or that ethical judgments are meaningless. Yet neglecting the moral education of data scientists can leave open a moral vacuum liable to be filled by tribalism, dogmatic identity politics, relativism, or nihilism. Data science educators would do well to emphasize the history and arguments behind the rejection of the fact-value distinction [putnam2004collapse], as these points seem to have gone unnoticed in AI-feeder disciplines.

Greater debate about the ideal character and virtues of data scientists can also pave the way for professionalization (see [mittelstadt2019principles]). Example debate-starters might be: under what sort of conditions is corporate whistleblowing appropriate? What is the difference between ethics consulting and fairwashing? To what extent should data scientists be expected to reasonably foresee the harmful use of AI-based technologies by others? While we cannot speak for the community, at the very least the moral education of future data scientists should aim at cultivating an ability to empathize, tolerate, and negotiate in good faith with those whose views differ from their own, without resorting to name-calling and threats of cancellation.

6.3 Incentives for economists

To encourage atomists to cooperate with holists in tackling the complex ethical issues posed by AI-based technologies, one suggestion draws on the idea that commerce and morality mutually reinforce one another [hirschman1982rival]. By providing economic incentives for more civil engagement on AI ethics issues, we can also respect atomists’ desire for autonomy and personal integrity.

At the institutional level, academic data science departments might incentivize faculty to explore co-teaching data science ethics courses with philosophers and lawyers. Senior researchers and practitioners could also be incentivized to mentor junior data scientists, focusing on character development. Industry data scientists could be rewarded for convening “ethics roundtable” discussions or ethics reading groups. Conferences and journals with ethics reviews and impact statements could also offer prizes or community recognition to authors whose statements are particularly cogent or creative [prunkl2021institutionalizing]. Similarly, positive examples of civil debate—whether on social media or offline—could be rewarded through community praise. The actions of certain “moral exemplars” can become teaching material for data science ethics courses. In general, organizations can do more to frame ethics issues as new and exciting research challenges, rather than as autonomy-restricting handcuffs.

6.4 Philosophical and legal training for technological activists

While typically trained in the natural sciences, technological activists seek to draw attention to social justice issues related to AI and data science, such as racial and gender discrimination in STEM fields and industry hiring practices. Technology activists author popular books [o2016weapons, broussard2018artificial] and have received major media attention in news articles [wylieCambridgeAn, mitchellFired] and films (e.g., The Social Dilemma). Although we sympathize with activists’ intentions, we also worry that threatening the cancellation [animacancellation] of those who hold opposing values can stoke fears of tribalism and dogmatic group-think, sparking further disciplinary polarization. Indeed, given activists’ concern with the unjust use of power, threats of mob-based cancellation may appear hypocritical.

We encourage philosophical training to hone activists’ persuasive and reflective abilities and create new allies and collaborators in the humanities, social sciences, and law. With broad support from the often-fragmented “two cultures” of the natural and human sciences, technological activists can arguably better achieve their ethical goals. Philosophy provides an array of concepts to express ethical concerns (e.g., rights, duties, utility, virtue, power, care, etc.) and can aid in identifying ad hominem attacks and logical fallacies. Familiarity with various areas of law (e.g., corporate law, public law, torts, etc.) can also help technology activists frame AI harms in terms of pre-existing legal frameworks and concepts, such as discrimination or human rights law [aizenberg2020designing].

6.5 Technical training for humanities scholars

For those in the humanities (e.g., philosophy, law, media and communication studies, etc.) and “soft” social sciences (e.g., anthropology, sociology, education, etc.) who might participate in ethics reviews, we believe it is fair to ask whether they have “appropriate” technical expertise to accurately evaluate data science products, systems, and/or papers. For instance, there is a growing literature that draws on postmodern philosophical sources to explain how AI can entrench existing racial and gender inequities [benjamin2019race] and reproduce the dynamics of colonial exploitation and power differentials [crawford2021atlas, couldry2019data]. While we welcome critical voices into the debate on the ethics of AI, we must clearly distinguish features of the technology in itself from uses of the technology by organizations embedded in larger social, political, and economic systems.

One suggestion is to establish an accreditation process for ethics reviewers, such as a minimum-duration internship in industry or an AI-related academic department (e.g., computer science or statistics). The knowledge and skills obtained could improve both the accuracy and empathy of cross-disciplinary dialogue. Real-world experience would help ethics reviewers to develop their practical moral knowledge and increase their ability to empathize with data scientists. Accreditation could be done in collaboration with a number of interdisciplinary research groups and industry research labs.

7 Conclusion

The data science community appears split—sometimes violently so—along an ideological divide. We explicated key beliefs and assumptions of atomist and holist ideologies and described their relevance to the fact-value debate. Atomists see themselves as toolmakers and hold that facts are and should be kept separate from values, while holists see themselves as social stewards and believe facts and values mutually reinforce one another. In the interests of the data science community and society, we advocate for a balance between the two camps’ views and offer several recipes for more empathetic and productive interdisciplinary dialogue on AI ethics. We hope this work can serve as a microcosm of such dialogue, as we ourselves occupy various points along the atomist-holist spectrum.

But our diagnosis and proposal have limitations. First, some recipes may not translate well to digital and social media contexts, where the most heated AI ethics arguments tend to occur. Second, the atomist-holist taxonomy obscures heterogeneity among data scientists who may hold less traditional views. Lastly, because many advances in AI happen inside technology corporations, corporations arguably should assume the burdens of legal and moral responsibility for the harms and risks imposed by their innovations. More systemic solutions—at the level of corporate law and governance [stout2012shareholder]—are likely needed to ensure that AI-based technology is developed and deployed with the broader interests of society in mind, not simply those of shareholders.

We hope our diagnosis and vision draws attention to the importance of greater empathy, openness, and humility for all members of the data science community, a community whose boundaries continue to evolve and expand. We look forward to a time when community ethics discussions are dominated by the best reasons, rather than the loudest voices.

Acknowledgements

We thank Finale Doshi-Velez for helpful comments on an earlier draft, and also David Martens and the Applied Data Mining Research Group at the University of Antwerp for the many stimulating conversations and valuable feedback.

References