1 Introduction
In recent years it has become clear that while AI can greatly benefit society, it also carries many risks, exemplified by the racial bias in Google’s search engine, the use of facial recognition technology for surveillance, and Cambridge Analytica’s use of AI for targeted political messaging. This has resulted in growing calls for ‘ethical AI’ from critics, while AI organisations each present their own set of ethical principles, the most common being transparency, fairness, non-maleficence, responsibility, and privacy Jobin et al. (2019). Additionally, several initiatives have emerged to address AI ethics at a larger scale, such as the Montreal Declaration and the EU’s guidelines for trustworthy AI. The general focus is on technical improvements in design and distribution on the one hand, and better regulation on the other, where individual agents can be identified and held accountable for the ethical development, deployment and use of AI. While these are necessary steps in making AI more ethical, this paper questions whether they are sufficient to avoid the wide range of AI’s potentially negative impacts. We argue that the dominant discourse neglects systemic risks and needs to be complemented by a structural approach that accounts for the more indirect and complex causes of AI risks. Since these cannot be solved by technical changes alone, this paper calls for more interdisciplinary research and urges the AI community to engage with social scientists and policy makers. The first section explores to what extent the current approach, focused on individual agency, helps to reduce AI’s negative impacts. The second section assesses the limitations of this approach by contrasting it with a structural approach that adopts a contextual understanding of AI’s systemic risks. The third section explores such risks in relation to climate change and food security, where AI’s interaction with socio-economic and political factors could have far-reaching impacts.
Finally, the paper offers some preliminary suggestions on how a structural approach might be implemented to help mitigate AI’s harmful effects and ensure it is socially beneficial.
2 Agency approach: technical improvements and better regulation
The dominant discourse on AI ethics explores how negative outcomes can be prevented by improving algorithmic design and optimising the distribution of technology. In an evaluation of AI ethics guidelines, the most often mentioned issues were those that can be addressed to some extent by ‘technical fixes’, such as fairness, explainability, privacy, transparency and robustness Hagendorff (2019). For example, Google adjusted its racially biased algorithm, but without more fundamental changes to prevent bias in general; although this is a common response to ethical issues in the AI industry, such a solution is ‘ad hoc and reactive’ rather than a transformative shift towards being more ethical Crawford and Calo (2016). Nonetheless, technical improvements are crucial to minimising certain risks while maximising benefits, and thus to optimising AI systems. This optimisation pertains to the distribution as well as the production phase, since negative impacts may result not from the product itself but from its use and malicious use. Accordingly, in addition to ensuring that the design is safe and secure, AI developers should carefully assess with whom, and to what extent, their product, data or knowledge is shared. Optimisation of facial recognition technology could then entail not sharing it with actors that may use it maliciously, just as Amazon is being pressured to stop selling its ‘Rekognition’ system to government agencies Whittaker et al. (2018). In extreme cases, such as autonomous weapons, it might entail halting development completely, although the current literature largely rules out this possibility, being concerned with making AI ‘better’ through technical improvements Greene et al. (2019).
A second and equally vital emphasis in the AI ethics literature has been on holding those responsible for AI’s negative impacts accountable. This would require new policies and enforcement mechanisms, since existing frameworks governing AI do not successfully ensure accountability, being largely left to corporate self-governance Whittaker et al. (2018). Moreover, competition and lack of regulation mean that fast development is currently prioritised over ‘safe, secure and socially beneficial’ development Askell et al. (2019). Regulation and liability laws are therefore needed to increase the incentive for responsible AI development Askell et al. (2019). Otherwise, the plethora of ethics guidelines will have no substantial impact and may further postpone legally binding regulations, while their generic nature encourages ‘the devolution of ethical responsibility to others’ Hagendorff (2019). Accountability means that Google would face legal ramifications for releasing a biased algorithm, as would AI companies selling facial recognition technology to actors using it for unethical purposes, while those actors themselves would also be held accountable. For instance, the UK privacy regulator is currently assessing a private company’s use of facial recognition in CCTV systems in central London after its legality was questioned Sabbagh (2019). However, such AI governance might not be sufficient. The Cambridge Analytica scandal centred on the illegal sharing and use of data, not the use of AI for political messaging. Because public and regulatory energy focused on data sharing rather than the structural way AI can shape democratic processes by facilitating such ‘targeted propaganda’ Brundage et al. (2018), current governance does not preclude the widespread use of AI for political micro-targeting. No individual agents could be held accountable for these broader negative impacts, because the causes are structural rather than agency-based.
3 Structural approach: understanding systemic risks
The dominant discourse on AI ethics takes an agency approach in the sense that it addresses safety and security issues for which specific actors are responsible. However, it largely neglects systemic risks, which stem from neither accidents nor malicious use Dafoe (2018). Instead, they result from the way AI systems shape and are shaped by the social, economic and political environment Zwetsloot and Dafoe (2019). Because these effects are more complex and indirect, they cannot be traced to individual actions, and a structural approach is needed to understand them Zwetsloot and Dafoe (2019). These characteristics arguably contribute to the general disregard for systemic risks: obvious benefits outweigh opaque harmful effects, while their long-term nature is not easily captured by either corporate or political ‘short-termism’ Paulson Jr. (2015). Moreover, the distributed responsibility for these risks complicates the creation and enforcement of the necessary regulation Ploug (2018). Despite and because of this, a structural approach is needed to understand systemic risks. Lessons should be drawn from other fields, such as systemic risk in the financial sector and in medical research. For instance, reviewers of a study indicating that antibiotics helped alleviate symptoms of a certain disease expressed concern at publishing it because, despite the benefits, publication could inadvertently lead to a higher intake of antibiotics and thus increase antibiotic resistance Ploug (2018). Without any agent behaving unethically, the research’s unintended consequences could have negative societal impacts. Systemic risks thus arise in other industries too, but they have particular relevance to AI because of the scale at which AI systems can operate and the speed at which they are evolving, with their impact growing as automation increases.
Moreover, while AI ethics codes are starting to resemble the four principles of medical ethics, this is unlikely to translate into equally ethical practices. Key differences between the two domains mean that medicine’s ‘principled approach’ is unlikely to be as successful for AI, not least because AI lacks a clear common aim, while the large share of private AI development allows commercial interests to trump public interests Mittelstadt (2019). On the few occasions when the AI ethics literature acknowledges systemic risks, it is in relation to warfare (e.g. Dafoe (2018)) or competition in the ‘AI race’ for first-mover advantage (e.g. Hagendorff (2019), Cave and ÓhÉigeartaigh (2018), Askell et al. (2019)). Race dynamics between companies, but especially between countries, compromise safety measures and ethical standards Cave and ÓhÉigeartaigh (2018). The race is therefore itself a systemic risk, caused by AI’s interaction with global politics and a competitive market economy lacking sufficient governance for AI ethics Askell et al. (2019). To properly address this and other systemic risks posed by AI, agency-focused policies must therefore be complemented by policies informed by a structural approach.
4 AI for climate change and food security
One domain where a structural perspective is particularly crucial is climate change, because it is in itself a systemic issue linked to international politics and the global economic system. It is deeply political because responsibility and impact are unevenly distributed: low-income countries have generally not shared equally in the benefits of fossil fuels but are still harmed by high-income countries’ energy consumption Diffenbaugh and Burke (2019). Moreover, there is a strong inverse relationship between the local impact of climate change and that location’s wealth, such that ‘the greatest shifts in climate will be experienced by the poorest’ King and Harrington (2018). Although a political solution with global cooperation is needed, technology can greatly aid climate action, and AI specifically holds much potential for both mitigation and adaptation. Rolnick et al. (2019) outline the wide range of machine learning applications to climate action, from enhancing efficiency in transport and infrastructure to advancing the energy transition by improving renewable energy technologies. Other examples include using ML models for more accurate weather and climate forecasts Hwang et al. (2018) and applying deep learning to improve climate models Rasp et al. (2018) or to advance earth science more broadly Maskey et al. (2018). Meanwhile, the number of companies using AI to offer ‘climate services’ has surged, for example through monitoring environmental risks (e.g. Ecometrica (2018)), predicting extreme weather events (e.g. Jupiter Intelligence (2019)), or providing data to assess general climate risks (e.g. Acclimatise (2019)). Critics warn that the exclusive nature of these commercial services could exacerbate inequality, essentially enabling those who can afford them to protect themselves and even profit from climate change while others suffer its impact Dembicki (2019). Similarly, a recent UN report warned of ‘climate apartheid’, where the impact of climate change mirrors existing lines of wealth and power [3]. Consequently, and given that accountability for individual agents is insufficient, ensuring that AI fulfils its potential for good without exacerbating issues like climate inequality requires a structural approach to understand and mitigate the systemic risks that could arise from AI’s interaction with the social, economic and political dimensions of climate change.
It is useful to consider a hypothetical example to elucidate what such systemic risks might look like, taking food security as one climate-related issue where AI is well positioned to help. A range of companies already use AI in relation to agriculture (e.g. Indigo Ag (2018)) or specifically the climate impact on agriculture (e.g. aWhere (2019)), but AI could also monitor food security in real time, as well as give longer-term warnings through ‘spatially localized crop yield predictions’ Rolnick et al. (2019). The IPCC’s ‘Climate Change and Land’ report highlights that ensuring global food security requires understanding the impact of climate change Arneth et al. (2019). An AI-enabled model that predicts the impact of climate change on land and agricultural production would therefore be extremely valuable. It would allow for pre-emptive climate action, such as policies for more sustainable land use to mitigate yield loss caused by land degradation, while also anticipating short-term food shortages so that proactive policies can ensure continued access to food, as opposed to emergency responses. However, such a model would interact with the existing socio-economic and political structures shaping food production and distribution, since food security is equally a systemic issue requiring a political solution. The 2008 food crisis exemplified this: technological solutions alone could not have prevented food insecurity. Although climate played a role in diminished yields, other factors were more important, such as speculation driving up food prices, increased biofuel production, and diminishing buffer stocks and investment levels in agriculture Mittal (2009). An AI platform predicting food supply would therefore need a structural perspective to understand the political and socio-economic factors that determine food security.
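To make the kind of model referred to above concrete, the sketch below fits a simple linear trend of maize yield against growing-season temperature and extrapolates it to a warmer scenario. The data points and the linear form are entirely hypothetical, invented for illustration; they stand in for the far richer spatial models surveyed in the literature cited above:

```python
# Purely illustrative sketch of a 'crop yield vs. climate' prediction model.
# The yield/temperature pairs are invented; a real system would use spatially
# resolved remote-sensing, soil and weather data, and a far richer model.
mean_temp_c = [21.0, 21.4, 21.9, 22.3, 22.8, 23.1]  # growing-season temperature
yield_t_ha = [5.9, 5.8, 5.6, 5.5, 5.2, 5.1]         # maize yield, tonnes/ha

# Ordinary least-squares fit of yield = a + b * temperature.
n = len(mean_temp_c)
mx = sum(mean_temp_c) / n
my = sum(yield_t_ha) / n
b = sum((x - mx) * (y - my) for x, y in zip(mean_temp_c, yield_t_ha)) \
    / sum((x - mx) ** 2 for x in mean_temp_c)
a = my - b * mx

# Extrapolate to an assumed +1 degree C over the latest observed temperature.
projected = a + b * (mean_temp_c[-1] + 1.0)
print(f"fitted slope: {b:.2f} t/ha per degree C")
print(f"projected yield at +1 degree C: {projected:.2f} t/ha")
```

On this toy data the fitted slope is negative, so the extrapolated yield falls below the latest observation, which is exactly the kind of early-warning signal the text envisages such a platform providing.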
In our example, several systemic risks can be hypothesised, including price hikes, hoarding of food supplies, environmentally unsustainable practices, conflict over arable land and increased inequality. Consider a scenario in which this AI platform reveals that in five years maize yields will drop by 20% in region X, which represents 10% of global production; this is not wholly unlikely given the expected 7.4% decline per degree of temperature rise Zhao et al. (2017). As scarcity increases the value of goods, this could lead to a drastic price rise with devastating consequences for region X and cascade effects for the circa 820 million food-insecure people worldwide, who are most affected by food price rises Torero Cullen et al. (2019). Additionally, maize growers might hoard their supplies, exacerbating price spikes and increasing the likelihood of food insecurity elsewhere, as witnessed in 2008 when large maize-exporting countries imposed export bans Tigchelaar et al. (2018). Since yield losses will differ between countries Zhao et al. (2017), another scenario could be that a large country with considerable economic and military strength is found to suffer more from land degradation than its less powerful neighbour, providing an incentive to annex arable land. The platform’s predictions could also lead to deforestation to make land available, or to the depletion of other natural resources. In these scenarios, the AI platform’s operators would need to carefully consider how they share its information, while engaging with social scientists to adopt a structural approach to understanding and mitigating such risks. This could mean not selling information to the highest bidder but sharing it only with appropriate actors and under specific conditions, while coordinating with regulatory bodies that monitor food prices or the use of natural resources, for example. Moreover, they would have to engage with policymakers to ensure the platform’s benefits are not undermined by socio-economic and political factors.
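The rough arithmetic behind this scenario can be sketched as follows. The region share and yield drop come from the hypothetical in the text; the demand elasticity is an assumed illustrative value (staple-food demand is known to be price-inelastic, but -0.2 is not a figure from the sources cited here):

```python
# Back-of-the-envelope sketch of the hypothetical scenario in the text:
# region X grows 10% of global maize and its yields fall by 20%.
REGION_SHARE = 0.10          # region X's share of global maize production
REGION_YIELD_DROP = 0.20     # projected yield decline in region X

# Loss to global supply if the shock is confined to region X.
global_supply_loss = REGION_SHARE * REGION_YIELD_DROP   # 0.02, i.e. 2%

# Assumed illustrative price elasticity of demand for a staple crop;
# NOT an estimate from the cited literature.
DEMAND_ELASTICITY = -0.2
# First-order approximation: %change in price = %change in quantity / elasticity.
price_rise = -global_supply_loss / DEMAND_ELASTICITY    # 0.10, i.e. ~10%

print(f"Global supply loss: {global_supply_loss:.1%}")
print(f"Implied price rise: {price_rise:.1%}")
```

The point of the sketch is that even a regionally confined 2% dent in global supply translates into a much larger price movement when demand is inelastic, which is why the cascade effects on food-insecure populations described above are plausible.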
5 Conclusion
The current literature on AI ethics addresses the need to minimise the risks of AI systems through the prevention of accidents and misuse. This paper has shown that this understanding of AI risks needs to be complemented by a structural approach to account for systemic risks, in particular when AI systems engage with systemic issues such as climate change, where technology’s impact is affected by the socio-economic and political context. Since there is no mathematical solution, this paper cannot offer a technical road map, but we do suggest several ways our findings can be translated into practice. First, ethics codes must adopt both an agency and a structural approach to encompass the wide range of AI risks. Such codes should become central to AI research and development, but are equally relevant to other stakeholders such as policy makers and users of AI systems. Moreover, since international cooperation is needed to regulate the ethical development and use of AI and to prevent ‘ethics shopping’, ethics codes need to be standardised at an international level Cihon (2019). The G20’s recommendations for ‘trustworthy AI’ are promising in calling for internationally comparable metrics and cooperation to ensure AI is beneficial ‘for people and the planet’, although they need to be broadened to include systemic risks [1]. Second, interdisciplinary collaboration is needed in the formulation of ethics guidelines and their translation into policies, with an inclusive and democratic process to determine what constitutes ‘AI for good’. Such collaboration between ‘governments, industry, academia and civil society’ Guterres (2019) at a national and international level would also facilitate the implementation of policies and the creation of the governance structures required to put abstract principles into practice. Finally, new regulations will require mechanisms to enforce them, such as independent regulators.
At a national level this could involve legislation such as liability laws, something the EU’s ‘expert group on liability and new technologies’ is currently investigating [2]. For companies it could mean creating an ethics committee and undertaking thorough, comprehensive risk assessments. Although enforcement at the global level remains the biggest challenge, the fact that systemic risks are also collective risks should provide a strong incentive for international cooperation Zwetsloot and Dafoe (2019). Lessons can be drawn from other sectors, for instance international regulation of nuclear weapons or the prevention of systemic risks in the financial sector. The example from the medical domain mentioned in section 3 suggests an international code of publication ethics under which editors are responsible for the wider ethical implications of publishing findings Ploug (2018). Research is needed to answer similar questions for AI: how should ethics codes be implemented, and by whom? Presumably democratic governments are best positioned to lead this, as they have the capacity to enforce norms and would, in theory, prioritise the common good rather than commercial profit or academic status. Nevertheless, AI ethics remains deeply contentious, as it ‘is effectively a microcosm of the political and ethical challenges faced in society’ Mittelstadt (2019). As the specifics of AI governance are beyond the scope of this paper, these are merely preliminary suggestions, but we hope that demonstrating the need for a structural approach to complement the current AI ethics discourse will encourage further research on systemic risks, how they can be integrated into policies, and how these are then best enforced to ensure ethical AI.
References
- [1] (2019) ‘G20 AI Principles’. Osaka.
- [2] (2019) ‘Liability of Defective Products’.
- [3] (2019) ‘World faces "climate apartheid" risk, 120 more million in poverty: UN expert’.
- Acclimatise (2019) ‘Building climate resilience’.
- Indigo Ag (2018) ‘About Indigo Ag’.
- Arneth et al. (2019) IPCC Special Report on Climate Change, Desertification, Land Degradation, Sustainable Land Management, Food Security, and Greenhouse Gas Fluxes in Terrestrial Ecosystems: Summary for Policymakers. Technical report, Intergovernmental Panel on Climate Change.
- Askell et al. (2019) The Role of Cooperation in Responsible AI Development. arXiv:1907.04534.
- aWhere (2019) ‘Weather Intelligence For A Changing Climate’.
- Brundage et al. (2018) The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Technical report, OpenAI.
- Cave and ÓhÉigeartaigh (2018) An AI Race for Strategic Advantage: Rhetoric and Risks.
- Cihon (2019) Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development. Technical report, Future of Humanity Institute.
- Crawford and Calo (2016) There is a Blind Spot in AI Research. Nature News 538(7625), p. 311.
- Dafoe (2018) AI Governance: A Research Agenda. Technical report, Future of Humanity Institute.
- Dembicki (2019) ‘Will the climate services industry only help those who can pay?’.
- Diffenbaugh and Burke (2019) Global Warming Has Increased Global Economic Inequality. Proceedings of the National Academy of Sciences.
- Ecometrica (2018) ‘Environmental Risk Reporting & Impact Monitoring for Government and Public Sector Organisations’.
- Greene et al. (2019) Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning.
- Guterres (2019) ‘Secretary-General’s message for Third Artificial Intelligence for Good Summit’.
- Hagendorff (2019) The Ethics of AI Ethics – An Evaluation of Guidelines. arXiv:1903.03425.
- Hwang et al. (2018) Improving Subseasonal Forecasting in the Western U.S. with Machine Learning. arXiv:1809.07394.
- Jupiter Intelligence (2019) ‘Jupiter Services: Dynamic Technology Delivers Precise Asset-Level Predictions’.
- Jobin et al. (2019) Artificial Intelligence: the Global Landscape of Ethics Guidelines. arXiv:1906.11668.
- King and Harrington (2018) The Inequality of Climate Change From 1.5 to 2°C of Global Warming. Geophysical Research Letters 45(10), pp. 5030–5033.
- Maskey et al. (2018) Earth Science Deep Learning: Applications and Lessons Learned. In IGARSS 2018 – 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, pp. 1760–1763.
- Mittal (2009) The 2008 Food Price Crisis: Rethinking Food Security Policies. Technical Report 56, UNCTAD.
- Mittelstadt (2019) AI Ethics – Too Principled to Fail? arXiv:1906.06668.
- Paulson Jr. (2015) Short-termism and the threat from climate change. In Perspectives on the Long Term: Building a Stronger Foundation for Tomorrow.
- Ploug (2018) Should all medical research be published? The moral responsibility of medical journal editors. Journal of Medical Ethics 44(10), pp. 690–694.
- Rasp et al. (2018) Deep Learning to Represent Subgrid Processes in Climate Models. Proceedings of the National Academy of Sciences 115(39), pp. 9684–9689.
- Rolnick et al. (2019) Tackling Climate Change with Machine Learning. arXiv:1906.05433.
- Sabbagh (2019) Regulator looking at use of facial recognition at King’s Cross site. The Guardian.
- Tigchelaar et al. (2018) Future Warming Increases Probability of Globally Synchronized Maize Production Shocks. Proceedings of the National Academy of Sciences 115(26), pp. 6644–6649.
- Torero Cullen et al. (2019) State of Food Security and Nutrition in the World 2019: Safeguarding Against Economic Slowdowns and Downturns. Technical report, Food and Agriculture Organisation of the UN, Rome.
- Whittaker et al. (2018) AI Now Report 2018. Technical report, AI Now Institute.
- Zhao et al. (2017) Temperature Increase Reduces Global Yields of Major Crops in Four Independent Estimates. Proceedings of the National Academy of Sciences 114(35), pp. 9326–9331.
- Zwetsloot and Dafoe (2019) Thinking About Risks From AI: Accidents, Misuse and Structure.