AGI labs need an internal audit function

05/26/2023
by Jonas Schuett, et al.

The paper argues that organizations with the stated goal of building artificial general intelligence (AGI) need an internal audit function. First, it explains what internal audit is: a dedicated team that performs an ongoing assessment of an organization's risk management practices and reports directly to the board of directors, while remaining organizationally independent from senior management. Next, the paper discusses the main benefits of internal audit for AGI labs: it can make their risk management practices more effective; ensure that the board of directors has a more accurate view of the current level of risk and the effectiveness of the lab's risk management practices; signal that the lab follows best practices in corporate governance; and serve as a contact point for whistleblowers.

However, AGI labs should be aware of a number of limitations: internal audit adds friction; there is limited empirical evidence for the benefits listed above; those benefits depend on the people involved and their ability and willingness to identify ineffective risk management practices; setting up and maintaining an internal audit team is costly; and internal audit should only be seen as an additional "layer of defense", not a silver bullet against emerging risks from AI.

Finally, the paper provides a blueprint for how AGI labs could set up an internal audit team and suggests concrete things the team would do on a day-to-day basis. These suggestions are based on the International Standards for the Professional Practice of Internal Auditing. In light of rapid progress in AI research and development, AGI labs need to professionalize their risk management practices. Instead of "reinventing the wheel", they should follow existing best practices in corporate governance. This will not be sufficient as they approach AGI, but they should not skip this obvious first step.

Related research

Risk assessment at AGI companies: A review of popular risk assessment techniques from other safety-critical industries (07/17/2023)
Companies like OpenAI, Google DeepMind, and Anthropic have the stated go...

Three lines of defense against risks from AI (12/16/2022)
Organizations that develop and deploy artificial intelligence (AI) syste...

Towards an efficient and risk aware strategy for guiding farmers in identifying best crop management (10/10/2022)
Identification of best performing fertilizer practices among a set of co...

Risk Management Practices in Information Security: Exploring the Status Quo in the DACH Region (03/04/2020)
Information security management aims at ensuring proper protection of in...

Residue Density Segmentation for Monitoring and Optimizing Tillage Practices (02/09/2021)
"No-till" and cover cropping are often identified as the leading simple,...

Towards best practices in AGI safety and governance: A survey of expert opinion (05/11/2023)
A number of leading AI companies, including OpenAI, Google DeepMind, and...

A Silicon Valley Love Triangle: Hiring Algorithms, Pseudo-Science, and the Quest for Auditability (06/23/2021)
In this paper, we suggest a systematic approach for developing socio-tec...
