Modeling Transformative AI Risks (MTAIR) Project – Summary Report

by Sam Clarke et al.

This report outlines work by the Modeling Transformative AI Risks (MTAIR) project, an attempt to map out the key hypotheses, uncertainties, and disagreements in debates about catastrophic risks from advanced AI, and the relationships between them. It builds on an earlier diagram by Ben Cottier and Rohin Shah which laid out some of the crucial disagreements ("cruxes") visually, with some explanation. Based on an extensive literature review and engagement with experts, the report explains a model of the issues involved and an initial software-based implementation that can incorporate probability estimates or other quantitative factors to enable exploration, planning, and decision support. By gathering information from various debates and discussions into a single, more coherent presentation, we hope to enable better discussions and debates about the issues involved. The model starts with a discussion of reasoning via analogies and general prior beliefs about artificial intelligence. It then lays out a model of different paths and enabling technologies for high-level machine intelligence, and a model of how advances in the capabilities of these systems might proceed, including debates about self-improvement, discontinuous improvements, and the possibility of distributed, non-agentic high-level intelligence or slower improvements. The model also looks specifically at the question of learned optimization, and whether machine learning systems will create mesa-optimizers. The impact of different kinds of safety research on these questions is then examined, to understand whether and how such research could be useful in enabling safer systems. Finally, we discuss a model of different failure modes and loss-of-control or takeover scenarios.




