X-Risk Analysis for AI Research

by Dan Hendrycks et al.

Artificial intelligence (AI) has the potential to greatly improve society, but as with any powerful technology, it comes with heightened risks and responsibilities. Current AI research lacks a systematic discussion of how to manage long-tail risks from AI systems, including speculative long-term risks. Keeping in mind the potential benefits of AI, there is some concern that building ever more intelligent and powerful AI systems could eventually result in systems that are more powerful than us; some say this is like playing with fire and speculate that this could create existential risks (x-risks). To add precision and ground these discussions, we provide a guide for how to analyze AI x-risk, which consists of three parts: First, we review how systems can be made safer today, drawing on time-tested concepts from hazard analysis and systems safety that have been designed to steer large processes in safer directions. Next, we discuss strategies for having long-term impacts on the safety of future systems. Finally, we discuss a crucial concept in making AI systems safer by improving the balance between safety and general capabilities. We hope this document and the presented concepts and tools serve as a useful guide for understanding how to analyze AI x-risk.


Current and Near-Term AI as a Potential Existential Risk Factor

There is a substantial and ever-growing corpus of evidence and literatur...

Fairness in AI and Its Long-Term Implications on Society

Successful deployment of artificial intelligence (AI) in various setting...

AI Research Considerations for Human Existential Safety (ARCHES)

Framed in positive terms, this report examines how technical AI research...

Understanding and Avoiding AI Failures: A Practical Guide

As AI technologies increase in capability and ubiquity, AI accidents are...

Transdisciplinary AI Observatory – Retrospective Analyses and Future-Oriented Contradistinctions

In the last years, AI safety gained international recognition in the lig...

Modeling Transformative AI Risks (MTAIR) Project – Summary Report

This report outlines work by the Modeling Transformative AI Risk (MTAIR)...

The Peril of Artificial Intelligence

The integration of AI technology is with the hope of reducing human er...