How Do AI Timelines Affect Existential Risk?

08/30/2022
by Stephen McAleese, et al.

Superhuman artificial general intelligence could be created this century and would likely be a significant source of existential risk. Delaying the creation of artificial superintelligence (ASI) could decrease total existential risk by giving humanity more time to work on the AI alignment problem. However, since ASI could mitigate most other risks, delaying its creation could also increase other existential risks, especially those from advanced future technologies such as synthetic biology and molecular nanotechnology. If AI existential risk is high relative to the sum of other existential risks, delaying the creation of ASI will tend to decrease total existential risk, and vice versa. Other factors, such as war and a hardware overhang, could increase AI risk, while cognitive enhancement could decrease it. To reduce total existential risk, humanity should take robustly positive actions such as working on existential risk analysis, AI governance and safety, and reducing all sources of existential risk by promoting differential technological development.
