
Arguments about Highly Reliable Agent Designs as a Useful Path to Artificial Intelligence Safety

by Issa Rice, et al.

Several different approaches exist for ensuring the safety of future Transformative Artificial Intelligence (TAI) or Artificial Superintelligence (ASI) systems, and proponents of these approaches have made differing and contested claims about the importance and usefulness of their work, both in the near term and for future systems. Highly Reliable Agent Designs (HRAD) is one of the most controversial and ambitious approaches, championed by the Machine Intelligence Research Institute, among others, and various arguments have been made about whether and how it reduces risks from future AI systems. To reduce confusion in the debate about AI safety, we build on a previous discussion by Rice that collects and presents four central arguments used to justify HRAD as a path towards the safety of AI systems. We have titled the arguments (1) incidental utility, (2) deconfusion, (3) precise specification, and (4) prediction. Each of these makes different, partly conflicting claims about how future AI systems can be risky. We explain the assumptions and claims based on a review of published and informal literature, along with consultation with experts who have stated positions on the topic. Finally, we briefly outline arguments against each approach and against the agenda overall.



