Transdisciplinary AI Observatory – Retrospective Analyses and Future-Oriented Contradistinctions

11/26/2020
by Nadisha-Marie Aliman, et al.

In recent years, AI safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice using concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on these AI observatory tools, we present near-term transdisciplinary guidelines for AI safety. As a further contribution, we discuss differentiated and tailored long-term directions through the lens of two disparate modern AI safety paradigms. For simplicity, we refer to these two paradigms with the terms artificial stupidity (AS) and eternal creativity (EC), respectively. While both AS and EC acknowledge the need for a hybrid cognitive-affective approach to AI safety and overlap with regard to many short-term considerations, they differ fundamentally in the nature of multiple envisaged long-term solution patterns. By compiling relevant underlying contradistinctions, we aim to provide future-oriented incentives for constructive dialectics in practical and theoretical AI safety research.

