Does AlphaGo actually play Go? Concerning the State Space of Artificial Intelligence

by Holger Lyre

The overarching goal of this paper is to develop a general model of the state space of AI. Given the breathtaking progress in AI research and technologies in recent years, such conceptual work is of substantial theoretical interest. The present AI hype is mainly driven by the triumph of deep learning neural networks. As the distinguishing feature of such networks is the ability to self-learn, self-learning is identified as one important dimension of the AI state space. Another main dimension lies in the possibility of moving from specific to more general types of problems. The third main dimension is provided by semantic grounding. Since this is a philosophically complex and controversial dimension, a larger part of the paper is devoted to it. We take a fresh look at known foundational arguments in the philosophy of mind and cognition that are gaining new relevance in view of recent AI developments, including the blockhead objection, the Turing test, the symbol grounding problem, the Chinese room argument, and general use-theoretic considerations of meaning. Finally, the AI state space is outlined, spanned by the three main dimensions of generalization, grounding, and "self-x-ness", i.e. the possession of self-x properties such as self-learning.

AI, orthogonality and the Müller-Cannon instrumental vs general intelligence distinction

The by now standard argument put forth by Yudkowsky, Bostrom and others ...

An Initial Look at Self-Reprogramming Artificial Intelligence

Rapid progress in deep learning research has greatly extended the capabi...

Self-Regulating Artificial General Intelligence

Here we examine the paperclip apocalypse concern for artificial general ...

2006: Celebrating 75 years of AI - History and Outlook: the Next 25 Years

When Kurt Goedel laid the foundations of theoretical computer science i...

Artificial Intelligence: A Child's Play

We discuss the objectives of any endeavor in creating artificial intelli...

The Vector Grounding Problem

The remarkable performance of large language models (LLMs) on complex li...

Taming AI Bots: Controllability of Neural States in Large Language Models

We tackle the question of whether an agent can, by suitable choice of pr...