The Vector Grounding Problem

04/04/2023
by Dimitri Coelho Mollo, et al.

The remarkable performance of large language models (LLMs) on complex linguistic tasks has sparked a lively debate on the nature of their capabilities. Unlike humans, these models learn language exclusively from textual data, without direct interaction with the real world. Nevertheless, they can generate seemingly meaningful text about a wide range of topics. This impressive accomplishment has rekindled interest in the classical 'Symbol Grounding Problem,' which questioned whether the internal representations and outputs of classical symbolic AI systems could possess intrinsic meaning. Unlike these systems, modern LLMs are artificial neural networks that compute over vectors rather than symbols. However, an analogous problem arises for such systems, which we dub the Vector Grounding Problem. This paper has two primary objectives. First, we differentiate various ways in which internal representations can be grounded in biological or artificial systems, identifying five distinct notions discussed in the literature: referential, sensorimotor, relational, communicative, and epistemic grounding. Unfortunately, these notions of grounding are often conflated. We clarify the differences between them, and argue that referential grounding is the one that lies at the heart of the Vector Grounding Problem. Second, drawing on theories of representational content in philosophy and cognitive science, we propose that certain LLMs, particularly those fine-tuned with Reinforcement Learning from Human Feedback (RLHF), possess the necessary features to overcome the Vector Grounding Problem, as they stand in the requisite causal-historical relations to the world that underpin intrinsic meaning. We also argue that, perhaps unexpectedly, multimodality and embodiment are neither necessary nor sufficient conditions for referential grounding in artificial systems.
