Symbolic and Language Agnostic Large Language Models

08/27/2023
by Walid S. Saba, et al.

We argue that the relative success of large language models (LLMs) does not settle the symbolic vs. subsymbolic debate; rather, it reflects the success of an appropriate strategy, namely bottom-up reverse engineering of language at scale. However, because these models are subsymbolic, whatever knowledge they acquire about language will always be buried in millions of microfeatures (weights), none of which is meaningful on its own. Moreover, due to their stochastic nature, these models often fail to capture various inferential aspects that are prevalent in natural language. What we suggest here is employing the same successful bottom-up strategy in a symbolic setting, producing symbolic, language-agnostic, and ontologically grounded large language models.
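The contrast the abstract draws can be made concrete with a minimal sketch. The code below is purely illustrative and not from the paper: all names (`subsymbolic_embedding`, `Concept`, `LEXICON`, `ISA`, `entails_animate`) are hypothetical. It juxtaposes a subsymbolic representation, where a word is an opaque vector of weights with no individually meaningful dimension, against a symbolic, ontologically grounded entry, where inferences are explicit and auditable.

```python
import random
from dataclasses import dataclass

# Subsymbolic view: a word is an opaque vector of weights; no single
# dimension is meaningful on its own (hypothetical stand-in for a
# learned embedding, seeded per word so it is deterministic).
def subsymbolic_embedding(word: str, dim: int = 8) -> list[float]:
    rng = random.Random(word)
    return [rng.uniform(-1.0, 1.0) for _ in range(dim)]

# Symbolic, ontologically grounded view: a word maps to an explicit
# concept typed in a (toy) ontology, so inferences are inspectable.
@dataclass(frozen=True)
class Concept:
    name: str
    ontological_type: str  # e.g. "human", "artifact"

LEXICON = {
    "teacher": Concept("teacher", "human"),
    "chair": Concept("chair", "artifact"),
}

# Toy ontology: each type has a parent category.
ISA = {"human": "animate", "artifact": "inanimate"}

def entails_animate(word: str) -> bool:
    """An auditable inference: does the word's referent denote
    something animate, according to the ontology?"""
    return ISA[LEXICON[word].ontological_type] == "animate"
```

In the symbolic setting, an inference such as "a teacher is animate" follows from two explicit, inspectable facts; in the subsymbolic setting, the same regularity, if captured at all, is distributed across all the weights at once.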


Related research

- Stochastic LLMs do not Understand Language: Towards Symbolic, Explainable and Ontologically Based LLMs (09/12/2023). "In our opinion the exuberance surrounding the relative success of data-d..."
- Towards Explainable and Language-Agnostic LLMs: Symbolic Reverse Engineering of Language at Scale (05/30/2023). "Large language models (LLMs) have achieved a milestone that undeniably ..."
- MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning (05/01/2022). "Huge language models (LMs) have ushered in a new era for AI, serving as ..."
- Transparency Helps Reveal When Language Models Learn Meaning (10/14/2022). "Many current NLP systems are built from language models trained to optim..."
- Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm (02/15/2021). "Prevailing methods for mapping large generative language models to super..."
- Can the Inference Logic of Large Language Models be Disentangled into Symbolic Concepts? (04/03/2023). "In this paper, we explain the inference logic of large language models (..."
- A Context-Dependent Gated Module for Incorporating Symbolic Semantics into Event Coreference Resolution (04/04/2021). "Event coreference resolution is an important research problem with many ..."
