Voluminous yet Vacuous? Semantic Capital in an Age of Large Language Models

05/29/2023
by Luca Nannini, et al.

Large Language Models (LLMs) have emerged as transformative forces in natural language processing, wielding the power to generate human-like text. Yet, despite their potential for content creation, they carry the risk of eroding our Semantic Capital (SC) - the collective knowledge within our digital ecosystem - thereby posing diverse social epistemic challenges. This paper explores the evolution, capabilities, and limitations of these models, while highlighting the ethical concerns they raise. The study's contribution is twofold: first, it acknowledges that, notwithstanding the challenges of tracking and controlling LLM impacts, it is necessary to reconsider our interaction with these AI technologies and the narratives that shape public perception of them. It is argued that, before achieving this goal, it is essential to confront a potential deontological tipping point in an increasingly AI-driven infosphere. This goes beyond merely adhering to AI ethical norms or regulations; it requires understanding the spectrum of social epistemic risks LLMs might pose to our collective SC. Second, building on Luciano Floridi's taxonomy of SC risks, these risks are mapped onto the functionality and constraints of LLMs. Through this outlook, we aim to protect and enrich our SC while fostering a collaborative environment between humans and AI that augments human intelligence rather than replacing it.


research
02/25/2023

On pitfalls (and advantages) of sophisticated large language models

Natural language processing based on large language models (LLMs) is a b...
research
04/07/2023

Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models

As the capabilities of generative language models continue to advance, t...
research
09/15/2020

The Radicalization Risks of GPT-3 and Advanced Neural Language Models

In this paper, we expand on our previous research of the potential for a...
research
03/31/2023

Augmented Collective Intelligence in Collaborative Ideation: Agenda and Challenges

AI systems may be better thought of as peers than as tools. This paper e...
research
07/18/2023

ChatGPT: ascertaining the self-evident. The use of AI in generating human knowledge

The fundamental principles, potential applications, and ethical concerns...
research
07/05/2023

Citation: A Key to Building Responsible and Accountable Large Language Models

Large Language Models (LLMs) bring transformative benefits alongside uni...
research
06/06/2023

Applying Standards to Advance Upstream Downstream Ethics in Large Language Models

This paper explores how AI-owners can develop safeguards for AI-generate...
