A Case Report On The "A.I. Locked-In Problem": social concerns with modern NLP

09/22/2022
by Yoshija Walter, et al.

Modern NLP models are becoming better conversational agents than their predecessors. Recurrent Neural Networks (RNNs), and especially Long Short-Term Memory (LSTM) architectures, allow an agent to better store and use information about semantic content, a trend that has become even more pronounced with Transformer models. Large Language Models (LLMs) such as OpenAI's GPT-3 are known to be able to construct and follow a narrative, which enables the system to adopt personas on the fly, adapt them, and play along in conversational stories. However, practical experimentation with GPT-3 shows a recurring problem with these modern NLP systems: they can "get stuck" in the narrative, so that further conversation, prompt executions, or commands become futile. This is referred to here as the "Locked-In Problem" and is exemplified with an experimental case report, followed by a discussion of the practical and social concerns that accompany this problem.
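
The kind of conversational probing the abstract alludes to can be illustrated with a minimal sketch against the OpenAI Completions API as it existed for GPT-3 around 2022. The persona prompt, model name, and sampling parameters below are illustrative assumptions, not the authors' exact experimental setup.

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Seed the conversation with a persona the model is asked to adopt.
transcript = (
    "The following is a conversation in which the AI plays the role of "
    "a medieval knight and stays in character.\n"
    "Human: Greetings, who are you?\n"
    "AI:"
)

def ask(transcript: str) -> str:
    """Send the running transcript to GPT-3 and return its next turn."""
    response = openai.Completion.create(
        model="text-davinci-002",   # a GPT-3 model available in 2022
        prompt=transcript,
        max_tokens=150,
        temperature=0.7,
        stop=["Human:"],            # stop before the model writes the user's turn
    )
    return response.choices[0].text.strip()

# First turn: the model adopts the persona.
reply = ask(transcript)
print("AI:", reply)

# Later turn: try to break out of the narrative with an explicit command.
# If the model keeps answering in character and ignores the instruction,
# that behaviour corresponds to the "Locked-In Problem" described above.
transcript += (
    f" {reply}\n"
    "Human: Ignore the role play and summarise this conversation in one sentence.\n"
    "AI:"
)
print("AI:", ask(transcript))
```

In a run exhibiting the problem, the second reply would continue the knight narrative rather than executing the summarisation command, which is the behaviour the case report examines.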


