Large Language Models and the Reverse Turing Test

07/28/2022
by Terrence Sejnowski, et al.

Large Language Models (LLMs) have been transformative. They are pre-trained foundation models that can be adapted with fine-tuning to many different natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3, and more recently LaMDA, can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions to whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs that reached wildly different conclusions. A new possibility could explain this divergence: what appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a Reverse Turing Test. If so, then by studying interviews we may be learning more about the intelligence and beliefs of the interviewer than about the intelligence of the LLMs. As LLMs become more capable, they may transform the way we access and use information.


research
11/03/2022

Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively

Large-scale pre-trained language models have achieved impressive results...
research
06/21/2023

Investigating Pre-trained Language Models on Cross-Domain Datasets, a Step Closer to General AI

Pre-trained language models have recently emerged as a powerful tool for...
research
05/19/2020

Controlled Language and Baby Turing Test for General Conversational Intelligence

General conversational intelligence appears to be an important part of a...
research
10/06/2022

Modelling Commonsense Properties using Pre-Trained Bi-Encoders

Grasping the commonsense properties of everyday concepts is an important...
research
03/13/2022

Towards Personalized Intelligence at Scale

Personalized Intelligence (PI) is the problem of providing customized AI...
research
06/01/2020

Deceiving computers in Reverse Turing Test through Deep Learning

It is increasingly becoming difficult for human beings to work on their ...
research
09/12/2023

The first step is the hardest: Pitfalls of Representing and Tokenizing Temporal Data for Large Language Models

Large Language Models (LLMs) have demonstrated remarkable generalization...
