ChatGPT: Jack of all trades, master of none

02/21/2023
by Jan Kocoń et al.

OpenAI has released the Chat Generative Pre-trained Transformer (ChatGPT) and revolutionized the approach to human-model interaction in artificial intelligence. First contact with the chatbot reveals its ability to provide detailed and precise answers in various areas. Several publications have evaluated ChatGPT by testing its effectiveness on well-known natural language processing (NLP) tasks, but the existing studies are mostly non-automated and tested on a very limited scale. In this work, we examined ChatGPT's capabilities on 25 diverse analytical NLP tasks, most of them subjective even for humans, such as sentiment analysis, emotion recognition, offensiveness and stance detection, natural language inference, word sense disambiguation, linguistic acceptability, and question answering. We automated ChatGPT's querying process and analyzed more than 38k responses. Our comparison of its results with available State-of-the-Art (SOTA) solutions showed that the average loss in quality of the ChatGPT model was about 25% for zero-shot and few-shot evaluation. We showed that the more difficult the task (the lower the SOTA performance), the higher ChatGPT's loss; this is especially true for pragmatic NLP problems such as emotion recognition. We also tested the ability to personalize ChatGPT responses for selected subjective tasks via Random Contextual Few-Shot Personalization, and we obtained significantly better user-based predictions. Additional qualitative analysis revealed a ChatGPT bias, most likely due to the rules imposed on human trainers by OpenAI. Our results provide the basis for a fundamental discussion of whether the high quality of recent predictive NLP models can indicate a tool's usefulness to society and how the learning and validation procedures for such systems should be established.
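The paper's own evaluation pipeline is not reproduced here, but the sketch below illustrates what automating ChatGPT queries for a subjective task can look like, in both zero-shot form and with the kind of user-personalized few-shot context the abstract calls Random Contextual Few-Shot Personalization. The prompt wording, the model name, the `classify_sentiment` helper, and the choice of sampling up to three of a user's past annotations are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch (an assumption, not the authors' code): automating ChatGPT
# queries for a subjective NLP task, optionally prepending a few randomly
# sampled annotations made by the same user as in-context examples.
# Requires the `openai` package (>=1.0) and an OPENAI_API_KEY env variable.
import random
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_sentiment(text: str,
                       user_examples: list[tuple[str, str]] | None = None) -> str:
    """Ask the chat model for a sentiment label, optionally personalized
    with (text, label) pairs previously annotated by the same user."""
    messages = [{
        "role": "system",
        "content": ("Classify the sentiment of the text as positive, negative,"
                    " or neutral. Answer with a single word."),
    }]
    if user_examples:
        # Random contextual few-shot personalization: show the model how this
        # particular user labels similar texts before asking about the new one.
        for ex_text, ex_label in random.sample(user_examples,
                                               k=min(3, len(user_examples))):
            messages.append({"role": "user", "content": ex_text})
            messages.append({"role": "assistant", "content": ex_label})
    messages.append({"role": "user", "content": text})

    response = client.chat.completions.create(model="gpt-3.5-turbo",
                                               messages=messages,
                                               temperature=0)
    return response.choices[0].message.content.strip()


# Zero-shot query:
print(classify_sentiment("The plot was thin, but the acting saved the evening."))

# Personalized few-shot query, using this user's own past annotations:
history = [("I loved every minute of it.", "positive"),
           ("Not my kind of film.", "negative"),
           ("It was fine, nothing special.", "neutral")]
print(classify_sentiment("The plot was thin, but the acting saved the evening.",
                         history))
```

Looping such calls over a labeled test set and comparing the returned labels with gold annotations would give the kind of automated, large-scale evaluation the abstract describes; the personalized variant simply swaps in examples drawn from the target user's annotation history.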


