Is ChatGPT a Good Personality Recognizer? A Preliminary Study

07/08/2023
by Yu Ji, et al.

In recent years, personality has been regarded as a valuable personal factor that can be incorporated into numerous tasks such as sentiment analysis and product recommendation. This has drawn widespread attention to the text-based personality recognition task, which aims to identify an individual's personality from a given text. Considering that ChatGPT has recently exhibited remarkable abilities on various natural language processing tasks, we provide a preliminary evaluation of ChatGPT on the text-based personality recognition task, with an eye toward generating effective personality data. Concretely, we employ a variety of prompting strategies to explore ChatGPT's ability to recognize personality from a given text, in particular the level-oriented prompting strategy we designed to guide ChatGPT in analyzing the text at a specified level. We compare the performance of ChatGPT on two representative real-world datasets with that of a traditional neural network, fine-tuned RoBERTa, and the corresponding state-of-the-art task-specific models. The experimental results show that ChatGPT with zero-shot chain-of-thought prompting exhibits impressive personality recognition ability: it outperforms fine-tuned RoBERTa on both datasets and is capable of providing natural language explanations through text-based logical reasoning. Furthermore, relative to zero-shot chain-of-thought prompting, zero-shot level-oriented chain-of-thought prompting further improves ChatGPT's personality predictions and narrows the performance gap between ChatGPT and the corresponding state-of-the-art task-specific models. In addition, we conduct experiments to examine the fairness of ChatGPT when identifying personality and find that it exhibits unfairness with respect to sensitive demographic attributes such as gender and age.
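The abstract does not reproduce the exact prompts, but the difference between plain zero-shot chain-of-thought prompting and the level-oriented variant can be sketched roughly as follows. This is a minimal illustration assuming the OpenAI chat-completion API; the prompt wording, the Big Five trait list, the "sentence"-level hint, and the helper recognize_personality are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch: zero-shot CoT vs. level-oriented zero-shot CoT prompting
# for text-based personality recognition (illustrative, not the paper's prompts).
from typing import Optional

import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Assumption: Big Five traits; the paper's datasets may use other schemes.
BIG_FIVE = ["openness", "conscientiousness", "extraversion",
            "agreeableness", "neuroticism"]


def recognize_personality(text: str, trait: str, level: Optional[str] = None) -> str:
    """Ask ChatGPT whether `trait` is high or low for the author of `text`.

    level: optional analysis granularity (e.g. "word", "sentence", "text");
    when given, it constrains the reasoning as in level-oriented prompting.
    """
    instruction = (
        f'Given the text: "{text}"\n'
        f"Is the author's {trait} high or low?"
    )
    if level is not None:
        # Level-oriented hint: specify the level at which to analyze the text.
        instruction += f"\nAnalyze the given text at the {level} level."
    # Zero-shot chain-of-thought trigger.
    instruction += "\nLet's think step by step, then give the final answer (high or low)."

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": instruction}],
        temperature=0,  # reduce randomness for evaluation
    )
    return response["choices"][0]["message"]["content"]


if __name__ == "__main__":
    sample = "I spent the weekend reorganizing my bookshelf and planning next month's budget."
    for trait in BIG_FIVE:
        print(trait, "| zero-shot CoT:", recognize_personality(sample, trait))
        print(trait, "| level-oriented CoT:",
              recognize_personality(sample, trait, level="sentence"))
```

Both variants end with the "Let's think step by step" trigger so the model produces an explanation before its high/low answer; the level-oriented variant only adds a hint about the granularity at which the model should analyze the input, mirroring the level-specified analysis described in the abstract.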
