Will Affective Computing Emerge from Foundation Models and General AI? A First Evaluation on ChatGPT

03/03/2023
by Mostafa M. Amin, et al.

ChatGPT has demonstrated competent performance across many natural language processing tasks, showing the potential of emerging general artificial intelligence capabilities. In this work, we evaluate the capability of ChatGPT to perform text classification on three affective computing problems, namely, big-five personality prediction, sentiment analysis, and suicide tendency detection. We utilise three baselines: a robust language model (RoBERTa-base), a legacy word model with pretrained embeddings (Word2Vec), and a simple bag-of-words model (BoW). Results show that RoBERTa trained for a specific downstream task generally achieves superior performance. ChatGPT, on the other hand, provides decent results that are roughly comparable to the Word2Vec and BoW baselines. ChatGPT further shows robustness against noisy data, whereas the Word2Vec models degrade under noise. Results indicate that ChatGPT is a good generalist model, capable of achieving good results across various problems without any specialised training; however, it does not match a model specialised for a given downstream task.
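To make the contrast concrete, the sketch below illustrates the two kinds of approach compared in the abstract: a bag-of-words baseline fitted on labelled data versus a zero-shot prompt that would be sent to ChatGPT. This is a minimal, hypothetical illustration; the toy data, vectoriser settings, and prompt wording are assumptions, not the authors' actual experimental setup.

```python
# Hypothetical sketch: BoW baseline vs. zero-shot prompting for sentiment analysis.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set (assumed, not from the paper).
texts = [
    "I love this movie",
    "Absolutely terrible service",
    "What a wonderful day",
    "I hated every minute of it",
]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

# BoW baseline: count features fed into a logistic-regression classifier,
# i.e. a model specialised for the downstream task via supervised training.
bow_model = make_pipeline(CountVectorizer(), LogisticRegression())
bow_model.fit(texts, labels)
print(bow_model.predict(["The film was wonderful"]))  # expected: positive on this toy data

# The ChatGPT evaluation needs no task-specific training; each text is wrapped
# in a zero-shot prompt (wording assumed here) and the label is parsed from the reply.
def build_prompt(text: str) -> str:
    return (
        "Classify the sentiment of the following text as 'positive' or 'negative'. "
        f"Reply with one word only.\nText: {text}"
    )

print(build_prompt("The film was wonderful"))
```

The design difference mirrors the abstract's finding: the supervised baselines require labelled training data per task, while the prompt-based approach generalises across tasks at the cost of lower task-specific accuracy.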
