Crowd Score: A Method for the Evaluation of Jokes using Large Language Model AI Voters as Judges

12/21/2022
by Fabricio Goes, et al.

This paper presents the Crowd Score, a novel method for assessing the funniness of jokes using large language models (LLMs) as AI judges. Our method relies on inducing different personalities into the LLM and aggregating the votes of the AI judges into a single score to rate jokes. We validate the votes with an auditing technique that uses the LLM to check whether the explanation given for a particular vote is reasonable. We tested our methodology on 52 jokes with a crowd of four AI voters with different humour types: affiliative, self-enhancing, aggressive and self-defeating. Our results show that few-shot prompting leads to better results than zero-shot for the voting question. Personality induction showed that, on a set of aggressive/self-defeating jokes, aggressive and self-defeating voters are significantly more inclined than affiliative and self-enhancing voters to find the jokes funny. The Crowd Score follows the same trend as human judges, assigning higher scores to jokes that human judges also consider funnier. We believe our methodology could be applied to other creative domains such as stories, poetry and slogans. It could help the Computational Creativity (CC) community adopt a flexible and accurate standard for comparing different work under a common metric, and, by minimising human participation in assessment, it could accelerate the prototyping of creative artefacts and reduce the cost of hiring human participants to rate them.
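The pipeline the abstract describes (personality induction via prompting, one vote per AI judge, aggregation into a single score) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the exemplar joke in the few-shot prompt, the `vote_fn` callback, and the percentage-of-funny-votes aggregation are all assumptions made for the sketch, with a stub standing in for the real LLM call.

```python
from typing import Callable, List

# The four humour types used as voter personalities (from the paper).
PERSONALITIES: List[str] = [
    "affiliative", "self-enhancing", "aggressive", "self-defeating",
]

def build_prompt(personality: str, joke: str) -> str:
    """Few-shot voting prompt. The exemplar joke and its label below are
    placeholders, not the paper's actual prompt contents."""
    return (
        f"Personality: {personality}.\n"
        "Joke: Why did the chicken cross the road? To get to the other side.\n"
        "Funny? no\n"
        f"Joke: {joke}\n"
        "Funny?"
    )

def crowd_score(joke: str, vote_fn: Callable[[str], str]) -> float:
    """Collect one yes/no vote per personality and aggregate into a 0-100
    score; here the score is simply the percentage of 'funny' votes."""
    votes = [vote_fn(build_prompt(p, joke)) for p in PERSONALITIES]
    return 100.0 * sum(v.strip().lower() == "yes" for v in votes) / len(votes)

# Stand-in for a real LLM call, so the sketch runs without an API key.
# It caricatures the paper's finding: aggressive/self-defeating voters
# are more inclined to vote "funny".
def mock_vote(prompt: str) -> str:
    return "yes" if ("aggressive" in prompt or "self-defeating" in prompt) else "no"

print(crowd_score("A sample joke.", mock_vote))  # → 50.0
```

In a real run, `vote_fn` would wrap an LLM API call, and the auditing step described in the abstract would filter out votes whose explanations the LLM judges unreasonable before aggregation.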


