Closing the Loop: Testing ChatGPT to Generate Model Explanations to Improve Human Labelling of Sponsored Content on Social Media

06/08/2023
by Thales Bertaglia, et al.

Regulatory bodies worldwide are intensifying their efforts to ensure transparency in influencer marketing on social media through instruments like the Unfair Commercial Practices Directive (UCPD) in the European Union and Section 5 of the Federal Trade Commission Act. Yet enforcing these obligations has proven highly problematic due to the sheer scale of the influencer market. Automatic detection of sponsored content aims to enable monitoring and enforcement of such regulations at scale. Current research primarily frames this problem as a machine learning task, focusing on developing models that achieve high classification performance in detecting ads. These models rely on human-annotated data for ground-truth labels. However, agreement between annotators is often low, leading to inconsistent labels that undermine model reliability. To improve annotation accuracy and, thus, the detection of sponsored content, we propose using ChatGPT to augment the annotation process with phrases identified as relevant features and brief explanations. Our experiments show that this approach consistently improves inter-annotator agreement and annotation accuracy. Additionally, our survey of user experience in the annotation task indicates that the explanations increase annotators' confidence and streamline the process. Our proposed methods can ultimately lead to more transparency and better alignment with regulatory requirements in sponsored content detection.
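The abstract measures annotation quality via inter-annotator agreement. As a minimal sketch of how such agreement is commonly quantified (the paper does not specify its metric; Cohen's kappa for two annotators is one standard choice), the labels and annotator data below are hypothetical:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labelled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's marginal label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical sponsored-content labels from two annotators on six posts.
annotator_1 = ["ad", "ad", "organic", "ad", "organic", "organic"]
annotator_2 = ["ad", "organic", "organic", "ad", "ad", "organic"]
print(round(cohens_kappa(annotator_1, annotator_2), 3))  # → 0.333
```

A kappa near 0 indicates agreement no better than chance, while values approaching 1 indicate strong agreement; the paper's claim is that LLM-generated explanations shift this metric upward.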


Related research

- A Machine Learning Pipeline to Examine Political Bias with Congressional Speeches (09/18/2021): Computational methods to model political bias in social media involve se...
- Learning from Exemplary Explanations (07/12/2023): eXplanation Based Learning (XBL) is a form of Interactive Machine Learni...
- Annotating Hate and Offenses on Social Media (03/27/2021): This paper describes a corpus annotation process to support the identifi...
- Modeling Human Annotation Errors to Design Bias-Aware Systems for Social Stream Processing (07/16/2019): High-quality human annotations are necessary to create effective machine...
- ToxCCIn: Toxic Content Classification with Interpretability (03/01/2021): Despite the recent successes of transformer-based models in terms of eff...
- Mapping (Dis-)Information Flow about the MH17 Plane Crash (10/03/2019): Digital media enables not only fast sharing of information, but also dis...
- Cyberbullying Detection -- Technical Report 2/2018, Department of Computer Science AGH, University of Science and Technology (08/02/2018): The research described in this paper concerns automatic cyberbullying de...
