Can AI Moderate Online Communities?
The task of cultivating healthy communication in online communities becomes increasingly urgent as gaming and social media experiences become progressively more immersive and life-like. We approach the challenge of moderating online communities by training student models using a large language model (LLM). We use zero-shot learning to distill and expand datasets, followed by few-shot learning and fine-tuning, leveraging open-access generative pre-trained transformer (GPT) models from OpenAI. Our preliminary findings suggest that, when properly trained, LLMs can excel at identifying actor intentions, moderating toxic comments, and rewarding positive contributions. The student models perform above expectations on non-contextual assignments, such as identifying classically toxic behavior, and perform sufficiently on contextual assignments, such as identifying positive contributions to online discourse. Further, using open-access models such as OpenAI's GPT, we experience a step change in the development process for what has historically been a complex modeling task. We contribute to the information systems (IS) discourse with a rapid-development framework for applying generative AI to online content moderation and the management of culture in decentralized, pseudonymous communities, providing a sample suite of industry-ready generative AI models based on open-access LLMs.
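The abstract does not include code, but the few-shot moderation step it describes can be sketched concretely. Below is a minimal, hedged illustration of few-shot toxicity classification against the OpenAI chat completions API; the label set, the exemplar comments, and the model name "gpt-3.5-turbo" are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of few-shot comment moderation via the OpenAI API.
# Labels, exemplars, and the model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Few-shot exemplars: each pair maps a community comment to a label.
FEW_SHOT = [
    ("You're an idiot, nobody wants you here.", "toxic"),
    ("Great write-up, thanks for helping the new players!", "positive"),
    ("The patch drops on Tuesday.", "neutral"),
]

def moderate(comment: str) -> str:
    """Classify a comment as toxic / positive / neutral via few-shot prompting."""
    messages = [{
        "role": "system",
        "content": ("You label community comments as toxic, positive, "
                    "or neutral. Answer with the label only."),
    }]
    # Interleave the exemplars as user/assistant turns to form the few-shot prompt.
    for text, label in FEW_SHOT:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": comment})

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed; the paper only names "GPT models"
        messages=messages,
        temperature=0,          # deterministic labels for moderation decisions
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(moderate("Nobody cares about your opinion, just quit."))
```

Setting temperature to 0 keeps the labels deterministic, which matters when moderation decisions need to be reproducible; the same exemplar format could also serve as training data for the fine-tuned student models the paper describes.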