Automatically Detecting Cyberbullying Comments on Online Game Forums

by Hanh Hong-Phuc Vo, et al.

Online game forums are popular among game players, who use them to communicate, discuss game strategy, and make friends. However, these forums also contain abusive and harassing speech that disturbs and threatens players. It is therefore necessary to detect and remove cyberbullying comments automatically to keep game forums clean and friendly. We use the Cyberbullying dataset, collected from the World of Warcraft (WoW) and League of Legends (LoL) forums, and train classification models to automatically detect whether a player's comment is abusive. The best results are a macro F1-score of 82.69 on the LoL forum and 83.86 with the Toxic-BERT model on the Cyberbullying dataset.
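The classification task described above can be sketched as a minimal baseline with scikit-learn. The comments and labels below are illustrative stand-ins, not the paper's Cyberbullying dataset, and a simple TF-IDF plus logistic-regression model replaces the Toxic-BERT model reported in the abstract:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Illustrative stand-in comments; 1 = abusive, 0 = clean (not the real dataset).
comments = [
    "you are complete trash, uninstall the game",
    "kill yourself, you worthless feeder",
    "report this idiot, he is throwing on purpose",
    "what a braindead pick, typical noob",
    "good game everyone, well played",
    "anyone want to queue for ranked later?",
    "that baron call won us the game, nice shotcalling",
    "thanks for the tips on warding, very helpful",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Word and bigram TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# Macro F1 averages the per-class F1 scores, so the abusive and clean
# classes count equally regardless of how imbalanced the data is.
preds = model.predict(comments)
macro_f1 = f1_score(labels, preds, average="macro")
print(f"macro F1 on training data: {macro_f1:.2f}")
```

In practice the score would be computed on a held-out test split rather than on the training comments, as done here only to keep the sketch short.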




