In-game Toxic Language Detection: Shared Task and Attention Residuals

11/11/2022
by   Yuanzhe Jia, et al.

In-game toxic language has become a pressing problem in the gaming industry and community. Several frameworks and models for analyzing online game toxicity have been proposed, but detecting toxicity remains challenging due to the nature of in-game chat, which is extremely short. In this paper, we describe how the in-game toxic language shared task was established using real-world in-game chat data. In addition, we propose a model/framework for toxic language token tagging (slot filling) on in-game chat. The data and code will be released.
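To make the task concrete, toxic language token tagging assigns a label to each token of a chat message rather than classifying the whole utterance. The sketch below illustrates the input/output shape of such a tagger with a hypothetical keyword lexicon (the example words and the `tag_tokens` helper are invented for illustration; the paper's actual approach is a learned attention-based model, not a lexicon lookup):

```python
# Minimal sketch of token-level toxicity tagging (slot filling) on in-game
# chat. The lexicon and function name are hypothetical illustrations; a
# real tagger would be a trained sequence-labeling model.
TOXIC_LEXICON = {"noob", "trash"}  # invented example words

def tag_tokens(chat: str):
    """Label each token 'T' (toxic) or 'O' (non-toxic)."""
    tokens = chat.lower().split()
    return [(tok, "T" if tok in TOXIC_LEXICON else "O") for tok in tokens]

print(tag_tokens("gg you trash noob"))
# → [('gg', 'O'), ('you', 'O'), ('trash', 'T'), ('noob', 'T')]
```

This per-token output format is what distinguishes slot filling from utterance-level toxicity classification, which would produce a single label for the whole message.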

