YouTubers Not madeForKids: Detecting Channels Sharing Inappropriate Videos Targeting Children
In recent years, hundreds of new YouTube channels have been creating and sharing videos targeting children, with themes related to animation, superhero movies, comics, etc. Unfortunately, many of these videos are inappropriate for their target audience, due to disturbing, violent, or sexual scenes. In this paper, we study YouTube channels that were found in the past to post suitable or disturbing videos targeting kids. We identify a clear discrepancy between what YouTube flags as inappropriate content and channels, and what is in fact disturbing content targeting kids that remains available on the platform. In particular, we find that almost 60% of the videos manually annotated and classified as disturbing by an earlier study in 2019 (a collection bootstrapped with "Elsa" and other keywords related to children's videos) were still available on YouTube in mid-2021. Moreover, 44 channels that uploaded such disturbing videos have yet to be suspended and their videos removed. For the first time in the literature, we also study the "madeForKids" flag, a feature YouTube introduced at the end of 2019, and compare its application to the channels that shared disturbing videos, as flagged by the earlier study. We find that these channels are less likely to be set as "madeForKids" than those sharing suitable content. In addition, channels posting disturbing videos exploit channel features such as keywords, description, topics, posts, etc., to appeal to kids (e.g., using game-related keywords). Finally, we use a collection of such channel and content features to train ML classifiers able to detect, at channel creation time, whether a channel will later be associated with disturbing video uploads. These classifiers can help YouTube moderators reduce such incidents by pointing to potentially suspicious accounts without analyzing actual videos.
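To illustrate the kind of classifier the abstract describes, below is a minimal sketch of training a model on channel-level metadata available at creation time. It is not the paper's actual pipeline: the toy data, the feature names (e.g., made_for_kids, keyword_count), and the choice of a random forest are all assumptions made for illustration; the study's real feature set and model selection are described in the full text.

```python
# Minimal sketch (NOT the paper's pipeline): classify channels from metadata
# available at creation time. Data, feature names, and model are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Toy per-channel records: free-text description plus the "madeForKids" flag
# and a simple count of channel keywords (repeated to allow a train/test split).
channels = pd.DataFrame({
    "description": [
        "official cartoons and nursery rhymes for toddlers",
        "elsa spiderman pranks gone wrong compilation",
        "minecraft gameplay and challenges every day",
        "learn colors and numbers with fun animations",
    ] * 25,
    "made_for_kids": [1, 0, 0, 1] * 25,
    "keyword_count": [12, 30, 25, 8] * 25,
    "label": [0, 1, 1, 0] * 25,  # 1 = channel later uploaded disturbing videos
})

# Combine TF-IDF text features with the numeric/flag features.
features = ColumnTransformer([
    ("desc_tfidf", TfidfVectorizer(max_features=500), "description"),
    ("numeric", "passthrough", ["made_for_kids", "keyword_count"]),
])
clf = Pipeline([
    ("features", features),
    ("model", RandomForestClassifier(n_estimators=200, random_state=0)),
])

X = channels.drop(columns="label")
y = channels["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

Because every feature here is available before any video is uploaded, a moderator-facing tool built on this pattern could rank newly created channels for review without processing video content, which is the deployment scenario the abstract points to.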