Improved Abusive Comment Moderation with User Embeddings

08/11/2017 · by John Pavlopoulos, et al. · Athens University of Economics and Business · StrainTek

Experimenting with a dataset of approximately 1.6M user comments from a Greek sports news portal, we explore how a state-of-the-art RNN-based moderation method can be improved by adding user embeddings, user type embeddings, user biases, or user type biases. We observe improvements in all cases, with user embeddings leading to the biggest performance gains.


1 Introduction

News portals often allow their readers to comment on articles, in order to get feedback, engage their readers, and build customer loyalty. User comments, however, can also be abusive (e.g., bullying, profanity, hate speech), damaging the reputation of news portals, making them liable to fines (e.g., when hosting comments encouraging illegal actions), and putting off readers. Large news portals often employ moderators, who are frequently overwhelmed by the volume and abusiveness of comments (see, for example, https://goo.gl/WTQyio). Readers are disappointed when non-abusive comments do not appear online quickly because of moderation delays. Smaller news portals may be unable to employ moderators, and some have been forced to shut down their comments sections (see https://goo.gl/2eKdeE).

In previous work (Pavlopoulos et al., 2017a), we introduced a new dataset of approximately 1.6M manually moderated user comments from a Greek sports news portal, called Gazzetta, which we made publicly available. (The portal is http://www.gazzetta.gr/. Instructions to download the dataset will become available at http://nlp.cs.aueb.gr/software.html.) Experimenting on that dataset and the datasets of Wulczyn et al. (2017), which contain moderated English Wikipedia comments, we showed that a method based on a Recurrent Neural Network (rnn) outperforms detox (Wulczyn et al., 2017), the previous state of the art in automatic user content moderation. (Two of the co-authors of Wulczyn et al. (2017) are with Jigsaw, which recently announced Perspective, a system to detect 'toxic' comments. Perspective is not the same as detox (personal communication), but we were unable to obtain scientific articles describing it.) Our previous work, however, considered only the texts of the comments, ignoring user-specific information (e.g., the number of previously accepted or rejected comments of each user). Here we add user embeddings or user type embeddings to our rnn-based method, i.e., dense vectors that represent individual users or user types, similarly to word embeddings that represent words (Mikolov et al., 2013; Pennington et al., 2014). Experiments on Gazzetta comments show that both user embeddings and user type embeddings improve the performance of our rnn-based method, with user embeddings helping more. User-specific or user-type-specific scalar biases also help, to a lesser extent.

2 Dataset

Dataset/Split | Accepted | Rejected | Green | Yellow | Red | Unknown | Total
g-train | 960,378 (66%) | 489,222 (34%) | 724,247 (50%) | 585,622 (40%) | 43,702 (3%) | 96,029 (7%) | 1.45M
g-dev | 20,236 (68%) | 9,464 (32%) | 14,378 (48%) | 10,964 (37%) | 546 (2%) | 3,812 (13%) | 29,700
g-test | 20,064 (68%) | 9,636 (32%) | 14,559 (49%) | 10,681 (36%) | 621 (2%) | 3,839 (13%) | 29,700
Table 1: Comment statistics of the dataset used. Accepted/Rejected are the gold labels; Green/Yellow/Red/Unknown count comments per user type of the author.
Dataset/Split | Green | Yellow | Red | Unknown | Total
g-train | 4,451 | 3,472 | 251 | 21,865 (1) | 8,175
g-dev | 1,631 | 1,218 | 64 | 1,281 (1) | 2,914
g-test | 1,654 | 1,203 | 67 | 1,254 (1) | 2,925
Table 2: User statistics of the dataset used. All Unknown users share a single 'unknown' representation, counted once (the parenthesized 1) in the totals; the larger Unknown figures are the raw counts of such users.

We first discuss the dataset we used, to help acquaint the reader with the problem. The dataset contains Greek comments from Gazzetta (Pavlopoulos et al., 2017a). There are approximately 1.45M training comments (covering Jan. 1, 2015 to Oct. 6, 2016); we call them g-train (Table 1). An additional set of 60,900 comments (Oct. 7 to Nov. 11, 2016) was split into a development set (g-dev, 29,700 comments) and a test set (g-test, 29,700 comments); the remaining 1,500 comments are not used here. Smaller subsets of g-train and g-test are also available (Pavlopoulos et al., 2017a), but are not used in this paper. The Wikipedia comment datasets of Wulczyn et al. (2017) cannot be used here, because they do not provide user ids. Each comment has a gold label ('accept' or 'reject'). The user id of the author of each comment is also available, but user ids were not used in our previous work.

When experimenting with user type embeddings or biases, we group the users into the following types; $T(u)$ is the number of training comments posted by user (id) $u$, and $R(u)$ is the ratio of training comments posted by $u$ that were rejected.

Red: Users with $T(u) > 10$ and $R(u) \geq 0.66$.

Yellow: Users with $T(u) > 10$ and $0.33 \leq R(u) < 0.66$.

Green: Users with $T(u) > 10$ and $R(u) < 0.33$.

Unknown: Users with $T(u) \leq 10$.
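
To make the grouping concrete, here is a minimal Python sketch; it is not the authors' code, the threshold values mirror the reconstructed definitions above, and the input format (a list of (user_id, was_rejected) pairs) is a hypothetical convention.

```python
from collections import defaultdict

def group_users(train_comments, t_min=10, r_green=0.33, r_red=0.66):
    """Assign each user a type from their g-train comments.

    train_comments: iterable of (user_id, was_rejected) pairs.
    Returns a dict: user_id -> 'green' | 'yellow' | 'red' | 'unknown'.
    """
    totals = defaultdict(int)    # T(u): training comments of user u
    rejects = defaultdict(int)   # rejected training comments of user u
    for user_id, was_rejected in train_comments:
        totals[user_id] += 1
        rejects[user_id] += int(was_rejected)

    types = {}
    for u, t in totals.items():
        if t <= t_min:               # too few comments to judge the user
            types[u] = "unknown"
        else:
            r = rejects[u] / t       # R(u): rejection rate of user u
            if r >= r_red:
                types[u] = "red"
            elif r < r_green:
                types[u] = "green"
            else:
                types[u] = "yellow"
    return types
```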

Table 2 shows the number of users per type.

3 Methods

rnn: This is the rnn-based method of our previous work (Pavlopoulos et al., 2017a). It is a chain of gru cells (Cho et al., 2014) that transforms the tokens $w_1, \dots, w_k$ of each comment to the hidden states $h_1, \dots, h_k$. Once $h_k$ has been computed, a logistic regression (lr) layer estimates the probability that comment $c$ should be rejected:

$P_{\textsc{rnn}}(\text{reject} \mid c) = \sigma(W_p h_k + b) \quad (1)$

where $\sigma$ is the sigmoid function, $W_p$ is a learned weight (row) vector, and $b \in \mathbb{R}$ is a learned bias term. (In our previous work (Pavlopoulos et al., 2017a), we also considered a variant of rnn, called a-rnn, with an attention mechanism. We do not consider a-rnn here to save space.)
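
The experiments used Keras with the TensorFlow back-end (see Section 3 below); as an illustration only, a minimal sketch of this architecture in the modern tf.keras API could look as follows. VOCAB_SIZE, MAX_LEN, and HIDDEN are placeholder values; only the 300-dimensional word embeddings, the Adam optimizer, and the cross-entropy loss are taken from the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 50_000   # placeholder vocabulary size
MAX_LEN = 100         # placeholder maximum comment length (tokens)
EMB_DIM = 300         # 300-dimensional word embeddings, as in the paper
HIDDEN = 128          # placeholder number of GRU hidden units

tokens = keras.Input(shape=(MAX_LEN,), dtype="int32", name="tokens")
embedded = layers.Embedding(VOCAB_SIZE, EMB_DIM, mask_zero=True)(tokens)
h_k = layers.GRU(HIDDEN, name="gru_chain")(embedded)    # last hidden state h_k
p_reject = layers.Dense(1, activation="sigmoid")(h_k)   # Eq. (1): sigma(W_p h_k + b)

rnn_model = keras.Model(tokens, p_reject, name="rnn")
rnn_model.compile(optimizer="adam", loss="binary_crossentropy")
```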

uernn: This is the rnn-based method with user embeddings added. Each user of the training set that is not of type Unknown (Section 2) is mapped to a user-specific embedding $v_u$; Unknown users are mapped to a single shared 'unknown' user embedding. The lr layer is modified as follows, where $v_u$ is the embedding of the author of $c$ and $W_v$ is a learned weight vector:

$P_{\textsc{uernn}}(\text{reject} \mid c) = \sigma(W_p h_k + W_v v_u + b) \quad (2)$
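
Continuing the Keras sketch above (reusing tokens and h_k), uernn adds a user-id input mapped to a trainable embedding $v_u$; concatenating $h_k$ with $v_u$ and applying a single sigmoid unit computes exactly the sum inside Eq. (2). N_USERS and USER_DIM are placeholder assumptions (index 0 could serve as the shared 'unknown' user).

```python
N_USERS = 8_175   # e.g., known users plus the shared 'unknown' id (Table 2)
USER_DIM = 300    # placeholder user embedding dimensionality

user_id = keras.Input(shape=(1,), dtype="int32", name="user_id")
v_u = layers.Flatten()(layers.Embedding(N_USERS, USER_DIM)(user_id))  # v_u

# Eq. (2): a dense sigmoid over [h_k; v_u] computes sigma(W_p h_k + W_v v_u + b).
concat = layers.Concatenate()([h_k, v_u])
p_reject_ue = layers.Dense(1, activation="sigmoid")(concat)
uernn_model = keras.Model([tokens, user_id], p_reject_ue, name="uernn")
```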

ternn: This is the rnn-based method with user type embeddings added. Each user type $t$ is mapped to a user type embedding $v_t$. The lr layer is modified as follows, where $v_{t(u)}$ is the embedding of the type $t(u)$ of the author of $c$ and $W_t$ is a learned weight vector:

$P_{\textsc{ternn}}(\text{reject} \mid c) = \sigma(W_p h_k + W_t v_{t(u)} + b) \quad (3)$

ubrnn: This is the rnn-based method with user biases added. Each user of the training set that is not of type Unknown is mapped to a user-specific scalar bias $b_u \in \mathbb{R}$; Unknown users are mapped to a single shared 'unknown' user bias. The lr layer is modified as follows, where $b_u$ is the bias of the author of $c$:

$P_{\textsc{ubrnn}}(\text{reject} \mid c) = \sigma(W_p h_k + b_u) \quad (4)$

We expected ubrnn to learn higher (or lower) biases for users whose posts were frequently rejected (accepted) in the training data, biasing the system towards rejecting (accepting) their posts.
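
The biases of ubrnn admit an equally small sketch, again continuing the snippet above: an Embedding layer with output dimension 1 acts as a lookup table of per-user scalars $b_u$, added to $W_p h_k$ before the sigmoid, as in Eq. (4). ternn and tbrnn follow the same pattern, with a four-valued type id in place of the user id.

```python
# One trainable scalar b_u per user id (the shared 'unknown' id included).
b_u = layers.Flatten()(layers.Embedding(N_USERS, 1)(user_id))

w_h = layers.Dense(1, use_bias=False)(h_k)                # W_p h_k
logit = layers.Add()([w_h, b_u])                          # W_p h_k + b_u
p_reject_ub = layers.Activation("sigmoid")(logit)         # Eq. (4)
ubrnn_model = keras.Model([tokens, user_id], p_reject_ub, name="ubrnn")
```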

tbrnn: This is the rnn-based method with user type biases. Each user type $t$ is mapped to a scalar bias $b_t \in \mathbb{R}$. The lr layer is modified as follows, where $b_{t(u)}$ is the bias of the type of the author:

$P_{\textsc{tbrnn}}(\text{reject} \mid c) = \sigma(W_p h_k + b_{t(u)}) \quad (5)$

We expected tbrnn to learn a higher bias $b_{t(u)}$ for the Red user type (frequently rejected) and a lower bias for the Green user type (frequently accepted), with the biases of the other two types in between.

In all methods above, we use 300-dimensional word embeddings; the dimensionalities of the user and user type embeddings and the number of hidden units in the gru cells are set as in our previous experiments (Pavlopoulos et al., 2017a), where we tuned all hyper-parameters on a held-out 2% of the training comments. Early stopping evaluates on the same held-out subset. User and user type embeddings are randomly initialized and updated by backpropagation. Word embeddings are initialized to the word2vec embeddings of our previous work (Pavlopoulos et al., 2017a), which were pretrained on 5.2M Gazzetta comments. Out-of-vocabulary words, meaning words not encountered or encountered only once in the training set and/or words with no initial embeddings, are mapped (during both training and testing) to a single randomly initialized word embedding, updated by backpropagation. We use Glorot initialization (Glorot and Bengio, 2010) for the other parameters, cross-entropy loss, and Adam (Kingma and Ba, 2015). (We used Keras, http://keras.io/, with the TensorFlow back-end, http://www.tensorflow.org/.)

ubase: For a comment authored by user $u$, this baseline returns the rejection rate $R(u)$ of the author's training comments if $u$ is not of type Unknown (Section 2), and 0.5 otherwise.
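
ubase needs no training at all; a minimal sketch, reusing the per-user counts of the group_users sketch in Section 2 (the cutoff t_min mirrors the reconstructed threshold used there):

```python
def ubase(user_id, totals, rejects, t_min=10):
    """Rejection probability for a comment authored by user_id.

    totals/rejects: per-user counts computed on g-train
    (see the group_users sketch in Section 2).
    """
    t = totals.get(user_id, 0)
    if t > t_min:                       # enough history: return R(u)
        return rejects.get(user_id, 0) / t
    return 0.5                          # unknown author: uninformative score
```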

tbase: This baseline returns a fixed rejection probability per comment, determined by the user type of the author.

4 Results and Discussion

Table 3: auc scores of uernn, ubrnn, ternn, tbrnn, rnn, ubase, and tbase on g-dev and g-test, with standard errors in brackets. (The numeric scores are not recoverable from this copy; see the discussion below for the relative ordering of the methods.)

Table 3 shows the auc scores (area under the roc curve) of the methods considered. Using auc allows us to compare directly to the results of our previous work (Pavlopoulos et al., 2017a) and the work of Wulczyn et al. (2017). Also, auc considers performance at multiple classification thresholds (rejecting comment $c$ when $P(\text{reject} \mid c) \geq t$, for different values of the threshold $t$), which gives a more complete picture than reporting precision, recall, or F-scores for a particular $t$ only. Accuracy is not an appropriate measure here, because of class imbalance (Table 1). For methods that involve random initializations (all but the baselines), the results are averaged over three repetitions; we also report the standard error across the repetitions.
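
This protocol is straightforward to reproduce; a sketch with scikit-learn, assuming the gold labels and one vector of predicted rejection probabilities per repetition are available:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def mean_auc(gold, score_runs):
    """Mean AUC over repetitions and its standard error.

    gold: binary labels (1 = rejected); score_runs: list of arrays of
    predicted rejection probabilities, one per training repetition.
    """
    aucs = np.array([roc_auc_score(gold, s) for s in score_runs])
    return aucs.mean(), aucs.std(ddof=1) / np.sqrt(len(aucs))
```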

Figure 1: User embeddings learned by uernn (2 principal components). Color represents the rejection rate of the user’s training comments.

User-specific information always improves our original rnn-based method (Table 3), but the best results are obtained by adding user embeddings (uernn). Figure 1 visualizes the user embeddings learned by uernn. The two dimensions of Fig. 1 correspond to the two principal components of the user embeddings, obtained via pca. The colors and numeric labels reflect the rejection rates $R(u)$ of the corresponding users. Moving from left to right in Fig. 1, the rejection rate increases, indicating that the user embeddings of uernn capture mostly the rejection rate $R(u)$. This rate (a single scalar value per user) can also be captured by the simpler user-specific biases of ubrnn, which explains why ubrnn also performs well (second best results in Table 3). Nevertheless, uernn performs better than ubrnn, suggesting that user embeddings capture more information than just a user-specific rejection rate bias. We obtained no clear clusterings with t-SNE (van der Maaten and Hinton, 2008).
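
A figure like Fig. 1 can be reproduced by projecting the rows of the learned user embedding matrix onto their first two principal components and coloring each point by the user's training rejection rate; a sketch with hypothetical inputs:

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_user_embeddings(user_emb, rejection_rates):
    """user_emb: (num_users, dim) matrix of learned embeddings v_u;
    rejection_rates: R(u) for the user in each row."""
    coords = PCA(n_components=2).fit_transform(user_emb)
    plt.scatter(coords[:, 0], coords[:, 1], c=rejection_rates,
                cmap="coolwarm", s=8)
    plt.colorbar(label="training rejection rate R(u)")
    plt.xlabel("1st principal component")
    plt.ylabel("2nd principal component")
    plt.show()
```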

Three of the user types (Red, Yellow, Green) in effect also measure $R(u)$, but in discretized form (three bins), which explains why user type embeddings (ternn) also perform well (third best method). The performance of tbrnn is close to that of ternn, suggesting again that most of the information captured by user type embeddings can also be captured by simpler scalar user-type-specific biases. The user type biases learned by tbrnn are shown in Table 4. The bias of the Red type is the largest, the bias of the Green type is the smallest, and the biases of the Unknown and Yellow types are in between, as expected (Section 3). The same observations hold for the average user-specific biases learned by ubrnn (Table 4).

Overall, Table 3 indicates that user-specific information (uernn, ubrnn) is better than user-type information (ternn, tbrnn), and that embeddings (uernn, ternn) are better than the scalar biases (ubrnn, tbrnn), though the differences are small. All the rnn-based methods outperform the two baselines (ubase, tbase), which do not consider the texts of the comments.

Let us provide a couple of examples, to illustrate the role of user-specific information. We encountered a comment saying just "Ooooh, down to Piraeus…" (translated from Greek), which the moderator had rejected, because it is the beginning of an abusive slogan. The rejection probability of rnn was only 0.34, presumably because there are no clearly abusive expressions in the comment, but the rejection probability of uernn was 0.72, because the author had a very high rejection rate. On the other hand, another comment said "Indeed, I know nothing about the filth of Greek soccer." (translated, apparently not a sarcastic comment). The original rnn method marginally rejected the comment (rejection probability 0.57), presumably because of the word 'filth' (comments talking about the filth of some sport or championship are often rejected), but uernn gave it a very low rejection probability (0.15), because the author of the comment had a very low rejection rate.

5 Related work

In previous work (Pavlopoulos et al., 2017a), we showed that our rnn-based method outperforms detox (Wulczyn et al., 2017), the previous state of the art in user content moderation. detox uses character or word n-gram features, no user-specific information, and an lr or mlp classifier. Other related work on abusive content moderation was reviewed extensively in our previous work (Pavlopoulos et al., 2017a). Here we focus on previous work that considered user-specific features and user embeddings.

Table 4: User type biases $b_{t(u)}$ learned by tbrnn and average user-specific biases $b_u$ of ubrnn, per user type (Green, Yellow, Unknown, Red), with standard errors. (The numeric values are not recoverable from this copy.)

Dadvar et al. (2013) detect cyberbullying in YouTube comments, using an svm and features examining the content of each comment (e.g., second person pronouns followed by profane words, common bullying words), but also the profile and history of the author of the comment (e.g., age, frequency of profane words in past posts). Waseem and Hovy (2016) detect hate speech tweets. Their best method is an lr classifier, with character n-grams and a feature indicating the gender of the author; adding the location of the author did not help.

Cheng et al. (2015) predict which users will be banned from on-line communities. Their best system uses a Random Forest or lr classifier, with features examining the average readability and sentiment of each user's past posts, the past activity of each user (e.g., number of posts daily, proportion of posts that are replies), and the reactions of the community to the past actions of each user (e.g., up-votes, number of posts rejected). Lee et al. (2014) and Napoles et al. (2017) include similar user-specific features in classifiers intended to detect high quality on-line discussions.

Amir et al. (2016) detect sarcasm in tweets. Their best system uses a word-based Convolutional Neural Network (cnn). The feature vector produced by the cnn (representing the content of the tweet) is concatenated with the user embedding of the author, and passed on to an mlp that classifies the tweet as sarcastic or not. This method outperforms a previous state-of-the-art sarcasm detection method (Bamman and Smith, 2015) that relies on an lr classifier with hand-crafted content and user-specific features. We use an rnn instead of a cnn, and we feed the comment and user embeddings to a simpler lr layer (Eq. 2), instead of an mlp. Amir et al. discard unknown users, unlike our experiments, and consider only sarcasm, whereas moderation also involves profanity, hate speech, bullying, threats, etc.

User embeddings have also been used in: conversational agents (Li et al., 2016); sentiment analysis (Chen et al., 2016); retweet prediction (Zhang et al., 2016); and predicting which topics a user is likely to tweet about, the accounts a user may want to follow, and the age, gender, and political affiliation of Twitter users (Benton et al., 2016).

Our previous work (Pavlopoulos et al., 2017a) also discussed how machine learning can be used in semi-automatic moderation, by letting moderators focus on 'difficult' comments and automatically handling comments that are easier to accept or reject. In more recent work (Pavlopoulos et al., 2017b), we also explored how an attention mechanism can be used to highlight possibly abusive words or phrases when showing 'difficult' comments to moderators.

6 Conclusions

Experimenting with a dataset of approximately 1.6M user comments from a Greek sports news portal, we explored how a state-of-the-art rnn-based moderation method can be improved by adding user embeddings, user type embeddings, user biases, or user type biases. We observed improvements in all cases, with user embeddings helping most.

We plan to compare uernn to cnn-based methods that employ user embeddings (Amir et al., 2016), after replacing the lr layer of uernn by an mlp to allow non-linear combinations of comment and user embeddings.

Acknowledgments

This work was funded by Google's Digital News Initiative (project ml2p, contract 362826; see https://digitalnewsinitiative.com/). We are grateful to Gazzetta for the data they provided. We also thank Gazzetta's moderators for their feedback, insights, and advice.

References

  • Amir et al. (2016) S. Amir, B. C. Wallace, H. Lyu, P. Carvalho, and M. J. Silva. 2016. Modelling context with user embeddings for sarcasm detection in social media. In Proc. of CoNLL, pages 167–177, Berlin, Germany.
  • Bamman and Smith (2015) D. Bamman and N. A. Smith. 2015. Contextualized sarcasm detection on Twitter. In Proc. of the 9th International Conference on Web and Social Media, pages 574–577, Oxford, UK.
  • Benton et al. (2016) A. Benton, R. Arora, and M. Dredze. 2016. Learning multiview embeddings of Twitter users. In Proc. of ACL, pages 14–19, Berlin, Germany.
  • Chen et al. (2016) H. Chen, M. Sun, C. Tu, Y. Lin, and Z. Liu. 2016. Neural sentiment classification with user and product attention. In Proc. of EMNLP, pages 1650–1659, Austin, TX.
  • Cheng et al. (2015) J. Cheng, C. Danescu-Niculescu-Mizil, and J. Leskovec. 2015. Antisocial behavior in online discussion communities. In Proc. of the International AAAI Conference on Web and Social Media, pages 61–70, Oxford University, England.
  • Cho et al. (2014) K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In EMNLP, pages 1724–1734, Doha, Qatar.
  • Dadvar et al. (2013) M. Dadvar, D. Trieschnigg, R. Ordelman, and F. de Jong. 2013. Improving cyberbullying detection with user context. In ECIR, pages 693–696, Moscow, Russia.
  • Glorot and Bengio (2010) X. Glorot and Y. Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proc. of the International Conference on Artificial Intelligence and Statistics, pages 249–256, Sardinia, Italy.
  • Kingma and Ba (2015) D. P. Kingma and J. Ba. 2015. Adam: A method for stochastic optimization. In ICLR, San Diego, CA.
  • Lee et al. (2014) J.-T. Lee, M.-C. Yang, and H.-C. Rim. 2014. Discovering high-quality threaded discussions in online forums. Journal of Computer Science and Technology, 29(3):519–531.
  • Li et al. (2016) J. Li, M. Galley, C. Brockett, G. Spithourakis, J. Gao, and B. Dolan. 2016. A persona-based neural conversation model. In Proc. of ACL, pages 994–1003, Berlin, Germany.
  • van der Maaten and Hinton (2008) L. J. P. van der Maaten and G. E. Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605.
  • Mikolov et al. (2013) T. Mikolov, W.-t. Yih, and G. Zweig. 2013. Linguistic regularities in continuous space word representations. In NAACL-HLT, pages 746–751, Atlanta, GA.
  • Napoles et al. (2017) C. Napoles, A. Pappu, and J. Tetreault. 2017. Automatically identifying good conversations online (yes, they do exist!). In Proc. of the International AAAI Conference on Web and Social Media.
  • Pavlopoulos et al. (2017a) J. Pavlopoulos, P. Malakasiotis, and I. Androutsopoulos. 2017a. Deep learning for user comment moderation. In Proc. of the ACL Workshop on Abusive Language Online, Vancouver, Canada.
  • Pavlopoulos et al. (2017b) J. Pavlopoulos, P. Malakasiotis, and I. Androutsopoulos. 2017b. Deeper attention to abusive user content moderation. In EMNLP, Copenhagen, Denmark.
  • Pennington et al. (2014) J. Pennington, R. Socher, and C. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP, pages 1532–1543, Doha, Qatar.
  • Waseem and Hovy (2016) Z. Waseem and D. Hovy. 2016. Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In Proc. of NAACL Student Research Workshop, pages 88–93, San Diego, CA.
  • Wulczyn et al. (2017) E. Wulczyn, N. Thain, and L. Dixon. 2017. Ex machina: Personal attacks seen at scale. In WWW, pages 1391–1399, Perth, Australia.
  • Zhang et al. (2016) Q. Zhang, Y. Gong, J. Wu, H. Huang, and X. Huang. 2016. Retweet prediction with attention-based deep neural network. In Proc. of the International Conference on Information and Knowledge Management, pages 75–84, Indianapolis, IN.