Hateminers: Detecting Hate Speech against Women

12/17/2018 ∙ by Punyajoy Saha, et al. ∙ IIT Kharagpur

With the online proliferation of hate speech, there is an urgent need for systems that can detect such harmful content. In this paper, we present the machine learning models developed for the Automatic Misogyny Identification (AMI) shared task at EVALITA 2018. We generate three types of features, Sentence Embeddings, TF-IDF vectors, and BoW vectors, to represent each tweet. These features are then concatenated and fed into the machine learning models. Our model came first for the English Subtask A and fifth for the English Subtask B. We release our winning model for public use; it is available at https://github.com/punyajoy/Hateminers-EVALITA.




1 Introduction

Twitter defines hateful conduct as “you may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease” (https://goo.gl/RSSrrZ). With the online proliferation of hate speech, several countries such as the USA, Germany, and France have laws to ban such hateful content. This situation calls for online hate detection systems to curb the rapidly increasing hate speech. In particular, there is a rise in online violence against women (misogyny).

According to Pew research (http://www.pewinternet.org/2017/07/11/online-harassment-2017), women encounter sexualized forms of abuse at much higher rates than men. Sites like Twitter are failing to act promptly against online misogyny and take too long to remove such content (https://goo.gl/zbYTWA). The research community has now started to focus on this issue and is developing methods to detect online misogyny [Hewitt et al.2016b, Fersini et al.2018b, Poland2016].

In this paper, we focus on the detection of misogynous posts on Twitter that are written in English and describe our submission (Hateminers) for the task of Automatic Misogyny Identification (AMI) at EVALITA 2018 [Fersini et al.2018a]. We concatenate three types of features to represent each tweet and use machine learning models for classification.

For the English Task A, our team (“Hateminers”) ranked first at the AMI shared task at the EVALITA 2018 competition, with an accuracy of 70.4%. For the English Task B, our best run ranked fifth, with a macro-average F1-score of 0.37.

2 Related works

The research on hate speech is gaining momentum, with several works focusing on different aspects such as the analysis of hate speech [ElSherief et al.2018, Mathew et al.2018, Silva et al.2016, Chandrasekharan et al.2017, Gröndahl et al.2018] and the detection of hate speech [Fortuna and Nunes2018, Davidson et al.2017, Qian et al.2018].

Recently, there has been growing interest in the identification of misogynous content online [Ging and Siapera2018]. Some of the initial work in this area was performed by [Hewitt et al.2016a]. In [Fox et al.2015], the authors study the roles of anonymity and interactivity in response to sexist content posted on a social networking site. They concluded that interacting with sexist content anonymously promotes greater hostile sexism than interacting with it using an identified account.

3 Dataset and task description

The AMI shared task at EVALITA 2018 provided two balanced datasets, one for English and one for Italian. We participated only in the English shared task, so we present the systems developed for the English AMI task.

3.1 Dataset

The training dataset consisted of 4000 labelled tweets and the test dataset had 1000 unlabelled tweets. The distribution of different labels is presented in Table 1. The English corpora have been manually labelled by several annotators according to three levels:

  • Misogyny (Misogyny vs Not Misogyny)

  • Misogynistic category (discredit, derailing, dominance, sexual harassment & threats of violence, stereotype & objectification)

  • Target (active vs passive)

As observed from Table 1, the label distribution for Task A is balanced, while in Task B the distribution is highly unbalanced for both misogyny behaviors and targets. We will explain these categories in the following section.

Type                    Label               Training   Test
Misogyny                Misogyny                1785    540
                        Non-Misogyny            2215    460
Misogynistic category   Discredit               1014    141
                        Derailing                 92     11
                        Dominance                148    124
                        Sexual Harassment        352     44
                        Stereotype               179    140
Misogyny target         Active                  1058    401
                        Passive                  727     59

Table 1: The distribution of different labels in the English language dataset.

3.2 Tasks

Task A: First, participants are asked to perform a binary classification of the tweets as either misogynous or not misogynous. The performance of the system is measured by accuracy.

Task B: Next, participants are asked to classify the misogynous tweets according to both the misogynistic behaviour and the target of the message. The evaluation metric for this task is the macro F1-score.

A tweet must be classified uniquely within one of the following categories:

  1. Stereotype & objectification: a widely held but fixed and oversimplified image or idea of a woman; description of women’s physical appeal and/or comparisons to narrow standards.

  2. Dominance: to assert the superiority of men over women to highlight gender inequality.

  3. Derailing: to justify woman abuse, rejecting male responsibility; an attempt to disrupt the conversation in order to redirect women’s conversations on something more comfortable for men.

  4. Sexual harassment & threats of violence: to describe actions as sexual advances, requests for sexual favors, harassment of a sexual nature; intent to physically assert power over women through threats of violence.

  5. Discredit: slurring over women with no other larger intention.

On the other hand, the target classification is again binary:

  1. Active (individual): the text includes offensive messages purposely sent to a specific target.

  2. Passive (generic): it refers to messages posted to many potential receivers.

4 System description

In this section, we will explain the details regarding the features and machine learning models used for the task.

4.1 Feature generation


We pre-process the tweets before performing the feature extraction. The following steps were followed:

  • Remove all URLs.

  • Convert the tweet text to lowercase.

  • Replace contractions such as “ain’t” and “i’ll” with their expanded forms.

  • Remove emojis, stop words, and punctuation.

  • Perform tokenization and stemming.
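
The steps above can be sketched as follows. This is a minimal illustration, not the authors' code: the contraction map and stop-word list are small illustrative subsets, and the naive suffix stripper stands in for whichever stemmer (e.g. Porter) the system actually used.

```python
import re
import string

# Illustrative subsets; the paper does not list the exact resources used.
CONTRACTIONS = {"ain't": "is not", "i'll": "i will", "don't": "do not"}
STOP_WORDS = {"a", "an", "the", "is", "are", "to", "and", "of"}

def simple_stem(token):
    # Naive suffix stripping standing in for a real stemmer (e.g. Porter).
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) - len(suffix) >= 2:
            return token[: -len(suffix)]
    return token

def preprocess(tweet):
    text = re.sub(r"https?://\S+", "", tweet)            # remove URLs
    text = text.lower()                                  # lowercase
    for short, full in CONTRACTIONS.items():             # expand contractions
        text = text.replace(short, full)
    text = re.sub(r"[^\x00-\x7f]", "", text)             # drop emojis / non-ASCII
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = [t for t in text.split() if t not in STOP_WORDS]  # tokenize, drop stop words
    return [simple_stem(t) for t in tokens]

print(preprocess("I'll be going to http://x.co today!"))
```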

Feature vector: The pre-processed tweets were used to generate the features for the classifiers. We generated three types of features and concatenated them for each tweet. We experimented with the features individually and found that the combination of all three worked best. We explain each feature type below.

  • Sentence embeddings: The sentence vector is generated using a Universal Sentence Encoder [Cer et al.2018], which outputs a 512-dimensional vector representation of the text. Recent work [Conneau et al.2017] has shown stronger performance using pre-trained sentence-level embeddings compared to word-level embeddings. We provide each of the pre-processed tweets as input to the sentence encoder and use the output vector for our task.

  • TF-IDF vector: TF-IDF vectors were generated using scikit-learn’s TF-IDF vectorizer (https://goo.gl/9FrZLD) on the pre-processed tweets.

  • Bag of words vector (BoWV): The BoWV approach uses the average of the GloVe [Pennington et al.2014] word embeddings to represent a sentence. We set the size of the vector embeddings to 300.
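
As a rough sketch of how the three feature blocks might be assembled, the snippet below stubs the Universal Sentence Encoder output and the GloVe lookup table with random vectors; only the TF-IDF block uses the real scikit-learn vectorizer. The tweet texts are illustrative; the 512 and 300 dimensions follow the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

rng = np.random.default_rng(0)
tweets = ["she is terrible", "great talk by her today"]

# 1) Sentence embeddings (stub standing in for the Universal Sentence Encoder, 512-d).
sent_emb = rng.normal(size=(len(tweets), 512))

# 2) TF-IDF vectors fitted on the corpus.
tfidf = TfidfVectorizer().fit_transform(tweets).toarray()

# 3) BoWV: average of per-token word embeddings (stub standing in for GloVe, 300-d).
glove = {w: rng.normal(size=300) for t in tweets for w in t.split()}
bowv = np.array([np.mean([glove[w] for w in t.split()], axis=0) for t in tweets])

# Concatenate the three blocks into one feature matrix per tweet.
features = np.hstack([sent_emb, tfidf, bowv])
print(features.shape)  # (num_tweets, 512 + vocab_size + 300)
```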

4.2 Classifiers used

We experiment with three machine learning models for Task A & B.

Logistic Regression (LR): We use the LR implementation available in scikit-learn (https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html). We set the regularization parameter to 1.0 for all the tasks.

XGBoost (XGB): XGB (https://github.com/dmlc/xgboost) is an optimized distributed gradient boosting library designed to be highly efficient, flexible, and portable. We set the objective parameter to ‘binary:logistic’; two further hyperparameters were set to 0.8 and 3.0.

CatBoost (CB): CB [Dorogush et al.2017] is a state-of-the-art open-source library for gradient boosting on decision trees developed by Yandex (https://catboost.ai). We set one of its hyperparameters to 0.8 for all experiments.
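
A minimal sketch of the training setup, assuming scikit-learn's API. The random features and labels are placeholders for the concatenated feature matrix and the misogyny labels; the paper does not specify hyperparameters beyond those listed above, and the commented XGBoost/CatBoost calls only indicate the analogous interfaces.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))     # stand-in for the concatenated feature matrix
y = rng.integers(0, 2, size=100)   # stand-in binary labels (misogynous or not)

# Logistic Regression with the regularization parameter at 1.0, as in the paper.
clf = LogisticRegression(C=1.0, max_iter=1000).fit(X, y)
preds = clf.predict(X)

# XGBoost and CatBoost follow the same fit/predict pattern, e.g.:
#   xgboost.XGBClassifier(objective="binary:logistic").fit(X, y)
#   catboost.CatBoostClassifier().fit(X, y)
```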

Figure 1: AMI results for the English Subtask A (Misogyny classification). Our system came in the top position.
Figure 2: AMI results for the English Subtask B (Category and target classification). Our best system came in fifth position (third best team).

5 Results

The results of our system for English Task A are presented in Figure 1 and for English Task B in Figure 2. Our systems captured the top three ranks for the English Subtask A of misogyny identification. For Subtask B, our best system came in fifth position (third best team).

We obtained the best result for English Subtask A in run#1 (0.704 accuracy), which used the Logistic Regression classifier and ranked first. Our other two runs, both using the CatBoost model, ranked second and third in the task.

For the English Subtask B, in which we needed to classify the category and target of misogyny, we kept the same set of features as for Task A. Our best system ranked fifth (0.37 average F-measure for run#3) and used the CatBoost classifier for both category and target classification.

6 Discussion

We found that our system was able to achieve good performance when classifying the targets, but not in category classification. On closer inspection of the Subtask B results (https://amievalita2018.files.wordpress.com/2018/11/english-detailed-results-category-target.pdf), we found that the main reason for the poor performance was the high data imbalance. We observe that several of the submitted systems perform poorly on the different categories of Task B. Under-represented categories such as DERAILING and DOMINANCE were hard to detect due to this imbalance.

7 Conclusion

In this paper we present our approach to detecting misogynous tweets on Twitter. We generate sentence embeddings, TF-IDF vectors, and BoW vectors for each tweet and then concatenate them. These vectors are then used as features for models such as CatBoost and Logistic Regression. Our models occupied the top three positions for English Subtask A, and our best model for English Subtask B came in fifth position (third best team).

We have also made the winning model public for other researchers to use.


  • [Cer et al.2018] Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175.
  • [Chandrasekharan et al.2017] Eshwar Chandrasekharan, Umashanthi Pavalanathan, Anirudh Srinivasan, Adam Glynn, Jacob Eisenstein, and Eric Gilbert. 2017. You can’t stay here: The efficacy of reddit’s 2015 ban examined through hate speech. Proceedings of the ACM on Human-Computer Interaction, 1(CSCW):31.
  • [Conneau et al.2017] Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In

    Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

    , pages 670–680.
  • [Davidson et al.2017] Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. arXiv preprint arXiv:1703.04009.
  • [Dorogush et al.2017] Anna Veronika Dorogush, Vasily Ershov, and Andrey Gulin. 2017. Catboost: gradient boosting with categorical features support.
  • [ElSherief et al.2018] Mai ElSherief, Shirin Nilizadeh, Dana Nguyen, Giovanni Vigna, and Elizabeth M. Belding-Royer. 2018. Peer to peer hate: Hate speech instigators and their targets. In ICWSM.
  • [Fersini et al.2018a] Elisabetta Fersini, Debora Nozza, and Paolo Rosso. 2018a. Overview of the EVALITA 2018 task on Automatic Misogyny Identification (AMI).
  • [Fersini et al.2018b] Elisabetta Fersini, Paolo Rosso, and Maria Anzovino. 2018b. Overview of the task on Automatic Misogyny Identification at IberEval 2018.
  • [Fortuna and Nunes2018] Paula Fortuna and Sérgio Nunes. 2018. A survey on automatic detection of hate speech in text. ACM Computing Surveys (CSUR), 51(4):85.
  • [Fox et al.2015] Jesse Fox, Carlos Cruz, and Ji Young Lee. 2015. Perpetuating online sexism offline: Anonymity, interactivity, and the effects of sexist hashtags on social media. Computers in Human Behavior, 52:436–442.
  • [Ging and Siapera2018] Debbie Ging and Eugenia Siapera. 2018. Special issue on online misogyny.
  • [Gröndahl et al.2018] Tommi Gröndahl, Luca Pajola, Mika Juuti, Mauro Conti, and N. Asokan. 2018. All you need is “love”: Evading hate-speech detection. arXiv preprint arXiv:1808.09115.
  • [Hewitt et al.2016a] Sarah Hewitt, T. Tiropanis, and C. Bokhove. 2016a. The problem of identifying misogynist language on twitter (and other online social spaces). In Proceedings of the 8th ACM Conference on Web Science, WebSci ’16, pages 333–335. ACM.
  • [Hewitt et al.2016b] Sarah Hewitt, Thanassis Tiropanis, and Christian Bokhove. 2016b. The problem of identifying misogynist language on twitter (and other online social spaces). In Proceedings of the 8th ACM Conference on Web Science, pages 333–335. ACM.
  • [Mathew et al.2018] Binny Mathew, Ritam Dutt, Pawan Goyal, and Animesh Mukherjee. 2018. Spread of hate speech in online social media. arXiv preprint arXiv:1812.01693.
  • [Pennington et al.2014] Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543.
  • [Poland2016] Bailey Poland. 2016. Haters: Harassment, abuse, and violence online. U of Nebraska Press.
  • [Qian et al.2018] Jing Qian, Mai ElSherief, Elizabeth Belding, and William Yang Wang. 2018. Hierarchical cvae for fine-grained hate speech classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3550–3559.
  • [Silva et al.2016] Leandro Araújo Silva, Mainack Mondal, Denzil Correa, Fabrício Benevenuto, and Ingmar Weber. 2016. Analyzing the targets of hate in online social media. In ICWSM, pages 687–690.