Demographic Biases of Crowd Workers in Key Opinion Leaders Finding

10/18/2021 ∙ by Hossein A. Rahmani, et al. ∙ UCL ∙ Delft University of Technology

Key Opinion Leaders (KOLs) are people with such strong influence that their opinions are listened to when others make important decisions. Crowdsourcing provides an efficient and cost-effective means to gather data for the KOL finding task. However, data collected through crowdsourcing is affected by the inherent demographic biases of crowd workers. To avoid such demographic biases, we need to measure how biased each crowd worker is. In this paper, we propose a simple yet effective approach based on the demographic information of candidate KOLs and their counterfactual values. We argue that it is effective because of the extra information it provides, which we can consider together with the labeled data to curate a less biased dataset.

1. Introduction

Key Opinion Leaders (KOLs) are people with such a strong social and professional status that their recommendations and opinions are listened to when making important decisions. For instance, in the field of medical and health informatics, KOLs are the people who can influence public opinion and lead the medical community through their research papers, clinical practices, and early acceptance of new technologies. Traditionally, consulting companies provide services for identifying KOLs by conducting user surveys. These solutions, however, rely on a limited number of information sources and focus on a small number of involved clients, and are therefore not very effective in real-world scenarios or in sensitive domains. Existing studies (Li et al., 2013; Xu et al., 2010) address these problems using Machine Learning (ML) approaches that are scalable and able to deal with a large number of candidate KOLs. However, ML approaches require a large amount of labeled training data. Such datasets are hand-labeled by domain experts and are usually very hard to gather: finding KOLs is a time-consuming and typically difficult process even for domain experts. Consequently, models trained on such datasets are highly dependent on, and limited by, expert labels. By providing access to a large number of online workers, crowdsourcing has recently become one of the most promising approaches for collecting training data for ML models in tasks such as political ideology detection, detecting biased statements, and finding social influencers (Gadiraju and Yang, 2020; Parshotam, 2013; Hube and Fetahu, 2018; Iyyer et al., 2014; Arous et al., 2020). However, in many tasks, such as KOL mapping, the annotated data are usually affected by the biases of the crowd workers.

In this paper, we consider a crowdsourcing task in which crowd workers are asked to name as many KOLs as possible in a specific domain. We propose an approach to measure how biased a crowd worker is, through which we can mitigate worker biases and clean the collected data of contributions from biased crowd workers.

2. Related Work

Recent work has explored mitigating crowd worker biases. For example, Hube et al. (2019) focused on subjective tasks and tried to understand the influence of workers’ preferences on their performance. To do so, they examined the annotations of crowd workers on different topics to see the effect of workers’ opinions on their annotations. Their findings show that crowd workers with strong opinions produce biased annotations, and the approach they propose is promising for mitigating such bias and can improve the quality of the collected data. Chakraborty et al. (2017) analyzed the demographics of people who promote content in social networks, to understand whether content promoters are representative of the overall population or skewed towards particular groups. To this end, they collected extensive Twitter data on trending topics and studied the demographic biases of trends. Their analysis indicates that the demographics of the crowds who promote trends differ significantly from those of the overall Twitter population. In (Hube et al., 2018), the authors extensively analyzed the effect of crowd workers’ opinions on the quality of the annotated data. They proposed an approach that relies on the labels of the statements and the workers’ personal opinions on each statement’s topic; using this additional information, they are able to measure how biased a crowd worker is and to mitigate the measured bias. Raykar et al. (2009) proposed an approach based on combining labels provided by different types of crowd workers, i.e., experts and novices: to obtain the final labels of a task, they evaluate the different labels from both experts and novices and then estimate the actual labels. A few researchers (Karger et al., 2011; Liu et al., 2012) have addressed the problem of crowd workers’ biases by assuming different levels of expertise among workers and, based on that, proposing label aggregation models. These approaches typically improve the collected labels through majority voting among workers. However, this idea only helps when workers disagree; in subjective tasks such as KOL finding, biases may also occur even when labels are in complete agreement, due to the varying ideological backgrounds of workers.

3. Proposed Approach

We propose an approach for collecting data and measuring crowd worker biases for the task of labeling candidate (potential) KOLs as either KOL or non-KOL. Our approach can also be used for other similar tasks.

Our approach automates the finding and suggesting of potential candidate KOLs. To achieve this, we will prepare a crawling module that collects information through different APIs and by scraping different sources. In this study, we target two different aspects of KOLs: their professionalism in their topics (i.e., the scientific aspect) and their expertise in organizing events and conferences (i.e., the social aspect). In particular, we consider Google Scholar (https://scholar.google.com/), PubMed (https://pubmed.ncbi.nlm.nih.gov/), and ClinicalTrials.gov (https://clinicaltrials.gov/). However, our crawling module is not limited to these sources and can be applied to different information sources in various domains. In the next step, we ask crowd workers to suggest as many candidate KOLs as possible. Here, what defines a KOL is “influence”, and the crowd workers are the audience who are directly addressed. The crawling module provides useful information that helps crowd workers carefully assess the candidate KOLs. For example, among the scientific information, the number of citations of a potential KOL can be a good parameter for evaluating the quality of a candidate. To this end, we will present the set of collected features representing a candidate KOL, namely demographic, scientific, and social information, to a crowd worker. The KOL mapping task is to predict, on a rating scale, the likelihood that a candidate is a KOL based on the collected features. Then, we will ask each crowd worker to label k out of n candidate KOLs, where n is the number of all candidate KOLs. In this step, to account for crowd worker bias, we generate counterfactuals of the features of those candidate KOLs.
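To make the setup concrete, the following is a minimal sketch of how a candidate KOL record gathered by the crawling module could be organized before being shown to a crowd worker; the field names and feature choices are illustrative assumptions on our part, not the paper’s actual schema:

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class CandidateKOL:
    """One candidate KOL as presented to a crowd worker (hypothetical schema)."""
    candidate_id: str
    # Demographic attributes: the sensitive features later used for counterfactuals.
    demographic: Dict[str, Any] = field(default_factory=dict)  # e.g. {"sex": "female", "age": 52}
    # Scientific features crawled from sources such as Google Scholar and PubMed.
    scientific: Dict[str, Any] = field(default_factory=dict)   # e.g. {"citations": 1200, "publications": 85}
    # Social features, e.g. organized events/conferences or registered clinical trials.
    social: Dict[str, Any] = field(default_factory=dict)       # e.g. {"events_organized": 4, "trials": 2}

# Example record that a worker would rate on the chosen rating scale.
candidate = CandidateKOL(
    candidate_id="kol-0001",
    demographic={"sex": "female", "age": 52},
    scientific={"citations": 1200, "publications": 85},
    social={"events_organized": 4, "trials": 2},
)
```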

In this study, we consider a simple yet effective class of counterfactuals that can be generated by changing the value of demographic attributes such as the age, sex, and race of candidate KOLs. For example, if we want to address gender bias, we can generate counterfactual versions of candidate KOLs by changing their sex attribute. As shown in Eq. 1, we compute the bias of a crowd worker as the mean absolute difference of the rating scores provided for all pairs of a main candidate KOL and its counterfactual:

$$\mathrm{WorkerBias} = \frac{1}{N} \sum_{i=1}^{N} \left| r_i - \tilde{r}_i \right| \qquad (1)$$

where $r_i$ and $\tilde{r}_i$ are the rating scores for the $i$-th main and counterfactual candidate KOL, respectively, and $N$ is the number of rated pairs. Future work will concentrate on extending Eq. 1 to a weighted bias score that considers all feature categories, i.e., the demographic, scientific, and social aspects. In this case, crowd workers will assign rating scores for each dimension, and we compute the bias score as follows:

$$\mathrm{WorkerBias} = \frac{1}{N} \sum_{i=1}^{N} \left( w_d \left| r_i^{d} - \tilde{r}_i^{d} \right| + w_s \left| r_i^{s} - \tilde{r}_i^{s} \right| + w_o \left| r_i^{o} - \tilde{r}_i^{o} \right| \right) \qquad (2)$$

where $r_i^{d}$ and $\tilde{r}_i^{d}$ are the rating scores related to the demographic aspect, $r_i^{s}$ and $\tilde{r}_i^{s}$ correspond to the scientific aspect, $r_i^{o}$ and $\tilde{r}_i^{o}$ are related to the social aspect, and $w_d$, $w_s$, and $w_o$ are the corresponding aspect weights.
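A minimal sketch of how these quantities could be computed is given below; the function and variable names, the dict-shaped candidate record, and the equal default weights are our own assumptions rather than the paper’s implementation:

```python
import copy
from typing import Dict, List, Tuple

def make_counterfactual(candidate: dict, attribute: str = "sex",
                        mapping: Dict[str, str] = None) -> dict:
    """Return a copy of a candidate KOL record (a dict with a "demographic"
    sub-dict) in which one demographic attribute has been flipped."""
    mapping = mapping or {"male": "female", "female": "male"}
    counterfactual = copy.deepcopy(candidate)
    value = counterfactual["demographic"][attribute]
    counterfactual["demographic"][attribute] = mapping[value]
    return counterfactual

def worker_bias(rating_pairs: List[Tuple[float, float]]) -> float:
    """Eq. (1): mean absolute difference between the ratings a single worker
    gave to each (main, counterfactual) candidate pair."""
    return sum(abs(r - r_cf) for r, r_cf in rating_pairs) / len(rating_pairs)

def weighted_worker_bias(rating_pairs: List[Dict[str, Tuple[float, float]]],
                         weights: Dict[str, float] = None) -> float:
    """Eq. (2): per-aspect absolute differences (demographic, scientific,
    social) combined with aspect weights and averaged over rated pairs."""
    weights = weights or {"demographic": 1 / 3, "scientific": 1 / 3, "social": 1 / 3}
    total = 0.0
    for per_aspect in rating_pairs:
        total += sum(w * abs(per_aspect[a][0] - per_aspect[a][1])
                     for a, w in weights.items())
    return total / len(rating_pairs)
```

For example, a worker whose ratings over three (main, counterfactual) pairs were (4, 2), (5, 3), and (4, 4) would get worker_bias ≈ 1.33, flagging a rating pattern that changes with the flipped sensitive attribute.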

The final score indicates how biased a crowd worker is: the lower the WorkerBias score, the less biased the worker’s behavior, and, in contrast, higher WorkerBias scores indicate more biased behavior. Therefore, we can use this bias information in conjunction with the crowd worker responses to collect fairer labels and build a better dataset. For instance, we can define a threshold on the bias scores of crowd workers and filter out the labels from those whose bias score exceeds the threshold. One issue with generating counterfactual sensitive attributes arises when a crowd worker links a previously rated candidate KOL to its counterfactual: if the worker realizes she has just rated a very similar candidate, it becomes difficult to tell whether she is really biased. To address this issue we can consider several solutions: (1) we can vary the order in which candidate KOL information is presented to the crowd worker, placing a candidate and its counterfactual far apart; (2) we can add noise to some features, for instance by changing the age of the candidate KOL; (3) we can change irrelevant and unimportant attributes that have no effect on the crowd worker’s rating, for example the first name, last name, email, or phone number.
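As a sketch of the threshold-based filtering step mentioned above (the threshold value and the label layout are illustrative assumptions, not part of the paper):

```python
def filter_biased_workers(labels, worker_bias_scores, threshold=0.5):
    """Keep only the labels contributed by workers whose WorkerBias score
    is below the chosen threshold.

    labels: iterable of (worker_id, candidate_id, rating) tuples
    worker_bias_scores: dict mapping worker_id -> WorkerBias score
    threshold: illustrative cut-off; in practice it would be tuned empirically
    """
    return [(w, c, r) for (w, c, r) in labels
            if worker_bias_scores.get(w, float("inf")) < threshold]
```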

4. Conclusion and Future Work

In this position paper, we propose a simple yet effective method to measure the demographic biases of crowd workers using a counterfactual approach. Although we introduce this approach in the context of the key opinion leader finding problem, our proposed method can be applied to any social computing problem in which crowd workers classify data based on social or demographic information. In our future work, we first plan to evaluate this approach in an empirical study, comparing the dataset obtained using our approach with other reported results. Next, we want to extend this approach with a polynomial regression model in which different weights can be assigned to each attribute. Finally, we plan to explore how the existing methods fare against different fairness metrics.

References

  • I. Arous, J. Yang, M. Khayati, and P. Cudré-Mauroux (2020) OpenCrowd: a human-AI collaborative approach for finding social influencers via open-ended answers aggregation. In Proceedings of The Web Conference 2020, pp. 1851–1862.
  • A. Chakraborty, J. Messias, F. Benevenuto, S. Ghosh, N. Ganguly, and K. Gummadi (2017) Who makes trends? Understanding demographic biases in crowdsourced recommendations. In Proceedings of the International AAAI Conference on Web and Social Media, Vol. 11.
  • U. Gadiraju and J. Yang (2020) What can crowd computing do for the next generation of AI systems? In NeurIPS 2020 Crowd Science Workshop: Remoteness, Fairness, and Mechanisms as Challenges of Data Supply by Humans for Automation.
  • C. Hube, B. Fetahu, and U. Gadiraju (2018) LimitBias! Measuring worker biases in the crowdsourced collection of subjective judgments (short paper). CEUR Workshop Proceedings, Vol. 2276, pp. 78–82.
  • C. Hube, B. Fetahu, and U. Gadiraju (2019) Understanding and mitigating worker biases in the crowdsourced collection of subjective judgments. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–12.
  • C. Hube and B. Fetahu (2018) Detecting biased statements in Wikipedia. In Companion Proceedings of The Web Conference 2018, pp. 1779–1786.
  • M. Iyyer, P. Enns, J. Boyd-Graber, and P. Resnik (2014) Political ideology detection using recursive neural networks. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1113–1122.
  • D. R. Karger, S. Oh, and D. Shah (2011) Iterative learning for reliable crowdsourcing systems. In Advances in Neural Information Processing Systems 24, pp. 1953–1961.
  • Y. Li, S. Ma, Y. Zhang, R. Huang, et al. (2013) An improved mix framework for opinion leader identification in online learning communities. Knowledge-Based Systems 43, pp. 43–51.
  • Q. Liu, J. Peng, and A. Ihler (2012) Variational inference for crowdsourcing. In Advances in Neural Information Processing Systems 25, pp. 701–709.
  • K. Parshotam (2013) Crowd computing: a literature review and definition. In Proceedings of the South African Institute for Computer Scientists and Information Technologists Conference, pp. 121–130.
  • V. C. Raykar, S. Yu, L. H. Zhao, A. Jerebko, C. Florin, G. H. Valadez, L. Bogoni, and L. Moy (2009) Supervised learning from multiple experts: whom to trust when everyone lies a bit. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 889–896.
  • H. Xu, S. P. Stenner, S. Doan, K. B. Johnson, L. R. Waitman, and J. C. Denny (2010) MedEx: a medication information extraction system for clinical narratives. Journal of the American Medical Informatics Association 17 (1), pp. 19–24.