Adoption and application of the Biased-Annotator Competence Estimation (BACE) model to COVID-19 vaccine Twitter data: Human annotation, valence prediction, and persuasiveness
The traditional quantitative content analysis approach (human coding) has inherent weaknesses, such as the assumption that all human coders are equally accurate once intercoder reliability during training reaches a threshold score. We adapt a recent Bayesian statistical model from Political Science, the Biased-Annotator Competence Estimation (BACE) model (Tyler, 2021), which draws on Bayes' rule, to the Communication discipline. An important contribution of this model is that it takes each coder's potential biases and reliability into account and assigns each coder a competence parameter, which is then used to predict more accurate latent labels. In this extended abstract, we first critique the weaknesses of conventional human coding; we then apply the BACE model to COVID-19 vaccine Twitter data and compare it with other statistical models; finally, we propose crowdsourcing as a next step toward better understanding the persuasiveness of COVID-19 vaccine social media content.
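The abstract does not reproduce the BACE specification, but the general idea behind annotator-competence models can be sketched briefly. The Python snippet below is a simplified, Dawid-Skene-style EM estimator rather than the BACE model itself; the function name, toy labels, and update rules are illustrative assumptions only. It shows the core mechanism the abstract describes: each coder receives a competence estimate, and the latent labels are inferred by weighting coders according to that competence.

    # Illustrative sketch only: a simplified "one-coin" Dawid-Skene-style EM
    # estimator, NOT the BACE model. Each coder gets a competence estimate, and
    # latent labels are inferred by weighting coders by that competence.
    import numpy as np

    def estimate_labels(annotations, n_iter=50):
        """annotations: (n_items, n_coders) array of binary (0/1) labels."""
        n_items, n_coders = annotations.shape
        # Initialize soft label estimates with the majority vote (mean of 0/1 codes).
        p_true = annotations.mean(axis=1)

        for _ in range(n_iter):
            # M-step: a coder's competence is their expected agreement with the
            # current soft labels.
            agree = annotations * p_true[:, None] + (1 - annotations) * (1 - p_true)[:, None]
            competence = agree.mean(axis=0).clip(0.01, 0.99)  # clip for numerical stability

            # E-step: update soft labels, weighting each coder by the log-odds of
            # their competence (more competent coders count more).
            log_odds = np.log(competence / (1 - competence))
            scores = (2 * annotations - 1) @ log_odds  # + if coder said 1, - if 0
            p_true = 1 / (1 + np.exp(-scores))

        return p_true, competence

    # Hypothetical toy data: 4 tweets coded by 3 coders; the third coder often disagrees.
    labels = np.array([[1, 1, 0],
                       [1, 1, 1],
                       [0, 0, 1],
                       [0, 0, 0]])
    p_hat, competence = estimate_labels(labels)
    print("P(positive valence):", p_hat.round(2))
    print("Estimated coder competence:", competence.round(2))

On this toy input, the third coder's frequent disagreement yields a lower competence estimate, so their codes carry less weight in the inferred labels; the BACE model additionally models coder-specific biases within a fully Bayesian framework, which this sketch does not attempt.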