Algorithmic bias amplifies opinion polarization: A bounded confidence model

03/06/2018 ∙ Alina Sîrbu et al. ∙ University of Pisa

Abstract

The flow of information reaching us via online media platforms is optimized not by its information content or relevance but by popularity and proximity to the target. This is typically done in order to maximise platform usage. As a side effect, it introduces an algorithmic bias that is believed to enhance the polarization of societal debate. To study this phenomenon, we modify the well-known continuous opinion dynamics model of bounded confidence to account for algorithmic bias and investigate its consequences. In the simplest version of the original model, pairs of discussion participants are chosen at random and their opinions get closer to each other if they are within a fixed tolerance level. We modify this selection rule of the discussion partners: individuals whose opinions are already close to each other are chosen with enhanced probability, mimicking the behavior of online media, which suggest interaction with similar peers. As a result we observe: a) an increased tendency towards polarization, which emerges also in conditions where the original model would predict convergence, and b) a dramatic slowing down of the speed at which convergence to the asymptotic state is reached, which makes the system highly unstable. Polarization is further augmented by a fragmented initial population.

1 Introduction

Political polarization is a widely observed and steadily worsening trend in modern western societies [1, 2, 21, 22], with such concomitants as “alternative realities”, “filter bubbles”, “echo chambers”, and “fake news”. Several causes have been identified (see, e.g., [3]), but there is increasing evidence that the new online media are one of them [4, 5, 6]. Earlier it was assumed that traditional mass media mostly influence the politically active elite of society and only indirectly affect polarization of the entire population. The recent rise of online media, the ubiquity of the Internet with all information within reach of a few clicks, and the widespread use of online social networks have increased the number of communication channels by which political information can reach citizens. Somewhat counterintuitively, this has not led to more balanced information acquisition and a stronger tendency towards consensus, as argued in [7]; quite the contrary. One reason may be that the new media have enhanced the reachability of people, which can be exploited to transmit simplified political answers to complex questions and thus acts toward polarization [8]. Moreover, the stream of news in the new media is organized not in a balanced way but by algorithms built to maximise platform usage. It is conjectured that this generates an “algorithmic bias”, which artificially enhances opinion polarization. This is an artefact of online platforms, also called “algorithmic segregation” [9]. The link between opinion polarization and algorithmic bias from online platforms has not been proven to date, so the aim of the present paper is to study this effect with a simple, bounded-confidence-type opinion dynamics model.

A considerable and rapidly increasing part of the population no longer uses traditional media (printed press, radio, TV or even online journals) for obtaining news [10, 11, 12], but turns to new media such as online social networks or blogs. However, the flow of news in the new media is selected not by information value but by popularity, by “likes” [13]. As people tend to identify with views similar to their own and are more likely to like the corresponding news, it is in the interest of the service providers to channel the information in a targeted way from the start. This means users may never even be confronted with narratives different from their favorite ones. Considerable effort is devoted to developing efficient algorithms for channeling the news, so as to provide the stream with the highest chance of collecting the maximum number of likes.

Another important factor pointing in the same direction is the “share” function, which is largely responsible for the fast spreading of news and thus for enhancing popularity. This spreading takes place on the social network, where links are formed mostly as a consequence of homophily, i.e., information is exchanged between people with similar views. The variety of a person's interactions (family, school, work, hobby, etc.) may contribute to a diversification of information sources [14]; nevertheless, regarding political views homophily seems particularly strong [15]. Sharing among users with similar beliefs has also been demonstrated in the context of the spreading of misinformation on Facebook, leading to an echo-chamber effect [20], as well as on Twitter, where partisan users were shown to play an important role in polarization [24].

Network effects are obviously very important in the spreading of news and opinions; however, in the present study we ignore the network structure of the system and focus exclusively on the consequences of the algorithmic bias in selecting the content presented to users. This corresponds to a mean field approach, which is widely used as a first approximation to spreading problems [16].

The task is therefore to model the evolution of the attitudes of a society, given a bias in selecting the partners whose opinions are confronted. Recent years have seen the introduction of several models of opinion dynamics [19]; however, none of them includes algorithmic bias in the sense described here. Hence we provide an extension of one of the existing models to study this effect. For the sake of simplicity we use a bounded confidence opinion dynamics model [17], which is known to be able to describe both consensus and polarization of opinions, depending on the tolerance level of the agents. In the mean field version, two agents are selected at random and their opinions, represented by real numbers, get closer if they are within the tolerance level. The bias is introduced such that opinions are already taken into account when the partners are selected: agents with closer opinions are selected with higher probability. With this simple modification we see two effects. First, the tendency toward consensus is hindered, i.e., a larger tolerance level is needed to achieve consensus; second, the approach to the asymptotic state is slowed down tremendously. Therefore, the selection bias affects the opinion formation process in two profound ways: a) by exacerbating polarization, which emerges also in conditions where the original model would predict consensus, and b) by slowing down the spreading process, which makes the system highly unstable.

The paper is organized as follows. In the next section we introduce the model in detail. Section 3 contains the results of the simulations. We close the paper with a discussion and an account of further research.

2 The model

The original bounded confidence model [17] considers a population of N individuals, where each individual i holds a continuous opinion x_i ∈ [0, 1]. This opinion can be interpreted as the degree to which an individual agrees or disagrees with a certain position. Individuals are connected by a complete social network and interact pairwise at discrete time steps. The interacting pair (i, j) is selected randomly from the population at each time point t. After the interaction, the two opinions x_i and x_j may change, depending on a so-called bounded confidence parameter ε. This parameter can be seen as a measure of the open-mindedness of individuals in a population: it defines a threshold on the distance between the opinions of the two individuals, beyond which communication between them is not possible due to conflicting views.

If we define the distance between two opinions x_i and x_j as d_ij = |x_i − x_j|, then information is exchanged between the two individuals only if d_ij < ε; otherwise nothing happens. The exchange of information results in the two opinions becoming closer to one another, modulated by a convergence parameter μ:

x_i ← x_i + μ (x_j − x_i)
x_j ← x_j + μ (x_i − x_j)

In the following we consider only the case μ = 0.5, in which both individuals take the average of their opinions.
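As an illustration, here is a minimal sketch of this interaction rule in Python (the function name and the list representation of the opinions are ours, not from the original paper):

```python
def interact(x, i, j, eps, mu=0.5):
    """One bounded confidence interaction of the original model [17].

    If the two opinions are within the confidence bound eps, both move
    towards each other by a fraction mu of their difference; with
    mu = 0.5 they meet at their average. Otherwise nothing happens.
    """
    if abs(x[i] - x[j]) < eps:
        xi, xj = x[i], x[j]
        x[i] = xi + mu * (xj - xi)
        x[j] = xj + mu * (xi - xj)
```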

To introduce algorithmic bias in the interaction between individuals, we modify the procedure by which the pair (i, j) is selected. In the original model, i and j are selected uniformly at random from the population. Instead, algorithmic bias makes encounters between similar individuals more probable. To account for this, we first select i uniformly at random; the selection of j is then performed with a probability p_i(j) that depends on the distance d_ij:

p_i(j) = d_ij^(−γ) / Σ_{k≠i} d_ik^(−γ)    (1)

In this way, the probability to select j once i was selected is larger if d_ij is smaller, i.e. alike individuals interact more. The parameter γ ≥ 0 is the strength of the algorithmic bias: the larger γ, the rarer the encounters among individuals with distant opinions. For γ = 0 we obtain the original bounded confidence model, with uniform selection probability p_i(j) = 1/(N − 1).

In the following we will analyse the new model numerically, through simulation of the opinion formation process. To avoid undefined operations in Eq 1 when d_ij = 0 (i.e. x_i = x_j), we use a lower bound d_min > 0 for the distance: if d_ij < d_min, then d_ij is replaced by d_min in Eq 1. We use d_min = 10⁻⁴ in the following.
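A sketch of this biased pair selection, assuming the opinions are stored in a NumPy array (function and variable names are ours):

```python
import numpy as np

def select_pair(x, gamma, d_min=1e-4, rng=None):
    """Biased pair selection of Eq 1: i is drawn uniformly; j is drawn
    with probability proportional to d_ij**(-gamma), with distances
    clipped from below at d_min to avoid division by zero.
    gamma = 0 recovers uniform selection, i.e. the original model.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(x)
    i = rng.integers(n)
    d = np.maximum(np.abs(x - x[i]), d_min)  # clipped opinion distances
    w = d ** (-gamma)
    w[i] = 0.0                               # i never interacts with itself
    j = rng.choice(n, p=w / w.sum())
    return i, j
```

A full simulation step then consists of calling select_pair followed by interact, repeated until the opinion configuration is stable.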

3 Results

In order to understand how the introduction of the algorithmic bias affects the model behaviour, we study the model under multiple criteria for various combinations of the parameters ε and γ. We are interested in whether the population converges to consensus or to multiple opinion clusters (Section 3.1), and how fast convergence appears (Section 3.2). For this we concentrate on the transition between one and two clusters. We also consider the influence of the size of the population on the behaviour observed, both for the original and the extended model (Section 3.3). Furthermore, the effect of a segregated initial population is studied (Section 3.4). For each analysis we repeat simulations multiple times to account for the stochastic nature of the model, and show average values obtained for each criterion above.

3.1 Consensus versus opinion segregation

Figure 1: Number of clusters obtained for various ε and γ. The top panel shows the full (ε, γ) space explored, while the bottom panel zooms into the region of the transition between one and two clusters. Values are averaged over 85 runs.

The behaviour of the original bounded confidence model is determined by the parameter ε [17]. When ε is large enough, an initially uniformly random population converges to consensus, while as ε decreases, clusters emerge in the population. It was shown that the number of major clusters can be approximated by 1/(2ε). This approximation ignores minor clusters that may emerge in some situations [18].
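As a worked example of this rule: for ε = 0.2 one obtains 1/(2ε) = 2.5, so roughly two major clusters are expected, while for ε = 0.3 one obtains 1/(2ε) ≈ 1.7, so the population typically converges to a single major cluster.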

In the following we analyse the number of clusters obtained for our extended model, starting from a uniform initial distribution of opinions, for a fixed population size N. We concentrate on the area of the (ε, γ) space where, in the original model, the number of clusters is smaller than or equal to 2, i.e. ε ≥ 0.25. In order to quantify the number of clusters, given the existence of major and minor clusters, we use the cluster participation ratio as a criterion. This takes into account not only the number of clusters but also the fraction of the population in each, thus measuring the effective number of clusters. Hence two perfectly equal clusters result in a cluster participation ratio of 2, while if one cluster is larger, the measure takes a value between 1 and 2. The effective number of clusters measured in this section is thus computed as:

C = (Σ_i c_i)² / Σ_i c_i²    (2)

where c_i is the size of cluster i.
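A direct transcription of Eq 2, assuming the cluster sizes have already been extracted from the final opinion distribution (the function name is ours):

```python
import numpy as np

def effective_clusters(sizes):
    """Cluster participation ratio of Eq 2: the effective number of
    clusters, weighting each cluster by its share of the population."""
    c = np.asarray(sizes, dtype=float)
    return c.sum() ** 2 / np.sum(c ** 2)
```

For example, effective_clusters([50, 50]) returns 2.0, while effective_clusters([90, 10]) returns about 1.22, reflecting one dominant cluster.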

Fig 1 displays the effective number of clusters for various ε and γ values, using averages over multiple runs. The top panel shows results over the full range of γ explored, while the bottom panel zooms into the region of weak bias. Note that in these simulations the population converges to either one, two or three major clusters of similar mass, plus some negligible minor clusters. For the same parameter setting, the population may converge to one cluster in some simulations and to two clusters in others. We present the average values obtained over 85 independent runs.

Figure 2: Total number of interactions required for convergence normalized by the number of individuals, averaged over 85 runs.

The plot shows that our simulations reproduce perfectly the behaviour of the original model (γ = 0), where the transition between two clusters and one takes place as ε crosses its threshold value. It is clear that the introduction of γ > 0 causes an increase in the effective number of clusters compared to the original model. For instance, at values of ε for which the original model results in one cluster, new clusters start to emerge in our model already at moderate bias; for stronger bias the transition starts even earlier in ε, with the average number of clusters coming very close to 2. For smaller ε, where the original number of clusters was already 2, the effective number of clusters increases towards 3. Hence, it appears from our simulations that algorithmic bias causes segregation in the bounded confidence model, with the number of clusters growing as the bias increases.

3.2 Time to convergence

While the asymptotic number of opinion clusters is very important, the time needed to obtain these clusters is equally so. In a real setting, the available time is finite, so if consensus forms only after a very long period of time, it may never actually emerge in the real population. Thus, we measure the time needed for convergence (to either one or more opinion clusters) in our extended model, counted as the total number of pairwise interactions required to reach a stable configuration, divided by N, the population size.

Fig 2 shows how the total number of interactions required depends on both ε and γ. It is clear that the time to convergence grows very fast with γ, for all ε values, including those for which the population converges to consensus. This indicates that, in a real setting with finite observation time, this consensus may not emerge at all. Hence γ has a double segregation effect: not only does the number of clusters grow, but consensus also becomes extremely slow.

To support this observation, we show in Fig 3 the evolution of the population for various values of γ, at two values of ε close to the transition. In the first case, the population always converges to one cluster; however, we can notice the increase in the number of iterations required as γ grows, with many clusters coexisting for a long period of time before convergence. In the second case, the original Deffuant model results in one cluster, with fast convergence. As γ grows, convergence first slows down, and when γ reaches a certain threshold two clusters emerge. The intermediate value of γ is particularly interesting because it lies close to the transition: initially two clusters coexist for a while, but they eventually merge into one cluster. In other simulation instances with the same parameter values, the two clusters never merge.

Figure 3: Evolution of the population of opinions for various ε and γ values. The first row corresponds to one value of ε and the second row to the other; in both rows γ increases from left to right.

Another observation emerging from Fig 2 is that, besides the general fast growth of the time to convergence, we also observe a smaller peak in the convergence time for ε between 0.25 and 0.35, for all values of γ. This corresponds to a slowing down of convergence around the phase transition (from one to two clusters), a phenomenon well known in physics as critical slowing down.

One may argue, however, that measuring the time as the total number of interactions may inflate the figures. Each interaction can have three outcomes: (1) nothing happens because a pair of individuals with identical opinions (d_ij = 0) was selected; (2) nothing happens because the opinion distance exceeds the bounded confidence threshold; (3) the two opinions actually change. In the following we refer to the third type as ‘active’ interactions.
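A sketch of how the three outcomes could be told apart when tallying only active interactions (the labels are ours):

```python
def outcome(x, i, j, eps):
    """Classify one encounter into the three outcomes listed above."""
    d = abs(x[i] - x[j])
    if d == 0:
        return "identical"  # (1) opinions already equal, nothing happens
    if d >= eps:
        return "blocked"    # (2) beyond the confidence bound, nothing happens
    return "active"         # (3) both opinions actually change
```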

Fig 4 details the total and active number of interactions for one example value of ε. Both measures grow extremely fast with γ, with only a small difference visible between active and total interactions. This extremely fast growth of the convergence time means that, in practice, consensus is hindered even by weak algorithmic bias: since consensus is slow to form, the population remains in a disordered state for a long time.

3.3 Finite size effects

Figure 4: Time to convergence. Normalized total, non-null-difference and active numbers of interactions required for convergence, for one example value of ε. The reference curve is shown as a visual aid only.
Figure 5: Number of clusters without algorithmic bias. Effective number of clusters obtained for various ε and population sizes N, with γ = 0. Values are averaged over 200, 125, 85, 45 and 45 runs for the five population sizes, respectively. Error bars show one standard deviation from the mean.

A third analysis that we performed aimed at understanding whether the size of the population plays a role in the effect of the algorithmic bias. Again, this is important for realistic scenarios, since opinion formation may happen both at small and at large scale. Hence we look at the transition between consensus and segregation for variable population sizes, both for the original and for the extended model.

Figure 6: Number of clusters with algorithmic bias. Effective number of clusters obtained for various ε, γ and population sizes N. Values are averaged over 200, 125, 85, 45 and 45 runs for the five population sizes, respectively. Error bars show one standard deviation from the mean.

In the original model, the transition between consecutive numbers of clusters is shown to be continuous, with an interval of ε values in which both outcomes can appear in different simulation instances (see Fig 4 in [17]); this holds in particular for the transition between one and two clusters. We performed numerical simulations to test whether this interval changes with the population size N, given that, to the authors’ knowledge, a detailed study in this direction does not exist for the bounded confidence model. Fig 5 shows the mean effective number of clusters over multiple runs obtained with γ = 0. Error bars represent one standard deviation from the mean. It is clear that, as N increases, the transition becomes more abrupt, i.e. the transition interval decreases in length and concentrates around a single threshold value of ε. For very small populations, in particular, the number of clusters in the transition area is larger, probably due to a lower density of opinions in the starting population, which facilitates the formation of clusters even where larger populations would converge to consensus. Hence we can conclude that a small N can also favour segregation. The error bars are almost invisible outside the phase-transition interval but quite large inside it: within the transition interval some simulations converge to one cluster and others to two, yielding large standard deviations from the mean.

To analyse the effect of the population size when γ > 0, we consider two different ε values (0.3 and 0.32). These were chosen because for γ = 0 they yield one cluster, while segregation emerges as γ grows. Fig 6 shows the effective number of clusters obtained for various population sizes. Again, the transition from one to two clusters becomes steeper as N increases, with small population sizes favouring segregation. In these conditions, it seems that algorithmic bias can actually be more efficient in hindering consensus in smaller groups.

3.4 Effect of the initial condition

Figure 7: Initial condition. Effect of the initial condition on the effective number of clusters (averages over 10 runs).
Figure 8: Effect of the initial condition on the convergence time measured in number of active interactions (averages over 10 runs).

The previous results were obtained from numerical simulations with uniformly random initial opinions. In reality, however, opinion formation may start from slightly fragmented initial conditions. To simulate this, we artificially introduced a symmetric gap around the central opinion value 0.5, representing a population in which opinions are already forming. The width of the gap was varied in order to understand the effect of various fragmentation levels, both for the original bounded confidence model (γ = 0) and for our extension.
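One way such a gapped initial condition could be generated (the paper specifies only a symmetric gap, so the exact construction below is an assumption):

```python
import numpy as np

def fragmented_opinions(n, gap, rng=None):
    """Uniform opinions on [0, 1] with a symmetric gap of width `gap`
    removed around the centre 0.5, so that two loose opinion groups
    are already present at the start."""
    if rng is None:
        rng = np.random.default_rng()
    lo, hi = 0.5 - gap / 2.0, 0.5 + gap / 2.0
    # draw each opinion from [0, lo] or [hi, 1], weighted by interval length
    left = rng.random(n) < lo / (lo + 1.0 - hi)
    return np.where(left, rng.uniform(0.0, lo, n), rng.uniform(hi, 1.0, n))
```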

Fig 7 shows the mean effective number of clusters obtained for various gap sizes, with error bars showing one standard deviation from the mean. For the original model, the gap shifts the transition from two clusters to one towards larger values of ε. Hence a fragmented initial condition favors fragmentation also later during the evolution of opinions. However, as ε grows, fragmentation disappears, hence a higher tolerance level can overcome a fragmented initial condition. We also note that as γ grows, the two effects of algorithmic bias and initial fragmentation add up, pushing the transition to consensus towards ever larger ε. The error bars show again that within the transition interval the population converges to one cluster in some simulations and to two clusters in others, resulting in relatively large deviations from the mean. Outside the transition interval, however, the error bars are almost invisible, hence the number of clusters is very stable from one simulation to another.

In terms of time to convergence, Fig 8 plots the number of active interactions for the case of a fragmented initial condition. It appears that initial fragmentation speeds up convergence when the final population is also fragmented. However, when the final population reaches consensus (one cluster), the effect is reversed, i.e. initial fragmentation slows down convergence. The time to convergence continues to grow very fast with γ, as seen previously for a uniform initial condition.

Hence, again, the two effects appear to work together against reaching consensus, either by favoring the appearance of additional clusters or by slowing down consensus when this could, in principle, emerge.

4 Discussion and Conclusions

A model of algorithmic bias in the framework of bounded confidence was presented, and its behavior analyzed. Algorithmic bias is a mechanism that encourages interaction among like-minded individuals, similar to patterns observed in real social network data. We found that, in this model, algorithmic bias hinders consensus and favors opinion segregation through two different mechanisms. On the one hand, consensus is hindered by a very strong slowdown of convergence, so that even when one cluster is asymptotically obtained, the time to reach it is so long that in practice consensus never appears. On the other hand, we observed segregation of the population as the bias grows stronger, with the number of clusters obtained increasing compared to the original model. A fragmented initial condition further enhances fragmentation, augmenting the effect of the algorithmic bias. We also observed that small populations may be less resilient to segregation, due to finite size effects.

The results presented here are based on the assumption that bounded confidence exists, i.e. individuals with very distant opinions do not exchange information and hence do not influence each other. However, our conclusion that algorithmic bias hinders consensus still stands even when bounded confidence is removed (i.e. ε = 1). In this case, consensus still becomes extremely slow as the bias increases, hence it is never achieved in practice, a result that we believe will apply to any other model of opinion dynamics. It would also be interesting to see how a more realistic social network structure among individuals, instead of a complete graph where anybody may interact with anybody else, would impact the opinion formation process, possibly exacerbating the effects observed in this study.

Although there is evidence that many types of social interactions are subject to algorithmic bias, the debate continues on whether or not this generates opinion segregation in the long term. Our numerical results support the former option, which we plan to analyse in more detail in the future by applying our model to real data from social network processes. We would also like to understand how external information could affect the behavior observed, especially when the sources of information are also selected on the basis of a similar bias. Recent work on counteracting opinion polarisation on social networks has also appeared [23], with initial results suggesting that facilitating interaction among chosen individuals in polarised communities can alleviate the issue. We will also investigate this with our model.

Acknowledgements

We thank the IT Center of the University of Pisa (Centro Interdipartimentale di Servizi e Ricerca) for providing access to computing resources for simulations. This work was supported by the European Community’s H2020 Program under the funding scheme “FETPROACT-1-2014: Global Systems Science (GSS)”, grant agreement # 641191 CIMPLEX “Bringing CItizens, Models and Data together in Participatory, Interactive SociaL EXploratories”.

References

  • [1] Pew Research Center: Political Polarization in the American Public, June 12, 2014, http://www.people-press.org/2014/06/12/political-polarization-in-the-american-public/
  • [2] Quartz Media: European politics is more polarized than ever, and these numbers prove it, March 30, 2016, https://qz.com/645649/european-politics-is-more-polarized-than-ever-and-these-numbers-prove-it/
  • [3] J. Campbell: Polarized: Making Sense of a Divided America (Princeton UP, 2016)
  • [4] M. Prior: Post-Broadcast Democracy: How Media Choice Increases Inequality in Political Involvement and Polarizes Elections (Cambridge UP, 2007)
  • [5] M.A. Baum and T. Groeling: New Media and the Polarization of American Political Discourse, Political Communication, 25, 345-365 (2008)
  • [6] D.A. Graber and J. Dunaway: Mass Media and American Politics (9th edition, Sage, 2012)
  • [7] S. Messing, S. J. Westwood: Selective Exposure in the Age of Social Media: Endorsements Trump Partisan Source Affiliation When Selecting News Online. Communication Research 41, 1042-1063 (2012)
  • [8] A. Charles: Media and/or Democracy, in: Media/Democracy: A Comparative Study, ed: A. Charles (Cambridge Scholars, 2012)
  • [9] M. Ignatieff: Media Pluralism and Democracy, Annual Colloquium on Fundamental Rights, Brussels, November 17, 2016, http://ec.europa.eu/information_society/newsroom/image/document/2016-47/media_pluralism_and_democracy_speech_en_19783.pdf
  • [10] K. Purcell, L. Rainie, A. Mitchell, T. Rosenstiel and L. Olmstead: Understanding the Participatory News Consumer, Pew Internet and American Life Project, 1 March 2010, http://www.pewinternet.org/Reports/2010/Online-News.aspx
  • [11] A. Hermida, F. Fletcher, D. Korell, and D. Logan: Share, like, recommend: Decoding the social media news consumer, Journalism Studies, 0, 1-10 (2012)
  • [12] M. Villi, J. Matikainen and I. Khaldarova: Recommend, Tweet, Share: User-Distributed Content (UDC) and the Convergence of News Media and Social Networks. In: Media Convergence Handbook Vol. 1. Journalism, Broadcasting, and Social Media Aspects of Convergence, edited by Artur Lugmayr and Cinzia Dal Zotto, pp289-306 (Springer, Heidelberg, 2015)
  • [13] E. Pariser: The filter bubble: What the internet is hiding from you (Penguin, London, 2011)
  • [14] E. Bakshy, S. Messing, L.A. Adamic: Exposure to ideologically diverse news and opinion on Facebook, Science, 348, 6239 (2015)
  • [15] E. Colleoni, A. Rozza, A. Arvidsson: Echo Chamber or Public Sphere? Predicting Political Orientation and Measuring Political Homophily in Twitter Using Big Data, Journal of Communication 64, 317-332 (2014)
  • [16] R. Pastor-Satorras, C. Castellano, P. Van Mieghem, and A. Vespignani: Epidemic processes in complex networks, Rev. Mod. Phys. 87, 925 (2015)
  • [17] G. Deffuant, D. Neau, F. Amblard, G. Weisbuch: Mixing beliefs among interacting agents, Advances in Complex Systems 3, 87-98 (2000)
  • [18] J. Lorenz. Continuous opinion dynamics under bounded confidence: A survey. International Journal of Modern Physics C, 18(12):1819–1838, 2007.
  • [19] Sîrbu, A., Loreto, V., Servedio, V. D., and Tria, F.: Opinion dynamics: models, extensions and external effects. In Participatory Sensing, Opinions and Collective Awareness (pp. 363-401). Springer International Publishing, 2017.
  • [20] Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H.E. and Quattrociocchi, W., 2016. The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), pp.554-559.
  • [21] Clio Andris, David Lee, Marcus J. Hamilton, Mauro Martino, Christian E. Gunning, John Armistead Selden: The Rise of Partisanship and Super-Cooperators in the U.S. House of Representatives. PLoS ONE, April 21, 2015, https://doi.org/10.1371/journal.pone.0123507
  • [22] Michela Del Vicario, Sabrina Gaito, Walter Quattrociocchi, Matteo Zignani and Fabiana Zollo: News Consumption during the Italian Referendum: A Cross-platform Analysis on Facebook and Twitter. Proc. DSAA 2017, the 4th IEEE International Conference on Data Science and Advanced Analytics (2017).

  • [23] Garimella, K., De Francisci Morales, G., Gionis, A. and Mathioudakis, M., 2017, February. Reducing controversy by connecting opposing views. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining (pp. 81-90). ACM.
  • [24] Garimella, K., Morales, G.D.F., Gionis, A. and Mathioudakis, M., 2018. Political Discourse on Social Media: Echo Chambers, Gatekeepers, and the Price of Bipartisanship. WWW 2018, arXiv preprint arXiv:1801.01665.