A rigorous formulation of and partial results on Lorenz's "consensus strikes back" phenomenon for the Hegselmann-Krause model

07/26/2021 ∙ by Edvin Wedin, et al.

In a 2006 paper, Jan Lorenz observed a curious behaviour in numerical simulations of the Hegselmann-Krause model: Under some circumstances, making agents more closed-minded can produce a consensus from a dense configuration of opinions which otherwise leads to fragmentation. Suppose one considers initial opinions equally spaced on an interval of length L. As first observed by Lorenz, simulations suggest that there are three intervals [0, L_1), (L_1, L_2) and (L_2, L_3), with L_1 ≈ 5.23, L_2 ≈ 5.67 and L_3 ≈ 6.84 such that, when the number of agents is sufficiently large, consensus occurs in the first and third intervals, whereas for the second interval the system fragments into three clusters. In this paper, we prove consensus for L ≤ 5.2 and for L sufficiently close to 6. These proofs include large computations and in principle the set of L for which consensus can be proven using our approach may be extended with the use of more computing power. We also prove that the set of L for which consensus occurs is open. Moreover, we prove that, when consensus is assured for the equally spaced systems, this in turn implies asymptotic almost sure consensus for the same values of L when initial opinions are drawn independently and uniformly at random. We thus conjecture a pair of phase transitions, making precise the formulation of Lorenz's "consensus strikes back" hypothesis. Our approach makes use of the continuous agent model introduced by Blondel, Hendrickx and Tsitsiklis. Indeed, one contribution of the paper is to provide a presentation of the relationships between the three different models with equally spaced, uniformly random and continuous agents, respectively, which is more rigorous than what can be found in the existing literature.


1 Introduction

In the classical Hegselmann-Krause model (the HK-model for short) in opinion dynamics, each agent $i$ in a set of agents indexed by the integers $1, \dots, n$ possesses an opinion $x_i(t) \in \mathbb{R}$ at time $t = 0, 1, 2, \dots$. All agents then simultaneously update their opinions at the next time step according to the rule

$$x_i(t+1) = \frac{1}{|N_i(t)|} \sum_{j \in N_i(t)} x_j(t), \tag{1.1}$$

where $N_i(t) = \{\, j : |x_j(t) - x_i(t)| \le 1 \,\}$, $1 \le i \le n$, and $|\cdot|$ denotes the cardinality of a set.

The paper normally cited in connection to this model is [5], which presents simulations and some important basic results. Strictly speaking, [5] gives a slightly different definition, in which the 1 in the expression for $N_i(t)$ is replaced by a confidence radius $r > 0$. We note that simultaneously scaling $r$ along with all opinions does not change the qualitative behaviour of the model, and the formulation given here, referred to as the normalised model, is common. When discussing the HK-model, it is useful to employ the concept of the connectivity graph, which takes the agents as nodes and connects two agents $i$ and $j$ precisely when $|x_i(t) - x_j(t)| \le 1$.
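To spell out the normalisation: if the update is run with confidence radius $r > 0$ and we set $y_i(t) = x_i(t)/r$, then

$$|x_j(t) - x_i(t)| \le r \iff |y_j(t) - y_i(t)| \le 1,$$

and since averaging commutes with division by $r$, the rescaled opinions $y_i$ evolve exactly according to the normalised rule (1.1).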

Perhaps the most basic observation is that if two agents hold opinions separated by more than 1, and no other agent holds an opinion in between, the two will never interact. A second, slightly less obvious, observation is that even if the current state has a connected connectivity graph, that of the updated state might be disconnected, as may be readily verified by assigning the opinions 0, 0, 1, 2, 3 and 3 to six agents and computing the update. Breaking of the connectivity graph, which by the first observation is irreversible, is referred to as fragmentation. A third observation of [5], which requires a little more mathematical work to verify, is that for each possible initial choice of opinions, there is some finite number $T$ such that after updating the system $T$ times, the opinion profile reaches a fixed point which is not changed by subsequent updates. When we reach such a fixed point, we say that the system freezes. (The study of the time needed for freezing in the Hegselmann-Krause model has spawned at least half a dozen papers by about as many authors; state of the art results can be found in [2], [8] and [10].) It is not hard to verify that a frozen state must consist of a set of clusters, where agents in a cluster are in agreement and clusters are pairwise separated by strictly more than 1. A configuration consisting of a single cluster is called a consensus.
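To make the fragmentation example concrete, here is a minimal Julia sketch of one update of (1.1); it is our illustration, not the authors' code, and the function name is ours:

```julia
# One synchronous update of the normalised HK-model (1.1): each agent
# moves to the average of all opinions within distance 1 of its own.
hk_update(x) = [sum(y for y in x if abs(y - xi) <= 1) /
                count(y -> abs(y - xi) <= 1, x) for xi in x]

x = [0.0, 0.0, 1.0, 2.0, 3.0, 3.0]   # connectivity graph is connected
println(hk_update(x))
# [0.333…, 0.333…, 0.75, 2.25, 2.666…, 2.666…]: the gap between 0.75 and
# 2.25 exceeds 1, so the updated connectivity graph is disconnected.
```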

The HK-model has received considerable attention, and the original paper [5] has close to 3000 citations on Google Scholar at the time of writing. Most of the citing papers present simulations of all sorts of variations on the original model. There are, to date, only a handful of papers with rigorous mathematical results for the basic model, e.g. [2], [3], [4], [8] and [10].

In many instances, interesting hypotheses have first arisen from simulations. One particularly nice example of this concerns the question of what the final configuration typically looks like when the model is initiated with a large number of agents equidistributed on the interval $[0, L]$, for some fixed $L > 0$. In a seminal 2006 paper [7], Lorenz approached this problem in two ways, the first of which was to simply simulate the dynamics of (1.1) for equally spaced agents on various intervals, including half-infinite ones. The second way was to devise a clever interactive Markov chain (IMC) model in which the opinion space is discretised and the agents change sections according to a stochastic matrix chosen so as to mimic the original behaviour of the model, arguing that the models should, intuitively, agree in the limit as the discretisation is refined. In this way, he produced an early means of simulating not the dynamics of the actual agents, but rather that of their distribution. The point is that this, at least morally, should hint at the typical behaviour of the actual model for large numbers of agents.

The paper contains no mathematical proofs, but various interesting observations and remarks on the presented simulations.

One of Lorenz’s observations, which gave the paper its title, is that, in his IMC model, the resulting configuration of clusters behaves unexpectedly when the radius of confidence is varied. Adhering to the convention of using a normalised radius, which we will keep throughout this paper, his finding translates to the following. When opinions are spread on an interval of length $L \le 1$, all agents reach a consensus, and this remains true for a while as $L$ grows larger than 1. At around $L \approx 5.23$, the final configuration undergoes a bifurcation, and changes from one to three clusters. What is even more interesting is that around $L \approx 5.67$ the system undergoes another bifurcation, and the final state returns to consisting of a single cluster. In the words of Lorenz, consensus “strikes back”!
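These bifurcations are easy to observe numerically. The following sketch is our own illustration, using plain floating point rather than the rigorous arithmetic employed later in the paper; the stopping rule and cluster counts are heuristic, and larger $n$ may be needed near the transition values:

```julia
# HK update (1.1), as in the sketch above.
hk_update(x) = [sum(y for y in x if abs(y - xi) <= 1) /
                count(y -> abs(y - xi) <= 1, x) for xi in x]

# Run n equally spaced agents on [0, L] until numerically frozen, then
# count clusters; frozen clusters are pairwise more than 1 apart.
function clusters(L; n = 400, tol = 1e-12)
    x = collect(range(0.0, L, length = n))
    while true
        y = hk_update(x)
        maximum(abs.(y .- x)) < tol && break   # heuristic freeze detection
        x = y
    end
    1 + count(i -> x[i+1] - x[i] > 1, 1:n-1)
end

for L in (5.0, 5.4, 6.0)   # convergence near L = 6 is slow (cf. Section 5)
    println("L = $L: ", clusters(L), " clusters")   # expect 1, 3, 1
end
```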

The following conjecture is implicit in Lorenz’s paper:

Conjecture 1.1.

Denote by $C_{n,L}$ the random final number of clusters reached by updating according to (1.1) when starting from $n$ agents whose opinions are drawn uniformly and independently at random from the interval $[0, L]$.

Then the limit

$$C_L := \lim_{n \to \infty} C_{n,L}$$

exists as a random variable and there exist numbers $L_1 < L_2 < L_3$, with $L_1 \approx 5.23$, $L_2 \approx 5.67$ and $L_3 \approx 6.84$, such that:

  1. If $0 < L < L_1$, then $C_L = 1$ almost surely.

  2. If $L_1 < L < L_2$, then $C_L = 3$ almost surely.

  3. If $L_2 < L < L_3$, then $C_L = 1$ almost surely.

Lorenz discusses his observation in relation to a 2004 conjecture by Hegselmann, stating that for any $L > 0$ there might be a number $n_0 = n_0(L)$ such that $n \ge n_0$ equally spaced agents on an interval of length $L$ must eventually reach a consensus, a conjecture that is still not rigorously disproven.

A step forward in our understanding of equidistributed agents on an interval of length $L$ was taken in a 2007 paper of Blondel, Hendrickx and Tsitsiklis [3]. The authors choose another approach for studying the dynamics of the distribution of agents, namely to consider a continuum of agents and to index them not by a set of natural numbers, but by an interval of real numbers. Replacing the sum in (1.1) by an integral, the analogue of (1.1) is then that for every agent $\alpha \in [0, 1]$, its updated opinion is given by

$$x_{t+1}(\alpha) = \frac{1}{\lambda(N_t(\alpha))} \int_{N_t(\alpha)} x_t(\beta)\, d\beta, \tag{1.2}$$

where $N_t(\alpha) = \{\, \beta \in [0, 1] : |x_t(\beta) - x_t(\alpha)| \le 1 \,\}$ and $\lambda$ denotes Lebesgue measure.

In contrast to the IMC model of Lorenz, this formulation does not require a finite discretisation of the opinion space. The downside is that it is hard to use for actual formal computations, but it is very useful from a theoretical point of view.

The three chief contributions in this paper are

  1. to develop techniques for finding rigorous bounds on how much the evolution of a finite number $n$ of equally spaced agents on an interval of length $L$ may differ from the limiting case as $n$ goes to infinity,

  2. to give a rigorous presentation of the relationships between the three different models with equally spaced, uniformly random and continuous agents, respectively,

  3. using (i) and (ii), to prove the following theorem and corollary:

Theorem 1.2.

Denote by $D_{n,L}$ the final number of clusters reached by updating according to (1.1) when starting from $n$ agents whose opinions are equally spaced on the interval $[0, L]$, and let

$$D_L := \lim_{n \to \infty} D_{n,L}$$

whenever the limit exists, i.e. when $D_{n,L}$ is constant for all sufficiently large $n$.

  1. The set $\{L > 0 : D_L = 1\}$ is an open set.

  2. If $0 < L \le 5.2$, or if $L$ is sufficiently close to $6$, there exists some number $\tau = \tau(L)$ such that, for all sufficiently large $n$, the $\tau$'th update of the corresponding equally spaced profile is a consensus.

    Hence, there exist numbers $\delta_1, \delta_2 > 0$ such that, if $L \in (0, 5.2] \cup (6 - \delta_1, 6 + \delta_2)$, then $D_L = 1$.

Corollary 1.3.

Denote by $C_{n,L}$ the random final number of clusters reached by updating according to (1.1) when starting from $n$ agents whose opinions are drawn uniformly and independently at random from the interval $[0, L]$, and let

$$C_L := \lim_{n \to \infty} C_{n,L}$$

whenever the limit random variable exists. Then

  1. The set $\{L > 0 : C_L = 1 \text{ almost surely}\}$ is an open set.

  2. With the same numbers $\delta_1$, $\delta_2$ and $\tau$ as in Theorem 1.2 we have that, if $L \in (0, 5.2] \cup (6 - \delta_1, 6 + \delta_2)$, then $C_L = 1$ almost surely and, as $n \to \infty$, the $\tau$'th update of the corresponding uniformly random profile is a consensus asymptotically almost surely.

We will build on the results of [3] in several ways, and refer to that paper for some proofs and additional background.

The rest of the paper will be structured as follows:

Section 2 will serve as a theoretical foundation. Here, we will develop a rigorous theory of opinion profiles, both in the traditional discrete case, i.e. when the number of agents is finite, and in that of an agent continuum, as well as tools to relate the two. In particular, we will introduce the concepts of refining and coarsening, which will be used heavily to handle and relate different deterministic samples from the same distribution. This section also presents, and in some cases strengthens, some previously known results that will be used. An important result (Proposition 2.25) is that the updating operation (1.2) is continuous, with respect to the infinity norm, at so-called regular profiles (Definition 2.20). At the end of the section, we prove part (i) of Theorem 1.2 and show how Corollary 1.3 follows from Theorem 1.2.

We are then left to prove part (ii) of Theorem 1.2 in subsequent sections. Our basic strategy is to reduce the proof to a finite computation. (What we mean by this is a computation that is certainly finite and, if it produces a certain result, allows us to deduce the theorem.) To do so, we need to go beyond the general theory of Section 2 and develop explicit quantitative bounds when comparing the updates of a discrete profile and small perturbations of it. In particular, we compare updates of a discrete profile and its refinements. This material is presented in Section 3.

In Sections 4 and 5, we apply the results of Section 3 to the case of equally spaced opinions. In order to ensure that the resulting finite computations are manageable, we will also use a result from Section 2 (Corollary 2.29).

In Section 4 we prove the part of Theorem 1.2 concerning $L \le 5.2$. This involves a large but manageable number of computations for a grid of $L$-values up to $5.2$. In principle we could push beyond 5.2, but as one gets closer to the conjectured phase transition at about 5.23 the amount of computing power needed increases drastically.

Consensus for $L$ close to $6$ is proven in Section 5. This time, reducing the proof to a manageable computation requires more than just applying the theory from Section 3. Lorenz already observed that the mechanism by which consensus is reached after it strikes back is different from that for smaller values of $L$. The profile quickly settles into a state where most agents reside in 5 groups, and from there the span of opinions shrinks very slowly. The error analysis from Section 3 is no longer practical over such time scales. Hence, we prove a theorem (Theorem 5.2) saying, informally, that a certain class of profiles in which most agents reside in 5 groups must evolve to consensus. This effectively means that we just need to compute the updates of a single profile until the conditions in the theorem are satisfied, and then use the error analysis from Section 3. This turns out to lead to a manageable computation.

All our computations are carried out in the high-precision ball arithmetic of the package Arb [6]. All code is written in Julia [1].
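As a taste of what such a computation looks like, the following sketch runs the update (1.1) in exact rational arithmetic, a simple stand-in for Arb's ball arithmetic; it is our own illustration, not the paper's code:

```julia
# Exact HK update over the rationals: no rounding error, so the frozen
# state can be detected by exact equality (cf. Observation 2.9).
hk_update(x) = [sum(y for y in x if abs(y - xi) <= 1) //
                count(y -> abs(y - xi) <= 1, x) for xi in x]

# n agents equally spaced on [0, L], L rational.
equally_spaced(L, n) = [L * (i - 1) // (n - 1) for i in 1:n]

function run_to_freeze(x)
    steps = 0
    while true
        y = hk_update(x)
        y == x && return x, steps   # exact fixed point: the profile froze
        x = y
        steps += 1
    end
end

x, steps = run_to_freeze(equally_spaced(big(5)//1, 51))
println("frozen after $steps steps into $(length(unique(x))) cluster(s)")
```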

Section 6 contains a discussion of our results and of possible future work.

2 Definitions and results

In what follows, we will find it convenient to adopt a notation that differs slightly from (1.1). Still, we consider a set of $n$ agents, and their “opinions” $x_1, \dots, x_n \in \mathbb{R}$. For the updates we will follow the notation of [3], which uses the updating operator $T$, defined as

$$(Tx)_i = \frac{1}{|N_i(x)|} \sum_{j \in N_i(x)} x_j, \tag{2.1}$$

where $N_i(x) = \{\, j : |x_j - x_i| \le 1 \,\}$.

This formulation, clearly equivalent to that given by (1.1), will be referred to as the traditional model, and we will frequently describe results and procedures in terms of it, although the setting will formally be more general.

We begin by introducing some notation: let $\langle f \rangle_S$ denote the average of a function $f$ with values taken from a non-empty set $S$. With this notation, (2.1) is condensed to

$$(Tx)_i = \langle x \rangle_{N_i(x)}. \tag{2.2}$$

In what follows, we will use a more general formalism, largely following [3]. This is to be better able to compare the behaviour of the model for different values of $n$, and to relate this to the resulting behaviour if we let $n \to \infty$.

Definition 2.1.

If, for two bounded and Lebesgue measurable functions $f, g : [0, 1] \to \mathbb{R}$, there exists a measure preserving bijection $\sigma : [0, 1] \to [0, 1]$ such that $f = g \circ \sigma$, we say that $f$ and $g$ are permutation equivalent and write $f \sim g$.

Observation 2.2.

If two functions $f$ and $g$ are permutation equivalent, it follows that $\lambda(\{\alpha : f(\alpha) \le z\}) = \lambda(\{\alpha : g(\alpha) \le z\})$ for all $z \in \mathbb{R}$.

It is easy to check that $\sim$ is an equivalence relation.

Definition 2.3.

An (opinion) profile is a non-decreasing function $x : [0, 1] \to \mathbb{R}$.

An element $\alpha$ of the unit interval will be referred to as an agent, and $x(\alpha)$ will be referred to as the opinion of the agent $\alpha$.

The set of opinion profiles is denoted by $\mathcal{P}$.

These profiles will be updated according to the following adaptation of the rule (2.2):

Definition 2.4.

The updating operator $T$ takes a profile $x$ to its update $Tx$ according to the rule

$$Tx(\alpha) = \langle x \rangle_{N(\alpha)},$$

where $N(\alpha) = N_x(\alpha) = \{\, \beta \in [0, 1] : |x(\beta) - x(\alpha)| \le 1 \,\}$. The average over an interval $I$ of positive Lebesgue measure is given by

$$\langle x \rangle_I = \frac{1}{\lambda(I)} \int_I x(\beta)\, d\beta. \tag{2.3}$$
Observation 2.5.

It is easy to check that if $x$ is non-decreasing then so is $Tx$, so for any profile $x$ and any natural number $t$ the $t$-fold update $T^t x$ is well defined.

Observation 2.6.

From the definition, we see immediately that $T$ is translation invariant, in the sense that

$$T(x + c) = Tx + c$$

for any profile $x$ and any $c \in \mathbb{R}$.

Observation 2.7.

Though the operator $T$ will mainly be used for profiles, the same definition can be made for all measurable functions. With this in mind, we will occasionally, without comment, let $T$ act on a measurable function without checking whether or not it is non-decreasing. In particular, note that for any profile $x$ and any measurable function $f$ with $f \sim x$, we have $Tf \sim Tx$.

For an agent $\alpha$ and a profile $x$, we will refer to the set $N_x(\alpha)$ as the neighbourhood of $\alpha$, and to the members of said set as the neighbours of $\alpha$. For two agents $\alpha$ and $\beta$, we will also say that $\alpha$ can see $\beta$, or that $\beta$ is within sight of $\alpha$, if and only if $|x(\alpha) - x(\beta)| \le 1$.

The following definition lets us use this formalism to emulate the traditional model:

Definition 2.8.

A discrete pre-profile on $n$ agents is a function $y : [0, 1] \to \mathbb{R}$ which, for every integer $1 \le i \le n - 1$, is constant on the interval $\left[\frac{i-1}{n}, \frac{i}{n}\right)$, as well as on the interval $\left[\frac{n-1}{n}, 1\right]$.

A discrete profile on $n$ agents is a profile which is also a discrete pre-profile on $n$ agents.

For a discrete profile on $n$ agents, we will let the term agent refer to an interval of the form $\left[\frac{i-1}{n}, \frac{i}{n}\right)$, for $1 \le i \le n - 1$, or $\left[\frac{n-1}{n}, 1\right]$.

The set of discrete profiles on $n$ agents is denoted by $\mathcal{P}_n$.

For discrete profiles, we will abuse notation by referring to agents by their index, and often adopt the shorthand notation of writing $x_i$ for the common value of $x$ on the $i$'th agent, and $(Tx)_i$ for that of $Tx$, when there is no risk of confusion.

It should be clear from the context which of the two notations is being used, but as a rule we will use the Latin indices $i$, $j$, $k$ and so on to denote integers whenever the shorthand is used, and Greek letters or fractions otherwise.

As observed in Section 1, a well known property of the traditional Hegselmann-Krause model is that any profile with a finite number of agents must freeze, that is, reach a fixed point, in finite time. This can be summarised as follows.

Observation 2.9.

Let $x$ be a discrete profile. Then there exists $t_0 \in \mathbb{N}$ such that $T^t x = T^{t_0} x$ for any $t \ge t_0$. The smallest such $t_0$ is called the freezing time of $x$.

Definition 2.10.

For a profile $x$, let $T^\infty x$ denote the pointwise limit $\lim_{t \to \infty} T^t x$, whenever the limit exists.

By Observation 2.9, $T^\infty x$ is well defined for any discrete profile $x$. It would follow from Conjecture 2 in [3] that it is well defined for any profile, but this fundamental problem remains unsolved.

On the way to freezing, the agents in a profile will typically aggregate into clusters, and we make the following definition.

Definition 2.11.

In a profile $x$, a maximal set of agents which share the same opinion, that is, a maximal set of the form $x^{-1}(\{c\})$ for some $c \in \mathbb{R}$, is called a cluster.

A profile where all agents lie in a single cluster, i.e. a constant profile, is referred to as a consensus, and, given a time $t$, a profile $x$ such that $T^t x$ consists of a single cluster is said to have reached a consensus at time $t$.

To have a formal way of manipulating profiles, we make the following definition.

Definition 2.12.

Given a discrete profile $x$ on $n$ agents, moving an agent $i$ will refer to the act of changing the value of $x(\alpha)$, for all $\alpha$ in the interval corresponding to the index $i$, to some common value, and replacing the resulting function $\tilde{x}$ with a profile $x'$ such that we have the equivalence $x' \sim \tilde{x}$, with the relation $\sim$ from Definition 2.1. The amount by which the value is changed will be referred to as the amount by which $i$ was moved.

We note that, in this definition, we will have $x' = \tilde{x}$ if the initial change in value preserves the non-decreasing quality of $\tilde{x}$.

The following definitions present the new notions of coarsening and regular refinement of profiles, which will be central in the proofs to come.

Definition 2.13.

Given a discrete profile $x$ on $n$ agents and $k \ge 1$, a $k$-regular refinement of $x$ is a profile $y$ on $k(n-1) + 1$ agents, such that $y\!\left(\frac{i-1}{n-1}\right) = x\!\left(\frac{i-1}{n-1}\right)$ for any $1 \le i \le n$.

If $y$ is a $k$-regular refinement of $x$ for some $k$ it will sometimes be referred to as just a regular refinement, without specifying for which $k$.

Definition 2.14.

Given a discrete profile $x$ on $n$ agents, the canonical $k$-regular refinement of $x$ is the $k$-regular refinement $y$ for which the sequence $y_{k(i-1)+1}, y_{k(i-1)+2}, \dots, y_{ki+1}$ constitutes an arithmetic progression for each $1 \le i \le n - 1$.

In terms of the traditional model, regularly refining a profile means adding some fixed number of new agents between every existing pair of consecutive agents.

The canonical $k$-regular refinements are those which, in some sense, are the most spread out. They represent a linear interpolation of the opinions in a discrete profile.

Definition 2.15.

For any function $f : [0, 1] \to \mathbb{R}$ and $n \ge 2$, we define the $n$-coarsening of $f$, $C_n f$, as the discrete pre-profile on $n$ agents which satisfies $(C_n f)_i = f\!\left(\frac{i-1}{n-1}\right)$ for each $1 \le i \le n$. When the $n$ is not specified, or otherwise where there is no risk of confusion, we will simply refer to coarsenings.

Further, we define the limit coarsening of $f$, $C^-_n f$, as the discrete pre-profile on $n$ agents which satisfies $(C^-_n f)_1 = f(0)$ and $(C^-_n f)_i = \lim_{\beta \nearrow \frac{i-1}{n-1}} f(\beta)$ for each $2 \le i \le n$.

The way to think about Definition 2.15 is that, given any discrete profile $x$ on $n$ agents, the operator $C_n$ takes any $k$-regular refinement of $x$ and returns $x$. For instance, for any discrete profile $x$ on $n$ agents and any $k \ge 1$ we have that $C_n y = x$ for the canonical $k$-regular refinement $y$ of $x$. Also note that a coarsening of a profile is always a profile.

As for the limit coarsening, it should be thought of mainly as an operator to use on regular profiles, defined below. It can for instance be instructive to note that if we define $\hat{x}$ as the pointwise limit of the canonical $k$-regular refinements of a discrete profile $x$ as $k \to \infty$, then $C^-_n \hat{x} = x$.

For a reader familiar with signal processing, yet another way to view the concepts of refining and coarsening is to consider profiles as signals. The two then roughly correspond to (admittedly degenerate) upsampling and downsampling, respectively.
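In terms of vectors of opinions, canonical refinement is linear interpolation and coarsening is subsampling. The following Julia sketch is our own illustration; the function names are ours and the indexing follows the convention above:

```julia
# Canonical k-regular refinement: keep the n original opinions and insert
# k-1 linearly interpolated opinions between each consecutive pair,
# giving k*(n-1)+1 opinions in total.
function refine(x::Vector{Float64}, k::Int)
    y = Float64[]
    for i in 1:length(x)-1, j in 0:k-1
        push!(y, x[i] + (x[i+1] - x[i]) * j / k)
    end
    push!(y, x[end])
    y
end

# n-coarsening of a refined vector: keep every k'th opinion, so that
# coarsen(refine(x, k), k) == x.
coarsen(y::Vector{Float64}, k::Int) = y[1:k:end]

x = [0.0, 1.0, 3.0]
y = refine(x, 2)              # [0.0, 0.5, 1.0, 2.0, 3.0]
@assert coarsen(y, 2) == x    # coarsening undoes refining
```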

Definition 2.16.

For a positive real number $L$, we define the canonical linear profile of diameter $L$ by $\ell_L(\alpha) = L\alpha$.

For $n \ge 2$, the limit coarsening $E_{n,L} = C^-_n \ell_L$ will be called the canonical equally spaced profile on $n$ agents with diameter $L$. Thus, $E_{n,L}$ consists of $n$ agents equally spaced on the interval $[0, L]$, i.e.: $(E_{n,L})_i = \frac{(i-1)L}{n-1}$ for $1 \le i \le n$.

We note that, for any $n$, $\operatorname{diam}(E_{n,L}) = L$.

The following three definitions will be much employed throughout the whole paper. The third is an essential cornerstone in the theory we need to prove Theorem 1.2.

Definition 2.17.

Given $c \in \mathbb{R}$, a profile $x$ is said to be symmetric about $c$ if $x(\alpha) + x(1 - \alpha) = 2c$ for almost every (that is, outside of a set of measure zero) $\alpha \in [0, 1]$.

We do not require the identity to hold everywhere, as this would not allow us to speak of symmetric discrete profiles.

Observation 2.18.

If $x$ is symmetric about $c$, then so is $T^t x$ for any $t \ge 1$.

Definition 2.19.

The diameter of a profile $x$ is defined as $\operatorname{diam}(x) = x(1) - x(0)$.

Definition 2.20.

Given a set $S \subseteq [0, 1]$, an injective function $f : S \to \mathbb{R}$ is said to be regular on $S$ if there exist strictly positive real numbers $m$ and $M$, such that

$$m \le \frac{f(\beta) - f(\alpha)}{\beta - \alpha} \le M$$

for any distinct $\alpha, \beta \in S$.

A function that is regular on the whole set $[0, 1]$ is simply called regular, and the term $(m, M)$-regular is used when there are specific numbers $m$ and $M$ which satisfy the above inequalities.

We stress that these parameters $m$ and $M$ are not defined in the same way as the corresponding parameters in [3].

In this paper all regular functions will be non-decreasing. In this case, another way of phrasing the definition is that $m$ and $M$ act as lower and upper bounds, respectively, on the derivative of $f$ wherever it is defined. Yet another way is to say that $M$ and $1/m$ are Lipschitz constants for $f$ and $f^{-1}$, respectively.

Evidently, a discrete profile cannot be regular; any regular profile, on the other hand, must be continuous.
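For a concrete example, the canonical linear profile $\ell_L(\alpha) = L\alpha$ of Definition 2.16 is $(L, L)$-regular, since every difference quotient equals exactly $L$; a discrete profile, by contrast, has difference quotient $0$ between two points in the same agent interval, violating both injectivity and the lower bound.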

Remark 2.21.

A reader might ask: “In order to prove Theorem 1.2, why don’t you just compute explicit formulas for $T^t \ell_L$, alternatively $T^t E_{n,L}$, for general $t$, $n$ and $L$?” The short answer is that, though it might be possible in principle, the calculations quickly become messy as $t$ increases.

Consider an arbitrary regular profile $x$ and an agent $\alpha$. If $x$ is differentiable at the three agents $a(\alpha)$, $\alpha$ and $b(\alpha)$ (the leftmost and rightmost neighbours of $\alpha$; see Lemma 2.26), then $Tx$ is also differentiable at $\alpha$. If $x$ has a corner at exactly one of the three agents, however, $Tx$ will have a corner at $\alpha$. Heuristically, each corner should have three opportunities, or two if its opinion is close to that of an extremist, to beget another corner. Counting the endpoints as corners, we conclude that the number of corners of $T^t x$ should lie between $2^t$ and $3^t$.

To even further complicate the matter, the expressions on each piece quickly grow unmanageable as well, and already after a few updates it is nontrivial to write them in terms of elementary functions.

Similar remarks apply to the discrete profiles $E_{n,L}$. In Appendix A we present formulas for the first few updates. By studying these formulas, we think it is clear that this is not a fruitful strategy for general $t$.

A central topic in this text is that of random profiles, by which we mean profile-valued random variables. These will be generated by drawing a number $n$ of opinions independently at random from some probability distribution, sorting them, and creating a profile with $n$ agents holding the drawn opinions. In working with these random profiles, we will use the following, very helpful, lemma:

Lemma 2.22 (Glivenko–Cantelli (see for instance [9], p. 266)).

Let $F$ be the cumulative distribution function of some real valued random variable and let $F_n$ be the empirical distribution function for a sample of size $n$. Then

$$\sup_{z \in \mathbb{R}} |F_n(z) - F(z)| \to 0 \tag{2.4}$$

asymptotically almost surely (a.a.s.), i.e. almost surely when $n \to \infty$.

Note that, if the random variable in question is bounded, the quantile function $x$ given by

$$x(\alpha) = \inf\{\, z : F(z) \ge \alpha \,\}$$

is a profile. Further, the empirical quantile function $x_n$ for a sample of size $n$, given by

$$x_n(\alpha) = \inf\{\, z : F_n(z) \ge \alpha \,\}, \tag{2.5}$$

is a discrete profile with $n$ agents.

The following is immediate.

Corollary 2.23.

With $F$, $F_n$, $x$ and $x_n$ as in Lemma 2.22 and above, if $x$ is regular, then

$$\|x_n - x\|_\infty \to 0 \tag{2.6}$$

a.a.s. as $n \to \infty$.

Proof.

See for instance [9] p. 305. ∎
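In Julia, the empirical quantile function of a uniform sample on $[0, L]$ is obtained by simply sorting the sample; the following is our own illustration, not the paper's code:

```julia
using Random

# Empirical quantile function (2.5) for the uniform distribution on [0, L]:
# agent i of the discrete profile holds the i'th order statistic.
random_profile(n, L; rng = Random.default_rng()) = sort(L .* rand(rng, n))

n, L = 1000, 6.0
x = random_profile(n, L)
grid = [L * (i - 1) / (n - 1) for i in 1:n]   # equally spaced comparison profile
println(maximum(abs.(x .- grid)))             # sup-distance, typically of order n^(-1/2),
                                              # tending to 0 a.a.s. (Corollary 2.23)
```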

For any profile $x$ the set $\{\, \alpha \in [0, 1] : |x(\alpha) - x(\beta)| \le 1 \text{ for all } \beta \in [0, 1] \,\}$ of agents which can see all other agents is an interval, which is non-empty if and only if $\operatorname{diam}(x) \le 2$.

Definition 2.24.

Let $x$ be a profile and suppose there exists a closed (possibly empty) subinterval $I$ of $[0, 1]$ such that the following hold.

  1. $x$ is $(m, M)$-regular on $[0, 1] \setminus I$.

  2. $x$ is constant on $I$.

  3. If $I \neq \emptyset$, then neither endpoint of $I$ is an endpoint of $[0, 1]$.

Then $x$ is said to be $(m, M)$-weakly regular.

Proposition 2.25.

The operator $T$ is continuous at any weakly regular profile, with respect to the norm $\|\cdot\|_\infty$. In particular, $T$ is continuous at any regular profile.


Proof.

This result was essentially proven as Proposition 4 in [3], but in their formulation the profile was assumed to be regular on all of $[0, 1]$. We show that the proof goes through for this stronger formulation, which we will need later.

Let $x$ be a weakly regular profile, with regularity bounds $m$ and $M$ on $[0, 1] \setminus I$.

Choose $\varepsilon > 0$ such that, if $I \neq \emptyset$, then the distance between an endpoint of $I$ and an endpoint of $[0, 1]$ is greater than $\varepsilon$.

We will show that, if $y$ is a profile such that $\|y - x\|_\infty \le \delta$ for a sufficiently small $\delta$, then $\|Ty - Tx\|_\infty$ tends to $0$ with $\delta$.

Fix such a profile $y$. Fix some agent $\alpha$ and define the following sets: $A = N_x(\alpha) \cap N_y(\alpha)$, $B = N_x(\alpha) \setminus N_y(\alpha)$ and $C = N_y(\alpha) \setminus N_x(\alpha)$.

From Definition 2.4, we get the following:

(2.7)
(2.8)

As both $B$ and $C$ are subsets of $N_x(\alpha) \cup N_y(\alpha)$, the absolute value of the last parenthesis in (2.7) is at most $2$, and the same holds for the parenthesis in (2.8). Using the triangle inequality we get that

(2.9)

Since , it is obvious that .

We now note that the third condition for being weakly regular and the definition of $\varepsilon$ imply that neither of the sets $B$ and $C$ intersects $I$, and hence $x$ is regular on both. Thus $\lambda(B)$ and $\lambda(C)$ are each bounded by the measure of the set of agents which may be added to or removed from the neighbourhood of $\alpha$ by moving each agent at most $\delta$, which, by regularity, is at most a constant multiple (depending on $m$) of $\delta$.

Similarly, . Hence , since .

The bounds from the previous paragraphs may be put into (2.9), giving a bound on $|Ty(\alpha) - Tx(\alpha)|$ which is uniform in $\alpha$ and tends to $0$ with $\delta$. ∎

Lemma 2.26.

Let $x$ be a regular profile and define

$$a(\alpha) = \min\{\, \beta \in [0, 1] : x(\beta) \ge x(\alpha) - 1 \,\}, \quad b(\alpha) = \max\{\, \beta \in [0, 1] : x(\beta) \le x(\alpha) + 1 \,\}, \quad \ell(\alpha) = b(\alpha) - a(\alpha).$$

In words, $a(\alpha)$ and $b(\alpha)$ are the leftmost and rightmost agents, respectively, that interact with a given agent $\alpha$ when the profile is updated by $T$, and $\ell(\alpha)$ is the length of the set of neighbours of $\alpha$.

Then the derivative of $Tx$, where it exists, is given by

$$(Tx)'(\alpha) = \frac{\big(x(b(\alpha)) - Tx(\alpha)\big)\, b'(\alpha) + \big(Tx(\alpha) - x(a(\alpha))\big)\, a'(\alpha)}{\ell(\alpha)}, \tag{2.10}$$

where the primes denote derivatives.

Proof.

See Lemma 2.5 in [4]. The statement of that lemma includes an additional assumption on the profile, but the proof goes through even without this assumption. ∎
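As a sanity check of (2.10), consider the canonical linear profile $\ell_L$ with $L > 2$ and an agent $\alpha$ with $1/L \le \alpha \le 1 - 1/L$ (a worked example of ours). Here $a(\alpha) = \alpha - 1/L$, $b(\alpha) = \alpha + 1/L$ and $\ell(\alpha) = 2/L$, and by symmetry of the neighbourhood $T\ell_L(\alpha) = L\alpha$. Thus $\ell_L(b(\alpha)) - T\ell_L(\alpha) = T\ell_L(\alpha) - \ell_L(a(\alpha)) = 1$ and $a'(\alpha) = b'(\alpha) = 1$, so (2.10) gives

$$(T\ell_L)'(\alpha) = \frac{1 \cdot 1 + 1 \cdot 1}{2/L} = L,$$

in agreement with the fact that $T$ leaves the middle section of $\ell_L$ unchanged.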

The following lemma was proved for regular profiles in [3] using a different technique, yielding weaker regularity bounds than those given here.

Proposition 2.27.

Let $x$ be an $(m, M)$-weakly regular profile.

Then $Tx$ is constant on the closed interval of agents which can see all other agents, and regular on the complement of this interval, with regularity bounds depending only on $m$ and $M$.

In particular, if $x$ is weakly regular then $Tx$ is either weakly regular or a consensus, and if $x$ is regular and $\operatorname{diam}(x) > 2$ then $Tx$ is regular.


Proof.

First note that the second statement is a direct consequence of the first, so it suffices to prove the first statement.

It is clear that $Tx$ is constant on the interval in question, so for the rest of the proof we will assume that this interval is not all of $[0, 1]$ and only consider agents $\alpha$ outside it.

One readily verifies that almost everywhere differentiability along with uniform upper and lower bounds on the derivative imply regularity with the same bounds. We prove the theorem by providing such bounds for $(Tx)'$.

Since $x$ is monotone and regular on $[0, 1] \setminus I$, for almost every $\alpha$ the derivatives $x'(\alpha)$, $a'(\alpha)$, $b'(\alpha)$, $x'(a(\alpha))$ and $x'(b(\alpha))$ all exist, by Lebesgue's theorem.

For any such $\alpha$, by Lemma 2.26,

$$(Tx)'(\alpha) = \frac{\big(x(b(\alpha)) - Tx(\alpha)\big)\, b'(\alpha) + \big(Tx(\alpha) - x(a(\alpha))\big)\, a'(\alpha)}{\ell(\alpha)}. \tag{2.11}$$

We first prove the upper regularity bound for $(Tx)'$.

Applying the chain rule to the identity $x(b(\alpha)) = x(\alpha) + 1$, we find that, if $b(\alpha) < 1$, then

$$b'(\alpha) = \frac{x'(\alpha)}{x'(b(\alpha))} \le \frac{M}{m},$$

where the inequality follows from the regularity bounds on $x$. If $\alpha$ is large enough for $b(\alpha)$ to be constantly $1$, the derivative is $0$, so the inequality remains true. In the same way we have $a'(\alpha) \le M/m$.

As we assume that $\alpha$ cannot see all other agents, its neighbourhood spans an opinion interval of length at least $1$, whence $\ell(\alpha) \ge 1/M$.

We also have the trivial bound $\ell(\alpha) \le 1$. Note also that the two parentheses $x(b(\alpha)) - Tx(\alpha)$ and $Tx(\alpha) - x(a(\alpha))$ in (2.11) sum to $x(b(\alpha)) - x(a(\alpha)) \le 2$.

Together, the observations from the previous paragraphs may be inserted into (2.11) to get that

$$(Tx)'(\alpha) \le \frac{2 \cdot (M/m)}{1/M} = \frac{2M^2}{m}.$$

As for the lower regularity bound, we first note that the trivial bound $a'(\alpha), b'(\alpha) \ge 0$ is the best we can do.

Second, we note that, as we assume that $\alpha$ cannot see all other agents, we cannot have both $a(\alpha) = 0$ and $b(\alpha) = 1$. We will here assume $b(\alpha) < 1$, and note that the other case is completely analogous. Using the chain rule, as above, we see that $b'(\alpha) \ge m/M$.

To finish the proof, it is enough to prove that

(2.12)

Intuitively, to make $Tx(\alpha)$ as large as possible, we want to pack as many neighbours of $\alpha$ as far to the right as possible, while having as few neighbours as possible in the rest of the neighbourhood.

To formalise this, consider the profile $\tilde{x}$ such that

(2.13)

The assumptions on $x$ imply that

(2.14)

which finishes the proof. ∎

Remark 2.28.

For regular profiles, including any weakly regular profile with diameter above 2, the lower regularity bound of Proposition 2.27 could be improved by exchanging the constant segment of the auxiliary profile in (2.13) for a segment of slope $m$. We content ourselves with the current version, as the extended proof is technical and the improvement slight. When iterated, either version of the proposition results in the quotient $M/m$ of the regularity bounds asymptotically growing exponentially in the number of iterations. As these results will not be used, we leave out the proofs.

We are now ready to prove part (i) of Theorem 1.2 and deduce Corollary 1.3 from Theorem 1.2.

Proof.

For the deduction of Corollary 1.3, it clearly suffices to prove the statement about the numbers $\delta_1$, $\delta_2$ and $\tau$.

Fix $L$ and let $\tau$, as in Theorem 1.2, be an upper bound on the freezing time for equally spaced profiles with diameter $L$.

Recall that $E_{n,L}$ denotes the canonical equally spaced profile on $n$ agents with diameter $L$ and that $\ell_L$ denotes the canonical linear profile with diameter $L$. Let $u_{n,L}$ denote the empirical quantile function (see (2.5)) of a sample of size $n$ from the uniform distribution on $[0, L]$. The results in this section then give the following chain of implications:

$T^\tau E_{n,L}$ is a consensus for all sufficiently large $n$
$\Longleftrightarrow$ $T^\tau \ell_L$ is a consensus
$\Longrightarrow$ $T^\tau y$ is a consensus if $\|y - \ell_L\|_\infty$ is sufficiently small.

From this last statement we deduce in turn the following.

  • On the one hand, if $L'$ is sufficiently close to $L$, then $\|\ell_{L'} - \ell_L\|_\infty = |L' - L|$ is small, so $T^\tau \ell_{L'}$ is a consensus and, by a further application of Propositions 2.25 and 2.27, $T^\tau E_{n,L'}$ is a consensus for all $n$ sufficiently large. This proves part (i) of Theorem 1.2.

  • On the other hand, by Corollary 2.23, $\|u_{n,L} - \ell_L\|_\infty \to 0$ a.a.s., so $T^\tau u_{n,L}$ is a consensus a.a.s. This proves that Corollary 1.3 follows from Theorem 1.2.

By observing the proof just presented, it is clear that for some fixed $L$ the following three statements are equivalent:

  1. There is some $t$ such that $T^t E_{n,L}$ is a consensus for all sufficiently large $n$.

  2. There is some $t$ such that $T^t \ell_L$ is a consensus.

  3. There is some $t$ such that $T^t u_{n,L}$ is a consensus a.a.s. as $n \to \infty$.

If we had access to unlimited computing power, the theory developed thus far would actually be enough to finish the proof of Theorem 1.2 in a few lines, using the following strategy:

Choose a really large $n$, so that $\|E_{n,L} - \ell_L\|_\infty$ is small. Using Propositions 2.25 and 2.27 we can compute constants $K_t$ for every $t$ such that $\|T^t E_{n,L} - T^t \ell_L\|_\infty \le K_t \|E_{n,L} - \ell_L\|_\infty$. We calculate the updates $T^t E_{n,L}$ explicitly. If we find that $T^{t_0} E_{n,L}$ is a consensus, we check that the corresponding accumulated error bound is small enough, which must be true if $n$ is chosen large enough. We could then deduce that $T^{t_0} \ell_L$ is a consensus as well, and the rest would follow as above.

The problem with this strategy is that, as was hinted at in Remark 2.28, the constant $K_t$ grows ridiculously fast as $t$ increases, so we would end up needing $n$ to be much larger than can actually be simulated. Table 1 illustrates this.

t    K_t
0    1
1    40
2    …
3    …
4    …
5    …
6    …

Table 1: The constants $K_t$ grow too fast to be of practical use.

To get around this, we introduce a fourth statement.

  4. There is some $t$ such that $T^t E_{n,L}$ is a consensus for some infinite sequence of values of $n$.

It is straightforward to check that this is also equivalent to the earlier three, and we will devote Sections 4 and 5 to proving (iv). The following corollary of Proposition 2.27 will be used in both sections.

Corollary 2.29.

Let $x$ be a symmetric regular profile. If $\operatorname{diam}(T^{t_0} x) \le 2$ for some $t_0 \ge 0$, then $T^t x$ is a consensus for some $t \ge t_0$. Moreover, $t - t_0$ depends only on $\operatorname{diam}(x)$ and the regularity bounds for $x$.

Proof.

Without loss of generality, suppose $x$ is symmetric about 0. By Observation 2.18, the same is true of $T^t x$ for any $t$.

If $\operatorname{diam}(T^t x) > 2$ then Proposition 2.27 tells us that $T^{t+1} x$ is regular. By iterating this we see that either $\operatorname{diam}(T^t x) > 2$ for all $t$, in which case we are done, or there is some first time $t_1$ such that $\operatorname{diam}(T^{t_1} x) \le 2$, in which case $T^{t_1} x$ is regular.

Set if or if .

Then, by Proposition 2.27, is still regular and, clearly,
.

By regularity and Proposition 2.27, there is some $\delta > 0$, depending only on $\operatorname{diam}(x)$ and the regularity bounds for $x$, such that the following holds.

By symmetry, the constant middle part must be constantly equal to 0 on its interval. In fact, it is easy to see that $T^t x$ must be constantly equal to 0 on this interval for any larger $t$. Hence one can check that, as long as the diameter is above 1, each extremist must change its opinion by at least $\delta$ at each time step. Thus the diameter must be at most 1 after at most $\lceil 1/(2\delta) \rceil$ additional time steps.

This finishes the proof, since a profile of diameter at most 1 reaches consensus in one further step. ∎

3 Propagation of errors due to refinements

In the previous section we investigated the updating operator and, in particular, we noted that it is continuous at regular profiles. As we saw, the continuity by itself is not very helpful. In this section, we will shift our focus away from regular profiles back to discrete ones. Specifically, we will investigate the updates of profiles that have been perturbed, by movement or refinement, and derive bounds for the difference between these and the updates of the unperturbed profiles. All the profiles in this section are discrete.

As we have seen, by definition, if $x$ is a profile with $n$ agents, and $y$ is a $k$-regular refinement of $x$, we have $C_n y = x$, and thus $T(C_n y) = Tx$. We will begin by comparing $C_n(Ty)$ to $T(C_n y)$. Thus, one could, informally, say that the following lemma bounds the commutator of the two operators $T$ and $C_n$.

Lemma 3.1.

Let $x$ be a discrete profile with $n$ agents, and let $y$ be any $k$-regular refinement of $x$. For any agent $i$,

(3.1)

proofslemmas

Proof.

Fix an agent $i$. We will proceed by constructing a $k$-regular refinement $y^*$ of $x$ which maximises the quantity in question, in the sense that $(C_n T y^*)_i \ge (C_n T y)_i$ for any $k$-regular refinement $y$ of $x$.

It is clear that, to maximise $(C_n T y)_i$, one may simply begin with $x$ and place all inserted opinions at the rightmost end of their interval, except for those in the interval containing the opinion of $i$, who are placed there, and those in the interval immediately to the left of the leftmost neighbour of $i$, who are placed out of sight, i.e. below $x_i - 1$. We observe that, following this procedure, $(C_n T y^*)_i$ is increasing with $k$ and is bounded by what is obtained if one changes the opinion of the leftmost neighbour of $i$ accordingly before updating $x$. In other words,

$$(C_n T y)_i \le (T x')_i,$$

where $x'$ denotes the profile obtained from taking $x$ and moving the leftmost neighbour of $i$ accordingly.

Now, note that moving one out of at least $q$ opinions a distance at most $d$ cannot affect the updated opinion by more than $d/q$.

We finally note that the reasoning is completely analogous for finding a lower bound for $(C_n T y)_i$. ∎
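For intuition (not a substitute for the rigorous bound (3.1)), the commutator can be measured numerically; the following Julia sketch, with names of our own choosing, compares $C_n(Ty)$ with $T(C_n y) = Tx$ for canonical refinements:

```julia
hk_update(x) = [sum(y for y in x if abs(y - xi) <= 1) /
                count(y -> abs(y - xi) <= 1, x) for xi in x]

# Canonical k-regular refinement (linear interpolation), as in Section 2.
function refine(x, k)
    y = Float64[]
    for i in 1:length(x)-1, j in 0:k-1
        push!(y, x[i] + (x[i+1] - x[i]) * j / k)
    end
    push!(y, x[end])
    y
end
coarsen(y, k) = y[1:k:end]   # n-coarsening: undoes refining

x = collect(range(0.0, 5.0, length = 41))   # equally spaced profile, L = 5
for k in (2, 4, 8)
    y = refine(x, k)
    err = maximum(abs.(coarsen(hk_update(y), k) .- hk_update(x)))
    println("k = $k: sup-norm of the commutator = $err")   # uniformly bounded in k
end
```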

The difference between two discrete profiles $x$ and $\tilde{x}$ with the same number $n$ of agents is a pre-profile with $n$ agents. On the other hand, for an arbitrary pre-profile $\epsilon$, the sum $x + \epsilon$ need not be a profile. In what follows we will use the term deviation instead of pre-profile when thinking in terms of $x + \epsilon$ as a small perturbation of a given profile $x$. We will adopt the same shorthand for deviations as for profiles, and write $\epsilon_i$ for the value of $\epsilon$ on the $i$'th agent when there is no risk of confusion.

Definition 3.2.

A deviation $\epsilon$ is called consistent with respect to a discrete profile $x$ on $n$ agents if $x + \epsilon$ is a profile, i.e. if

$$x_i + \epsilon_i \le x_{i+1} + \epsilon_{i+1}$$

for all $1 \le i \le n - 1$.

For a deviation $\epsilon$ on $n$ agents, we will refer to positive deviations $\epsilon^-$ and $\epsilon^+$ on $n$ agents as left and right bounds on $\epsilon$, respectively, if they satisfy

$$-\epsilon^-_i \le \epsilon_i \le \epsilon^+_i$$

for any $1 \le i \le n$.

For any profile $x$ on $n$ agents and any $k$-regular refinement $y$ of $x$ we get

$$C_n(Ty) = Tx + \epsilon,$$

where $\epsilon$ is clearly a consistent deviation. By Lemma 3.1, $\epsilon$ is uniformly bounded in $k$. If we want to iteratively obtain bounds for $C_n(T^t y) - T^t x$, we need to compare $T(x + \epsilon)$ to $Tx$ for generic $x$ and $\epsilon$. We will not aim for bounds in $\|\cdot\|_\infty$; instead our bounds will depend on the agent $i$. However, the bounds will still be uniform in $k$ for each $i$, which is the crucial point.

Adding a deviation $\epsilon$ to a profile $x$ may cause the neighbourhoods of its agents to change, and we start by introducing some notation to handle these changes.

Definition 3.3.

Given a profile $x$ on $n$ agents and a deviation $\epsilon$ with bounds $\epsilon^-$ and $\epsilon^+$, we define the sets