Random noise increases Kolmogorov complexity and Hausdorff dimension

08/14/2018 ∙ by Gleb Posobin, et al.

Consider a binary string x of length n whose Kolmogorov complexity is α n for some α<1. We want to increase the complexity of x by changing a small fraction of bits in x. This is always possible: Buhrman, Fortnow, Newman and Vereshchagin (2005) showed that the increase can be at least δ n for large n (where δ is some positive number that depends on α and the allowed fraction of changed bits). We consider a related question: what happens with the complexity of x when we randomly change a small fraction of the bits (changing each bit independently with some probability τ)? It turns out that a linear increase in complexity happens with high probability, but this increase is smaller than in the case of arbitrary change. We note that the amount of the increase depends on x (strings of the same complexity could behave differently), and give an exact lower and upper bounds for this increase (with o(n) precision). The proof uses the combinatorial and probabilistic technique that goes back to Ahlswede, Gács and Körner (1976). For the reader's convenience (and also because we need a slightly stronger statement) we provide a simplified exposition of this technique, so the paper is self-contained.


1 Introduction

The Kolmogorov complexity of a binary string x is defined as the minimal length of a program that generates x, assuming that we use an optimal programming language that makes the complexity function minimal up to an O(1) additive term (see [8, 13] for details). There are several versions of Kolmogorov complexity; we consider the original version, called plain complexity. In fact, for our considerations the difference between the versions of Kolmogorov complexity does not matter, since they differ only by an O(log n) additive term for n-bit strings, but we restrict ourselves to plain complexity for simplicity.

The complexity of n-bit strings is between 0 and n (we omit O(log n) additive terms). Consider a string x of length n that has some intermediate complexity, say 0.5n. Imagine that we are allowed to change a small fraction of bits in x, say, 1% of all bits. Can we decrease the complexity of x? Can we increase the complexity of x? What happens if we change a randomly chosen 1% of the bits?

In other words, consider a Hamming ball with center x and radius 0.01n, i.e., the set of strings that differ from x in at most 0.01n positions. What can be said about the minimal complexity of strings in this ball? The maximal complexity of strings in this ball? The typical complexity of strings in this ball?

The answer may depend on x: different strings of the same complexity may behave differently if we are interested in the complexities of neighbor strings. For example, if the first half of x is a random string and the second half contains only zeros, the string x has complexity about 0.5n, and it is easy to decrease its complexity by shifting the boundary between the random part and the zero part: to move the boundary from 0.5n to 0.49n we need to change about 0.01n bits, and the complexity becomes close to 0.49n. On the other hand, if x is a random codeword of an error-correcting code with 2^{0.5n} codewords of length n that corrects up to 0.01n errors, then x also has complexity about 0.5n, but no change of 1% (or fewer) of the bits can decrease the complexity of x, since x can be reconstructed from the changed version.

The question about the complexity decrease is studied by algorithmic statistics (see [15] or the survey [14]), and the full answer is known. For each x one may consider the function

d ↦ min { C(x′) : x′ differs from x in at most d positions }.

It starts at C(x) (when d = 0) and then decreases, reaching O(log n) at d = n/2 (since we can change all bits to zeros or to ones). Algorithmic statistics tells us which functions may appear in this way (see [14, section 6.2] or [13, theorem 257]).² (²Note that algorithmic statistics uses a different language. Instead of a string x′ in the d-ball centered at x, it speaks about a d-ball centered at x′ and containing x. This ball is considered as a statistical model for x.)

The question about the complexity increase is less studied. It is known that some complexity increase is always guaranteed, as shown in [2]. The amount of this increase may depend on x. If x is a random codeword of an error-correcting code, then the changed version of x contains all the information both about x itself and about the places where it was changed. This leads to the maximal increase in complexity. The minimal increase, as shown in [2], happens for x that is a random element of a Hamming ball of some radius around some center. However, the natural question of which functions may appear as

d ↦ max { C(x′) : x′ differs from x in at most d positions }

remains open.

In our paper we study the typical complexity of a string that can be obtained from x by changing a fraction of bits chosen randomly. Let us return to our example and consider again a string x of length n and complexity 0.5n. Let us change about 1% of the bits in x, changing each bit independently³ with probability 0.01. (³From the probabilistic viewpoint it is more natural to change all the bits independently with the same probability τ. Then the number of changed bits is not exactly τn, but is close to τn with high probability.) Does this change increase the complexity of x? It depends on the changed bits, but it turns out that a random change increases the complexity of the string with high probability: we get a string of complexity at least (0.5 + δ)n for some fixed δ > 0, with probability close to 1, for all large enough n (the result is necessarily asymptotic, since the Kolmogorov complexity function is defined only up to O(1) terms).
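Kolmogorov complexity is not computable, so the effect can only be illustrated indirectly. The following sketch (ours, not from the paper) uses the length of a zlib-compressed string as a crude stand-in for complexity; the particular string, the noise level, and the compressor are illustrative assumptions, and compressed length only loosely tracks C(x).

```python
import random
import zlib

def apply_noise(bits, tau, rng):
    """Flip each bit independently with probability tau (the noise N_tau)."""
    return [b ^ (rng.random() < tau) for b in bits]

def compressed_len(bits):
    """Crude stand-in for Kolmogorov complexity: zlib-compressed length in bytes."""
    return len(zlib.compress(bytes(bits), 9))

rng = random.Random(0)
n = 100_000
# a compressible string: random first half, all-zero second half
x = [rng.randint(0, 1) for _ in range(n // 2)] + [0] * (n // 2)
y = apply_noise(x, 0.01, rng)

print("compressed length before noise:", compressed_len(x))
print("compressed length after noise: ", compressed_len(y))
```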

Of course, the parameters above were chosen only as an example, and the following general statement is true. For some τ ∈ (0, 1/2), consider the random noise N_τ that changes each position in a given n-bit string independently with probability τ.

Theorem 1.

There exists a strictly positive function δ(α, τ), defined for α ∈ (0, 1) and τ ∈ (0, 1/2), with the following property: for all sufficiently large n, for every α ∈ (0, 1), for every τ ∈ (0, 1/2), and for every string x of length n such that C(x) ≥ αn, the probability of the event

C(N_τ(x)) ≥ (α + δ(α, τ)) n

is at least 1 − 1/n.

Remark 1.

We use the inequality C(x) ≥ αn (and not an equality C(x) = αn) to avoid technical problems: the complexity is an integer, and αn may not be an integer.

Remark 2.

One may consider only τ ≤ 1/2, since reversing all bits does not change Kolmogorov complexity (so τ and 1 − τ give the same increase in complexity). For τ = 1/2 the variable N_{1/2}(x) is uniformly distributed in the Boolean cube 𝔹^n, so its complexity is close to n, and the statement is easy (for arbitrary x).

Remark 3.

We use α and τ as parameters while fixing the probability bound as 1 − 1/n. As we will see, the choice of this bound is not important: we could use a stronger bound (e.g., 1 − n^{−c} for an arbitrary constant c) as well.

Now a natural question arises: what is the optimal bound in Theorem 1, i.e., the maximal possible value of δ? In other words, fix α and τ. Theorem 1 guarantees that there exists some δ > 0 such that every string of length n (sufficiently large) and complexity at least αn is guaranteed to have complexity at least (α + δ)n after τ-noise (with high probability). What is the maximal value of δ for which such a statement is true (for given α and τ)?

Before answering this question, we should note that the guaranteed complexity increase depends on x: for different strings of the same complexity, the typical complexity of N_τ(x) could be different. Here are two opposite examples (with minimal and maximal increase, as we will see).

Example 1.

Consider some p ∈ (0, 1) and the Bernoulli distribution B_p on the Boolean cube 𝔹^n (bits are independent; every bit equals 1 with probability p). With high probability the complexity of a B_p-random string is o(n)-close to nH(p) (see, e.g., [13, chapter 7]), where H is the Shannon entropy function

H(p) = −p log p − (1 − p) log(1 − p).

After applying τ-noise the distribution B_p is transformed into B_{N(τ,p)}, where

N(τ, p) = τ(1 − p) + (1 − τ)p

is the probability that a bit is changed if we first change it with probability p and then (independently) with probability τ.⁴ (⁴We use the letter N (for "noise") both in N_τ (random change with probability τ, one argument, with a subscript) and in N(τ, p) (the parameter of the Bernoulli distribution after applying the noise, no subscript, two arguments).) The complexity of N_τ(x) is close (with high probability) to nH(N(τ, p)), since the B_p-random string and the τ-noise are chosen independently. So in this case we have (with high probability) the complexity increase

nH(p) → nH(N(τ, p)).

Note that N(τ, p) is closer to 1/2 than p, and H is strictly increasing on [0, 1/2], so indeed some increase happens.
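A quick numerical check of this example (the helper names and the particular values of τ and p below are ours, not the paper's): the noise parameter N(τ, p) is closer to 1/2 than p, so the per-bit entropy, and hence the typical complexity per bit, goes up.

```python
from math import log2

def H(p):
    """Binary Shannon entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def N(tau, p):
    """Parameter of the Bernoulli distribution after applying tau-noise."""
    return tau * (1 - p) + (1 - tau) * p

tau, p = 0.05, 0.11  # illustrative values
print("H(p)         =", round(H(p), 4))          # per-bit complexity before noise
print("H(N(tau, p)) =", round(H(N(tau, p)), 4))  # per-bit complexity after noise
```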

Example 2.

Now consider an error-correcting code that has 2^k codewords and corrects up to τn errors (this means that the Hamming distance between codewords is greater than 2τn). Such a code may exist or not, depending on the choice of k and τ. The basic result in coding theory, the Gilbert bound, guarantees that such a code exists if k and τ are not too large. Consider some pair of k and τ for which such a code exists; moreover, let us assume that it corrects up to τ′n errors for some τ′ > τ. We assume also that the code itself (the list of codewords) has small complexity, say, O(log n). This can be achieved by choosing the first (in some ordering) code with the required parameters.

Now take a random codeword x of this code; most of the codewords have complexity close to k. If we randomly change each bit with probability τ, then with high probability we get at most τ′n errors; therefore, decoding is possible and the pair (x, noise vector) can be reconstructed from N_τ(x), the noisy version of x. Then the complexity of N_τ(x) is close to the complexity of the pair (x, noise vector), which (due to independence) is close to k + nH(τ) with high probability. So in this case we have the complexity increase

k → k + nH(τ).
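A minimal numerical sketch of the Gilbert bound invoked above (the parameter values below are arbitrary choices of ours): a code of length n with minimum distance d and at least 2^n / V(n, d−1) codewords exists, where V(n, r) is the volume of a Hamming ball of radius r; distance d > 2τn allows correcting up to τn errors.

```python
from math import comb, log2

def ball_volume(n, r):
    """V(n, r): number of strings within Hamming distance r of a fixed string."""
    return sum(comb(n, i) for i in range(r + 1))

def gilbert_rate(n, d):
    """Gilbert bound: a code with at least 2**n / V(n, d-1) codewords exists;
    return the guaranteed rate k/n of such a code."""
    return (n - log2(ball_volume(n, d - 1))) / n

n, tau = 1000, 0.02
d = 2 * int(tau * n) + 1   # minimum distance > 2*tau*n, so tau*n errors are corrected
print("guaranteed rate k/n:", round(gilbert_rate(n, d), 4))
```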

Remark 4.

Note that this increase is the maximal possible, not only for random independent noise but for any change in x that changes at most a τ-fraction of bits. See below about the difference between random change and arbitrary change.

Now we formulate the result we promised. It says that the complexity increase observed in Example 1 is the minimal possible: such an increase is guaranteed for every string of given complexity.

Theorem 2.

Let τ ∈ (0, 1/2). Let p be an arbitrary number in (0, 1/2]. Let q = N(τ, p). Then for sufficiently large n the following is true: for every string x of length n with C(x) ≥ H(p)n, we have

C(N_τ(x)) ≥ H(q)n − o(n)

with probability at least 1 − 1/n.

Here o(n) denotes some function such that o(n)/n → 0 as n → ∞. This function does not depend on x, p, and τ. As the proof will show, we may take o(n) = c √(n log n) · log n for some constant c.

Figure 1: The curves formed by the points (H(p), H(N(τ, p))) for p ∈ [0, 1/2]. Six different values of τ are shown.

Figure 1 shows the pairs (H(p), H(N(τ, p))) for which Theorem 2 can be applied, for six different values of τ. Example 1 shows that the value H(N(τ, p))n in this theorem is optimal.

Theorem 2 is the main result of the paper. It is proven, as often happens with results about Kolmogorov complexity, by looking at its combinatorial and probabilistic counterparts. In the next section we explain the scheme of the proof and outline its main ideas.

Then we explain the details of the proof. It starts with the Shannon-information counterpart of our complexity statement, which is proven in [16]. In Section 3 we derive two different combinatorial counterparts following [1]. Finally, in Section 4 we consider the details of the conversion of a combinatorial statement into a complexity one and finish the proof.

In Section 5 we extend our techniques to infinite sequences and compare the results obtained by our tools and the results about arbitrary change of a small fraction of bits from [5].

In fact, if we are interested only in some complexity increase (Theorem 1), a simple argument (suggested by Fedor Nazarov) that uses the Fourier transform is enough. A stronger result (but still not optimal) can be obtained by hypercontractivity techniques. These two arguments are sketched in Appendix A.

In Appendix B, for the reader's convenience, we reproduce the proof of the result from [1] (about the increase in entropy caused by random noise) used in the proof.

Finally in Appendix C we provide short (and quite standard) proofs of the McDiarmid inequality as a corollary of the Azuma–Hoeffding inequality and of the Azuma–Hoeffding inequality itself.

2 Proof sketch

2.1 Three ways to measure the amount of information

Kolmogorov’s first paper on algorithmic information theory [7] was called “Three approaches to the Quantitative Definition of Information”. These three approaches can be summarized as follows:

  • (Combinatorial): an element of a set of cardinality N carries log N bits of information.

  • (Algorithmic): a binary string x carries C(x) bits of information, where C(x) is the minimal bit length of a program that produces x.

  • (Shannon information theory, or probabilistic approach): a random variable ξ that has k values with probabilities p₁, …, p_k carries H(ξ) bits of information, where H(ξ) is the Shannon entropy of ξ, defined as

    H(ξ) = p₁ log(1/p₁) + … + p_k log(1/p_k).
One cannot compare these three quantities directly, since the measured objects are different (sets, strings, random variables). Still these quantities are closely related, and many statements that are true for one of these notions can be reformulated for the other two. Several examples of this type are discussed in [13, chapters 7 and 10], and we use this technique in our proof.

2.2 Arbitrary change

We start by recalling an argument from [2] for the case when we are allowed to change arbitrary bits (only the number of changed bits is bounded) and want to increase complexity. (A similar reduction will be a part of our argument.)

Fix some parameters α (determining the complexity of the original string), τ (the maximal fraction of changed bits), and β (determining the complexity of the changed string). Let us repeat the complexity statement and give its combinatorial equivalent.

  • (Complexity version) Every string of length n and complexity at least αn can be changed in at most τn positions to obtain a string of complexity at least βn.

  • (Combinatorial version) For every subset A of the Boolean cube 𝔹^n of cardinality at most 2^{βn}, its τn-interior has cardinality at most 2^{αn}.

Here by the t-interior of a set A we mean the set of all strings x such that the entire ball of radius t centered at x belongs to A. In other words, a string x does not belong to the t-interior of A if x can be changed in at most t positions to get a string outside A.
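A tiny brute-force illustration of the t-interior (ours, not from the paper; feasible only for very small n): when A is itself a Hamming ball of radius 2, the 1-interior is the concentric ball of radius 1, which matches the extremal case in Harper's theorem discussed below.

```python
from itertools import product

def hamming_ball(center, radius):
    """All bit tuples within Hamming distance `radius` of `center`."""
    n = len(center)
    return {y for y in product((0, 1), repeat=n)
            if sum(a != b for a, b in zip(center, y)) <= radius}

def t_interior(A, t, n):
    """Strings x such that the whole ball of radius t around x lies inside A."""
    return {x for x in product((0, 1), repeat=n) if hamming_ball(x, t) <= A}

n = 4
A = hamming_ball((0,) * n, 2)          # |A| = 1 + 4 + 6 = 11
inner = t_interior(A, 1, n)            # should be the ball of radius 1 (5 strings)
print(len(A), len(inner), inner == hamming_ball((0,) * n, 1))
```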

Remark 5.

The combinatorial statement can be reformulated in a dual way: for every set of cardinality greater than 2^{αn}, its τn-neighborhood has cardinality greater than 2^{βn}.

These two statements (combinatorial and complexity versions) are almost equivalent: one of them implies the other if we allow a small change in α and β (in fact, an O(log n)/n change is enough). Indeed, assume first that the combinatorial statement is true. Consider the set A of all n-bit strings that have complexity less than βn. Then |A| < 2^{βn}, so we can apply the combinatorial statement.⁵ (⁵For simplicity we assume that αn, βn, and τn are integers. This is not important, since we have O(log n) terms anyway.) It guarantees that the τn-interior of A (we denote it by A′) has at most 2^{αn} elements. The set A′ can be enumerated given n, βn, and τn. Indeed, knowing n and βn, one can enumerate elements of A (by running in parallel all programs of length less than βn; note that there are fewer than 2^{βn} of them). Knowing also τn, we may enumerate A′ (if a ball is contained in A entirely, this will become known at some stage of the enumeration of A). Then every element of A′ can be encoded by its ordinal number in this enumeration. This guarantees that the complexity of all elements of A′ does not exceed αn + O(log n) (the additional O(log n) bits are needed to encode n, βn, and τn). Therefore, if some x has complexity greater than αn + O(log n), it is not in A′, i.e., x can be changed in at most τn positions to get a string outside A. By the definition of A, this changed string has complexity at least βn, as required. The O(log n) term can be absorbed by a small change in α.

Let us explain also (though this direction is not needed for our purpose) why the complexity statement implies the combinatorial one. Assume that there is a set A that violates the combinatorial statement, i.e., contains at most 2^{βn} strings but has a τn-interior of size greater than 2^{αn}. Such a set can be found by exhaustive search, and the first set that appears during the search has complexity O(log n). Its elements, therefore, have complexity at most βn + O(log n): to specify an element, we need to specify A and the ordinal number of the element in A. From this we conclude, using the complexity statement (and changing β slightly), that all elements of the τn-interior of A have complexity at most αn. Therefore, there are at most 2^{αn} of them, and the size of the interior is bounded by 2^{αn} (again up to a small change in α).

Now we return to the result from [2]. Let x be a string of length n and complexity at least αn, where α = H(ρ) for some ρ < 1/2. Let τ be a real such that ρ + τ ≤ 1/2, and let β = H(ρ + τ). Then x can be changed in at most τn positions to get a string of complexity at least βn − O(log n). As we have seen, this statement from [2] is a corollary of the following combinatorial result.

Proposition 1.

Let ρ < 1/2 be some number and let α = H(ρ). Let τ be some positive number so that ρ + τ ≤ 1/2, and let β = H(ρ + τ). Let A be an arbitrary subset of 𝔹^n of size at most 2^{βn}. Let B be a subset of 𝔹^n such that for every x ∈ B the Hamming ball of radius τn with center x is contained in A. Then the cardinality of B does not exceed 2^{αn} poly(n).

This proposition is a direct consequence of Harper's theorem (see, e.g., [4]), which says that for a subset of 𝔹^n of a given size, its t-interior (for some fixed t) is maximal when the subset is a Hamming ball (formally speaking, is sandwiched between two Hamming balls of consecutive radii). Or, in dual terms, Harper's theorem says that the t-neighborhood of a set of a given size is minimal if this set is a Hamming ball. The relation between α and β in the proposition is just the relation between the sizes of balls of radii ρn and (ρ + τ)n (if we ignore factors that are polynomial in n). Note that the condition ρ + τ ≤ 1/2 is needed, since otherwise the radius exceeds n/2 and then the log-size of the ball is close to n and not to H(ρ + τ)n. The poly(n) factor is needed due to the polynomial factor in the estimate for the ball size in terms of Shannon entropy (the ball of radius τn has size 2^{H(τ)n} poly(n)).
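A small numeric illustration (ours) of the ball-size estimate just used: the log-size of a Hamming ball of radius τn is close to H(τ)n, up to lower-order terms.

```python
from math import comb, log2

def H(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def log_ball_size(n, r):
    """log2 of the number of strings within Hamming distance r of a fixed string."""
    return log2(sum(comb(n, i) for i in range(r + 1)))

n, tau = 2000, 0.1
print("log2 |ball of radius tau*n| =", round(log_ball_size(n, int(tau * n)), 1))
print("H(tau) * n                  =", round(H(tau) * n, 1))
```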

We do not go into details here (and do not reproduce the proof of Harper's theorem) since we need this result only to motivate the corresponding relation between combinatorial and complexity statements for the case of random noise we are interested in.

2.3 Random noise: four versions

For the random noise case we need a more complicated argument. First, we need to consider also the probabilistic version of the statement (in addition to the complexity and combinatorial versions). Second, we need two combinatorial versions (strong and weak). Fix some α, β, and τ. Here are the four versions we are speaking about; all four statements are equivalent (are true for the same parameters α, β, and τ, up to o(1) changes in the parameters):

  • (Shannon information version, [16]) For every random variable X with values in 𝔹^n such that H(X) ≥ αn, the variable N_τ(X) that is obtained from X by applying independent noise changing each bit with probability τ has entropy H(N_τ(X)) ≥ βn.

  • (Complexity version) For every string x of length n and complexity at least αn, the probability of the event "C(N_τ(x)) ≥ βn" is at least 1 − 1/n. (Again, N_τ is random noise that independently changes each bit with probability τ, but now it is applied to the string x and not to a random variable.)

  • (Strong combinatorial version) For every set A ⊂ 𝔹^n of size at most 2^{βn}, the set B of all strings x such that Pr[N_τ(x) ∈ A] ≥ 1/n has size at most 2^{αn + o(n)}.

  • (Weak combinatorial version) For every set A ⊂ 𝔹^n of size at most 2^{βn}, the set B of all strings x such that Pr[N_τ(x) ∈ A] ≥ 1 − 1/n has size at most 2^{αn + o(n)}.

The difference between the weak and strong combinatorial versions is due to the different thresholds for the probability. In the weak version the set B contains only strings that get into A after the noise almost surely (with probability at least 1 − 1/n). In the strong version the set B is bigger and includes all strings that get into A with non-negligible probability (at least 1/n), so the upper bound for |B| becomes a stronger statement.

Remark 6.

In the case of arbitrary changes (the result from [2]) we consider the τn-interior of A, the set of points that remain in A after an arbitrary change in (at most) τn positions. If a point is not in the interior, it can be moved outside A by changing at most τn bits. Now we consider (in the strong version) the set of points that get into A with probability at least 1/n. If a point is not in this set, the random τ-noise moves it outside A almost surely (with probability at least 1 − 1/n). Again the complexity and (strong) combinatorial versions are equivalent up to o(n) changes in parameters, for the same reasons.

This explains why we are interested in the strong combinatorial statement. The weak one is used as an intermediate step in the chain of arguments. This chain goes as follows:

  • First the Shannon entropy statement is proven using tools from information theory (one-letter characterization and inequalities for Shannon entropy); this was done in [16].

  • Then we derive the weak combinatorial statement from the entropy statement using a simple coding argument from [1].

  • Then we show that the weak combinatorial statement implies the strong one, using a tool that is called the "blowing-up lemma" in [1] (such tools are now more popular under the name of "concentration inequalities").

  • Finally, we note that the strong combinatorial statement implies the complexity statement (using the argument sketched above).

2.4 Tools used in the proof

Let us give a brief description of the tools used in these arguments.

To prove the Shannon entropy statement, following [16], fix some τ. Consider the set of all pairs (H(X), H(N_τ(X))) for all random variables X with values in 𝔹^n. For each n we get a subset of the square [0, n] × [0, n]. For n = 1 it is a curve made of all points (H(p), H(N(τ, p))) (shown in Figure 1 for six different values of τ). We start by showing that this curve is convex (performing some computation with power series). Then we show, using the convexity of the curve and some inequalities for entropies, that for every n the set is above the same curve scaled by the factor n, and this is the entropy statement we need. See Appendix B for details.
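The curve for n = 1 is easy to tabulate numerically. The sketch below (ours) generates the points (H(p), H(N(τ, p))) for several values of τ, as in Figure 1, and checks convexity by verifying that consecutive slopes are non-decreasing.

```python
from math import log2

def H(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def N(tau, p):
    return tau * (1 - p) + (1 - tau) * p

def curve(tau, steps=200):
    """Points (H(p), H(N(tau, p))) for p in [0, 1/2]."""
    ps = [i / (2 * steps) for i in range(steps + 1)]
    return [(H(p), H(N(tau, p))) for p in ps]

for tau in (0.01, 0.05, 0.1, 0.2, 0.3, 0.4):
    pts = curve(tau)
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in zip(pts, pts[1:]) if x2 > x1]
    convex = all(s2 >= s1 - 1e-6 for s1, s2 in zip(slopes, slopes[1:]))
    print(f"tau={tau}: starts at (0, {pts[0][1]:.3f}), ends at (1.0, 1.0), convex: {convex}")
```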

To derive the weak combinatorial statement from the entropy statement, we use a coding argument. Assume that two sets A and B are given, and for every point x ∈ B the random point N_τ(x) belongs to A with probability at least 1 − 1/n. Consider a random variable X that is uniformly distributed in B. Then H(X) = log |B|, and if log |B| ≥ αn, then H(X) ≥ αn and H(N_τ(X)) ≥ βn (assuming the entropy statement is true for the given α, β, and τ). On the other hand, the variable N_τ(X) can be encoded as follows:

  • one bit (flag) says whether N_τ(X) is in A or not;

  • if yes, then ⌈log |A|⌉ bits are used to encode an element of A;

  • otherwise n bits are used to encode the value of N_τ(X) (trivial encoding).

The average length of this code for N_τ(X) does not exceed

1 + ⌈log |A|⌉ + (1/n) · n ≤ log |A| + O(1).

(Note that if the second case has probability less than 1/n, the average length is even smaller.) The entropy of a random variable does not exceed the average length of the code. So we get βn ≤ H(N_τ(X)) ≤ log |A| + O(1), and therefore log |A| ≥ βn − O(1), assuming that log |B| ≥ αn.

The next step is to derive the strong combinatorial version from the weak one. Assume that two sets A and B are given, and for each x ∈ B the probability of the event N_τ(x) ∈ A is at least 1/n. For some t consider the set A^t, the Hamming t-neighborhood of A. We will prove (using concentration inequalities) that for some t = O(√(n log n)) the probability of the event N_τ(x) ∈ A^t is at least 1 − 1/n (for each x ∈ B). So one can apply the weak combinatorial statement to A^t and get a lower bound for |A^t|. On the other hand, there is a simple upper bound: |A^t| does not exceed |A| times the size of a Hamming ball of radius t; combining them, we get the required bound for |A|. See Section 3 for details.

Remark 7.

One may also note (though it is not needed for our purposes) that the entropy statement is an easy corollary of the complexity statement, and therefore all four statements are equivalent up to small changes in parameters. This can be proven in a standard way. Consider N independent copies of the random variable X and independently apply noise to all of them. Then we write the inequality for the typical values of the complexities; in most cases they are close to the corresponding entropies (up to an o(N) error). Therefore, we get the inequality for entropies with o(N) precision (for N copies) and with o(1) precision for one copy (the entropies are divided by N). As N → ∞, the additional term disappears and we get an exact inequality for entropies.

3 Combinatorial version

Recall the entropy bound from [16] discussed above (we reproduce its proof in Appendix B):

Proposition 2.

Let X be an arbitrary random variable with values in 𝔹^n, and let Y = N_τ(X) be its noisy version obtained by applying N_τ (i.e., applying the noise independently to each bit of X). Choose p ≤ 1/2 in such a way that H(X) = nH(p). Then consider N(τ, p), the probability to get 1 if we apply the noise to a bit that equals 1 with probability p. Then H(Y) ≥ nH(N(τ, p)).
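A brute-force check of this bound on a tiny cube (ours, not from the paper; names like apply_noise and inv_h are ad hoc): draw a random distribution on {0,1}^n, find p with H(X) = nH(p), and compare H(N_τ(X)) with nH(N(τ, p)).

```python
import random
from itertools import product
from math import log2

def entropy(probs):
    """Shannon entropy (bits) of a probability vector."""
    return -sum(q * log2(q) for q in probs if q > 0)

def h(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def inv_h(u, iters=60):
    """The p in [0, 1/2] with h(p) = u, found by binary search."""
    lo, hi = 0.0, 0.5
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h(mid) < u else (lo, mid)
    return (lo + hi) / 2

def apply_noise(dist, tau):
    """Distribution of N_tau(X) when X has distribution `dist` on {0,1}^n."""
    n = len(next(iter(dist)))
    out = {y: 0.0 for y in product((0, 1), repeat=n)}
    for x, px in dist.items():
        for y in out:
            d = sum(a != b for a, b in zip(x, y))
            out[y] += px * tau ** d * (1 - tau) ** (n - d)
    return out

rng = random.Random(3)
n, tau = 4, 0.15
points = list(product((0, 1), repeat=n))
weights = [rng.random() for _ in points]
total = sum(weights)
dist = {x: w / total for x, w in zip(points, weights)}

p = inv_h(entropy(dist.values()) / n)                 # H(X) = n * h(p)
lhs = entropy(apply_noise(dist, tau).values())        # H(N_tau(X))
rhs = n * h(tau * (1 - p) + (1 - tau) * p)            # n * h(N(tau, p))
print(round(lhs, 4), ">=", round(rhs, 4), lhs >= rhs)
```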

In this section we use this entropy bound to prove the combinatorial bounds. We start with the weak one and then amplify it to get the strong one, as discussed in Section 2. First, let us formulate explicitly the weak bound that is derived from Proposition 2 using the argument of Section 2.

Proposition 3.

Let τ ∈ (0, 1/2) and p ∈ (0, 1/2]. Let A, B ⊂ 𝔹^n, and assume that for every x ∈ B the probability of the event "N_τ(x) ∈ A" is at least 1 − 1/n. If |B| ≥ 2^{H(p)n}, then |A| ≥ 2^{H(N(τ,p))n − o(n)}.

In fact, this o(n) is just O(log n), but we do not want to be too specific here.

Now we need to extend the bound to the case when the probability of the event N_τ(x) ∈ A is only at least 1/n. We already discussed how this is done. Consider, for some t (depending on n), the Hamming t-neighborhood A^t of A. We need to show that

Pr[N_τ(x) ∈ A^t] ≥ 1 − 1/n

for every x ∈ B (for a suitable t). In fact, x does not matter here: we may assume that x = 00…0 (flipping bits in x and in A simultaneously). In other terms, we use the following property of the Bernoulli distribution with parameter τ: if some set has probability not too small according to this distribution, then its t-neighborhood has probability close to 1. We need this property for t = O(√(n log n)); see below for the exact value of t.

Such a statement is called a blowing-up lemma in [1]. There are several (and quite different) ways to prove statements of this type. The original proof in [1] used a result of Margulis from [9] that says that the (Bernoulli) measure of the boundary of an arbitrary set is not too small compared to the measure of the boundary of a ball of the same size. Iterating this statement (a neighborhood is obtained by adding the boundary layer several times), we get a lower bound for the measure of the neighborhood. Another proof was suggested by Marton [10]; it is based on information-theoretic considerations that involve transportation cost inequalities for bounding measure concentration. In this paper we provide a simple proof that uses the McDiarmid inequality [11], a simple consequence of the Azuma–Hoeffding inequality [6]. This proof gives the bound we need.

Let us state the blowing-up lemma in a slightly more general version than we need. Let X₁, …, Xₙ be (finite) probability spaces. Consider the space X₁ × … × Xₙ with the product measure μ (so the coordinates are independent) and the Hamming distance d (the number of coordinates where two points differ). In our case all Xᵢ are {0, 1} and μ is the Bernoulli measure with parameter τ. The blowing-up lemma says, informally speaking, that if a set is not too small, then its neighborhood has a small complement (the size is measured by μ). It can be reformulated in a more symmetric way: if two sets are not too small, then the distance between them is rather small. (Then this symmetric statement is applied to the original set and the complement of its neighborhood.) Here is the symmetric statement.

Proposition 4 (Blowing-up lemma, symmetric version).

Let A and B be two subsets of X₁ × … × Xₙ with the product measure μ. Then

d(A, B) ≤ √((n/2) ln(1/μ(A))) + √((n/2) ln(1/μ(B))),

where d(A, B) is the minimal Hamming distance between a point of A and a point of B.

To prove the blowing-up lemma, we use the McDiarmid concentration inequality:

Proposition 5 (McDiarmid’s inequality, [11]).

Consider a function f : X₁ × … × Xₙ → ℝ. Assume that changing the i-th coordinate changes the value of f at most by some cᵢ:

|f(x) − f(x′)| ≤ cᵢ

if x and x′ coincide everywhere except for the i-th coordinate. Then

Pr[f ≥ E f + t] ≤ exp(−2t² / (c₁² + … + cₙ²))

for arbitrary t > 0.

Here the probability and the expectation are taken with respect to the product distribution μ (the same as in the blowing-up lemma, see above). This inequality shows that f cannot be much larger than its average on a big set. Applying this inequality to −f, we get the same bound for the set of points where the function is less than its average by t or more.
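A quick Monte Carlo sanity check (ours) of how the inequality is used below: f(x) = distance from x to a set A changes by at most 1 when one coordinate changes, so McDiarmid gives Pr[f ≥ Ef + t] ≤ exp(−2t²/n). We take the toy set A = {strings with at most k ones} under a Bernoulli(τ) product measure.

```python
import random
from math import exp
from statistics import mean

def dist_to_A(x, k):
    """Hamming distance from x to A = {strings with at most k ones}:
    if x has more than k ones, flip the surplus ones to zeros."""
    return max(0, sum(x) - k)

rng = random.Random(1)
n, tau, k, trials = 400, 0.3, 130, 20000
samples = [dist_to_A([1 if rng.random() < tau else 0 for _ in range(n)], k)
           for _ in range(trials)]
m = mean(samples)
for t in (10, 20, 30):
    empirical = sum(1 for f in samples if f >= m + t) / trials
    print(f"t={t}: empirical Pr[f >= Ef + t] = {empirical:.4f}, "
          f"McDiarmid bound = {exp(-2 * t * t / n):.4f}")
```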

For the reader’s convenience, we reproduce the proof of the McDiarmid inequality in Appendix C.

Now let us show why it implies the blowing-up lemma (in the symmetric version).

Proof of the blowing-up lemma.

Let f(x) be the distance between x and A, i.e., the minimal number of coordinates that one has to change in x to get into A. This function satisfies the bounded-differences property with all cᵢ = 1, so we can apply the McDiarmid inequality to it. Let m be the expectation of f. The function f equals zero for arguments in A and therefore is below its expectation at least by m (everywhere in A), so

μ(A) ≤ exp(−2m²/n), i.e., m ≤ √((n/2) ln(1/μ(A))).

On the other hand, the function f is at least d(A, B) for arguments in B, so it exceeds its expectation at least by d(A, B) − m (everywhere in B); therefore the McDiarmid inequality gives

μ(B) ≤ exp(−2(d(A, B) − m)²/n), i.e., d(A, B) − m ≤ √((n/2) ln(1/μ(B))),

and it remains to combine the last two inequalities. ∎

Here is the special case of the blowing-up lemma we need:

Corollary.

If μ is a distribution on 𝔹^n with independent coordinates, and A ⊂ 𝔹^n has measure μ(A) ≥ 1/n, then for t = 2√(n ln n) we have μ(A^t) ≥ 1 − 1/n.

Indeed, we may apply the blowing-up lemma to A and B = 𝔹^n \ A^t, the complement of A^t. If both A and B have measures at least 1/n, the lemma gives d(A, B) ≤ 2√((n/2) ln n) < t, while the distance between A and the complement of its t-neighborhood exceeds t, a contradiction.

Remark 8.

In the same way we get a similar result for probabilities n^{−c} and 1 − n^{−c} for an arbitrary constant c (only the constant factor in t will be different).

Now we are ready to prove the strong combinatorial version:

Proposition 6.

Let τ ∈ (0, 1/2) and p ∈ (0, 1/2]. Let A, B ⊂ 𝔹^n, and assume that for every x ∈ B the probability of the event "N_τ(x) ∈ A" is at least 1/n. If |B| ≥ 2^{H(p)n}, then |A| ≥ 2^{H(N(τ,p))n − o(n)}.

Proof.

As we have seen, the weak combinatorial version (Proposition 3) can be applied to B and the neighborhood A^t for t = 2√(n ln n). The size of A^t can be bounded by the size of A multiplied by the size of a Hamming ball of radius t. The latter does not exceed

n^t = 2^{t log n} = 2^{O(√(n log n) · log n)} = 2^{o(n)}.

Combining the inequalities, we get

|A| ≥ |A^t| / 2^{o(n)} ≥ 2^{H(N(τ,p))n − o(n)},

as promised. ∎

4 Complexity statement

Now we combine all pieces and prove Theorem 2. It states:

Let τ ∈ (0, 1/2). Let p be an arbitrary number in (0, 1/2]. Let q = N(τ, p). Then for sufficiently large n the following is true: for every string x of length n with C(x) ≥ H(p)n, we have

C(N_τ(x)) ≥ H(q)n − o(n)

with probability at least 1 − 1/n.

Here o(n) is actually O(√(n log n) · log n).

We already have all the necessary tools for the proof, but some adjustments are needed. We already know how to convert a combinatorial statement into a complexity one. For that we consider the set A of all strings in 𝔹^n that have complexity less than βn for some β (to be chosen later). Then we consider the set B of all x such that Pr[N_τ(x) ∈ A] ≥ 1/n. The combinatorial statement (strong version, Proposition 6) guarantees that |B| < 2^{H(p)n}. We would like to conclude that all elements of B have complexity only slightly exceeding H(p)n. (Then we have to deal with this excess, see below.) For that we need an algorithm that enumerates B. First, we need to enumerate A, and for that it is enough to know n and the complexity bound βn for elements of A. But now (unlike the case of arbitrary change, where we need to know only the maximal number of allowed changes) we need to compute the probability Pr[N_τ(x) ∈ A], and the value of τ may not be computable, so an infinite amount of information would be needed to specify τ. How can we overcome this difficulty?

Note that it is enough to enumerate some set B′ that contains B but has only slightly larger size. Consider some rational τ′ that is close to τ and the set

B′ = { x : Pr[N_{τ′}(x) ∈ A] ≥ 1/(2n) }.

The combinatorial statement remains true (as we noted in Remark 8, even the threshold 1/(2n) would be OK, not only 1/n), so we may still assume that B′ is small. We want B ⊂ B′. This will be guaranteed if the difference between Pr[N_τ(x) ∈ A] and Pr[N_{τ′}(x) ∈ A] is less than 1/(2n). To use the coupling argument, let us assume that N_τ and N_{τ′} are defined on the same probability space: to decide whether the noise changes the i-th bit, we generate a fresh uniformly random real in [0, 1] and compare it with the thresholds τ and τ′. The comparisons give different results only if this random real falls into the gap between τ and τ′. Using the union bound over all n bits, we conclude that in this setting Pr[N_τ(x) ≠ N_{τ′}(x)] is bounded by n·|τ − τ′|. Therefore, if the approximation error |τ − τ′| is less than 1/(2n²), we get the desired result, and to specify a τ′ that approximates τ with this precision we need only O(log n) bits. This gives us the following statement:
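A tiny sketch (ours) of the coupling just described: both noises are driven by the same uniform reals, so the two noisy strings can differ in a position only when that real falls between the two thresholds, which happens with probability |τ − τ′| per bit.

```python
import random

def coupled_noise(x, tau, tau_prime, rng):
    """Apply N_tau and N_tau' to x using the same uniform reals (a coupling)."""
    us = [rng.random() for _ in x]
    y1 = [b ^ (u < tau) for b, u in zip(x, us)]
    y2 = [b ^ (u < tau_prime) for b, u in zip(x, us)]
    return y1, y2

rng = random.Random(2)
n = 10_000
tau, tau_prime = 0.123456, 0.1235       # tau_prime: a short rational approximation
y1, y2 = coupled_noise([0] * n, tau, tau_prime, rng)
disagreements = sum(a != b for a, b in zip(y1, y2))
print("disagreeing positions:", disagreements,
      "(expected about n*|tau - tau'| =", round(n * abs(tau - tau_prime), 3), ")")
```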

for every string x of length n with C(x) ≥ H(p)n + O(log n), we have

C(N_τ(x)) ≥ H(q)n − o(n)

with probability at least 1 − 1/n.

The only difference with the statement of Theorem 2 is that we have the stronger requirement C(x) ≥ H(p)n + O(log n) instead of C(x) ≥ H(p)n. To compensate for this, we need to decrease p a bit and apply the statement we have proven to some p′ < p with H(p′)n = H(p)n − O(log n). Then the corresponding value of H(N(τ, p′)) also should be changed: we get a point on the curve (Figure 1) to the left of the original point (H(p), H(N(τ, p))). Note that the slope of the curve is bounded by 1 (this is the case at the right end, where the curve reaches the point (1, 1): since the curve is above the diagonal, the slope there is at most 1, and the slope increases along the curve due to convexity). Therefore, the difference between H(N(τ, p))n and H(N(τ, p′))n is also O(log n) and is absorbed by the bigger term o(n).

Theorem 2 is proven.

In the next section we apply our technique to get some related results about infinite bit sequences and their effective Hausdorff dimension. We finish the part about finite strings with the following natural question.

Question 1.

Fix some x and apply random noise N_τ. The complexity C(N_τ(x)) becomes a random variable. What is the distribution of this variable? The blowing-up lemma implies that it is concentrated around some value. Indeed, if we look at the strings below the ε-quantile and above the (1 − ε)-quantile (for a fixed ε > 0), the blowing-up lemma guarantees that the Hamming distance between these two sets is at most O(√n), and therefore the corresponding thresholds for Kolmogorov complexity differ at most by O(√n · log n) (recall that for two strings of length n that differ in t positions, their complexities differ at most by O(t log n), since it is enough to add the information about t positions, and each position can be encoded by log n bits).

So with high probability the complexity of N_τ(x) is concentrated around some value (defined up to O(√n · log n) precision). For each x we get some number (the expected complexity, with guaranteed concentration) that depends not only on n and C(x) but on some more specific properties of x. What are these properties? Among the properties of this type there is the Vitányi–Vereshchagin profile curve for balls, the minimal complexity in the neighborhood as a function of the radius (see [13, section 14.4]); is it somehow related?

As we have mentioned, this question is open also for the maximal complexity in τn-balls around x, not only for the typical complexity after τ-noise.

5 Infinite sequences and Hausdorff dimension

Let ω be an infinite bit sequence. The effective Hausdorff dimension of ω is defined as

lim inf_{n→∞} C(ω₁ … ωₙ)/n.

A natural question arises: what happens with the Hausdorff dimension of a sequence when each of its bits is independently changed with some probability τ? The following result states that the dimension increases with probability 1 (assuming the dimension was less than 1, of course), and the guaranteed increase follows the same curve as for finite sequences.

Theorem 3.

Let τ ∈ (0, 1/2) and p ∈ (0, 1/2] be some reals, and let q = N(τ, p). Let ω be an infinite sequence that has effective Hausdorff dimension at least H(p). Then the effective Hausdorff dimension of the sequence N_τ(ω) that is obtained from ω by applying random τ-noise independently to each position is at least H(q) with probability 1.

Proof.

It is enough to show, for every ε > 0, that the dimension of N_τ(ω) is at least H(q) − ε with probability 1. Consider p′ < p so that the pair (H(p′), H(N(τ, p′))) lies on the boundary curve and H(N(τ, p′)) ≥ H(q) − ε. By the definition of the effective Hausdorff dimension, we know that C(ω₁ … ωₙ) ≥ H(p′)n for all sufficiently large n. Then Theorem 2 can be applied to the prefix ω₁ … ωₙ and p′. It guarantees that with probability at least 1 − 1/n the changed prefix has complexity at least H(N(τ, p′))n − o(n). Moreover, as we have said, the same is true with probability at least 1 − 1/n². This improvement is important for us: the series Σ 1/n² converges, so the Borel–Cantelli lemma says that with probability 1 only finitely many prefixes have complexity less than H(N(τ, p′))n − o(n); therefore the dimension of N_τ(ω) is at least H(N(τ, p′)) ≥ H(q) − ε with probability 1. ∎

In the next result we randomly change bits with probabilities depending on the bit position. The probability of change in the i-th position converges to 0 as i → ∞. This guarantees that with probability 1 we get a sequence that is Besicovitch-close to the given one. Recall that the Besicovitch distance between two bit sequences ω and ω′ is defined as

lim sup_{n→∞} d(ω₁ … ωₙ, ω′₁ … ω′ₙ)/n

(where d stands for the Hamming distance). So the distance is 0 if the fraction of different bits in the n-bit prefixes of the two sequences converges to 0 as n → ∞. The strong law of large numbers implies that if we start with some sequence ω and change its i-th bit independently with probability τᵢ, where τᵢ → 0, we get (with probability 1) a sequence ω′ such that the Besicovitch distance between ω and ω′ is 0. This allows us to prove the following result using a probabilistic argument.

Theorem 4.

Let be a bit sequence whose effective Hausdorff dimension is at least for some . Let be a sequence of positive reals such that . Then there exists a sequence such that:

  • the Besicovitch distance between and is ;

  • is at least for all sufficiently large .

Proof.

For this result we use some decreasing sequence (τᵢ) of noise probabilities and change the i-th bit independently with probability τᵢ. Since τᵢ → 0, with probability 1 the changed sequence is Besicovitch-equivalent (distance 0) to the original one. It remains to prove that the probability of the last claim (the lower bound for the complexities of prefixes) is also 1 for the changed sequence, if we choose the τᵢ in a suitable way.

To use different τᵢ for different i, we have to look again at our arguments. We start with Proposition 2: its proof (see Appendix B) remains valid if each bit is changed independently with a probability depending on the bit's position (but at least τ). Indeed, for every τ′ ≥ τ the corresponding τ′-curve is above the τ-curve, so the pairs of entropies (original bit, bit with noise) are above the τ-curve and we may apply the same convexity argument.

The derivation of the combinatorial statement (first the weak one, then the strong one) also remains unchanged. The proof of the weak version does not mention the exact nature of the noise at all; in the strong version we use only that different bits are independent (to apply the McDiarmid inequality and the blowing-up lemma). The only problem arises when we derive the complexity version from the combinatorial one. In our argument we need to know τ (or some approximation of it) to enumerate the set B′. If each bit has its own value τᵢ, even one bit to specify each of these values is too much for us.

To overcome this difficulty, let us agree that we start with the value τ₁, then change it to τ₂ at some point, then to τ₃, etc. If for the i-th bit we use τ_{k(i)}, then to specify all the values used for the first n bits we need to specify only the moments when the value changes, which requires O(log n) bits per change (each moment of change requires O(log n) bits). For each k we choose a pair (p_k, q_k) on the τ_k-curve. To decide when we can start using the value τ_k, we wait until the complexity assumption about the prefixes becomes true and stays true forever, and also until the corresponding error terms become small enough and stay small. Note that the information about the switching moments made so far is fixed when we decide when to start using τ_k, so such a moment can be found. In this way we guarantee that the probability that the n-bit prefix of the changed sequence has complexity more than the required bound is at least 1 − 1/n² (we need a converging series, so we use the bound with 1/n²), and it remains to apply the Borel–Cantelli lemma. ∎

Theorem 4 implies that for every ω that has effective Hausdorff dimension α there exists a Besicovitch-equivalent sequence that is α-random in the corresponding sense (due to the complexity criterion for such randomness, see [5]), and we get the result of [5, Theorem 2.5] as a corollary. Moreover, we can get this result in a stronger version than in [5], since for slowly converging sequences we get strong randomness instead of the weak randomness used in [5]. (For the definition of weak and strong randomness and for the complexity criteria for them, see [3, Section 13.5].)

Acknowledgements

The authors are grateful to the participants and organizers of the Heidelberg Kolmogorov complexity program where the question of the complexity increase was raised, and to all colleagues (from the ESCAPE team, LIRMM, Montpellier, the Kolmogorov seminar and the HSE Theoretical Computer Science Group, and other places) who participated in the discussions, in particular to Bruno Bauwens, Noam Greenberg, Konstantin Makarychev, Yury Makarychev, Joseph Miller, Alexey Milovanov, Ilya Razenshteyn, Andrei Romashchenko, Nikolai Vereshchagin, and Linda Brown Westrick.

Special thanks to Fedor Nazarov, who kindly allowed us to include his argument (using Fourier series), and, last but not least, to Peter Gács, who explained to us how the tools from [1] can be used to obtain the desired result about Kolmogorov complexity.

Appendix A: Simpler arguments and weaker bounds

If we are interested only in some increase of complexity and do not insist on the optimal lower bound, simpler arguments (that do not involve entropy considerations and just prove the combinatorial statement with a weaker bound) are enough. In this section we provide two arguments of this type; the corresponding regions of parameters are shown in Figure 2 (together with the optimal bound of Theorem 2).

Figure 2: Bounds that could be obtained by different techniques

Using Fourier analysis

We start with a proof (suggested by Fedor Nazarov; see http://mathoverflow.net/questions/247193/union-of-almost-hamming-balls) of a weak version of Proposition 6, showing that for every τ ∈ (0, 1/2) and every α < 1 there exists some δ > 0 such that the required bound |B| ≤ 2^{(α−δ)n} is valid for every A of size at most 2^{αn}.

Every real-valued function on the Boolean hypercube 𝔹^n, identified with {−1, 1}^n and considered as a multiplicative group in this section, can be written in the standard Fourier basis:

f(x) = Σ_{S ⊆ {1,…,n}} f̂(S) χ_S(x),

where f̂(S) are the Fourier coefficients, and χ_S(x) = Π_{i ∈ S} x_i. The functions χ_S are the characters of the Boolean cube as a multiplicative group. They form an orthonormal basis in the space of real-valued functions on 𝔹^n with respect to the following inner product:

⟨f, g⟩ = 2^{−n} Σ_x f(x) g(x).

This Fourier representation will be useful for us, since the representation of the convolution of two functions is the point-wise product of their representations: (f ∗ g)^(S) = f̂(S) · ĝ(S), where the convolution is defined as

(f ∗ g)(x) = 2^{−n} Σ_y f(y) g(xy^{−1})

(in fact, in our case y^{−1} = y).

For a set A ⊆ 𝔹^n we are interested in the probability

Pr[N_τ(x) ∈ A]

as a function of x. This function is the convolution of the indicator function of the set (equal to 1 inside the set and 0 outside) and the distribution of the noise, multiplied by 2^n (since we divide by 2^n when computing the expectation in the convolution):

Pr[N_τ(x) ∈ A] = 2^n (1_A ∗ ν)(x),

where ν(z) = τ^k (1 − τ)^{n−k} for a point z with k coordinates different from e. Here e is the unit of the group, i.e., e = (1, …, 1). The Fourier coefficient ν̂(S) is easy to compute:

ν̂(S) = 2^{−n} Σ_z ν(z) χ_S(z) = 2^{−n} E[χ_S(z)],

and both functions ν and χ_S are products of functions depending on one coordinate:

ν(z) = Π_i ν_i(z_i), where ν_i(1) = 1 − τ and ν_i(−1) = τ, and χ_S(z) = Π_i χ_{S,i}(z_i),

where χ_{S,i} is the constant 1 if i ∉ S, and χ_{S,i}(z_i) = z_i for i ∈ S. Due to independence, the expectation of the product is a product of expectations; they are 1 for i ∉ S and (1 − τ) − τ = 1 − 2τ for i ∈ S, so

ν̂(S) = 2^{−n} (1 − 2τ)^{|S|}.

In other terms, noise (convolution with ν) decreases the S-th coefficient of the Fourier transform by multiplying it by (1 − 2τ)^{|S|}. We need to apply noise to the indicator function of A, which we denote by a, and get a bound for the number of points where the resulting function exceeds 1/2.
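A brute-force check of this fact on a small cube (ours, not from the paper; it uses the equivalent {0,1} encoding with χ_S(x) = (−1)^{Σ_{i∈S} x_i}): the probability Pr[N_τ(x) ∈ A], computed directly, coincides with Σ_S â(S) (1 − 2τ)^{|S|} χ_S(x). The toy set A and the point x are arbitrary choices.

```python
from itertools import combinations, product

def chi(S, x):
    """Character chi_S(x) = (-1) ** (sum of x_i over i in S), for x in {0,1}^n."""
    return -1 if sum(x[i] for i in S) % 2 else 1

def fourier_coeff(f, S, n):
    """hat f(S) = 2^{-n} * sum_x f(x) * chi_S(x)."""
    return sum(f[x] * chi(S, x) for x in product((0, 1), repeat=n)) / 2 ** n

def noise_prob(A, x, tau):
    """Pr[N_tau(x) in A], computed directly by enumerating A."""
    n = len(x)
    return sum(tau ** d * (1 - tau) ** (n - d)
               for y in A
               for d in [sum(a != b for a, b in zip(x, y))])

n, tau = 6, 0.1
A = {y for y in product((0, 1), repeat=n) if sum(y) <= 1}       # a small toy set
a = {x: 1 if x in A else 0 for x in product((0, 1), repeat=n)}  # its indicator
subsets = [S for k in range(n + 1) for S in combinations(range(n), k)]

x = (1, 0, 0, 1, 0, 0)
direct = noise_prob(A, x, tau)
via_fourier = sum(fourier_coeff(a, S, n) * (1 - 2 * tau) ** len(S) * chi(S, x)
                  for S in subsets)
print(round(direct, 10), round(via_fourier, 10))   # the two numbers coincide
```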

Why cannot this function be relatively large (greater than 1/2) on a large set B? We know that

Pr[N_τ(x) ∈ A] = Σ_S â(S) (1 − 2τ)^{|S|} χ_S(x).

This sum consists of 2^n terms (its coefficients form a vector of length 2^n) and can be split into two parts: the part for "small" S, where |S| < k, and the part for "large" S, where |S| ≥ k. Here k is some threshold to be chosen later in such a way that the first part (for small S) does not exceed, say, 1/4 for all x. Then the second part should exceed 1/4 everywhere on B, and this makes the ℓ₂-norm of the second part (as a vector of the corresponding coefficients) large, while all the coefficients in the second part are multiplied by small factors not exceeding (1 − 2τ)^k.

How should we choose the threshold k? The coefficient â(∅) equals |A|/2^n, the uniform measure of A, and for all other coefficients we have |â(S)| ≤ |A|/2^n as well. The size (the number of terms) of the first part is the number of sets of cardinality less than k, and is bounded by 2^{H(k/n)n} (for k ≤ n/2). Therefore, if we choose k = ρn in such a way that

H(ρ) < 1 − α,

we achieve our goal for large n: the first part of the sum never exceeds 2^{H(ρ)n} · 2^{(α−1)n} ≤ 1/4.

Now the second part: compared to the same part of the sum for a, we have all the coefficients multiplied by (1 − 2τ)^k or smaller factors, so the ℓ₂-norm of this part is bounded:

‖second part‖₂ ≤ (1 − 2τ)^k ‖a‖₂ = (1 − 2τ)^k √(|A|/2^n).

On the other hand, if the second part exceeds 1/4 inside B, we have the lower bound:

‖second part‖₂ ≥ (1/4) √(|B|/2^n).

In this way we get

(1/4) √(|B|/2^n) ≤ (1 − 2τ)^k √(|A|/2^n),

or

|B| ≤ 16 (1 − 2τ)^{2k} |A|,

where k = ρn is chosen in such a way that H(ρ) < 1 − α. For |A| ≤ 2^{αn} we have

|B| ≤ 16 · (1 − 2τ)^{2ρn} · 2^{αn}.

We see that the first factor is exponentially small since k is proportional to n: (1 − 2τ)^{2ρn} = 2^{−cn} for some c > 0 (here ρ can be taken as any number between 0 and 1/2 with H(ρ) < 1 − α, e.g., a preimage of a value below 1 − α under H). So we get the required bound |B| ≤ 2^{(α−δ)n} for some δ > 0, as promised.

Using hypercontractivity

We can get a better bound using the two-function hypercontractivity inequality for uniform ±1 bits, whose proof can be found in [12, chapter 10]:

Proposition 7 (Two-function hypercontractivity inequality).

Let f, g : {−1, 1}^n → ℝ, let r, s ≥ 0, and assume that 0 ≤ ρ ≤ √(rs) ≤ 1. Then

E[f(x) g(y)] ≤ ‖f‖_{1+r} ‖g‖_{1+s}.

Here the distribution of x is the uniform distribution in {−1, 1}^n, and y is obtained from x by applying noise: each bit is independently flipped with probability τ, so x and y are ρ-correlated with ρ = 1 − 2τ. The same joint distribution of the pair (x, y) can be obtained in a symmetric way, starting from y. The notation ‖f‖_p denotes the p-norm: ‖f‖_p = (E |f(x)|^p)^{1/p}.