Genericity and Rigidity for Slow Entropy Transformations

06/27/2020
by Terry Adams, et al.
IEEE

The notion of slow entropy, both upper and lower slow entropy, was defined by Katok and Thouvenot as a more refined measure of complexity for dynamical systems than the classical Kolmogorov-Sinai entropy. For any subexponential rate function a_n(t), we prove there exists a generic class of invertible measure preserving systems such that the lower slow entropy is zero and the upper slow entropy is infinite. Also, given any subexponential rate a_n(t), we show there exists a rigid, weak mixing, invertible system such that the lower slow entropy is infinite with respect to a_n(t). This gives a general solution to a question on the existence of rigid transformations with positive polynomial upper slow entropy. Finally, we connect slow entropy with the notion of entropy convergence rate presented by Blume. In particular, we show slow entropy is a strictly stronger notion of complexity and give examples which have zero upper slow entropy, but also have an arbitrary sublinear positive entropy convergence rate.



1. Introduction

The notion of slow entropy was introduced by Katok and Thouvenot in [katok1997slow] for amenable discrete group actions. It generalizes the classical notion of Kolmogorov-Sinai entropy [Kolmogorov, Sinai2] for Z-actions and gives a method for distinguishing the complexity of transformations with zero Kolmogorov-Sinai entropy (the Kolmogorov-Sinai entropy of a transformation is referred to simply as the entropy of the transformation). The recent survey [KKW2020] gives a general account of several extensions of entropy, including a comprehensive background on slow entropy. Slow entropy has been computed for several examples, including compact group rotations, Chacon-3 [Fr70], the Thue-Morse system and the Rudin-Shapiro system. In [ferenczi1997measure], it is shown that the lower slow entropy of any rank-one transformation is less than or equal to 2. Also, in [ferenczi1996rank], it is shown that there exist rank-one transformations with infinite upper slow entropy with respect to any polynomial. In [kanigowski2018slow], Kanigowski obtains more precise upper bounds on the slow entropy of local rank-one flows. Also, in [kanigowski2019slow], the authors obtain polynomial slow entropies for unipotent flows.

In [KKW2020], the following question is given:

Question 6.1.2. Is it possible for the upper slow entropy of a rigid transformation to be positive with respect to the polynomial rate $a_n(t) = n^t$?

We give a positive answer to this question. Given any subexponential rate, we show that a generic transformation has infinite upper slow entropy with respect to that rate. We say a family $a_n(t)$, for $n \in \mathbb{N}$ and $t > 0$, is subexponential if given $c > 0$ and $t > 0$, $\lim_{n \to \infty} a_n(t)e^{-cn} = 0$. We will only consider families $a_n(t)$ monotone in $n$ and $t$. Let $(X, \mathcal{B}, \mu)$ be a standard probability space (i.e., isomorphic to the unit interval with Lebesgue measure). Also, let $M$ denote the set of invertible measure preserving transformations of $(X, \mathcal{B}, \mu)$. We consider the weak topology on $M$, which is induced by the strong operator topology on the space of Koopman operators. One of our three main results is the following.
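As a quick numerical sanity check on the subexponential condition, one can sample the product $a_n(t)e^{-cn}$, which should tend to zero for every $c > 0$; the rates and constants below are illustrative choices, not ones taken from the text:

```python
import math

def subexp_samples(a, c=0.01, t=3.0, n_values=(10, 100, 1000, 10000)):
    """Sample a_n(t) * e^{-c n} at increasing n; for a subexponential
    rate this product should tend to zero for every fixed c > 0."""
    return [a(n, t) * math.exp(-c * n) for n in n_values]

# Polynomial rate a_n(t) = n^t: subexponential (product dies off).
poly = subexp_samples(lambda n, t: n ** t)
# Genuinely exponential rate a_n(t) = e^{t n} with t > c: not subexponential.
expo = subexp_samples(lambda n, t: math.exp(t * n), c=0.01, t=0.02)
```

The polynomial samples eventually collapse toward zero, while the exponential samples blow up, matching the dichotomy the definition is meant to capture.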

Theorem 1.

Let $a_n(t)$ be any subexponential rate function. There exists a dense $G_\delta$ subset of the invertible measure preserving transformations such that each transformation $T$ in it has infinite upper slow entropy with respect to $a_n(t)$.

Thus, the generic transformation answers question 6.1.2 in the affirmative, since the generic transformation is known to be weak mixing and rigid. Our proof is constructive and provides a recipe for constructing rigid rank-one transformations with infinite upper slow entropy.

We show that there is a generic class of transformations such that the lower slow entropy is zero with respect to a given divergent rate.

Theorem 2.

Suppose $a_n(t)$ is a rate such that $\lim_{n \to \infty} a_n(t) = \infty$ for each $t > 0$. There exists a dense $G_\delta$ subset of the invertible measure preserving transformations such that each transformation $T$ in it has zero lower slow entropy with respect to $a_n(t)$.

This shows that for any slow rate, the generic transformation has infinitely many time spans where the complexity is sublinear. This is due to “super” rigidity times for a typical transformation. This raises the question of whether there exists an invertible rigid measure preserving transformation with infinite polynomial lower slow entropy. We answer this question by constructing examples with infinite subexponential lower slow entropy in section 5. This also answers question 6.1.2.

Theorem 3.

There exists a family of rigid, weak mixing transformations such that, given any subexponential rate $a_n(t)$, there is a transformation in the family with infinite lower slow entropy with respect to $a_n(t)$.

In the final section, we make the connection with the entropy convergence rate defined by Frank Blume in [blume2012relation].

2. Preliminaries

We describe the setup and then give a few lemmas used in the proofs of our main results.

2.1. Definitions

Given an alphabet $\Sigma$, a codeword of length $n$ is a vector $w = (w_0, w_1, \dots, w_{n-1})$ such that $w_i \in \Sigma$ for $0 \le i < n$. Our codewords will be obtained from a measure preserving system $(X, \mathcal{B}, \mu, T)$ and finite partition $P = \{p_1, \dots, p_k\}$. In this case, we will consider the alphabet to be $\{1, \dots, k\}$. Given $x \in X$ and $n \in \mathbb{N}$, define the codeword $w(x, n)$ such that $w_i(x, n) = j$ if $T^i x \in p_j$. When using this notation, the transformation $T$ will be fixed.

Let $w, w'$ be codewords of length $n$. The (normalized) Hamming distance is defined as:

$$\bar d(w, w') = \frac{1}{n}\,\#\{\, 0 \le i < n : w_i \neq w'_i \,\}.$$
Given a codeword $w$ of length $n$ and $\epsilon > 0$, an $\epsilon$-ball is the subset of codewords $w'$ such that $\bar d(w, w') < \epsilon$. We will denote the $\epsilon$-ball as $B(w, \epsilon)$. Given a transformation $T$ and partition $P$, define

$$\mu\big(B(w, \epsilon)\big) = \mu\big(\{\, x \in X : \bar d\big(w(x, n), w\big) < \epsilon \,\}\big),$$

the measure of the set of points whose length-$n$ $P$-names lie within $\epsilon$ of $w$.
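The normalized Hamming distance and $\epsilon$-ball membership are straightforward to compute; a minimal sketch (the codewords below are arbitrary sample data):

```python
def hamming(w1, w2):
    """Normalized Hamming distance between equal-length codewords."""
    assert len(w1) == len(w2)
    return sum(a != b for a, b in zip(w1, w2)) / len(w1)

w  = "0110100110010110"   # sample codeword (a Thue-Morse prefix)
wp = "0110100110010111"   # differs from w only in the last symbol
d = hamming(w, wp)        # 1/16 = 0.0625
in_ball = d < 0.1         # wp lies in the 0.1-ball B(w, 0.1)
```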

Given $n \in \mathbb{N}$, $0 < \epsilon < 1$, finite partition $P$ and dynamical system $(X, \mathcal{B}, \mu, T)$, define $S_P(T, n, \epsilon)$ as the minimal number of $\epsilon$-balls whose corresponding sets of points cover $X$ up to measure $\epsilon$:

$$S_P(T, n, \epsilon) = \min\Big\{\, m : \exists\, w^{(1)}, \dots, w^{(m)} \text{ with } \mu\Big(\bigcup_{i=1}^{m} B\big(w^{(i)}, \epsilon\big)\Big) > 1 - \epsilon \,\Big\}.$$

Now we give the definition of upper and lower slow entropy for $\mathbb{Z}$-actions. For more general discrete amenable group actions, the interested reader may see the survey [KKW2020]. Also, in [hochman2012slow], slow entropy is used to construct infinite-measure preserving $\mathbb{Z}^k$-actions which cannot be realized as a group of diffeomorphisms of a compact manifold preserving a Borel measure. Let $T$ be an invertible measure preserving transformation defined on a standard probability space $(X, \mathcal{B}, \mu)$. Let $a_n(t)$ be a family of positive sequences monotone in $n$ and $t$ such that $\lim_{n \to \infty} a_n(t) = \infty$ for $t > 0$. Define the upper (measure-theoretic) slow entropy of $T$ with respect to a finite partition $P$ as

$$\overline{\operatorname{ent}}^{\,a}_{\mu}(T, P) = \lim_{\epsilon \to 0^+} \sup\Big\{\, t > 0 : \limsup_{n \to \infty} \frac{S_P(T, n, \epsilon)}{a_n(t)} > 0 \,\Big\}.$$

The upper slow entropy of $T$ with respect to $a_n(t)$ is defined as

$$\overline{\operatorname{ent}}^{\,a}_{\mu}(T) = \sup_{P} \overline{\operatorname{ent}}^{\,a}_{\mu}(T, P),$$

where the supremum is taken over finite measurable partitions $P$.

To define the lower slow entropy $\underline{\operatorname{ent}}^{\,a}_{\mu}(T)$ of $T$, replace $\limsup$ in the definition above with $\liminf$.
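To make the counting quantity concrete, the sketch below estimates how many $\epsilon$-Hamming balls are needed to cover the names generated by an irrational circle rotation with the two-set partition $\{[0, 1/2), [1/2, 1)\}$. The greedy covering routine only gives an upper bound on the minimal covering number, and all parameter choices here are illustrative:

```python
def name(x, alpha, n):
    """Length-n coding of x under the rotation x -> x + alpha (mod 1)
    with the partition [0, 1/2), [1/2, 1)."""
    w = []
    for _ in range(n):
        w.append(0 if x < 0.5 else 1)
        x = (x + alpha) % 1.0
    return tuple(w)

def covering_number(words, eps):
    """Greedy count of eps-Hamming balls covering all sampled words
    (an upper bound on the minimal covering number)."""
    n = len(words[0])
    centers = []
    for w in words:
        if all(sum(a != b for a, b in zip(w, c)) / n >= eps for c in centers):
            centers.append(w)
    return len(centers)

alpha = (5 ** 0.5 - 1) / 2                       # golden-ratio rotation
n = 200
words = [name(i / 500, alpha, n) for i in range(500)]
balls = covering_number(words, 0.1)
```

The covering count stays far below the number of sampled points (and the number of distinct names grows only linearly in $n$), consistent with the low complexity of compact group rotations noted in the introduction.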

2.2. Supporting lemmas

Define the binary entropy function

$$H(x) = -x \log_2 x - (1 - x) \log_2 (1 - x), \quad x \in (0, 1),$$

with $H(0) = H(1) = 0$.

We give some preliminary lemmas involving binary codewords and measurable partitions that are used in the main results.

Lemma 2.3.

Suppose are binary words of length with Hamming distance . Let and be the set of all codewords consisting of all possible sequences of words from of length . Given with , the minimum number of -balls required to cover of the words in satisfies:

Proof.

The proof follows from a standard bound on the size of Hamming balls [macwilliams1977theory] (p.310). Suppose are a minimum number of centers such that -balls cover at least of codewords in . For each , choose . Thus, and the -balls cover at least of the codewords in .

This reduces the problem to a basic Hamming ball size question. Since all words are generated by , we can map to 0 and to 1, and consider the number of Hamming balls needed to cover of all binary words of length . Thus, if at least words differ, then the distance is greater than or equal to . Also, . By [macwilliams1977theory](p.310), a Hamming ball of radius has a volume less than or equal to:

Therefore, the minimum number of balls required to cover at least of the space is:
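The Hamming-ball volume bound from [macwilliams1977theory] used above can be checked numerically: for radius $r \le n/2$, the ball volume is at most $2^{nH(r/n)}$ (and, as is standard, at least $2^{nH(r/n)}/(n+1)$). A small sketch, with the sample $n$ and $r$ chosen arbitrarily:

```python
from math import comb, log2

def H(x):
    """Binary entropy function."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def ball_volume(n, r):
    """Exact number of binary words within Hamming distance r of a center."""
    return sum(comb(n, k) for k in range(r + 1))

n, r = 100, 20
vol = ball_volume(n, r)            # exact volume of a radius-20 ball
bound = 2 ** (n * H(r / n))        # entropy upper bound 2^{n H(r/n)}
```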

Lemma 2.4.

Suppose the setup is similar to Lemma 2.3 and there are two generating words of length with distance . Suppose is the set of codewords consisting of all possible sequences of blocks of either or . Let such that . Define and measure such that for . Suppose is a map satisfying:

The minimum number of -Hamming balls such that

satisfies

Proof.

Let . For , . By Lemma 2.3,

-balls are needed to cover of words. Thus, the total number of -balls needed to cover mass of words is at least:

The following lemma is used in the proof of Proposition 4.5.

Lemma 2.5.

Let and . Let be an invertible measure preserving system and a set of positive measure such that is a disjoint union (except for a set of measure zero). Suppose and are partitions such that

(2.1)

Then for ,

Proof.

Define

Define

We show . Otherwise, for and such that for , this contributes to the sum (2.1). Thus, for , the number of such gives measure greater than or equal to . Adding up over all gives measure greater than or equal to . For a.e. , and this holds for . ∎

The following lemma is a more general version of Lemma 2.5 and used in multiple places throughout this paper. Given two ordered partitions and , let

Lemma 2.6.

Let be ergodic. Let and . Suppose and are ordered partitions such that

(2.2)

Then for ,

Proof.

Let such that

(2.3)

Define

Let and be a Rokhlin tower such that . Define

We show . Otherwise, for each such that for , this contributes to the sum (2.3). Thus, for , the number of such gives measure greater than . Adding up over all gives measure greater than . For , and this holds for . By showing the analogous result for defined as:

then . Hence,

Therefore, since may be chosen arbitrarily small, our claim holds. ∎

2.7. Infinite rank

A result of Ferenczi [ferenczi1997measure] shows that the lower slow entropy of a rank-one transformation is less than or equal to 2 with respect to . Thus, our examples in section 5.4 are not rank-one and instead have infinite rank. We will adapt the technique of independent cutting and stacking to construct rigid transformations with infinite lower slow entropy. Independent cutting and stacking was originally defined in [friedman1972mixing, shields1973cutting]. A variation of this technique is used in [katok1997slow] to obtain different types of important counterexamples. For a general guide on the cutting and stacking technique, see [Fri92].

3. Generic class with zero lower slow entropy

Let be a sequence of real numbers such that for and for . For and any finite partition , define

(3.1)
Proposition 3.1.

For and finite partition , the set is open in the weak topology on .

Proof.

Let and , be such that . Let . Choose such that . Let . In the weak topology, choose an open set containing such that for , , and ,

(3.2)

We will prove inductively in for that

(3.3)

The case follows directly from (3.2):

Also, the case is trivial. Suppose equation (3.3) holds for . Below shows it holds for :

For , let

Thus,

(3.4)
(3.5)
(3.6)

Hence, if , . Each has the same -name under and . Suppose is such that satisfies and

Let and . Since , then

Therefore, since , and we are done. ∎

Now we prove the density of the class .

Proposition 3.2.

For and finite partition , the set is dense in the weak topology on .

Proof.

Let be the partition into elements for . We can discard elements with zero measure. Since rank-one transformations are dense in , let be a rank-one transformation and let . Let and define . Choose a rank-one column for such that

  1. ,

  2. ,

  3. there exist disjoint collections such that .

Let and . Now we show how to construct a transformation . Since will differ by inside the top level or outside the column, then will be within of . Choose such that and for ,

Choose such that . Cut column into columns of equal width and stack from left to right. Call this column which has height . Let . By Lemma 2.6, since

then

Let be the union of levels in except for the top levels. For , gives at most distinct vectors. Also, -balls centered at these words will cover . Precisely,

Since , then . Therefore, we are done. ∎

Theorem 4.

Suppose $a_n(t)$ is such that $\lim_{n \to \infty} a_n(t) = \infty$ for $t > 0$, and $a_n(t)$ is monotone in $t$. There exists a dense $G_\delta$ subset of transformations, each of which has zero lower slow entropy with respect to $a_n(t)$.

Proof.

Let be a sequence of nontrivial measurable partitions such that for each , the collection is dense in the class of all measurable partitions with nontrivial elements. By Proposition 3.2, for , the set is dense, and also open by Proposition 3.1. Thus,

is a dense $G_\delta$. Given a nontrivial measurable partition and , choose such that

For ,

By Lemma 2.6, . Therefore, for , the lower slow entropy is zero with respect to . ∎

Corollary 3.3.

In the weak topology, the generic transformation in is rigid, weak mixing, rank-one and has zero polynomial lower slow entropy.

4. Generic class with infinite upper slow entropy

The transformations in this section are constructed by including alternating stages of cutting and stacking. Suppose is represented by a single Rokhlin column of height .

4.1. Two approximately independent words

Cut column into two subcolumns and of equal width. Given , cut into subcolumns of equal width, stack from left to right, and place spacers on top. Cut into subcolumns of equal width and place a single spacer on top of each subcolumn, then stack from left to right. After this stage, there are two columns of height .
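A cutting-and-stacking step can be modeled symbolically, reading a column as its list of levels from bottom to top: cutting into $r$ subcolumns of equal width and stacking left to right repeats the levels $r$ times, with any spacers interleaved on top of each subcolumn. This toy sketch (the function and its arguments are illustrative, not the paper's specific parameters) ignores widths and measure:

```python
def cut_and_stack(levels, r, spacers=0):
    """One cutting-and-stacking stage: cut a column (its list of levels,
    read bottom to top) into r subcolumns of equal width and stack them
    left to right, placing `spacers` spacer levels atop each subcolumn."""
    col = []
    for _ in range(r):
        col.extend(levels)
        col.extend(["spacer"] * spacers)
    return col

tower = ["0"]                    # start from a single level
for r in (2, 3):                 # two stages with no spacers
    tower = cut_and_stack(tower, r)
# heights multiply: 1 * 2 * 3 = 6 levels
with_spacer = cut_and_stack(["0"], 2, spacers=1)   # 4 levels
```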

4.2. Independent cutting and stacking

Independent cutting and stacking is defined similarly to [shields1973cutting]. As opposed to [shields1973cutting], here it is not necessary to use columns of different heights, since weak mixing is generic and we are establishing a generic class of transformations. Also, in section 5, we include a weak mixing stage which allows all columns to have the same height and facilitates counting of codewords. Given two columns and of height , and , independent cutting and stacking the columns times produces columns, each with height .
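Schematically, one round of independent cutting and stacking on equal-height columns produces all ordered concatenations of pairs of column names, doubling the height and squaring the number of columns. The following is only a toy model of this combinatorics (subcolumn widths and measure are ignored):

```python
from itertools import product

def independent_cut_and_stack(columns):
    """One round on equal-height columns, modeled on names: every ordered
    concatenation of two column names appears as a new column."""
    return [a + b for a, b in product(columns, repeat=2)]

cols = ["A", "B"]             # two columns, names read bottom to top
for _ in range(2):            # two rounds
    cols = independent_cut_and_stack(cols)
# 16 distinct columns, each of height 4: all words of length 4 over {A, B}
```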

4.3. Infinite upper slow entropy

Let be a nontrivial measurable 2-set partition. We construct a dense $G_\delta$ for the case where , although a similar procedure will handle the more general case where . Let be a sequence of real numbers with subexponential growth. In particular, for every , . For , define

(4.1)
Proposition 4.4.

For , the set is open in the weak topology on .

Proof.

Let and , be such that . Let . The elements of of positive measure correspond to the various -names of length . For almost every , and have the same -name under , if and only if for some . Choose such that . Let . In the weak topology, choose an open set containing such that for , , and ,

(4.2)

We will prove inductively in for that

(4.3)

The case follows directly from (4.2):

Also, the case is trivial. Suppose equation (4.3) holds for . Below shows it holds for :

For , let

Thus,

(4.4)
(4.5)
(4.6)

Hence, if , . Each has the same -name under and . Suppose is such that satisfies . Let and . Since , then

Therefore, since , then and we are done. ∎

Our density result follows.

Proposition 4.5.

For sufficiently large , and all , the set is dense in the weak topology on .

Proof.

It will be sufficient to consider . This will allow us to choose . Also, in Lemma 2.4 can be chosen . The value in Lemma 2.4 will equal for some . It will not be difficult to choose such that . Also, let . Thus,

Since rank-one transformations are dense in , let be a rank-one transformation. Let . We will show there exists within of in the weak topology. It is sufficient to construct such that . We can reset . Let and . Since is ergodic, we can choose