High order low-bit Sigma-Delta quantization for fusion frames

06/17/2020 ∙ by Zhen Gao, et al. ∙ Technische Universität München ∙ Vanderbilt University

We construct high order low-bit Sigma-Delta (ΣΔ) quantizers for the vector-valued setting of fusion frames. We prove that these ΣΔ quantizers can be stably implemented to quantize fusion frame measurements on subspaces W_n using log_2( dim(W_n)+1) bits per measurement. Signal reconstruction is performed using a version of Sobolev duals for fusion frames, and numerical experiments are given to validate the overall performance.

1. Introduction

Fusion frames provide a mathematical setting for representing signals in terms of projections onto a redundant collection of closed subspaces. Fusion frames were introduced in [11] as a tool for data fusion, distributed processing, and sensor networks, e.g., see [12, 13]. In this work we consider the question of how to perform quantization, i.e., analog-to-digital conversion, on a collection of fusion frame measurements.

Our motivation comes from the stylized sensor network in [24]. Suppose that one seeks to measure a signal over a large environment using a collection of remotely dispersed sensors that are constrained by limited power, limited computational resources, and limited ability to communicate. Each sensor is only able to make local measurements of the signal, and the goal is to communicate the local measurements to a distantly located base station where the signal can be accurately estimated. The sensor network is modeled as a fusion frame and is physically constrained in the following manner:

  • Each local measurement is a projection of the signal $x$ onto a subspace $W_n$ associated to the $n$-th sensor.

  • Each sensor has knowledge of the proximities of a small number of nearby sensors.

  • Each sensor can communicate analog signals to a small number of nearby sensors.

  • Each sensor can transmit a low-bit signal to the distant base station.

  • The base station is relatively unconstrained in power and computational resources.

Mathematically, the above sensor network problem can be formulated as a quantization problem for fusion frames, e.g., [24]. Suppose that $\{W_n\}_{n=1}^N$ are subspaces of $\mathbb{R}^d$ and suppose that each $\mathcal{A}_n \subset W_n$ is a finite quantization alphabet. Given $x \in \mathbb{R}^d$ and the orthogonal projections $P_n : \mathbb{R}^d \to W_n$, we seek an efficient algorithm for rounding the continuum-valued measurements $P_n(x)$ to finite-valued $q_n \in \mathcal{A}_n$. This rounding process is called quantization and it provides a digital representation of $x$ through $\{q_n\}_{n=1}^N$. Here, $q_n$ corresponds to the low-bit signal that the $n$-th sensor transmits to the central base station. We will focus on the case where the $q_n$ are computed sequentially and we allow the algorithm to be implemented with a limited amount of memory. The memory variables correspond to the analog signals that sensors communicate to other nearby sensors. Finally, once the quantized measurements $\{q_n\}_{n=1}^N$ have been computed, we seek a reconstruction procedure for estimating $x$; this corresponds to the role of the base station.
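
In the notation just introduced, the task can be summarized schematically as follows; the bit count per subspace refers to the alphabets constructed in Lemma 2.1 below, for which $\#\mathcal{A}_n = \dim(W_n)+1$, matching the rate stated in the abstract.

```latex
% Schematic statement of the fusion frame quantization task (notation as above).
\text{Given } y_n = P_n(x) \in W_n, \ 1 \le n \le N, \quad
\text{produce } q_n \in \mathcal{A}_n \text{ using } \log_2(\#\mathcal{A}_n) \text{ bits each},
\quad \text{then estimate } x \text{ from } (q_1, \dots, q_N).
```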

We address the above problem with a new low-bit version of Sigma-Delta (ΣΔ) quantization for fusion frames. Sigma-Delta quantization is a widely applicable class of algorithms for quantizing oversampled signal representations. Sigma-Delta quantization was introduced in [23], underwent extensive theoretical development in the engineering literature [18], and has been widely implemented in the circuitry of analog-to-digital converters [30]. Starting with [14], the mathematical literature has provided approximation-theoretic error bounds for Sigma-Delta quantization in a variety of settings, beginning with bandlimited sampling expansions [14, 15, 19, 20, 22, 32]. The best known constructions yield error bounds that decay exponentially in the bit budget [19, 15], which is also the qualitative behavior that one encounters when quantizing Nyquist-rate samples at high precision. The rate of this exponential decay, however, is provably slower for Sigma-Delta [28].

Subsequently, Sigma-Delta was generalized to time-frequency representations [33] and finite frame expansions [4]. As it turned out, the direct generalization to frames has significant limitations unless the frame under consideration has certain smoothness properties [8]. A first approach to overcome this obstacle was to recover signals using a specifically designed dual frame, the so-called Sobolev Dual [6, 26]; this approach has also been implemented for compressed sensing [21, 27, 16]. Another class of dual frames that sometimes outperform Sobolev duals are the so-called beta duals [10]. In the context of compressed sensing, Sobolev duals have inspired a convex optimization approach for recovery [31], which has also been analyzed for certain structured measurements [17].

Since fusion frames employ vector-valued measurements, our approach in Definition 3.1 may be viewed as a vector-valued analogue of Sigma-Delta quantization. For perspective, we point out related work on quantization of finite frames with complex alphabets on a lattice [5], hexagonal modulators for power electronics [29], and dynamical systems motivated by error diffusion in digital halftoning [1, 2, 3].

The work in [24] constructed and studied stable analogues of Sigma-Delta quantization in the setting of fusion frames. The first order algorithms in [24] were stably implementable using very low-bit quantization alphabets. Unfortunately, the higher order algorithms in [24] required large quantization alphabets for stability to hold. Stable high order algorithms are desirable since quantization error generally decreases as the order of a Sigma-Delta algorithm increases, e.g., [14]. The main contribution of the current work is that we provide the first examples of stable high-order low-bit Sigma-Delta quantizers for fusion frames.

Our results achieve the following:

  • We construct stable $r$th order fusion frame Sigma-Delta (FFΣΔ) algorithms with quantization alphabets that use $\log_2(\dim(W_n)+1)$ bits per subspace $W_n$; see Theorems 4.1 and 5.2. This resolves a question posed in [24].

  • We provide numerical examples to show that the FFΣΔ algorithm performs well when implemented together with a version of Sobolev dual reconstruction.

2. Fusion frames and quantization

In this section, we provide background on fusion frames and quantization.

2.1. Fusion frames

Let $\{W_n\}_{n=1}^N$ be a collection of subspaces of $\mathbb{R}^d$ and let $\{w_n\}_{n=1}^N$ be positive scalar weights. The collection $\{(W_n, w_n)\}_{n=1}^N$ is said to be a fusion frame for $\mathbb{R}^d$ with fusion frame bounds $0 < A \le B < \infty$ if

$$\forall x \in \mathbb{R}^d, \qquad A \|x\|^2 \le \sum_{n=1}^N w_n^2 \|P_n(x)\|^2 \le B \|x\|^2,$$

where $P_n : \mathbb{R}^d \to W_n$ denotes the orthogonal projection onto $W_n$. If the bounds satisfy $A = B$, then the fusion frame is said to be tight. If $w_n = 1$ for all $n$, then the fusion frame is said to be unweighted. Given a fusion frame, the associated analysis operator $T : \mathbb{R}^d \to \bigoplus_{n=1}^N W_n$ is defined by

$$T(x) = \big( w_n P_n(x) \big)_{n=1}^N.$$

The problem of recovering a signal from fusion frame measurements is equivalent to finding a left inverse to the analysis operator. There is a canonical choice of left inverse which can be described using the synthesis operator and the frame operator.

The adjoint of the analysis operator is called the synthesis operator $T^* : \bigoplus_{n=1}^N W_n \to \mathbb{R}^d$ and is defined by $T^*\big((y_n)_{n=1}^N\big) = \sum_{n=1}^N w_n y_n$. The fusion frame operator $S : \mathbb{R}^d \to \mathbb{R}^d$ is defined by $S = T^*T = \sum_{n=1}^N w_n^2 P_n$. It is well-known [11] that $S$ is a positive, invertible, self-adjoint operator. Moreover, $S^{-1}T^*$ is a left inverse to $T$ since $S^{-1}T^*T = I$. This provides the following canonical reconstruction formula for recovering $x \in \mathbb{R}^d$ from its fusion frame measurements:

$$x = S^{-1} \sum_{n=1}^N w_n^2 P_n(x).$$
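
As a quick numerical illustration of the canonical reconstruction formula, the following minimal sketch (not code from the paper) represents each subspace by an orthonormal basis, forms the unweighted fusion frame operator, and recovers a vector from its projections.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, k = 6, 12, 2   # ambient dimension, number of subspaces, subspace dimension

# Random k-dimensional subspaces W_n given by orthonormal bases U_n (d x k);
# with unit weights these form an unweighted fusion frame with high probability.
U = [np.linalg.qr(rng.standard_normal((d, k)))[0] for _ in range(N)]
P = [Un @ Un.T for Un in U]          # orthogonal projections onto W_n

x = rng.standard_normal(d)
y = [Pn @ x for Pn in P]             # fusion frame measurements y_n = P_n(x)

S = sum(P)                           # fusion frame operator S = sum_n P_n
x_rec = np.linalg.solve(S, sum(y))   # canonical reconstruction x = S^{-1} sum_n y_n
print(np.linalg.norm(x - x_rec))     # ~1e-15, i.e., exact up to round-off
```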

Although the canonical choice of left inverse is natural, other non-canonical left-inverses will be more suitable for the problem of reconstructing a signal from quantized measurements.

2.2. Norms and direct sums

The direct sum space $\bigoplus_{n=1}^N W_n$ arises naturally in the study of fusion frames. In the interest of maintaining simple notation, we use the norm symbol $\|\cdot\|$ in different contexts throughout the paper to refer to norms on both Euclidean space and direct sum spaces, as well as operator norms on such spaces.

The following list summarizes different ways in which norm notation is used throughout the paper.

  • If $x \in \mathbb{R}^d$, then $\|x\|$ denotes the Euclidean norm.

  • If $u = (u_n)_{n=1}^N$ lies in the direct sum space $\bigoplus_{n=1}^N W_n$, then $\|u\| = \big( \sum_{n=1}^N \|u_n\|^2 \big)^{1/2}$ and $\|u\|_\infty = \max_{1 \le n \le N} \|u_n\|$.

  • If $A$ is a linear operator between Euclidean spaces, between direct sum spaces, or from a direct sum space to a Euclidean space, then $\|A\|$ denotes the operator norm induced by the norms above.

2.3. Quantization

Let $x \in \mathbb{R}^d$ and suppose that $\{W_n\}_{n=1}^N$ are subspaces associated with a fusion frame for $\mathbb{R}^d$. For each $1 \le n \le N$, let $\mathcal{A}_n \subset W_n$ be a finite set which we refer to as a quantization alphabet, and let $Q_n : W_n \to \mathcal{A}_n$ be an associated vector quantizer with the property that

(2.1)

Memoryless quantization is the simplest approach to quantizing a set of fusion frame measurements $\{P_n(x)\}_{n=1}^N$. Memoryless quantization simply quantizes each $P_n(x)$ to $q_n = Q_n(P_n(x))$. See [24] for a basic discussion of the performance of memoryless quantization for fusion frames. This approach works well when the alphabets $\mathcal{A}_n$ are sufficiently fine and dense, and it is also suitable when the subspaces $W_n$ are approximately orthogonal. On the other hand, it is not well suited to our sensor network problem, which requires coarse low-bit alphabets and involves correlated subspaces $W_n$. We will see that Sigma-Delta quantization is a preferable approach.
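
For comparison with what follows, here is a minimal sketch of memoryless quantization with a nearest-point vector quantizer; the subspace representation and the alphabets are illustrative assumptions rather than the paper's specific choices.

```python
import numpy as np

def nearest_point_quantizer(w, alphabet):
    """Return the element of the (finite) alphabet closest to w."""
    return min(alphabet, key=lambda c: np.linalg.norm(w - c))

def memoryless_quantize(x, projections, alphabets):
    """Quantize each measurement P_n(x) independently, with no memory across sensors."""
    return [nearest_point_quantizer(P @ x, A) for P, A in zip(projections, alphabets)]
```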

We will make use of the low-bit quantization alphabets provided by the following lemma. These alphabets use $\log_2(\dim(W_n)+1)$ bits to quantize each subspace $W_n$. For perspective, in the scalar-valued setting, it is known that stable ΣΔ quantizers of arbitrary order can be implemented using a 1-bit quantization alphabet to quantize each scalar-valued sample [14]. The vector-valued alphabet in the following lemma provides a suitable low-bit analogue of this for fusion frames.

Lemma 2.1.

Let $W$ be an $m$-dimensional subspace of $\mathbb{R}^d$. There exists a set $\mathcal{A}$ in $W$ such that $\#\mathcal{A} = m+1$, each $c \in \mathcal{A}$ is unit-norm, and

(2.2)

Moreover, if , then for every , there exists such that

(2.3)

For references associated to Lemma 2.1, see the discussion following Lemma 1 in [24].
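
One concrete family of alphabets with $\dim(W)+1$ unit-norm elements is given by the vertices of a regular simplex centered at the origin of $W$. The sketch below builds such a set numerically; this particular construction is an illustrative assumption consistent with the cardinality and unit-norm requirements of Lemma 2.1, not necessarily the construction referenced in [24].

```python
import numpy as np

def simplex_alphabet(U):
    """Given a d x m matrix U with orthonormal columns spanning a subspace W,
    return m+1 unit-norm vectors in W forming a regular simplex centered at 0."""
    d, m = U.shape
    E = np.eye(m + 1)
    V = E - E.mean(axis=0)                            # center the m+1 vertices at the origin
    V = V / np.linalg.norm(V, axis=1, keepdims=True)  # make each vertex unit-norm
    # The centered vertices span an m-dimensional space; map it isometrically into W.
    B = np.linalg.qr(V.T)[0][:, :m]                   # orthonormal basis of that span
    return [U @ (B.T @ v) for v in V]                 # m+1 unit vectors in W, summing to 0
```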

3. Fusion frame Sigma-Delta quantization

Throughout this section we shall assume that the $W_n$ are subspaces of $\mathbb{R}^d$ and that each finite collection $\{W_n\}_{n=1}^N$ is an unweighted fusion frame for $\mathbb{R}^d$ when $N$ is sufficiently large. We also assume that each $\mathcal{A}_n \subset W_n$ is a set of vectors as in Lemma 2.1, and that $Q_n : W_n \to \mathcal{A}_n$ is a vector quantizer satisfying (2.1). Observe that by (2.1) and (2.3), one has that for arbitrary $w \in W_n$ with

(3.1)

Given $x \in \mathbb{R}^d$, we shall investigate the following algorithm for quantizing the fusion frame measurements $\{P_n(x)\}_{n \ge 1}$.

Definition 3.1 (Fusion frame Sigma-Delta algorithm).

For each , fix operators , . Initialize the state variables .

The fusion frame Sigma-Delta algorithm (FFΣΔ) takes the measurements $\{P_n(x)\}_{n \ge 1}$ as inputs and produces quantized outputs $q_n \in \mathcal{A}_n$ by running the following iteration for $n = 1, 2, \dots$

(3.2)
(3.3)

The algorithm (3.2), (3.3) may be applied to an infinite stream of inputs, but, in practice, the algorithm will usually be applied to a fusion frame of finite size and will terminate after finitely many steps. The operators must be chosen carefully for the algorithm (3.2), (3.3) to perform well. We shall later focus on a specific choice of operators in Section 5, but to understand its motivation, it is useful to first discuss reconstruction methods for the FFΣΔ algorithm and to keep the operators general for the moment.
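
To make the structure of such an iteration concrete, the following minimal sketch implements a greedy ΣΔ-type update driven by feedback operators and a vector quantizer. The operator names H[(n, j)], the zero initialization, and the exact update rule (quantize the measurement minus the accumulated feedback, then store the difference as the new state variable) are assumptions made for illustration; they follow the generic pattern behind (3.2), (3.3) rather than reproducing those equations.

```python
import numpy as np

def ff_sigma_delta(y, H, quantizers):
    """Greedy Sigma-Delta-type iteration for vector-valued measurements.

    y          : list of measurements y[n] = P_n(x) as numpy vectors
    H          : dict with H[(n, j)] a matrix mapping W_j -> W_n, for j < n
    quantizers : list of maps Q_n sending a vector in W_n to an alphabet element
    Returns the quantized outputs q and the state variables u.
    """
    q, u = [], []
    for n in range(len(y)):
        # Feedback from earlier state variables through the operators H[(n, j)].
        feedback = sum((M @ u[j] for (m, j), M in H.items() if m == n and j < n),
                       np.zeros_like(y[n]))
        v = y[n] - feedback           # quantizer input at step n
        q.append(quantizers[n](v))    # low-bit output sent to the base station
        u.append(v - q[-1])           # state variable kept in local sensor memory
    return q, u
```

With this sign convention the updates satisfy $u_n + \sum_{j<n} H_{n,j} u_j = P_n(x) - q_n$, a block lower-triangular relation of the kind used in the error analysis below.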

The fusion frame Sigma-Delta algorithm must be coupled with a reconstruction procedure for recovering $x$ from the quantized measurements $\{q_n\}$. We consider the following simple reconstruction method, which uses left inverses of fusion frame analysis operators. At step $N$ of the FFΣΔ algorithm, one has access to the quantized measurements $q_1, \dots, q_N$. Let $q^{(N)}$ denote the column vector with entries $q_1, \dots, q_N$. Since $\{W_n\}_{n=1}^N$ is a fusion frame with analysis operator $T_N$, let $L_N$ denote a left inverse of $T_N$, so that $L_N T_N x = x$ holds for all $x \in \mathbb{R}^d$. A specific choice of left inverse will be specified in Section 8, but for the current discussion let $L_N$ be an arbitrary left inverse. After step $N$ of the iteration (3.2), (3.3), we reconstruct the following estimate $\widetilde{x}_N$ from $q^{(N)}$:

(3.4)

We now introduce notation that will be useful for describing the error associated to (3.4). Let $u^{(N)}$ denote the column vector with entries $u_1, \dots, u_N$. Let $I$ denote the identity operator, and let $H_N$ denote the block operator with entries

(3.5)

Note that (3.3) and (3.5) can be expressed in matrix form as $H_N u^{(N)} = y^{(N)} - q^{(N)}$, where $y^{(N)}$ denotes the column vector with entries $P_1(x), \dots, P_N(x)$. Combining this with (3.4) allows the error $x - \widetilde{x}_N$ to be expressed as

(3.6)
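
In the notation used above, a sketch of the identity behind (3.6) reads as follows; it combines the matrix form $H_N u^{(N)} = y^{(N)} - q^{(N)}$ with a reconstruction of the form $\widetilde{x}_N = L_N q^{(N)}$ and the left inverse property $L_N T_N = I$, and the precise form in the paper may differ in constants or signs.

```latex
% Sketch of the reconstruction error identity (assumptions as stated above).
x - \widetilde{x}_N
  = L_N T_N x - L_N q^{(N)}
  = L_N \big( y^{(N)} - q^{(N)} \big)
  = L_N H_N u^{(N)} .
```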

We aim to design the operators in the quantization algorithm and the reconstruction operator so that the error given by (3.6) can be made quantifiably small. We pursue the following design goals:

  • Select the operators so that the iteration (3.2), (3.3) satisfies a stability condition which controls the norm of the state variable sequence $\{u_n\}$.

  • Select the operators and the left inverse $L_N$ so that $L_N H_N$ has small operator norm. This can be decoupled into separate steps. First, the operators are chosen to ensure that $H_N$ satisfies an $r$th order condition that expresses it in terms of a generalized $r$th order difference operator. Secondly, $L_N$ is chosen to be a Sobolev left inverse which is well adapted to the operator $H_N$.

For the above points, Section 4 discusses stability, Section 5 discusses the $r$th order property, and Section 8 discusses reconstruction with Sobolev left inverses.
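
Although the fusion frame version of the Sobolev left inverse is only specified later (in Section 8, which is not reproduced here), the finite-frame Sobolev dual recipe of [6, 26] suggests the following shape for such a reconstruction operator; the formula below is an illustrative assumption based on that scalar-frame literature, not the paper's definition.

```python
import numpy as np

def sobolev_left_inverse(T, r):
    """Sobolev-type left inverse of an analysis matrix T, following the finite-frame
    recipe L = (D^{-r} T)^+ D^{-r} from the Sobolev dual literature [6, 26], where D
    is the first-order difference matrix of compatible size.  Sketch only."""
    m = T.shape[0]
    D = np.eye(m) - np.eye(m, k=-1)
    Dinv_r = np.linalg.inv(np.linalg.matrix_power(D, r))
    # L @ T = identity whenever D^{-r} T has full column rank.
    return np.linalg.pinv(Dinv_r @ T) @ Dinv_r

# Heuristic behind the choice: the reconstruction error has the form L H u; when H acts
# like an r-th order difference D^r, the factor L D^r reduces to pinv(D^{-r} T), which
# has minimal (Frobenius) norm among all left inverses of D^{-r} T.
```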

4. Stability

The following theorem shows that the fusion frame ΣΔ algorithm is stable in the sense that control on the size of the inputs ensures control on the size of the state variables $u_n$. For perspective, the stable higher order fusion frame ΣΔ algorithm in [24] requires relatively large quantization alphabets.

Theorem 4.1.

Let be subspaces of with . Suppose that a sequence with is used as input to the algorithm (3.2), (3.3).

Suppose that , and let

Suppose that satisfies and let

If for all , then the state variables in (3.2), (3.3) satisfy for all .

Proof.

Step 1. We begin by noting that the assumption is not vacuous. The condition directly follows from the assumption. If then automatically holds. For , we rewrite

which is strictly larger than .

Step 2. Next, we note that . By the definition of , it can be verified that holds if and only if

It follows that holds if and only if

Since and , the assumption implies that , as required.

Step 3. We will prove the theorem by induction. The base case holds by the assumption that . For the inductive step, suppose that and that holds for all .

Let , so that with . If then , as required. So, it remains to consider .

When , let . Combining the definition of and the fact that the quantizer is scale invariant with (3.1), we obtain that Thus,

(4.1)

Since , and , the definition of gives that

(4.2)

Recall, we aim to show . Let By (4.1) and (4.2), it suffices to prove

For that, we note that

and

Hence it only remains to show that .

Step 4. Consider the polynomial

Since , it can be verified that the polynomial has real roots . Since , one has that for all . In particular, . Moreover, it can be checked that

Thus, .

Step 5. Note that

Since holds by Step 4, it follows that , as required.

5. $r$th order algorithms and feasibility

Classical scalar-valued $r$th order Sigma-Delta quantization expresses the coefficient quantization errors as an $r$th order difference of a bounded state variable, e.g., see [14, 19]. In this section we describe an analogue of this for the vector-valued setting of fusion frames.

Let be the block operator defined by

(5.1)
Definition 5.1 ($r$th order algorithm).

The fusion frame Sigma-Delta iteration (3.2), (3.3) is an $r$th order algorithm if for every there exist operators that satisfy

(5.2)

and

(5.3)

Moreover, given , we say that is -feasible if the operators that define by (3.5) satisfy

(5.4)

The $r$th order condition (5.2) should be compared with its scalar-valued analogue in equation (4.2) of [19], cf. [15]. For perspective, the condition (5.4) ensures that the stability result from Theorem 4.1 can be used. The $r$th order conditions (5.2), (5.3) will later be used in Section 8 to provide control on the quantization error.

We now show that it is possible to select the operators so that the low-bit fusion frame Sigma-Delta algorithm (3.2), (3.3) is $r$th order and -feasible with a small feasibility parameter.

We make use of the following sequences defined in [19]. The constructions have subsequently been improved in [25, 15], but we will work with the (suboptimal) first construction, as it allows for closed form expressions. Given , define the index set . Let be fixed and define the sequences and by

(5.5)

Next, define by

(5.6)

where is defined by if , and if . We will later use the property, proven in [19], that satisfies

(5.7)

Let and define the block operator using (3.5) with and and

(5.8)

In the following result, we prove that the fusion frame Sigma-Delta algorithm with the operators (5.8) is $r$th order and -feasible.

Theorem 5.2.

Fix . Given , if is sufficiently large and if the operators are defined by (5.5), (5.6), (5.8), then the fusion frame Sigma-Delta algorithm (3.2), (3.3) is an $r$th order algorithm and is -feasible.

The proof of Theorem 5.2 is given in Section 7.

6. Background lemmas

In this section, we collect background lemmas that are needed in the proof of Theorem 5.2. The following result provides a formula for the entries of the block operator .

Lemma 6.1.

Fix . If is the block operator defined by (5.1) then is invertible and satisfies

(6.1)
Proof.

The proof proceeds by induction. For the base case , a direct computation shows that

For the inductive step, suppose that (6.1) holds. Using shows that , and that if then . If , then

(6.2)

The combinatorial identity , e.g., see page 1617 in [19], shows that In particular, (6.2) reduces to when . ∎
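
The binomial pattern in Lemma 6.1 is easy to check numerically if one takes the block operator in (5.1) to be the standard first-order difference operator, with identity blocks on the diagonal and negative identity blocks on the first subdiagonal; that identification is an assumption made only for this sketch, and the check below works with scalar blocks.

```python
import numpy as np
from math import comb

def check_inverse_power_entries(N=8, r=3):
    """Verify that (D^{-r})_{n,j} = binom(n-j+r-1, r-1) for n >= j (and 0 otherwise),
    where D is the N x N first-order difference matrix: 1 on the diagonal and
    -1 on the first subdiagonal."""
    D = np.eye(N) - np.eye(N, k=-1)
    Dinv_r = np.linalg.inv(np.linalg.matrix_power(D, r))
    target = np.array([[comb(n - j + r - 1, r - 1) if n >= j else 0
                        for j in range(N)] for n in range(N)])
    return np.allclose(Dinv_r, target)

print(check_inverse_power_entries())   # True
```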

Lemma 6.2.

Fix , , and define , , by (5.5) and (5.6). If , then

(6.3)

Sketch of Proof. The result is contained in the proof of Proposition 6.1 in [19]. We provide a brief summary since [19] proves a more general result.

First, note that is an increasing sequence of strictly positive, distinct integers, which satisfies the requirements of Proposition 6.1 in [19]. The final sentence in step (i) of the proof of Proposition 6.1 in [19] shows that

where is given by (6.1) in [19]. Moreover, the first two sentences in step (ii) in the proof of Proposition 6.1 in [19] give that where whenever . Finally, recalling the definition in (5.6) gives the desired conclusion. ∎

7. Proof of Theorem 5.2

In this section we prove Theorem 5.2.

Step 1. We first show that the operators defined by (5.5), (5.6), (5.8) satisfy (5.4) when is sufficiently large.

Note that is decreasing on and . Given , it follows from (5.7) that there exists so that implies

(7.1)

By (5.8) we have

(7.2)

Step 2. Define the block operator . Using (3.5), (5.8), Lemma 6.1, and it can be shown that

(7.3)

Let . Lemma 6.2 shows that if then . This shows that is banded and satisfies

(7.4)

Step 3. Recall that and let We next show that if and then

(7.5)

Since increases as increases, it follows that if then . Likewise, if then. Also, recall that by (7.1) we have . So, if then (7.4) implies that

Step 4. Next, we prove that

(7.6)

Suppose that satisfies . By (7.4), it follows that satisfies