Distributed Quantization for Sparse Time Sequences

10/21/2019
by Alejandro Cohen, et al.

Analog signals processed in digital hardware are quantized into a discrete bit-constrained representation. Quantization is typically carried out using analog-to-digital converters (ADCs), operating in a serial scalar manner. In some applications, a set of analog signals are acquired individually and processed jointly. Such setups are referred to as distributed quantization. In this work, we propose a distributed quantization scheme for representing a set of sparse time sequences acquired using conventional scalar ADCs. Our approach utilizes tools from secure group testing theory to exploit the sparse nature of the acquired analog signals, obtaining a compact and accurate representation while operating in a distributed fashion. We then show how our technique can be implemented when the quantized signals are transmitted over a multi-hop communication network, providing a low-complexity network policy for routing and signal recovery. Our numerical evaluations demonstrate that the proposed scheme notably outperforms conventional methods based on the combination of quantization and compressed sensing tools.

I Introduction

Physical signals typically have continuous-valued amplitudes. In order to process these signals using digital hardware, they are quantized, namely, represented using a finite number of bits [gray1998quantization]. The conversion of an analog signal into a digital representation is carried out using analog-to-digital converters, and consists of two steps: the signal is first sampled in time, resulting in a discrete time sequence, which is then quantized, often by applying an identical uniform mapping to each sample, i.e., uniform scalar quantization [eldar2015sampling].
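To make the scalar acquisition step concrete, the following is a minimal sketch of a uniform scalar quantizer; the function name, the symmetric dynamic range, and the parameter choices are ours for illustration, not a specific ADC API.

```python
import numpy as np

def uniform_scalar_quantize(x, n_bits, dynamic_range=1.0):
    """Map each sample to one of 2**n_bits uniformly spaced levels over
    [-dynamic_range, dynamic_range], mimicking a serial scalar ADC."""
    levels = 2 ** n_bits
    step = 2 * dynamic_range / levels
    # Clip to the supported range, then map each sample to its cell index.
    clipped = np.clip(x, -dynamic_range, dynamic_range - 1e-12)
    idx = np.floor((clipped + dynamic_range) / step).astype(int)
    # Reconstruct as the midpoint of the selected quantization cell.
    return -dynamic_range + step * (idx + 0.5), idx

x = np.random.randn(8)
x_hat, idx = uniform_scalar_quantize(x, n_bits=3)
print(np.mean((x - x_hat) ** 2))  # distortion shrinks as n_bits grows
```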

Conventional quantization theory considers the acquisition of a discrete time analog source into a digital form [gray1998quantization]. In some practical applications, such as sensor networks, multiple signals are acquired in distinct physical locations, while their digital representation is utilized in some central processing device, resulting in a distributed quantization setup. The recovery of a single parameter from the acquired signals was considered in [gubner1993distributed, lam1993design], and its extension to the recovery of a common source, known as the CEO problem, was studied in [berger1996ceo, oohama1998rate]; see also [el2011network, Ch. 12]. Joint recovery of sources acquired in a distributed manner was studied in [shlezinger2019joint], which focused on sampling, while [saxena2006efficient, wernersson2009distributed] proposed non-uniform quantization mappings for the representation of multiple sources. Multivariate (vector) quantizers for arbitrary networks were considered in [fleming2004network].

When restricted to using uniform ADCs, the accuracy of the resulting digital representation is limited, depending on the number of bits utilized [polyanskiy2014lecture, Ch. 23]. It was recently shown that the effect of this quantization error can be significantly reduced by accounting for a specific task [shlezinger2019hardware, shlezinger2018asymptotic, salamatian2019task], or for the presence of a signal structure, as in [cohen2019serial], which considered scalar quantization of a sparse signal.

Fig. 1: Distributed quantization system illustration.

Sparse signals are encountered in a broad range of applications. Compressed sensing (CS) studies reconstruction of sparse signals from lower dimensional projections [eldar2012compressed]. Distributed CS was studied in [sarvotham2005distributed, baron2009distributed, do2009distributed, patterson2014distributed, feizi2010compressive], sparse recovery from quantized projections was considered in [jacques2013robust, boufounos20081, jacques2011dequantizing, gunturk2010sigma, kipnis2018single, boufounos2015quantization, saab2018quantization], while [shirazinia2014distributed, leinonen2018distributed] proposed vector quantization schemes for bit-constrained distributed CS. Despite the similarity, there is a fundamental difference between distributed quantization of sparse signals and distributed CS with quantized observations: In the quantization framework, the measurements are the sparse signals, while in CS the observations are a linear projection of the signals. Consequently, to utilize CS methods in distributed quantization, one must first have access to the complete signal in order to project it and then quantize, imposing a major drawback when acquiring time sequences. This motivates the study of distributed quantization schemes for sparse time sequences, which is the focus here. A distributed quantization system is illustrated in Fig. 1.

In this work we propose a distributed quantization scheme for jointly representing a set of individually observed, jointly sparse, sampled time sequences. Such setups consist of a set of sensors, each observing a sparse sequence in discrete time, and conveying its quantized observations to a centralized unit over a communication network, where it is used to formulate a digital representation of the observed signals. Our scheme is specifically designed to utilize scalar uniform ADCs, building upon our previous work on sequential quantization of sparse signals [cohen2019serial]. In particular, we show how the quantization system of [cohen2019serial], which utilized tools from secure group testing theory [cohen2016secure] to exploit sparsity in quantization, can be applied for distributed acquisition. Under a temporal joint-sparse model [baron2009distributed, Sec. 3.2], the proposed method achieves guaranteed accurate recovery while requiring a small number of bits for the overall representation. The resulting coding scheme, which operates over the binary field, allows improved reconstruction compared to CS-based methods, which project the real valued observations prior to quantization.

We first consider the case where each acquired signal is conveyed to the central unit via a direct link, representing, e.g., single-hop networks. We characterize the achievable distortion of the proposed scheme in the large signal size regime, showing that a given distortion level can be achieved with an overall number of bits which grows logarithmically in the number of samples. Then, we show how the technique can be extended to multi-hop networks, in which the quantized data must travel over multiple intermediate links to reach the central server. We formulate simplified network policies, dictating the behavior of each intermediate node, and prove that the performance characterization derived for single-hop networks also holds here, as long as there exists at least a single path to the central unit. Our numerical results demonstrate that the proposed scheme achieves substantially more accurate digital representations compared to methods combining distributed CS with quantization.

The rest of this paper is organized as follows: Section II introduces the system model. Section III details the proposed distributed quantization scheme, while Section IV provides simulation examples.

Throughout this paper, we use boldface lower-case letters for vectors, e.g., $\mathbf{x}$; the $i$th element of $\mathbf{x}$ is written as $(\mathbf{x})_i$. Matrices are denoted with boldface upper-case letters, e.g., $\mathbf{M}$, and $(\mathbf{M})_{i,j}$ denotes its $(i,j)$th element. Sets are denoted with calligraphic letters, e.g., $\mathcal{X}$. We use $\mathbf{I}_n$ to denote the $n \times n$ identity matrix, while $\mathbb{R}$ and $\mathbb{N}$ are the sets of real numbers and natural numbers, respectively.

II System Model

We consider distributed acquisition and centralized reconstruction of analog time sequences. The $n$ sequences, denoted $\mathbf{x}_1, \ldots, \mathbf{x}_n$, are separately observed over the period $t \in \{1, \ldots, T\}$, representing, e.g., sources measured at $n$ distinct physical locations. The signals are jointly sparse with joint support size $k$ [baron2009distributed]. We focus on two models for the joint sparse nature of $\{\mathbf{x}_i\}$:

Overall sparsity

Here, the ensemble of all $n$ signals over the observed duration is $k$-sparse, namely, the set $\{x_i[t] : 1 \leq i \leq n,\ 1 \leq t \leq T\}$ contains at most $k$ non-zero entries. This model, in which no structure is assumed on the sparsity pattern of each signal, coincides with the general joint-sparse model of [baron2009distributed] without a shared component.

Structured sparsity

In the second model the signals are sparse in both time and space. Specifically, for each $i$, the signal $\mathbf{x}_i$ is $k_t$-sparse, while for any $t$, the set $\{x_i[t]\}_{i=1}^{n}$ is $k_s$-sparse. This setup is a special case of overall sparsity with an additional structure which can facilitate recovery.
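As an illustration of the two models, the following sketch draws a random signal ensemble under each one; the dimensions and helper names are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 4, 32  # number of signals and observation length (illustrative)

def overall_sparse(k):
    """At most k non-zero entries placed anywhere in the n-by-T ensemble."""
    X = np.zeros((n, T))
    flat = rng.choice(n * T, size=k, replace=False)
    X.ravel()[flat] = rng.standard_normal(k)
    return X

def structured_sparse(k_t, k_s):
    """Each signal is k_t-sparse in time; each instant is k_s-sparse in space."""
    X = np.zeros((n, T))
    for t in rng.choice(T, size=k_t, replace=False):
        rows = rng.choice(n, size=k_s, replace=False)
        X[rows, t] = rng.standard_normal(k_s)
    return X
```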

Each time sequence $\mathbf{x}_i$ is encoded into a $B$-bit codeword, denoted $\mathbf{u}_i$. The encoding stage is carried out in a distributed manner, namely, each codeword is determined only by its corresponding time sequence and is not affected by the remaining sequences. The codewords are conveyed to a single centralized decoder over a network, possibly undergoing several links over multi-hop routes. We consider a binary network model, such that each link can be either broken or error-free. The centralized decoder maintains links with $N$ nodes. The network outputs, denoted $\mathbf{y}_1, \ldots, \mathbf{y}_N$, are collected by the decoder into a $B$-bit vector $\mathbf{v}$, which is decoded into a digital representation of the acquired signals, denoted $\hat{\mathbf{x}}_1, \ldots, \hat{\mathbf{x}}_n$, as illustrated in Fig. 1. The system performance is measured by the MSE $\frac{1}{nT}\sum_{i=1}^{n} \mathbb{E}\|\mathbf{x}_i - \hat{\mathbf{x}}_i\|^2$ and the quantization rate $R \triangleq B/T$.

We focus on the representation of time sequences, where each sample of the sparse source is observed in a different time instance. In order to avoid the need to store samples in analog, we require the acquisition to be carried out using serial ADCs, typically utilized by digital signal processors [eldar2015sampling]. Here, the $i$th encoder operates on each sample independently, updating a register of $B$ bits, whose value upon the encoding of $x_i[t]$ is denoted by $\mathbf{v}_i[t]$. Once the complete time sequence is acquired, the encoder conveys the digital codeword $\mathbf{u}_i = \mathbf{v}_i[T]$. Note that both the encoders as well as the centralized decoder use $B$-bit registers for digital representation.

Our goal is to propose a distributed quantization system based on the above model. In particular, the distributed quantization scheme detailed in the following section consists of an encoding method, applied by each encoder; a decoding mapping, utilized by the central decoder; and a network policy, namely, how the codewords are routed over the network.

III Distributed Quantization Scheme

In this section we detail the proposed distributed quantization scheme. We first consider a single-hop network in Subsection III-A, and incorporate the presence of a multi-hop network in Subsection III-B. A theoretical performance analysis and a discussion are provided in Subsections III-C and III-D, respectively.

III-A Single-Hop Networks

In a single-hop network each encoder has a direct error-free link to the centralized decoder. As mentioned above, we design our scheme to utilize serial scalar ADCs to acquire each incoming sample, avoiding the need to store previous samples in analog, as required when using, e.g., CS-based methods or vector quantization techniques. This is done in two steps: First, each encoder utilizes its own ADC, operating as a uniform scalar quantizer with resolution $Q$, to update a local register of $B$ bits. The relationship between $Q$ and $B$, as well as the remaining system parameters, is discussed in Subsection III-C. Once the acquisition of all $T$ time instances is complete, the encoders report the binary vector stored in their local register to the central decoder over the single-hop network. The decoder then uses the received codewords to jointly produce a digital representation of all the signals. The acquisition pipeline is illustrated in Fig. 2.

Fig. 2: Acquisition process in single-hop networks.

The encoding procedure at each encoder is based on the method proposed in our previous work [cohen2019serial], which combined scalar ADCs with group testing tools for serial quantization of sparse signals. The scheme of [cohen2019serial] applied the same ADC to each incoming input, and assigned to each ADC output a codeword taken from a code bin determined by the time instance, which is in turn combined with the previous codewords using logical operations. We identify that the associative nature of logical operations allows this scheme to be carried out in a distributed manner. Here, instead of using a different code bin for each time instance, we use a different bin for each user for each time instance, i.e., we utilize sub-binning. Using this modification, the scheme proposed in [cohen2019serial] can be applied for distributed quantization of jointly sparse signals, as detailed next.

III-A1 Codebook Generation

Each encoder maintains a codebook which is known to the central decoder. These codebooks can be generated offline, either by the central decoder or by each remote encoder individually, following a random binning strategy. Specifically, each codebook consists of binary sequences of length $B$, drawn in an i.i.d. fashion from a Bernoulli distribution with parameter $p$. The $i$th encoder codebook is denoted by $\mathcal{C}_i$, $i \in \{1, \ldots, n\}$. The codebook is divided into $T$ distinct subsets of equal size, referred to as sub-bins, denoted by $\mathcal{C}_{i,t}$, $t \in \{1, \ldots, T\}$. Finally, each codebook contains the zero codeword, denoted by $\mathbf{0}$, such that $\mathbf{0} \in \mathcal{C}_i$. The set of codebooks thus contains a total of $n\left(T(Q-1)+1\right)$ codewords.
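A minimal sketch of the random sub-binned codebook generation described above; the Bernoulli parameter p and the convention of reserving the all-zero codeword for zero-valued samples are assumptions on our part.

```python
import numpy as np

rng = np.random.default_rng(7)

def generate_codebooks(n, T, Q, B, p=0.5):
    """One codebook per encoder: T sub-bins, each holding Q - 1 i.i.d.
    Bernoulli(p) codewords of length B. Quantized value j > 0 at time t
    selects books[i][t, j - 1]; value 0 maps to the all-zero codeword."""
    return [rng.binomial(1, p, size=(T, Q - 1, B)).astype(np.uint8)
            for _ in range(n)]

books = generate_codebooks(n=4, T=32, Q=8, B=96)
```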

III-A2 Encoder Structure

As depicted in Fig. 2, each incoming sample $x_i[t]$ is first quantized using a scalar ADC with resolution $Q$, denoted $q(\cdot)$, yielding a discrete value from the set $\mathcal{Q} \triangleq \{0, 1, \ldots, Q-1\}$. The encoder uses the discrete value as an index to select a codeword from its $t$th sub-bin, i.e., if $q(x_i[t]) = j$ then the codeword $\mathbf{c}_{i,t}^{(j)} \in \mathcal{C}_{i,t}$ is chosen, with $\mathbf{c}_{i,t}^{(0)} \triangleq \mathbf{0}$. Finally, the encoder updates a local $B$-bit register whose value at time instance $t$ is $\mathbf{v}_i[t]$ via

$$\mathbf{v}_i[t] = \mathbf{v}_i[t-1] \lor \mathbf{c}_{i,t}^{(q(x_i[t]))}, \tag{1}$$

where $\lor$ is the element-wise Boolean OR operator, and $\mathbf{v}_i[0] \triangleq \mathbf{0}$. After the sequence is acquired, $\mathbf{u}_i = \mathbf{v}_i[T]$ is conveyed to the decoder.
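The per-encoder update (1) then amounts to a running Boolean OR over the selected codewords. A sketch follows, taking the per-encoder codebook array from the previous sketch and assuming the same convention that the quantized value 0 maps to the all-zero codeword.

```python
import numpy as np

def encode_sequence(x, codebook, quantize, B):
    """Serially encode one sparse sequence: quantize each incoming sample,
    select the matching codeword from the time-t sub-bin, and OR it into
    a local B-bit register, as in (1)."""
    v = np.zeros(B, dtype=np.uint8)
    for t, sample in enumerate(x):
        j = quantize(sample)         # discrete ADC output in {0, ..., Q-1}
        if j != 0:                   # zero samples leave the register intact
            v |= codebook[t, j - 1]  # Boolean OR update
    return v
```

Because sparse sequences select the all-zero codeword most of the time, the register fills slowly, which is what allows a short register to represent a long sequence.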

III-A3 Decoder Structure

The decoder uses the received network outputs $\mathbf{y}_1, \ldots, \mathbf{y}_N$, which in the single-hop case are given by $N = n$ and $\mathbf{y}_i = \mathbf{u}_i$, to recover the sparse signals. To that aim, it first updates a single shared $B$-bit register $\mathbf{v}$ based on the network outputs via

$$\mathbf{v} = \mathbf{y}_1 \lor \mathbf{y}_2 \lor \cdots \lor \mathbf{y}_N. \tag{2}$$

To obtain a digital representation of the signals from $\mathbf{v}$, the decoder uses a maximum likelihood (ML) decoding scheme. To formulate the ML rule, let $L$ be the number of possible sets of non-zero entries in the set of signals. The value of $L$ depends on the nature of the joint-sparse signals. For example, for the general case of overall sparsity, $L = \binom{nT}{k}$, while for structured sparsity $L$ is smaller, as the per-signal and per-instant constraints rule out candidate patterns. For other jointly sparse models, such as joint sparsity with a common component [baron2009distributed, Ch. 3.2], different values of $L$ are used. Let $\mathcal{S}_l$, $l \in \{1, \ldots, L\}$, denote the possible supports for the non-zero entries of the vectorization of the time sequences, namely, for a given $l$, each element in $\mathcal{S}_l$ is a pair $(i, t)$ indicating that $x_i[t] \neq 0$. Following [cohen2019serial], the decoder implements the following steps:

  • For a given $l$, the decoder recovers a collection of codewords $\{\mathbf{c}_{i,t}^{(j_{i,t})}\}_{(i,t) \in \mathcal{S}_l}$, each one taken from a separate sub-bin, for which $\mathbf{v}$ is most likely, namely,

    $$\max_{l,\, \{j_{i,t}\}} \Pr\left(\mathbf{v} \,\Big|\, \bigvee_{(i,t) \in \mathcal{S}_l} \mathbf{c}_{i,t}^{(j_{i,t})}\right). \tag{3}$$

    The decoder looks for both the set of sub-bins, i.e., the support $\mathcal{S}_l$, as well as the selection of the codeword index $j_{i,t}$ within each sub-bin $\mathcal{C}_{i,t}$, which maximize the conditional probability (3).

  • The decoder recovers $\hat{\mathbf{x}}_1, \ldots, \hat{\mathbf{x}}_n$ from the detected codewords by setting the $t$th entry of $\hat{\mathbf{x}}_i$, denoted $\hat{x}_i[t]$, to be the quantization level indexed by $j_{i,t}$ for each $(i,t) \in \mathcal{S}_l$, and zero for $(i,t) \notin \mathcal{S}_l$. A toy sketch of this search is given after this list.
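The following toy decoder makes the search concrete for very small instances: it exhaustively scans candidate supports and codeword indices and returns a combination whose OR reproduces $\mathbf{v}$. This is a noiseless simplification of the ML rule (3), exponential in k, and not the paper's optimized search.

```python
from itertools import combinations, product
import numpy as np

def toy_decode(v, books, k, Q):
    """Scan supports of growing size; a support plus per-entry values is
    accepted if the OR of the selected codewords equals the register v."""
    n, T = len(books), books[0].shape[0]
    pairs = [(i, t) for i in range(n) for t in range(T)]
    for size in range(k + 1):
        for support in combinations(pairs, size):
            for vals in product(range(1, Q), repeat=size):
                u = np.zeros_like(v)
                for (i, t), j in zip(support, vals):
                    u |= books[i][t, j - 1]
                if np.array_equal(u, v):
                    return dict(zip(support, vals))  # {(i, t): level index}
    return {}
```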

The main rationale of the proposed scheme is that it generates the codebooks such that the codewords utilized by each encoder at each time instance, which are determined by the quantized values $\{q(x_i[t])\}$, can be recovered from $\mathbf{v}$ with high probability. This property, which is discussed in Subsection III-C, stems from the fact that the coding scheme is in fact based on group testing tools, and particularly, on secured group testing [cohen2016secure]. Note that the division of the codewords into per-user sub-bins allows the decoder to reduce the possible sets of codewords resulting in $\mathbf{v}$, thus decreasing the computational burden compared to searching over the complete set of codewords. In addition to its distributed nature, the proposed scheme can be applied over multi-hop networks, as detailed in the sequel.

III-B Multi-Hop Multipath Networks

We now generalize our scheme to a multi-hop network, in which multiple directed links connect the distributed encoders to the centralized decoder. The intermediate nodes in the network, which act as helpers or relays, can perform basic operations on the inputs from their incoming links. For the sake of space and exposition, we consider a simplified model for this communication network, in which links are assumed to either support $B$ bits of information without errors, or result in a complete erasure. We also assume that the transmission is synchronized, i.e., the encoders and intermediate nodes all transmit in sync across their outgoing links, and that the network is acyclic. Note that despite its simplicity, this model is reminiscent of several network models used in the literature, e.g., [el2011network].

The operation of the encoders and the decoder in the multi-hop setup is identical to that discussed for single-hop networks in Subsection III-A. The only addition is in the network policy, as depicted in Fig. 3: at each intermediate node, we perform a Boolean OR operation on all incoming input vectors (the same mathematical operation performed by the encoders and the decoder in Subsection III-A), and transmit the resulting length-$B$ vector on all outgoing links. The network outputs are collected in $\mathbf{v}$ via (2).
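A sketch of this intermediate-node policy follows; the associativity and commutativity of OR is what makes the delivered register independent of the route taken.

```python
import numpy as np

def relay(incoming):
    """Intermediate-node policy: OR all incoming B-bit vectors and transmit
    the result on every outgoing link (broken links contribute all-zeros)."""
    out = np.zeros_like(incoming[0])
    for vec in incoming:
        out |= vec
    return out
```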

Fig. 3: Acquisition process in multi-hop multipath networks.

Clearly, the resulting bit sequence at the decoder is identical to the one in Subsection III-A, as long as there exists at least one path in the network from each encoder to the centralized decoder. Note that this is in contrast with the previous literature on distributed CS over networks, where it is typical to impose conditions on the network topology that guarantee successful recovery [feizi2010compressive]. Consequently, the structures of the encoders and the decoder are invariant to whether the encoders communicate with the decoder directly or over multi-hop networks. Additionally, the scheme we propose is robust to link failures: as long as there exists at least one path from each encoder to the decoder, any number of link failures in the network still leads to the same received vector at the decoder, i.e., the coding scheme can achieve the min-cut max-flow bound of the network [el2011network, dantzig2003max]. The achievable performance of the scheme, whether applied over a single-hop network or over multiple hops, is detailed in the following.

III-C Performance Analysis

Here, we analyze the achievable MSE of the proposed distributed quantization method. As discussed in Subsection III-A, when the decoder successfully identifies the utilized codewords, the digital representation is $\hat{x}_i[t] = q(x_i[t])$, resulting in the MSE

$$D(Q) \triangleq \frac{1}{nT} \sum_{i=1}^{n} \sum_{t=1}^{T} \mathbb{E}\left[\left(x_i[t] - q(x_i[t])\right)^2\right]. \tag{4}$$
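As a quick numerical illustration of how the distortion in (4) shrinks with the resolution, the sketch below estimates the per-sample error of a uniform quantizer via Monte Carlo; the Gaussian source and the dynamic range are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)   # assumed Gaussian amplitudes
dr = 3.0                           # assumed ADC dynamic range

for n_bits in range(1, 7):
    step = 2 * dr / 2 ** n_bits
    idx = np.clip(np.floor((x + dr) / step), 0, 2 ** n_bits - 1)
    x_hat = -dr + step * (idx + 0.5)   # cell midpoints
    print(n_bits, np.mean((x - x_hat) ** 2))
# Within the dynamic range the distortion decays roughly as 2**(-2 * n_bits).
```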

Note that (4) is determined by the distribution of the signals, the quantization mapping $q(\cdot)$, and its resolution $Q$. When $q(\cdot)$ represents a uniform mapping as in conventional ADCs, $D(Q)$ can be made arbitrarily small by increasing the resolution $Q$ [polyanskiy2014lecture, Ch. 23]. The effect of $Q$ on the quantization rate $R$ for which the decoder can achieve (4) with high probability is stated in the following theorem:

Theorem 1.

The proposed distributed quantization scheme achieves the average MSE distortion $D(Q)$ in the limit $T \rightarrow \infty$ when the quantization rate satisfies the following inequality:

$$R \geq (1 + \epsilon)\, \frac{k \log_2 (nTQ)}{T}, \tag{5}$$

for some $\epsilon > 0$. The threshold depends on the type of joint sparsity through the set of candidate supports: overall sparsity admits $\binom{nT}{k}$ supports, while structured sparsity admits fewer, and hence a lower required rate.

Proof:

The proof follows arguments similar to those in [cohen2019serial, Appendix A], and is thus omitted for brevity. ∎

Theorem 1 allows one to determine what quantization rate should be configured to achieve a desired quantization error. In particular, one should first set $Q$ to be the minimal value for which $D(Q)$ is not larger than the desired error, and then set the quantization rate to be larger than the threshold in (5) for some small $\epsilon > 0$. Theorem 1 then guarantees that, when $T$ is sufficiently large, a digital representation of the desired accuracy is achieved with high probability.
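The rate threshold in (5) translates directly into a register-length rule of thumb; a minimal sketch follows, assuming the logarithmic threshold form of (5) and $R = B/T$, with the numerical parameter choices being ours.

```python
import numpy as np

def min_register_bits(k, n, T, Q, eps=0.1):
    """Register length B implied by (5): R = B / T must exceed
    (1 + eps) * k * log2(n * T * Q) / T, so B grows only logarithmically
    in the total number of codewords."""
    return int(np.ceil((1 + eps) * k * np.log2(n * T * Q)))

print(min_register_bits(k=5, n=10, T=100, Q=8))  # e.g., 72 bits
```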

III-D Discussion

The proposed distributed quantization scheme has several practical advantages. First, it is designed to utilize conventional scalar ADCs, carrying out acquisition in a serial manner, as opposed to CS-based methods which require the complete time sequence to be available such that it can be projected and quantized. In addition to this practical benefit, our proposed scheme also achieves improved performance compared to CS schemes, as illustrated in Section IV.

Furthermore, our proposed method extends to scenarios in which the remote encoders are connected to the centralized decoder via a multi-hop network. As discussed in Subsection III-B, the presence of such a network does not affect the system operation or its achievable performance, and only requires a simplified network policy to be carried out by the intermediate network nodes. While our analysis assumes that each encoder has at least a single path to the decoder, it can be shown that the presence of missing paths for some encoders does not affect the recovery of the remaining signals. In particular, by treating the output of a broken link as the zero vector, if the $i$th encoder has no path to the decoder, the recovery of the remaining signals remains intact, while $\hat{\mathbf{x}}_i$ is estimated as being all zeros. Finally, we note that while the quantization system is specifically designed to exploit joint sparsity to improve the recovery accuracy, it is also applicable with large $k$, though the complexity increases. In particular, it has been shown in the group testing literature that such coding schemes are capable of accurate decoding, which in our case implies an MSE of $D(Q)$, when $k$ grows sublinearly with the overall number of samples [aldridge2017almost]. Furthermore, conventional group testing treats the encoding of binary data, while we consider that of $Q$-ary quantized values, hinting that larger values of $k$ can be accurately recovered in the distributed quantization setup compared to standard group testing results. We leave the analysis of these conditions to future work.

IV Numerical Evaluations

In this section we numerically evaluate the proposed distributed quantization method, compared to schemes based on distributed and quantized CS. To that aim, we consider a single-hop network, and simulate a set of $n$ jointly sparse time sequences of $T$ samples each, following the overall sparsity model with joint support size $k$, where the non-zero indexes are generated uniformly, while their assigned values are randomized from an i.i.d. zero-mean unit variance Gaussian distribution.

In Fig. 4 we compare the MSE versus the quantization rate $R$ achieved by our proposed scheme to distributed CS methods with quantized observations. To guarantee that the used rate satisfies (5), we set $B$ according to Theorem 1, where $\epsilon$ is selected as a small positive constant. The ADC implements uniform quantization over the dynamic range of the signals. For distributed CS, each signal is compressed using i.i.d. zero-mean unit variance Gaussian projections into a lower-dimensional vector, where the number of projections which minimizes the MSE is selected. Each set of projections is discretized using a uniform quantizer whose resolution is selected such that each encoder uses a total of $B$ bits. While more advanced schemes combining distributed CS and vector quantization were proposed in [leinonen2018distributed], their complexity grows rapidly with the problem dimensions, and thus we focus on conventional distributed CS with scalar quantization. The quantized values are aggregated by the central decoder, which recovers the set of signals using the quantized iterative hard thresholding (QIHT) method [jacques2013quantized] as well as the fast iterative soft thresholding algorithm (FISTA) [beck2009fast]. We also evaluate the case where each sample is separately uniformly quantized and conveyed to the decoder without additional coding, modeling directly applying scalar ADCs for distributed acquisition.
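For reference, a sketch of the distributed-quantized CS front end used as a baseline (Gaussian projection followed by uniform quantization); the recovery step via QIHT or FISTA is omitted, and the dimensions, dynamic range, and function name are placeholders of ours.

```python
import numpy as np

rng = np.random.default_rng(2)

def cs_measure_and_quantize(x, m, n_bits, dr=3.0):
    """Project the full length-T signal with an i.i.d. Gaussian matrix,
    then uniformly quantize each projection. Unlike the proposed scheme,
    the whole sequence must be available in analog before coding."""
    A = rng.standard_normal((m, len(x))) / np.sqrt(m)
    y = A @ x
    step = 2 * dr / 2 ** n_bits
    idx = np.clip(np.floor((y + dr) / step), 0, 2 ** n_bits - 1)
    return A, -dr + step * (idx + 0.5)
```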

Fig. 4: Achievable distortion versus quantization rate $R$.

Observing Fig. 4, we note that the proposed distributed quantization scheme notably outperforms techniques based on distributed CS. In particular, our method is shown to substantially improve the accuracy of the overall digital representation as the quantization rate increases, while distributed quantized CS is demonstrated to meet an error floor for both FISTA and QIHT. Standard uniform quantization, which is applicable only for $R \geq 1$ as the ADCs must utilize at least one bit per sample, is notably outperformed by the previous approaches, as it does not exploit the underlying sparsity.

In the study detailed in Fig. 4 we computed the achievable MSE for a given quantization rate. We note that Theorem 1 allows us to determine rigorously the quantization rate required to achieve a given MSE, as the latter is dictated by the quantization resolution $Q$. To demonstrate how the minimal quantization rate grows with the resolution, we compute in Fig. 5 the minimal rate versus $Q$. The setup evaluated here consists of $n$ sequences of $T$ samples each, for both overall sparsity with joint support size $k$ as well as structured sparsity with the same overall sparsity level. Observing Fig. 5, we note that structured sparsity allows the use of lower quantization rates, i.e., fewer bits, to achieve the same level of distortion, due to the additional structure. We also note that the quantization rate grows slowly with $Q$, indicating that a minor increase in the quantization rate can allow the scheme to utilize ADCs of much higher resolution, while maintaining the guaranteed performance of Theorem 1.

Fig. 5: Quantization rate threshold versus the resolution $Q$.

V Conclusion

In this work we proposed a distributed quantization scheme designed to compactly and accurately represent a set of sparse time sequences. Our proposed method utilizes serial scalar ADCs, facilitating sequential acquisition while avoiding the need to store samples in analog, combined with coding schemes based on tools from group testing theory. We showed how our approach can be naturally extended to operate over multi-hop networks, by introducing simplified policies at the intermediate nodes, and derived sufficient conditions on the quantization rate required to achieve a desired quantization resolution. Our numerical study demonstrates that our proposed method markedly outperforms schemes based on distributed and quantized CS, and illustrates how the presence of structured sparsity profiles can be exploited to utilize fewer bits.

References