Model-Aware Deep Architectures for One-Bit Compressive Variational Autoencoding

Parameterized mathematical models play a central role in the understanding and design of complex information systems. However, they often cannot take into account the intricate interactions innate to such systems. In contrast, purely data-driven approaches do not need explicit mathematical models for data generation and have a wider applicability, at the cost of interpretability. In this paper, we consider the design of a one-bit compressive variational autoencoder, and propose a novel hybrid model-based and data-driven methodology that allows us not only to design the sensing matrix and the quantization thresholds for one-bit data acquisition, but also to learn the latent parameters of iterative optimization algorithms specifically designed for the problem of one-bit sparse signal recovery. In addition, the proposed method has the ability to adaptively learn the proper quantization thresholds, paving the way for amplitude recovery in one-bit compressive sensing. Our results demonstrate a significant improvement compared to state-of-the-art model-based algorithms.


I Introduction

In the past two decades, compressive sensing (CS) has shown significant potential in enhancing sensing and recovery performance in signal processing, occasionally with simpler hardware, and has thus attracted noteworthy attention among researchers. CS is a method of signal acquisition which ensures the exact or almost exact reconstruction of certain classes of signals using far fewer samples than needed in the Nyquist sampling regime [4472240, eldar2012compressed], where the signals are typically reconstructed by finding the sparsest solution of an under-determined system of equations using various available means.

In a practical setting, each measurement is to be digitized into finite-precision values for further processing and storage purposes, which inevitably introduces a quantization error. This error is generally treated as measurement noise with limited energy; an approach that does not perform well in extreme cases. One-bit CS is one such extreme case, where the quantizer is a simple sign comparator and each measurement is represented using only one bit of information [4558487, 5955138, 6418031, 6404739, 6178284, zhang2014efficient]. One-bit quantizers are not only low-cost and low-power hardware components, but are also much faster than traditional scalar quantizers, and greatly reduce the complexity of hardware implementation. Several algorithms have been introduced in the literature for efficient reconstruction of sparse signals in one-bit CS scenarios (e.g., see [4558487, 5955138, 6418031, 6404739, 6178284, zhang2014efficient, 8747470] and the references therein). A detailed discussion of such algorithms is provided in Sec. II.

Notation:

We use bold lowercase letters for vectors and bold uppercase letters for matrices.

$(\cdot)^\top$ and $(\cdot)^H$ denote the vector/matrix transpose and the Hermitian transpose, respectively. $\mathbf{1}$ and $\mathbf{0}$ are the all-one and all-zero vectors. $\|\mathbf{x}\|_p$ denotes the $\ell_p$-norm of the vector $\mathbf{x}$, defined as $\left(\sum_i |x_i|^p\right)^{1/p}$. $[\mathbf{x}]_i$ denotes the $i$-th element of the vector $\mathbf{x}$. $\mathrm{Diag}(\mathbf{x})$ denotes the diagonal matrix formed by the entries of the vector argument $\mathbf{x}$. The operator $\succeq$ denotes the element-wise vector inequality.

I-A Relevant Prior Art

One-bit compressive sensing is mainly concerned with the following data-acquisition model:

$\mathbf{y} = \mathrm{sign}\left(\boldsymbol{\Phi}\mathbf{x} - \boldsymbol{\tau}\right)$   (1)

where $\mathbf{x}\in\mathbb{R}^{n}$ denotes a $k$-sparse source signal, $\boldsymbol{\Phi}\in\mathbb{R}^{m\times n}$ is the sensing matrix, and $\boldsymbol{\tau}\in\mathbb{R}^{m}$ denotes the vector of quantization thresholds. In addition to the aforementioned advantages of using one-bit ADCs for data-acquisition purposes, the use of one-bit information offers increased robustness to undesirable non-linearities in the data-acquisition process. Furthermore, there exists strong empirical evidence that recovering a sparse source signal from one-bit measurements can outperform its multi-bit CS counterpart [6418031, knudson2016one].

The current one-bit CS recovery algorithms typically exploit the consistency principle, i.e., the fact that the element-wise product of the one-bit measurements and the corresponding unquantized linear measurements is always non-negative [4558487], viz. $\mathbf{y}\odot(\boldsymbol{\Phi}\mathbf{x})\succeq\mathbf{0}$ when $\boldsymbol{\tau}=\mathbf{0}$. However, most of the existing literature on one-bit CS considers zero-level one-bit quantization thresholds (i.e., $\boldsymbol{\tau}=\mathbf{0}$), leading to a total loss of amplitude information during the data-acquisition process. Hence, by comparing the signal level with zero, one can only recover the direction of the source signal, i.e., $\mathbf{x}/\|\mathbf{x}\|_2$, and not the amplitude information $\|\mathbf{x}\|_2$. In its most general form, any solution to the one-bit CS problem should: (i) satisfy the sparsity condition, i.e., $\|\mathbf{x}\|_0 \leq k$ with $k \ll n$, and (ii) achieve consistency, i.e., $\mathbf{y} = \mathrm{sign}(\boldsymbol{\Phi}\mathbf{x} - \boldsymbol{\tau})$. As mentioned above, most of the existing literature on the one-bit CS recovery problem considers the case of $\boldsymbol{\tau}=\mathbf{0}$. In such a case, the solution to the one-bit CS problem can be expressed as:

$\min_{\mathbf{x}} \;\|\mathbf{x}\|_0 \quad \text{s.t.} \quad \mathbf{y}\odot(\boldsymbol{\Phi}\mathbf{x}) \succeq \mathbf{0}, \;\; \|\mathbf{x}\|_2 = 1.$

The above program is NP-hard and mathematically intractable [6418031]. However, there exist several powerful iterative algorithms (for the case of $\boldsymbol{\tau}=\mathbf{0}$) that rely on a relaxation of the $\ell_0$-norm to its convex hull (i.e., using the $\ell_1$-norm in lieu of the $\ell_0$-norm) to obtain an estimate of the support of the true source signal, while restricting the feasible solutions to the unit sphere, i.e., $\|\mathbf{x}\|_2 = 1$.

In [4558487], the authors assume a zero-level quantization threshold and propose an iterative algorithm called renormalized fixed point iteration (RFPI), where a convex barrier function is used to enforce the consistency principle (as a regularization term in the objective function). A detailed analysis of the RFPI algorithm is provided in Sec. II. It is worth mentioning that a traditional CS setting considers under-sampled measurements (i.e., $m < n$); however, the over-sampling regime is beneficial and of paramount interest in a one-bit CS setting, in that one-bit ADCs provide a cheap and fast way to acquire measurements and to potentially go beyond the limitations of traditional CS methods.

Figure 1: General DNNs vs. DUNs. DUNs appear to be an excellent tool for real-time signal processing and machine learning applications due to the smaller number of degrees of freedom required for training and execution.

Another such reconstruction algorithm can be found in [5955138], referred to as restricted step shrinkage (RSS), in which a nonlinear barrier function is used as the regularizer to enforce the consistency principle. Compared to the RFPI algorithm, RSS has three important advantages: provable convergence, improved consistency, and feasible performance [Li2018]. Ref. [6418031] introduces a penalty-based robust recovery algorithm, called binary iterative hard thresholding (BIHT), in order to enforce the consistency principle. Contrary to the RFPI algorithm, BIHT exploits the knowledge of the sparsity level of the signal as input, and was shown to be more robust to outliers and to have superior performance compared to the RFPI method in some cases (at the cost of knowing the sparsity level of the source signal a priori). Both RFPI and BIHT, however, only consider a zero-level quantization threshold; as a result, the amplitude information is lost due to comparing the acquired signal with zero. In [6404739] and [6178284], the authors proposed modified versions of RFPI and BIHT, referred to as noise-adaptive renormalized fixed point iteration (NARFPI) and adaptive outlier pursuit with sign flips (AOP-f), that are more robust against bit flips in the measurement vector (which occur due to the presence of noise). More recently, the authors in [knudson2016one] considered the problem of one-bit CS signal reconstruction in a non-zero quantization thresholds setting that enables the recovery of the norm of the source signal, i.e., recovering $\|\mathbf{x}\|_2$. However, the proposed method in [knudson2016one] still fails to accurately recover the amplitude information of the source signal, and does not offer a straightforward approach to designing the quantization thresholds. In addition, there exist several variables in the above-mentioned iterative algorithms that must be tuned either heuristically or using expensive computations (e.g., a grid search) to achieve high performance. In [plan2012robust], the authors lay the groundwork for a theoretical analysis of the noisy one-bit CS problem, and propose a novel polynomial-time solver based on a convex programming approach for the problem of one-bit sparse signal recovery in a noisy setting.

Considering the above, it is of paramount importance to develop computationally efficient one-bit CS models that can incorporate non-zero quantization thresholds to allow for recovering the amplitude information. Additionally, the vast literature on the one-bit CS recovery problem does not yet tap into the potential of the available data at hand (to improve the recovery performance). One can significantly benefit from a methodology that facilitates not only the incorporation of domain knowledge on the problem (i.e., being model-driven), but also the use of the available data at hand, to go beyond the performance of traditional sparsity-aware signal processing techniques.

There has recently been a high demand for developing effective real-time signal processing algorithms that use the data to achieve improved performance [7780424, ILIADIS20189, 7447163, shlezinger2019hardware, liao2019deep, shlezinger2019viterbinet]. In particular, data-driven approaches relying on deep neural architectures such as convolutional neural networks [7780424], deep fully connected networks [ILIADIS20189], stacked denoising autoencoders [7447163], and generative adversarial networks [wu2019deep] have been studied for sparse signal recovery in generic quantized CS settings. We note that the parameterized mathematical models discussed above play a central role in the understanding and design of large-scale information systems and signal processing methods. However, they often cannot take into account the intricate interactions innate to such systems. In contrast, purely data-driven approaches, and specifically deep learning techniques, do not need explicit mathematical models for data generation and have a wider applicability, at the cost of interpretability. The main advantage of deep learning-based approaches is that they employ several non-linear transformations to obtain an abstract representation of the underlying data. Data-driven approaches, on the other hand, lack the interpretability and trustworthiness that come with model-based signal processing. They are particularly prone to being questioned, or at least not fully trusted by users, especially in critical applications. Furthermore, deterministic deep architectures are generic, and it is unclear how to incorporate the existing knowledge of the problem into the processing stage.

The advantages associated with both model-based and data-driven methods show the need for developing frameworks that bridge the gap between the two approaches.

The recent advent of the deep unfolding framework [chien2017deep, wisdom2017building, khobahi2019deep, hershey2014deep, wisdom2016deep, solomon2019deep] and the corresponding deep unfolding networks (DUNs) has paved the way for a game-changing fusion of models and well-established signal processing approaches with data-driven architectures. In this way, we not only exploit the vast amounts of available data, but also integrate the prior knowledge of the system model in the processing stage. Deep unfolding relies on the establishment of an iterative optimization or inference algorithm, whose iterations are then unfolded into the layers of a deep network, where each layer is designed to resemble one iteration of the optimization/inference algorithm. The resulting hybrid method benefits from the low computational cost (in the execution stage) of deep neural networks, and at the same time, from the versatility and reliability of model-based methods; it thus appears to be an excellent tool for real-time signal processing applications due to the smaller number of degrees of freedom required for training and execution (afforded by the integration of problem-level reasoning, or the model; see Fig. 1). A detailed analysis of the deep unfolding methodology for the problem of one-bit CS is provided in Sec. III.

I-B Contributions of the Paper

In this paper, we propose a novel hybrid model-based and data-driven methodology (based on DUNs) that addresses the drawbacks of both purely model-based approaches (such as the discussed RFPI and BIHT algorithms) and purely data-driven approaches. The resulting methodology is far less data-hungry and assumes only a slight over-parametrization of the system model, as opposed to traditional deep learning techniques (which involve a very large number of variables to be learned). In particular, the proposed method seeks to bridge the gap between the data-driven and model-based approaches in the one-bit CS paradigm, resulting in a specialized architecture for the purpose of sparse signal recovery from one-bit measurements. The contributions of this paper can be summarized as follows:
We propose a novel hybrid model-based and data-driven one-bit compressive variational autoencoding (VAE) methodology that can deal with the optimization of the sensing matrix $\boldsymbol{\Phi}$, the one-bit quantization thresholds $\boldsymbol{\tau}$, and the latent variables of the decoder module according to the underlying distribution of the source signal. Hence, such a methodology allows for quick adaptation to new data distributions and environments.
To the best of our knowledge, this is the first attempt in the one-bit CS paradigm that allows for joint optimization of the quantization thresholds and the sensing matrix, also facilitating the recovery of the amplitude information of the source signal. We show that by using the proposed VAEs, one can significantly improve upon existing iterative algorithms and gain much higher accuracy, both in terms of recovering the magnitude and the support of the underlying source signal.
The proposed methodology exhibits performance that goes beyond the traditional one-bit CS state-of-the-art and allows for designing sensing matrices that are distribution-specific. In conjunction with learning a data-specific $\boldsymbol{\Phi}$, the quantization thresholds $\boldsymbol{\tau}$ can also be learned in a joint manner such that the learned parameters improve the signal reconstruction accuracy and speed.
We propose two generalized optimization algorithms that can be used as standalone algorithms for recovering the amplitude information of the source signal by utilizing non-zero quantization thresholds.

Organization of the Paper:

The remainder of this paper is organized as follows. In Sec. II, we discuss the general problem formulation and system model of the one-bit compressive sensing problem and propose two general algorithms that pave the way for incorporating non-zero quantization thresholds. The proposed one-bit compressive variational autoencoding methodology is presented in Sec. III. The loss function characterization and training method for the proposed VAEs are discussed at the end of Sec. III. In Sec. IV, we investigate the performance of the proposed methods through various numerical simulations and for various scenarios. Finally, Sec. V concludes the paper.

II System Model and Problem Formulation

In this paper, we are interested in a one-bit CS measurement model (i.e., the encoder module) with dynamics that can be described as follows:

$\mathbf{y} = \mathrm{sign}\left(\boldsymbol{\Phi}\mathbf{x} - \boldsymbol{\tau}\right)$   (2)

where $\boldsymbol{\Phi}\in\mathbb{R}^{m\times n}$ denotes the sensing matrix, $\boldsymbol{\tau}\in\mathbb{R}^{m}$ is the vector of quantization thresholds, and $\mathbf{x}\in\mathbb{R}^{n}$ is assumed to be a $k$-sparse signal. Having the one-bit measurements of the form (2), one can pose the problem of sparse signal recovery from one-bit measurements by solving the following non-convex program:

$\min_{\mathbf{x}} \;\|\mathbf{x}\|_0 \quad \text{s.t.} \quad \mathbf{y} = \mathrm{sign}\left(\boldsymbol{\Phi}\mathbf{x} - \boldsymbol{\tau}\right)$   (3)

where the constraint in (3) is imposed to ensure a consistent reconstruction with the available one-bit information. Further note that the one-bit measurement consistency principle in (3) can be equivalently expressed as

$\boldsymbol{\Omega}\left(\boldsymbol{\Phi}\mathbf{x} - \boldsymbol{\tau}\right) \succeq \mathbf{0}$   (4)

where $\boldsymbol{\Omega} = \mathrm{Diag}(\mathbf{y})$.

Let us first consider the scenario in which the quantization thresholds are all set to zero. In this case, the non-convex optimization problem can be further relaxed and expressed as the well-known non-convex $\ell_1$-minimization program on the unit sphere [4558487]:

$\min_{\mathbf{x}} \;\|\mathbf{x}\|_1 \quad \text{s.t.} \quad \boldsymbol{\Omega}\,\boldsymbol{\Phi}\mathbf{x} \succeq \mathbf{0}, \;\; \|\mathbf{x}\|_2 = 1$   (5)

where the $\ell_1$-norm acts as a sparsity-inducing function. The intuition behind finding the sparsest signal on the unit sphere (i.e., fixing the energy of the recovered signal) is two-fold. First, it reduces the feasible set of the optimization problem, as the amplitude information is lost, and second, it avoids the trivial solution $\mathbf{x}=\mathbf{0}$. By comparing the acquired data with non-zero quantization thresholds, the constraint defined in (4) not only reduces the feasible set of the problem by defining a set of hyper-planes on which the signal can reside, but also implicitly excludes the trivial solution. There exists an extensive body of research on approximately solving the non-convex optimization problem (e.g., see [4558487, 5955138, 6404739, Plan2013, 6638799, zhang2014efficient], and the references therein). The most notable methods utilize a regularization term to enforce the consistency principle via a penalty term added to the $\ell_1$ objective function, viz.

$\min_{\mathbf{x}:\,\|\mathbf{x}\|_2 = 1} \;\|\mathbf{x}\|_1 + \lambda\, h\!\left(\boldsymbol{\Omega}\,\boldsymbol{\Phi}\mathbf{x}\right)$   (6)

where $h(\cdot)$ is a one-sided barrier function enforcing the consistency principle and $\lambda$ is the penalty factor.
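To make the role of the consistency-enforcing penalty concrete, the sketch below evaluates a simple one-sided penalty of the type added to the $\ell_1$ objective in (6). The function name and the particular one-sided $\ell_1$ form are illustrative assumptions (the paper's actual barrier is specified later, in (7)); the penalty is zero exactly when the one-bit measurements are consistent with the candidate signal.

```python
import numpy as np

def consistency_penalty(x, Phi, y, tau=None):
    """One-sided penalty that vanishes iff sign(Phi @ x - tau) == y (illustrative)."""
    tau = np.zeros(Phi.shape[0]) if tau is None else tau
    # Consistency holds when y_i * ((Phi @ x)_i - tau_i) >= 0 for every i,
    # so only the negative part of that product is penalized.
    violation = np.minimum(y * (Phi @ x - tau), 0.0)
    return np.sum(np.abs(violation))
```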

Among the numerous iterative algorithms available for tackling the optimization problem in (6), we utilize and improve upon the state-of-the-art renormalized fixed-point iteration (RFPI) [4558487] and binary iterative hard thresholding (BIHT) [6418031] algorithms as starting points for our proposed model-driven one-bit compressive variational autoencoding methodology. Namely, in the subsequent sections, we use the mentioned algorithms as baselines to design the decoder function of our one-bit CS VAE. In particular, we unfold the iterations of the two specialized algorithms onto the layers of a deep neural network in such a fashion that each layer of the proposed deep architecture mimics the behavior of one iteration of the baseline algorithm. Next, we perform end-to-end learning by utilizing the back-propagation method to tune the parameters of both the decoder and the encoder functions of the proposed one-bit compressive VAE.

II-A Renormalized Fixed-Point Iteration (RFPI)

The RFPI algorithm considers a one-bit CS data acquisition model where the quantization thresholds are all set to zero. In this zero-threshold setting (i.e., $\boldsymbol{\tau}=\mathbf{0}$), the RFPI algorithm utilizes the following regularization term to enforce the consistency constraint in (5):

(7)

where the function is applied element-wise to its vector argument. Note that this function can be expressed in terms of the well-known rectified linear unit (ReLU) function extensively used by the deep learning research community. Briefly speaking, the RFPI algorithm is a first-order (gradient-based) optimization method that operates as follows: given an initial point on the unit sphere, a gradient step-size, and a shrinkage threshold (or equivalently, the penalty term), at each iteration, the estimated signal is obtained using the following update steps:

(8a)
(8b)
(8c)
(8d)

After the descent in (8a)-(8b), the update step in (8c) corresponds to a shrinkage step. More precisely, any element of the vector whose magnitude is below the threshold is pulled down to zero (leading to enhanced sparsity). Finally, the algorithm projects the obtained vector onto the unit sphere to produce the latest estimate of the signal. Note that the latter step is necessary due to the fact that a zero-threshold vector (i.e., $\boldsymbol{\tau}=\mathbf{0}$) is employed at the time of data acquisition, and hence, the amplitude information is lost.
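The following sketch mirrors this descent, shrinkage, and renormalization structure. It is a simplified illustration rather than the exact update in (8a)-(8d): the additional projected-gradient term of (8c) is omitted, and the gradient below assumes a one-sided $\ell_2$ barrier applied to the consistency term, which may differ from the paper's barrier in (7).

```python
import numpy as np

def soft_threshold(v, delta):
    """Element-wise shrinkage: entries with |v_i| <= delta are pulled to zero."""
    return np.sign(v) * np.maximum(np.abs(v) - delta, 0.0)

def rfpi_step(x, Phi, y, step, delta):
    """One RFPI-style iteration (sketch, assumed one-sided l2 barrier)."""
    residual = np.minimum(y * (Phi @ x), 0.0)     # sign-inconsistent measurements only
    grad = Phi.T @ (y * residual)                 # (sub)gradient of the consistency barrier
    a = x - step * grad                           # descent step
    a = soft_threshold(a, delta)                  # sparsity-inducing shrinkage
    return a / (np.linalg.norm(a) + 1e-12)        # project back onto the unit sphere
```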

While effective in signal reconstruction, the RFPI method has several drawbacks. For instance, it is required to run the algorithm on several problem instances, increasing the value of the penalty factor at each outer iteration, and to use the previously obtained solution as the initial point for tackling the recovery problem in each new problem instance. Moreover, it is not straightforward to choose the fixed step-size and the shrinkage threshold, which may depend on the latent parameters of the system. In fact, it is evident that by carefully tuning the step-sizes and the shrinkage threshold, one can significantly boost the performance of the algorithm and further alleviate the mentioned drawbacks of this method. In what follows, we extend the above iterations in a fashion that allows for incorporating non-zero quantization thresholds, hence enabling us to effectively recover the amplitude information of the source signal.

A.1. Extending the RFPI framework to non-zero quantization thresholds:
Recall that our focus is on the following encoding (measurement) model with an arbitrary threshold vector $\boldsymbol{\tau}$:

$\mathbf{y} = \mathrm{sign}\left(\boldsymbol{\Phi}\mathbf{x} - \boldsymbol{\tau}\right)$   (9)

Therefore, the problem of one-bit CS signal recovery with a non-zero quantization threshold vector can be cast as:

(10)

Inspired by the regularization-based relaxation employed in (6), we relax the above program and cast it as follows:

(11)

The above optimization program can be solved in an iterative manner using slightly modified RFP iterations, as previously described in (8). The slight change presents itself in calculating the gradient of the regularization term, to account for the new measurement model with non-zero thresholds, as well as in excluding the projection step onto the unit sphere in (8d). Accordingly, we propose the following new update steps at each iteration:

The Proposed Generalized RFP Iterations:
(12a)
(12b)
(12c)

Note that in (8c) there exists an additional projection of the gradient onto the unit sphere. However, by incorporating the non-zero thresholds vector, such a step is no longer required for the proposed generalized RFP iterations. In the rest of this paper, we refer to the iterations presented in (12) as Generalized RFPI (G-RFPI).
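A sketch of the corresponding generalized update, reusing soft_threshold from the previous snippet, is given below. As before, the one-sided $\ell_2$ barrier gradient is an assumed form; the key differences from the zero-threshold case are that consistency is measured against $\boldsymbol{\Phi}\mathbf{x}-\boldsymbol{\tau}$ and that the unit-sphere projection is dropped, so the amplitude of the estimate is preserved.

```python
def g_rfpi_step(x, Phi, y, tau, step, delta):
    """One G-RFPI-style iteration with non-zero thresholds (sketch)."""
    residual = np.minimum(y * (Phi @ x - tau), 0.0)   # consistency against Phi @ x - tau
    grad = Phi.T @ (y * residual)
    return soft_threshold(x - step * grad, delta)     # no renormalization step
```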

II-B Binary Iterative Hard Thresholding (BIHT) Algorithm

The BIHT algorithm is a simple, yet powerful, first-order iterative reconstruction algorithm for the problem of one-bit CS, where the sparsity level is assumed to be known a priori. The BIHT iterations can be seen as a simple modification of the iterative hard thresholding (IHT) algorithm proposed in [blumensath2009iterative]. Similar to the RFPI algorithm, the BIHT method considers a zero-level quantization threshold. However, in contrast to the RFPI algorithm, it exploits the knowledge of the sparsity level of the signal of interest. In other words, the BIHT algorithm is designed to tackle the following counterpart of the aforementioned program:

(13)

where the notation is as before. Note that the one-sided objective function above (related to the hinge loss) enforces the consistency principle previously introduced in (5), and that by solving the above optimization problem, we seek maximal consistency with the one-bit measurements $\mathbf{y}$. It is worth mentioning that one can also consider different objective functions, as long as they promote the data consistency principle (e.g., a one-sided $\ell_2$-norm). For a detailed analysis of different candidates for the objective function and their properties, see [blumensath2009iterative].

The BIHT iterations are described as follows. Given an initial point, the sparsity level $k$, and the one-bit measurements $\mathbf{y}$, at the $t$-th iteration, the BIHT algorithm updates the current estimate of the signal through the following steps:

(14a)
(14b)

where (14a) employs the sub-gradient of the one-sided objective function in (13) with a fixed gradient step-size, and the projection operator in (14b) retains the $k$ largest elements (in magnitude) of its vector argument and sets the rest of the elements to zero.

The step (14a) can be interpreted as taking a descent step using the computed sub-gradient of the objective function in (13), while the projection step in (14b) can be viewed as a projection onto the set of $k$-sparse signals. Once the above iterations terminate, either by fully satisfying the consistency principle or by reaching a maximum number of iterations, the final step is to project the obtained estimate onto the unit sphere, viz. $\hat{\mathbf{x}} \leftarrow \hat{\mathbf{x}}/\|\hat{\mathbf{x}}\|_2$. Note that this is in contrast to the RFPI algorithm, as the BIHT iterations do not require a normalization step such as (8d) at each iteration.
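For concreteness, the following sketch implements one BIHT iteration together with the hard-thresholding projection onto $k$-sparse vectors. The sub-gradient expression $\boldsymbol{\Phi}^\top(\mathrm{sign}(\boldsymbol{\Phi}\mathbf{x})-\mathbf{y})$ and the way the step-size absorbs constant factors follow the commonly used form of BIHT and are assumptions here, not a verbatim transcription of (14); with zero thresholds, a final $\hat{\mathbf{x}}/\|\hat{\mathbf{x}}\|_2$ normalization follows the last iteration.

```python
import numpy as np

def project_top_k(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def biht_step(x, Phi, y, k, step):
    """One BIHT iteration (sketch): sub-gradient descent, then hard thresholding."""
    a = x - step * Phi.T @ (np.sign(Phi @ x) - y)   # descent on the one-sided objective
    return project_top_k(a, k)                       # projection onto k-sparse vectors
```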

B.1. Extending the BIHT framework to non-zero quantization thresholds:

The extension of the BIHT iterations to incorporate a non-zero thresholds vector is straightforward. In the case of non-zero quantization thresholds, we cast the signal recovery problem as

(15)

where the notation is defined analogously to that in (13), with the thresholds $\boldsymbol{\tau}$ incorporated.

Similar to the steps we took in (9)-(12), and by employing some rudimentary algebraic operations, the proposed generalized update steps of the BIHT algorithm may be expressed as:

The Proposed Generalized BIHT Iterations:
(16a)
(16b)

with the exception that in the proposed generalized BIHT iterations, there is no need for normalizing the obtained estimate of the signal after the update steps terminate. This is due to the fact that a non-zero quantization threshold vector is employed at the time of encoding, and hence, the amplitude information is not fully lost. In the rest of this paper, we refer to the above iterations as the Generalized BIHT (G-BIHT) algorithm.

Although simple and powerful, the BIHT algorithm requires a careful choice of the gradient step-size for convergence, and there is no straightforward method to choose it properly. Moreover, it utilizes only a fixed step-size across all iterations. This motivates the development of a methodology through which one can design a decoder function that exploits adaptive gradient step-sizes, i.e., a different step-size at each iteration, which can result in a significant improvement of the performance of the BIHT algorithm.

In the next section, we discuss a slight over-parametrization of the iterations of the RFPI, G-RFPI, BIHT, and G-BIHT algorithms that paves the way for the design of our proposed one-bit compressive VAE, for jointly designing the parameters of the encoder function defined in (2) parameterized by the sensing matrix $\boldsymbol{\Phi}$ and the quantization thresholds $\boldsymbol{\tau}$, and for the design of a set of decoder functions based on the discussed iterative optimization algorithms.

III The Proposed One-Bit Compressive Variational Autoencoding Approach

We pursue the design of a novel model-driven one-bit compressive sensing-based variational autoencoder deep architecture that facilitates the joint design of the parameters of both the encoder and the decoder module when one-bit quantizers with non-zero thresholds are employed in the data acquisition process (i.e., the encoding module) for a $k$-sparse input signal $\mathbf{x}$.

In general terms, a variational AE is a generative model comprised of an encoder and a decoder module that are sequentially connected together. The purpose of an AE is to learn an abstract representation of the input data, while providing a powerful data reconstruction system through the decoder module. The input to such a system is a set of signals following a certain distribution, and the output is the recovered signal produced by the decoder module. Hence, the goal is to jointly learn an abstract representation of the underlying distribution of the signals through the encoder module, and simultaneously, to learn a decoder module allowing for reconstruction of the compressed signals from the obtained abstract representations. Therefore, an AE can be defined by two main functions: i) an encoder function, parameterized on a set of variables, that maps the input signal into a new vector space, and ii) a decoder function, parameterized on its own set of variables, which maps the output of the encoder module back into the original signal space. Hence, the governing dynamics of a general VAE can be expressed as

(17)

where $\hat{\mathbf{x}}$ denotes the reconstructed signal.

In light of the above, we seek to interpret a one-bit CS system as a VAE module, facilitating not only the design of the sensing matrix $\boldsymbol{\Phi}$ and the quantization thresholds $\boldsymbol{\tau}$ that best capture the information of a $k$-sparse signal when one-bit quantizers are employed, but also the learning of the parameters of an iterative optimization algorithm specifically designed for the task of signal recovery. To this end, we modify and unfold the iterations of the proposed G-RFPI algorithm defined in (12) and the G-BIHT method defined in (16) onto the layers of a deep neural network, and later use deep learning tools to tune the parameters of the proposed one-bit compressive VAE.

III-A Structure of the Encoding Module

In its most general form, we define the encoder module of the proposed VAE based on our data-acquisition model defined in (2), as follows:

(18)

where the set of learnable parameters of the encoder function consists of the sensing matrix and the quantization thresholds, and the sign function is approximated by $\tanh(c\,\cdot)$ for a large constant $c$ (whose value was fixed in the numerical investigations). Note that we replaced the original sign function with a smooth, differentiable approximation based on the hyperbolic tangent function. The reason for such a replacement is that the sign function is discontinuous and its gradient is zero everywhere except at the origin; hence, its direct use would cripple stochastic gradient-based optimization methods (later used in the back-propagation method for deep learning). Fig. 2 plots the function for different values of $c$, demonstrating that larger values of $c$ allow for better approximations of the original sign function.

Figure 2: The $\tanh(c\,\cdot)$ function as an approximation of the sign function for different values of $c$.
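A minimal sketch of this smooth encoder relaxation is given below. The function names are hypothetical, and the sharpness value c = 1e3 is only an illustrative default, not necessarily the constant used in the paper's experiments.

```python
import torch

def soft_sign(u, c=1e3):
    """Smooth, differentiable surrogate for sign(u); larger c gives a closer
    approximation (cf. Fig. 2) while keeping non-zero gradients for training."""
    return torch.tanh(c * u)

def one_bit_encoder(x, Phi, tau, c=1e3):
    """Differentiable relaxation of the one-bit encoder y = sign(Phi x - tau)."""
    return soft_sign(Phi @ x - tau, c)
```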

III-B Structure of the Decoding Module

In this part, we describe the different scenarios under which we pursue the design of our decoder function by using the RFPI, BIHT, and the suggested G-RFPI and G-BIHT iterations. In particular, we fix the total complexity of our decoding module by fixing the total number of iterations allowed for the mentioned algorithms. Next, we slightly over-parameterize each iteration/step of the mentioned algorithms to increase the per-iteration degrees of freedom of each method and to further account for the learnable latent variables in the system. Finally, we unfold the iterations of each algorithm onto the layers of a deep architecture such that each layer of the deep network resembles one iteration of the baseline algorithm. We then seek to learn the parameters of both the decoder and encoder functions using the training tools already developed for deep learning. We consider the following cases to design our decoder function:
Learned RFPI (L-RFPI): We consider the RFPI iterations defined in (8) as our baseline, but slightly over-parametrize the iterations by introducing a distinct gradient step-size and shrinkage-thresholds vector for each iteration. This is in contrast to the original RFP iterations, where a fixed gradient step-size and shrinkage threshold are employed for all iterations. Hence, the proposed unfolded over-parametrized iterations are much more expressive. The decoder function will be parameterized by the per-layer step-sizes and shrinkage thresholds, and the encoder function will be parametrized by the sensing matrix $\boldsymbol{\Phi}$ (note that $\boldsymbol{\tau}=\mathbf{0}$ in this case).
Learned BIHT (L-BIHT): We consider the unfolding of the iterations of BIHT defined in (14), similar to the previous case, by introducing per-iteration gradient step-sizes in lieu of a fixed gradient step-size along all iterations. In this case, the decoder function will be parametrized by the per-layer step-sizes, while the parameter of the encoding module is the sensing matrix $\boldsymbol{\Phi}$; both are to be learned.
Learned G-RFPI (LG-RFPI): We consider the unfolding of the proposed Generalized RFPI iterations in (12) in a non-zero quantization thresholds setting. We over-parameterize the iterations of the proposed G-RFPI by parametrizing the decoder function on the per-layer step-sizes and shrinkage thresholds, and, this time, by parameterizing the encoder function on both the sensing matrix $\boldsymbol{\Phi}$ and the quantization thresholds vector $\boldsymbol{\tau}$.
Learned G-BIHT (LG-BIHT): We consider the unfolding of the G-BIHT iterations defined in (16) in a similar manner, i.e., by parameterizing the decoder function on the per-layer step-sizes. However, similar to the previous case, we further parametrize the encoder function on the quantization thresholds vector $\boldsymbol{\tau}$ in conjunction with the sensing matrix $\boldsymbol{\Phi}$.

III-C The Proposed One-Bit Compressive Variational Autoencoding Methodology

In the following, we describe the design of four novel deep architectures based on the above-mentioned structures and discuss the governing dynamics of the proposed one-bit compressive sensing-based VAE.

C.1. L-RFPI-Based Compressive Autoencoding:
In this case, we consider the following parameterized encoder function:

(19)

As for the decoder function, based on the RFPI iterations in (8), we define the per-layer mapping as follows:

(20a)
(20b)
(20c)
(20d)

where the parameters of each layer consist of the sparsity-inducing shrinkage-thresholds vector and the gradient step-size at iteration $t$. Next, we define the proposed L-RFPI composite decoder function as follows:

(21)

where the learnable (tunable) parameters of the decoder function are the per-layer shrinkage thresholds and gradient step-sizes, and the iterations start from an initial point of choice. Note that we have over-parameterized the iterations of the RFPI algorithm by introducing a new shrinkage-thresholds variable at each iteration for the sparsity-inducing step in (20b). Moreover, in contrast with the original RFPI iterations, we have introduced a new step-size at each iteration as well (see Eq. (20c)). Therefore, the above decoder function can be interpreted as performing the iterations of the original RFPI algorithm with additional degrees of freedom (as compared to the base algorithm), expressed in terms of the set of shrinkage thresholds and gradient step-sizes. As a result, the proposed decoder function is much more expressive than the iterations of the RFPI algorithm.

Remark: Note that the above encoder and decoder functions, once cascaded together, can be viewed as a deep neural network whose first layer is described by the encoder function defined in (19), and whose succeeding layers are governed by computations of the form (20a)-(20d). Equivalently, such a deep architecture can be viewed as a computational graph with shared variables among the computation nodes, and thus, its parameters can be efficiently optimized by utilizing known deep learning tools such as back-propagation. Hence, the goal is to jointly learn the parameters of such a cascaded network (i.e., of both the encoder and the decoder) in an end-to-end manner by using the available data at hand coming from the underlying distribution of the source signal.
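To make the unfolding concrete, the following PyTorch sketch implements such a cascaded decoder with per-layer learnable step-sizes and shrinkage-threshold vectors. The class name, initialization values, and the one-sided $\ell_2$ barrier gradient are illustrative assumptions; in particular, the extra projected-gradient term of the original RFPI update is omitted here, and positivity of the learned parameters is left to the training loss described in Sec. III-D. The forward pass returns the per-layer estimates, which is convenient for the layer-weighted loss introduced later.

```python
import torch
import torch.nn as nn

class LRFPIDecoder(nn.Module):
    """Sketch of an unfolded L-RFPI decoder: T layers, each with its own
    learnable gradient step-size and shrinkage-thresholds vector."""

    def __init__(self, n, T=30):
        super().__init__()
        self.steps = nn.Parameter(1e-2 * torch.ones(T))          # per-layer step-sizes
        self.thresholds = nn.Parameter(1e-3 * torch.ones(T, n))  # per-layer shrinkage vectors

    def forward(self, y, Phi, x0, num_layers=None):
        # y: (batch, m) relaxed one-bit data, Phi: (m, n), x0: (batch, n) initial point.
        T = self.steps.shape[0] if num_layers is None else num_layers
        x, outputs = x0, []
        for t in range(T):
            residual = torch.clamp(y * (x @ Phi.t()), max=0.0)    # consistency violations
            grad = (y * residual) @ Phi                           # assumed barrier (sub)gradient
            a = x - self.steps[t] * grad                          # per-layer descent step
            a = torch.sign(a) * torch.relu(a.abs() - self.thresholds[t])  # shrinkage
            x = a / (a.norm(dim=-1, keepdim=True) + 1e-12)        # unit-sphere projection
            outputs.append(x)
        return outputs                                            # per-layer estimates
```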

C.2. L-BIHT-Based Compressive Autoencoding:
Similar to the previous case, we consider the same encoding function parametrized only by the learnable sensing matrix $\boldsymbol{\Phi}$ in a zero quantization thresholds setting (i.e., $\boldsymbol{\tau}=\mathbf{0}$). The governing equations for the decoder function in the case of the proposed Learned BIHT are as follows. We re-define the per-layer mapping as:

(22a)
(22b)

with the same notation as before, and where we have added a final layer to renormalize the reconstructed signal as

(23)

Therefore, similar to the previous case, the proposed L-BIHT-based decoder function is defined as:

(24)

We again observe the slight over-parametrization of the BIHT algorithm during the unfolding process. Namely, at each iteration, we introduce a per-iteration step-size to be learned, which further enhances the performance of the iterations (see (22)). In this case, the decoder function is parameterized only by the per-layer gradient step-sizes. The L-BIHT iterations therefore have additional degrees of freedom compared to the original BIHT iterations.

C.3. LG-RFPI-Based Compressive Autoencoding:
We consider the unfolding of the iterations of the Learned Generalized RFPI method according to (12). As previously discussed, in the generalized iterations of both the RFPI and BIHT algorithms, the encoder module can be expressed as:

(25)

where $\boldsymbol{\Phi}$ is the learnable sensing matrix and $\boldsymbol{\tau}$ represents the tunable vector of quantization thresholds. We follow an approach similar to the proposed L-RFPI-based deep architecture and slightly over-parameterize the iterations in (12a)-(12c), leading to the following design of the decoder function:

(26a)
(26b)
(26c)

where the parameters of each layer consist of the sparsity-inducing thresholds vector and the gradient step-size at iteration $t$. Hence, the proposed decoder function can be represented in the same way as in (21). Note that by incorporating the non-zero quantization thresholds, there is no need for an additional normalization term at each iteration. The above iterations (comprising the decoder function) have the same degrees of freedom as the L-RFPI iterations, i.e., additional model parameters compared to the baseline G-RFPI iterations. Also, note the additional degrees of freedom that the encoder function offers in terms of the tunable quantization thresholds vector (in addition to the sensing matrix).

C.4. LG-BIHT-Based Compressive Autoencoding:
We consider an encoder function of the form (25), where $\boldsymbol{\Phi}$ denotes the learnable sensing matrix and $\boldsymbol{\tau}$ the arbitrary quantization thresholds. Additionally, we present an over-parameterization of the Generalized BIHT iterations (see Eq. (16)) and consider the resulting unfolded network as the blueprint of our decoder. Namely, we define the per-layer mapping as:

(27a)
(27b)

where the parameters of each layer consist of the per-layer gradient step-size. Note that, due to employing a non-zero thresholds vector, we do not need the additional normalization layer of (23) in this case. Consequently, the decoder function can be expressed in a manner similar to (24). These iterations, similar to the L-BIHT case, have additional degrees of freedom compared to the baseline G-BIHT iterations, whereas the encoder function has additional tunable parameters in terms of the one-bit quantization thresholds compared to the L-BIHT-based AE.

In the next section, we discuss the training process of the above proposed one-bit compressive autoencoders. In particular, we formulate a proper loss function that facilitates the training of such unfolded deep architectures, and for each model, we seek to jointly learn the set of parameters of the entire network (i.e., the encoder and decoder functions) in an end-to-end manner using the available deep learning techniques.

III-D Loss Function Characterization and Training Method

The output of an autoencoder is the reconstructed signal obtained from the compressed measurements, where the input and output of the AE are the source signal and its reconstruction, respectively. The training of an AE should be carried out by defining a proper loss function that provides a measure of the similarity between the input and the output of the AE. The goal is to minimize the distance between the input target signal and the recovered signal according to a similarity criterion. A widely-used option is the output MSE loss between the target and the reconstructed signal, to be minimized over the parameters of the encoder and decoder. Nevertheless, in deep architectures with a high number of layers and parameters, such a simple choice of the loss function makes it difficult to back-propagate the gradients; in fact, the vanishing gradient problem arises. Therefore, for the training of the proposed AE, a better choice is to consider the cumulative MSE loss over the layers. As a result, one can also feed-forward the decoder function for only a subset of the layers (a lower-complexity decoding) and consider the output of the last such layer as a good approximation of the target signal. For training, one further needs to consider the constraint that the gradient step-sizes and the shrinkage thresholds must be non-negative. Since the decoder function is parameterized by the step-sizes and the shrinkage thresholds, we regularize the training loss function to ensure that the network chooses positive step-sizes and shrinkage thresholds at each layer. With this in mind, we suggest the following loss function for training the proposed one-bit compressive AE:

(28)

where the importance weights determine the contribution of the output of each layer to the overall loss. Note that as the information flows through the network, one expects the reconstruction to improve as we progress layer by layer. A reasonable weighting scheme is therefore to gradually increase the importance weights as we proceed through the layers; in this work, we consider a logarithmic weighting scheme. Moreover, in training the autoencoders based on the BIHT algorithm, we exclude the last term in (28), as no shrinkage thresholds are required for these models.
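As a concrete illustration of this layer-weighted loss, the following sketch implements a cumulative MSE with logarithmically increasing weights and hinge-type non-negativity penalties on the step-sizes and shrinkage thresholds. The specific weight choice w_t = log(t + 1), the penalty coefficients, and the function name are assumptions for illustration, not a verbatim transcription of (28).

```python
import math
import torch

def unfolded_loss(x_true, layer_outputs, steps, thresholds,
                  lam_step=1.0, lam_thr=1.0):
    """Layer-weighted cumulative MSE plus non-negativity penalties (a sketch)."""
    loss = x_true.new_zeros(())
    for t, x_t in enumerate(layer_outputs, start=1):
        w_t = math.log(t + 1.0)                                # increasing importance weights
        loss = loss + w_t * torch.mean((x_t - x_true) ** 2)
    # Hinge penalties that discourage negative step-sizes / shrinkage thresholds.
    loss = loss + lam_step * torch.relu(-steps).sum()
    loss = loss + lam_thr * torch.relu(-thresholds).sum()
    return loss
```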

As for the training procedure, our numerical investigations showed that an incremental learning approach is most effective for training the proposed networks. The details of the incremental learning method that we employed are as follows. During the $r$-th increment round, we seek to optimize the cost function associated with the first $r$ layers by learning the corresponding set of parameters. At each round, we perform batch learning with mini-batches of fixed size. After finishing the $r$-th round of training, the $(r+1)$-th layer is added to the network, and the objective function is changed accordingly. Next, the entire network goes through another batch-learning phase. Interestingly, in this method of training, the learned parameters from the $r$-th round are used as the initial values of the same parameters in the next round.
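The following loop sketches this incremental procedure for the L-RFPI-style architectures, reusing the LRFPIDecoder and unfolded_loss sketches above. The mini-batch handling, the smooth encoder with sharpness c, the random initial point, and all hyper-parameter values (epochs per round, learning rate) are illustrative assumptions rather than the settings used in the paper; Phi and tau are assumed to be leaf tensors created with requires_grad=True.

```python
import torch

def train_incrementally(decoder, Phi, tau, data_loader,
                        epochs_per_round=10, lr=1e-3, c=1e3):
    """Layer-wise incremental training of the one-bit compressive VAE (sketch)."""
    T = decoder.steps.shape[0]
    optimizer = torch.optim.Adam(list(decoder.parameters()) + [Phi, tau], lr=lr)
    for r in range(1, T + 1):                          # add one unfolded layer per round
        for _ in range(epochs_per_round):
            for x in data_loader:                      # x: (batch, n) k-sparse signals
                optimizer.zero_grad()
                y = torch.tanh(c * (x @ Phi.t() - tau))            # smooth one-bit encoder
                outputs = decoder(y, Phi, x0=torch.randn_like(x),  # first r layers only
                                  num_layers=r)
                loss = unfolded_loss(x, outputs, decoder.steps, decoder.thresholds)
                loss.backward()
                optimizer.step()
    return decoder, Phi, tau
```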

IV Numerical Results

Figure 3: The performance of the proposed L-RFPI method compared to the RFPI algorithm for four sparsity levels, shown in panels (a)-(d).
Figure 4: The performance of the proposed LG-RFPI VAE and the proposed G-RFPI method in recovering the amplitude information of the $k$-sparse signals for four sparsity levels, shown in panels (a)-(d).

In this section, we present various simulation results to investigate the performance of the proposed one-bit compressive VAEs and to further show the effectiveness of our training. For training purposes, we randomly generate $k$-sparse signals whose non-zero elements are drawn at random. Furthermore, we fix the total number of layers of the decoder function to 30, equivalent to performing only 30 optimization iterations of the form (20), (22), (26), and (27). As for the sensing matrix (to be learned), its entries are initialized at random. The results presented here are averaged over multiple realizations of the system parameters. Similar to [4558487], we consider the over-sampling regime ($m > n$), due to the focus of this study on one-bit sampling, where a large number of one-bit samples are usually available, as opposed to the usual infinite-precision CS settings.

The proposed one-bit CS VAEs are implemented using the PyTorch library [paszke2017automatic]. The Adam optimizer [kingma2014adam] is utilized for the optimization of the parameters of the proposed deep architectures. Due to the importance of reproducible research, we have made all of the implemented code publicly available along with this paper. The code is also available at: https://github.com/skhobahi/deep1bitVAE

As previously discussed in Sec. III-D, we employ an incremental batch-learning approach with mini-batches of fixed size at each round, and a fixed number of epochs per round. For training the proposed VAEs that are based on the RFPI iterations, i.e., the L-RFPI and LG-RFPI deep architectures, we uniformly sample the sparsity level of the source signal from a set of sparsity levels for each training point in the mini-batch. We evaluate the performance of the proposed methods on target signals with sparsity levels from this set, as well as on a sparsity level that was not presented to the network during the training phase. Moreover, due to the fact that the BIHT method and the corresponding one-bit VAEs (L-BIHT and LG-BIHT) require the knowledge of the sparsity level of the source signal a priori, there is no need to train the network on various sparsity levels; i.e., the corresponding deep architectures can be trained for a particular sparsity level. Hence, for the L-BIHT and LG-BIHT deep architectures, we train the network for source signals with a fixed sparsity level and evaluate the performance of the resulting networks accordingly.

In the sequel, we refer to $\bar{\mathbf{x}} = \mathbf{x}/\|\mathbf{x}\|_2$ as the normalized version of the vector $\mathbf{x}$. In all scenarios, in order to have a fair comparison between the algorithms, the initial starting points of the optimization algorithms are the same.

Performance of the proposed L-RFPI VAE:
In this part, we investigate the performance of the proposed L-RFPI-based VAE in recovering the normalized source signal, i.e., recovering $\bar{\mathbf{x}}$.

Fig. 3 illustrates the mean squared error (MSE) of the normalized recovered signal versus the total number of optimization iterations, for the four considered sparsity levels (panels (a)-(d)). We compare the performance of the proposed L-RFPI algorithm with the standard RFPI iterations in (8a)-(8d) in the following scenarios:
Case 1: The RFPI algorithm with a randomly generated sensing matrix whose elements are i.i.d., and with fixed values for the step-size and the shrinkage threshold.
Case 2: The RFPI algorithm where the learned sensing matrix is utilized, and the values for the step-size and the shrinkage threshold are fixed as in the previous case.
Case 3: The RFPI algorithm with a randomly generated sensing matrix (same as Case 1); however, the learned shrinkage thresholds vector is utilized with a fixed step-size.
Case 4: The proposed one-bit L-RFPI VAE method corresponding to the iterations of the form (20a)-(20d), with the learned sensing matrix, step-sizes, and shrinkage thresholds.

To have a fair comparison, we fine-tuned the parameters of the standard RFPI method (Case 1), i.e., the step-size and the shrinkage threshold, using a grid search. It can be seen from Fig. 3 that for all considered sparsity levels, the proposed L-RFPI method demonstrates a significantly better performance than the RFPI algorithm (described in Case 1), with an improvement of several times in the resulting MSE. Furthermore, the effectiveness of the learned sensing matrix (Case 2) and the learned shrinkage thresholds (Case 3), compared to the base algorithm (Case 1), is clearly evident, as both algorithms with learned parameters significantly outperform the original RFPI. Finally, although the network was trained on a limited set of sparsity levels, it still shows very good generalization even for sparsity levels unseen during training (see Fig. 3 (a) and (d)). This is presumably due to the fact that the proposed L-RFPI-based VAE is a hybrid model-based and data-driven approach that exploits the existing domain knowledge of the problem as well as the available data at hand. Furthermore, note that the proposed method achieves high accuracy very quickly and does not require solving (6) for several problem instances, as opposed to the original RFPI algorithm, thus showing great potential for use in real-time applications.

Performance of the proposed LG-RFPI VAE:
Next, we investigate the performance of the proposed LG-RFPI VAE (see Eqs. (26a)-(26c)) and the G-RFPI algorithm (see Eqs. (12a)-(12c)), which we specifically designed for incorporating arbitrary quantization thresholds at data acquisition. We investigate the performance of the proposed method both in recovering the amplitude information and in recovering the normalized signal.

Fig. 4 illustrates the MSE between the source signal and the recovered signal versus the total number of optimization iterations, for the four considered sparsity levels (panels (a)-(d)). Similar to the previous case, we consider the following scenarios:
Case 1: The proposed G-RFPI algorithm with a randomly generated sensing matrix and quantization thresholds vector, whose elements are i.i.d., and with fixed values for the step-size and the shrinkage threshold.
Case 2: The proposed G-RFPI algorithm where the learned sensing matrix and quantization thresholds vector are utilized, and the values for the step-size and the shrinkage threshold are fixed as in the previous case.
Case 3: The proposed one-bit LG-RFPI VAE method corresponding to the iterations of the form (26a)-(26c), with the learned sensing matrix, quantization thresholds, step-sizes, and shrinkage thresholds.

Note that the focus of this part is on recovering the amplitude information of the underlying $k$-sparse signal by means of arbitrary quantization thresholds. Although the RFPI method and the proposed L-RFPI VAE can only recover the normalized signal $\bar{\mathbf{x}}$, we further provide the performance of the L-RFPI method (which significantly outperforms the RFPI method) in recovering the amplitude information for comparison purposes.

Figure 5: The performance of the proposed LG-RFPI VAE and the proposed G-RFPI method in recovering the normalized $k$-sparse signals for a fixed sparsity level.
Figure 6: The performance of the proposed L-BIHT method compared to the baseline BIHT algorithm for two sparsity levels, shown in panels (a) and (b).

It can be observed from Fig. 4 that the proposed G-RFPI algorithm with a randomly generated sensing matrix and quantization thresholds (Case 1) provides good accuracy in recovering the amplitude information of the true signal for the considered sparsity levels. This is in contrast to the RFPI algorithm and the corresponding L-RFPI VAE, where the amplitude information is lost due to the zero quantization thresholds. More precisely, the proposed G-RFPI algorithm outperforms the RFPI and L-RFPI algorithms in terms of recovering the amplitude information of the signal. One can observe that even with randomly generated quantization thresholds (i.e., without learning them), the proposed G-RFPI method achieves a significantly lower MSE in terms of recovering the amplitude information of the source signal as compared to the RFPI and the proposed L-RFPI methods. Hence, the proposed G-RFPI method can be used as a stand-alone algorithm for one-bit compressive sensing settings with non-zero quantization thresholds, where both the direction and the amplitude information of the source signal are of great interest. Next, we explore the effect of learning the distribution-specific (data-driven) sensing matrix and quantization thresholds (Case 2). It is evident from Fig. 4 that, compared to the vanilla G-RFPI method, one can achieve a significantly lower MSE in terms of recovering the amplitude information by learning a proper sensing matrix and quantization thresholds and utilizing them during the data-acquisition process. Finally, it can be seen from Fig. 4 that the proposed LG-RFPI VAE (Case 3) significantly outperforms its counterparts by achieving a much lower MSE very quickly. Moreover, the proposed LG-RFPI VAE shows strong generalization properties for unseen sparsity levels (see Fig. 4 (b) and (d)). The fact that such architectures generalize well is due to the model-driven nature of the proposed deep networks.

We conclude this part by comparing the performance of the proposed LG-RFPI, G-RFPI, and L-RFPI methods in recovering the normalized version of the signal, $\bar{\mathbf{x}}$. Fig. 5 illustrates the MSE between the normalized source signal and the recovered signal versus the number of iterations, for a fixed sparsity level. It can be observed from Fig. 5 that the proposed methods outperform the standard RFPI iterations and achieve a high accuracy in recovering $\bar{\mathbf{x}}$. Moreover, the proposed L-RFPI VAE shows a slightly better performance than the LG-RFPI method. This is presumably due to the fact that the L-RFPI iterations and the corresponding deep architecture are specifically designed and tuned for recovering the normalized source signal, while the proposed G-RFPI and LG-RFPI algorithms are designed for recovering the amplitude information of the source signal. Nevertheless, the MSE difference between the LG-RFPI and L-RFPI methods in recovering $\bar{\mathbf{x}}$ is negligible, and hence, in a non-zero quantization thresholds setting, it is beneficial to use the proposed LG-RFPI VAE, as it shows a significant improvement in recovering the amplitude information while maintaining a high performance in recovering $\bar{\mathbf{x}}$ as well.

Performance of the proposed L-BIHT VAE:
In this part, we investigate the performance of the proposed L-BIHT VAE and compare our results with the standard BIHT algorithm. Note that, similar to the RFPI method and the proposed L-RFPI VAE, the BIHT algorithm considers $\boldsymbol{\tau}=\mathbf{0}$ at the time of data acquisition. Hence, we investigate the performance of the proposed method in recovering the normalized source signal $\bar{\mathbf{x}}$. In particular, we provide the simulation results for the following cases:
Case 1: The BIHT algorithm with a randomly generated sensing matrix whose elements are i.i.d., and with a fixed value for the step-size.
Case 2: The BIHT algorithm with a randomly generated sensing matrix (same as Case 1); however, learned gradient step-sizes are used at each iteration.
Case 3: The BIHT algorithm where the learned sensing matrix is utilized and the value of the step-size is fixed as in Case 1.
Case 4: The proposed one-bit L-BIHT VAE method corresponding to the iterations of the form (22a)-(22b), with the learned sensing matrix and step-sizes.

Figure 7: The performance of the proposed G-BIHT and the corresponding LG-BIHT VAE in recovering the amplitude information of the signal for two sparsity levels, shown in panels (a) and (b).

Fig. 6 demonstrates the MSE between the normalized source signal and the recovered signal versus the number of optimization iterations, for signals with two sparsity levels (panels (a) and (b)). Note that for learning the parameters of the proposed L-BIHT algorithm, we trained the corresponding deep architecture on a fixed sparsity level, and we check the generalization performance of the learned parameters on an unseen sparsity level as well. It can be seen from Fig. 6 that in both cases the proposed L-BIHT algorithm demonstrates a significantly better performance than the standard BIHT algorithm (Case 1). Moreover, the effectiveness of the learned step-sizes (Case 2) and the learned sensing matrix (Case 3), compared to the baseline vanilla BIHT algorithm (Case 1), is evident. In particular, the learned step-sizes (Case 2) result in a fast descent, while the learned sensing matrix (Case 3) leads to a lower MSE compared to Case 2. In addition, we provide the performance of the standard RFPI algorithm for comparison purposes. It can be seen from Fig. 6 that the BIHT algorithm, with and without the learned parameters, achieves a better accuracy in recovering the direction of the source signal compared to the RFPI method. Also, a comparison between Fig. 6 (a) and Fig. 3 (b) reveals that the proposed L-BIHT VAE demonstrates a far better performance than the proposed L-RFPI VAE. This is due to the fact that the BIHT algorithm, and the corresponding proposed L-BIHT VAE, exploit the knowledge of the sparsity level of the source signal (note the mapping function used in (22a) and (14b)). One can further observe that even for the unseen sparsity level, the proposed method generalizes very well and maintains its accuracy. This is due to the model-driven nature of the proposed L-BIHT VAE architecture. It is worth mentioning that, as can be observed from Fig. 6, the proposed L-BIHT method converges very fast (in 10 iterations), achieving a high accuracy and making it a great candidate for real-time applications. Of course, the trade-off between using L-RFPI and L-BIHT lies in the knowledge of the sparsity level of the signal. For applications where the sparsity level is known beforehand, the proposed L-BIHT can be used, as it shows higher accuracy compared to the other methods. However, the L-RFPI methodology is more flexible, as it does not require knowing the sparsity level of the signal a priori.

Figure 8: The performance of the proposed G-BIHT and the corresponding LG-BIHT VAE in recovering the normalized signal $\bar{\mathbf{x}}$ for two sparsity levels, shown in panels (a) and (b).

Performance of the proposed LG-BIHT VAE:
Finally, we investigate the performance of the proposed G-BIHT method (see Eqs. (16a)-(16b)) and the corresponding one-bit compressive LG-BIHT VAE (see Eqs. (27a)-(27b)), which are specifically designed to handle non-zero quantization thresholds. In particular, we are interested in evaluating the performance of the proposed methods in recovering the amplitude information of the source $k$-sparse signal. Hence, for this part, we check the MSE between the true signal and the recovered signal from the G-BIHT and LG-BIHT methods at each iteration. In addition, we provide the results for recovering the direction of the source signal as well. Specifically, we provide the simulation results for the following cases:
Case 1: The proposed G-BIHT algorithm with a randomly generated sensing matrix and quantization thresholds vector, whose elements are i.i.d., and with a fixed value for the step-size.
Case 2: The proposed G-BIHT algorithm where the learned sensing matrix and quantization thresholds are utilized, and the value of the step-size is fixed as in the previous case.
Case 3: The proposed one-bit LG-BIHT VAE method corresponding to the iterations of the form (27a)-(27b), with the learned sensing matrix, quantization thresholds, and step-sizes.

Fig. 7 illustrates the MSE between the true signal and the recovered signal versus the optimization iteration, for two sparsity levels (panels (a) and (b)). We further provide the numerical results for the proposed LG-RFPI VAE and the proposed G-RFPI iterations for comparison. It can be seen from Fig. 7 that the proposed G-BIHT algorithm with randomly generated latent variables (Case 1) significantly outperforms its G-RFPI counterpart and achieves a high accuracy very quickly. On the other hand, the proposed LG-RFPI still achieves a lower MSE compared to the vanilla G-RFPI method. In addition, a comparison between the performance of the proposed G-BIHT algorithm with the learned sensing matrix and quantization thresholds (Case 2), the proposed LG-RFPI VAE, and the vanilla G-BIHT method (Case 1) reveals the effectiveness of the learned parameters and the power of the proposed G-BIHT algorithm. Namely, by utilizing only the learned sensing matrix and quantization thresholds, and by using a fixed step-size for the G-BIHT algorithm, one can achieve a performance superior to that of the LG-RFPI (where all of the learned variables are in use) and the vanilla G-BIHT method. Finally, it can be observed from Fig. 7 (a)-(b) that the proposed LG-BIHT algorithm (Case 3) significantly outperforms the other methods, as it achieves a much lower MSE very quickly, specifically compared to the proposed LG-RFPI VAE. The superior performance of the G-BIHT algorithm and the corresponding LG-BIHT VAE is due to the fact that we are exploiting the knowledge of the sparsity level of the signal. As discussed before, if the sparsity level is known a priori, it is beneficial to use either the G-BIHT algorithm (when one does not wish to perform any learning) or the proposed LG-BIHT methodology. It is worth mentioning that, similar to the previously investigated methods, the proposed LG-BIHT generalizes very well (see Fig. 7(b)), even though the corresponding sparsity level was not revealed to the network during the training phase.

Fig. 8 demonstrates the MSE between the direction of the source signal, i.e., $\bar{\mathbf{x}}$, and the recovered direction versus the optimization iteration, for two sparsity levels (panels (a) and (b)). It can be seen from Fig. 8 that the proposed LG-BIHT method outperforms the LG-RFPI method and, furthermore, achieves an MSE similar to that of the proposed L-RFPI method. However, the convergence of LG-BIHT is much faster than that of the L-RFPI method. Furthermore, the proposed L-BIHT algorithm still achieves a performance superior to the other methods, both in terms of convergence speed and accuracy. This is presumably due to the fact that the L-BIHT method is specifically designed and learned to have a high accuracy in finding the normalized true signal $\bar{\mathbf{x}}$.

V Conclusion

In this paper, we considered the problem of one-bit compressive sensing and proposed a novel hybrid model-driven and data-driven variational autoencoding scheme that allows us to jointly learn the parameters of the measurement module (i.e., the sensing matrix and the quantization thresholds) and the latent variables of the decoder (estimator) function, based on the underlying distribution of the data. In broad terms, we proposed a novel methodology that combines traditional compressive sensing techniques with model-based deep learning, resulting in interpretable deep architectures for the problem of one-bit compressive sensing. In addition, the proposed method can handle the recovery of the amplitude information of the signal using the learned and optimized quantization thresholds. Our simulation results demonstrated that the proposed hybrid methodology is superior to the state-of-the-art methods for the problem of one-bit CS in terms of both computational efficiency and accuracy.

References