LoRD-Net: Unfolded Deep Detection Network with Low-Resolution Receivers

The need to recover high-dimensional signals from their noisy low-resolution quantized measurements is widely encountered in communications and sensing. In this paper, we focus on the extreme case of one-bit quantizers, and propose a deep detector entitled LoRD-Net for recovering information symbols from one-bit measurements. Our method is a model-aware data-driven architecture based on the deep unfolding of first-order optimization iterations. LoRD-Net has a task-based architecture dedicated to recovering the underlying signal of interest from the one-bit noisy measurements without requiring prior knowledge of the channel matrix through which the one-bit measurements are obtained. The proposed deep detector has far fewer parameters than black-box deep networks, due to the incorporation of domain knowledge in the design of its architecture, allowing it to operate in a data-driven fashion while benefiting from the flexibility, versatility, and reliability of model-based optimization methods. LoRD-Net operates in a blind fashion, which requires addressing both the non-linear nature of the data-acquisition system and the identification of a proper optimization objective for signal recovery. Accordingly, we propose a two-stage training method for LoRD-Net, in which the first stage is dedicated to identifying the proper form of the optimization process to unfold, while the second trains the resulting model in an end-to-end manner. We numerically evaluate the proposed receiver architecture for one-bit signal recovery in wireless communications and demonstrate that the proposed hybrid methodology outperforms both data-driven and model-based state-of-the-art methods, while utilizing small datasets, on the order of merely ∼500 samples, for training.


I Introduction

Analog-to-digital conversion plays an important role in digital signal processing systems. While physical signals take values in continuous time over continuous sets, they must be represented using a finite number of bits in order to be processed in digital hardware [eldar2015sampling]. This operation is carried out using analog-to-digital converters (ADCs), which typically perform uniform sampling followed by uniform quantization of the discrete-time samples. When using high-resolution ADCs, this conversion induces minimal distortion, allowing the signal to be effectively processed using methods derived under the assumption of access to the continuous-amplitude samples. However, the cost, power consumption, and memory requirements of ADCs grow with the sampling rate and the number of bits assigned to each sample [walden1999analog]. Consequently, recent years have witnessed an increasing interest in digital signal processing systems operating with low-resolution ADCs. Particularly, in multiple-input multiple-output (MIMO) communication receivers, which are required to simultaneously capture multiple analog signals with high bandwidth, there is a growing need to operate reliably with low-resolution ADCs [andrews2014will]. The coarsest form of quantization reduces the signal to a single bit per sample, which may be accomplished by comparing the sample to some reference level and recording whether the signal is above or below the reference. One-bit acquisition allows the use of high sampling rates at low cost and low energy consumption. Due to these favorable properties, one-bit ADCs have been employed in a wide array of applications, including wireless communications [jeon2018one, rao2020massive, 8683876], radar signal processing [ameri2019one, jin2020one, xi2020bilimo], and sparse signal recovery [xiao2019one, khobahi2019model].

The non-linear nature of low-resolution quantization makes symbol detection a challenging task. This situation is significantly exacerbated in practical one-bit communication and sensing settings, where the channel must be estimated in conjunction with symbol detection. A coherent symbol detection task is concerned with recovering the underlying signal of interest from the one-bit measurements assuming the channel state information (CSI) is known at the receiver. On the other hand, the more difficult task of blind symbol detection, which is the focus here, carries out recovery of the underlying transmitted symbols when CSI is not available.

Two main strategies have been proposed in the literature to facilitate operation with low-resolution ADCs: The first designs the overall acquisition system in light of the task for which the signals are acquired. For instance, MIMO communication receivers acquire their channel output in order to extract some underlying information, e.g., symbol detection. As the analog signals are not required to be recovered from their digital representation, one can design the acquisition system to reliably infer the desired information while operating with low resolution ADCs [shlezinger2018hardware, salamatian2019task, shlezinger2019deep, shlezinger2020learning, neuhaus2020task]. Such task-based quantization systems rely on pre-quantization processing, which requires dedicated hardware in the form of hybrid receiver architectures [gong2019rf, ioushua2019family] or unique antenna structures [wang2019dynamic, shlezinger2020dynamic], which are configured along with the quantization rule.

An alternative approach to task-based quantization, which does not require additional configurable analog hardware and is the focus of the current work, is to recover the desired information from the distorted, coarsely discretized representation of the signal in the digital domain. The main benefit of schemes carried out only in the digital domain is their simplicity of implementation, as they do not require modifications to the quantization system and circumvent the need for additional pre-quantization analog processing hardware. In the context of MIMO systems, various methods have been proposed in the literature for channel estimation and signal decoding from quantized outputs, including model-based signal processing methods, as surveyed in [liu2019low], as well as model-agnostic systems based on machine learning and data-driven techniques [zhang2020deep, klautau2018detection, balevi2019one, balevi2019two, balevi2020autoencoder, kim2019machine, nguyen2020svm, nguyen2020linear].

Most existing model-based detection algorithms require coherent operation, i.e., they rely on prior knowledge of the CSI and other system parameters. Among these works are the near-Maximum Likelihood (nML) detector proposed for one-bit MIMO receivers in [choi2016near], the linear receivers studied in [risi2014massive, jacobsson2015one], and the message passing based detectors considered in [ivrlac2007mimo, mo2014channel]. The fact that such approaches require accurate CSI led to several works specifically dedicated to CSI estimation in the presence of low-resolution ADCs. These include [choi2016near, mezghani2018blind], which studied maximum-likelihood estimation for recovering the CSI in the presence of one-bit data, the works in [li2017channel, jacobsson2017throughput], which developed linear estimators for CSI estimation purposes in one-bit MIMO systems, and [mo2017channel] which focuses on sparse channels and utilizes one-bit sparse recovery methods for CSI estimation. However, all these strategies inevitably induce non-negligible CSI estimation error, which may notably degrade the accuracy in signal detection based on the estimated CSI.

Over the past several years, data-driven methods, and specifically deep neural networks (DNNs), have attracted unprecedented attention from research communities across the board. The advent of low-cost, specialized, powerful computing resources and the continually increasing amount of massive data generated by the human population and machines, along with new optimization and learning methods, have paved the way for DNNs and machine learning-based models to prove their effectiveness in many engineering areas, such as computer vision and natural language processing [lecun2015deep]. DNNs learn their mapping from data in a model-agnostic manner, and can thus facilitate non-coherent (blind) detection.

Previously proposed DNN-aided symbol detection techniques for communication receivers can be divided based on their receiver architectures; namely, those that utilize conventional machine learning architectures for detection, including [farsad2017detection, corlay2018multilevel, liao2019deep], and schemes combining DNNs with model-based detection methods, such as the blind DNN-aided receivers proposed in [shlezinger2019viterbinet, shlezinger2020deepsic, shlezinger2020data, he2019model] and the coherent detectors of [samuel2019learning, takabe2019trainable]; see also the surveys in [balatsoukas2019deep, farsad2020data]. In the context of one-bit DNN-aided receivers, previous works to date focus mainly on the first approach, i.e., applying conventional DNNs for the overall detection task. Among these works are [zhang2020deep, balevi2019two] and [klautau2018detection], which applied generic DNNs for channel estimation in one-bit MIMO receivers. The application of conventional architectures for symbol detection was studied in [balevi2019one, kim2019machine] and [nguyen2020svm], while [balevi2020autoencoder] showed that autoencoders can facilitate the design of error correction codes for communications with one-bit receivers. Recently, the authors in [nguyen2020linear] considered the problem of symbol detection for a one-bit massive MIMO system and proposed a linear estimator module based on the Bussgang decomposition technique combined with a model-driven neural network.

The vast majority of the aforementioned works on learning-aided one-bit receivers rely on conventional DNN architectures. Such DNNs require a massive amount of training samples and must be trained on data from the same (or a similar) statistical model as the one under which they are required to operate, imposing a major challenge in dynamic wireless communications. In fact, the use of generic black-box DNNs is mostly justified in applications where a satisfactory description of the underlying governing dynamics of the system is not achievable, as is the case in computer vision and natural language processing. As surveyed above, this is not the case in the field of one-bit MIMO systems. This gives rise to the need to bridge the gap between data-driven and model-based approaches in this context, and to move towards specialized deep learning models for signal processing in one-bit MIMO systems, which is the aim of this work.

In this paper, we develop a hybrid model-based and data-driven system which learns to carry out blind symbol detection from one-bit measurements. The proposed architecture, referred to as LoRD-Net (Low Resolution Detection Network), combines the well-established model-based maximum-likelihood estimator (MLE) with machine learning tools through the deep unfolding method [hershey2014deep, monga2019algorithm, khobahi2020unfolded, agarwal2020deep, khobahi2020deep, naimipour2020upr] for designing DNNs based on model-based optimization algorithms. To derive LoRD-Net, we first formulate the MLE for the task of symbol detection from one-bit samples. Next, we resort to first-order gradient-based methods for the MLE computation, and unfold the iterations onto layers of a DNN. The resulting LoRD-Net learns to carry out MLE-approaching symbol detection without requiring CSI.

Applying conventional gradient-based optimization methods requires knowledge of the underlying system parameters, i.e., full CSI. Hence, a typical approach to unfold such a symbol detection algorithm would be to estimate the unknown parameters from training data, and substitute them into the unfolded network [he2019model]. We show that instead of estimating the unknown system parameters, it is preferable to learn an alternative channel which allows the receiver to detect the symbols reliably. Surprisingly, we demonstrate that the alternative channel learned by LoRD-Net is in general not the true channel. Based on this observation, we propose a two-stage training procedure, comprised of learning the proper optimization process to unfold, followed by an end-to-end training of the unfolded DNN.

The proposed LoRD-Net has thus the following properties:

  1. Compared to the vanilla MLE symbol detector, our model does not need to estimate the channel separately.

  2. Owing to its hybrid nature, it has low computational cost in operation and is highly scalable, facilitating much faster inference as compared to its black-box data-driven and model-based counterparts.

  3. The proposed deep architecture is interpretable and has far fewer parameters compared to existing black-box deep learning solutions. This follows from the incorporation of domain-knowledge in the design of the network architecture (i.e., being model-based), allowing LoRD-Net to train with much fewer labeled samples as compared to existing data-driven one-bit receivers.

We verify the above characteristics of LoRD-Net in an experimental study, where we show that training of the proposed LoRD-Net architecture can be performed with far fewer samples as compared to its data-driven counterparts, and demonstrate substantially superior performance compared to existing model-based and data-driven algorithms for symbol detection in massive MIMO channels with one-bit ADCs.

The rest of the paper is organized as follows. In Section II, we present the considered system model and the corresponding MLE formulation. In Section III, we derive LoRD-Net by unfolding the first-order gradient iterations associated with the MLE computation, and present its two-stage training procedure. Section IV provides a detailed numerical analysis of LoRD-Net applied to MIMO communications. Finally, Section V concludes the paper.

Throughout the paper, we use the following notation. Bold lowercase and bold uppercase letters denote vectors and matrices, respectively. We use $(\cdot)^T$, $\mathrm{Diag}(\cdot)$, $\mathrm{sign}(\cdot)$, and $\log(\cdot)$ to denote the transpose operator, the diagonal matrix formed by the entries of the vector argument, the sign operator, and the natural logarithm, respectively. The symbol $\odot$ represents the Hadamard product, while $\mathbf{1}$ and $\mathbf{0}$ are the all-one and all-zero vectors/matrices. The $i$-th entry of the vector $\mathbf{x}$ is $[\mathbf{x}]_i$, and $\|\mathbf{x}\|_p$ is the $\ell_p$-norm of $\mathbf{x}$; $\mathcal{X}^n$ is the $n$-ary Cartesian product of a set $\mathcal{X}$, and $\mathbb{S}^{n}_{++}$ denotes the cone of symmetric positive definite matrices.

II System Model and Preliminaries

In this section, we discuss the considered system model. We focus on one-bit data acquisition and blind signal recovery. We then formulate the MLE for this problem, which is used in designing the LoRD-Net architecture in Section III.

II-A Problem Formulation

We consider a low-resolution data-acquisition system which utilizes one-bit ADCs. Letting $\mathbf{y} \in \mathbb{R}^m$ denote the received signal, the discrete output of the ADCs can be written as $\mathbf{r} = \mathrm{sign}(\mathbf{y} - \boldsymbol{\tau})$, where $\boldsymbol{\tau}$ denotes the vector of quantization thresholds, and $\mathrm{sign}(\cdot)$ is the sign function, i.e., $\mathrm{sign}(y) = +1$ if $y \geq 0$ and $-1$ otherwise. The received vector $\mathbf{y}$ is statistically related to the unknown vector of interest $\mathbf{x} \in \mathbb{R}^n$ according to the following linear relationship:

$$\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}, \qquad (1)$$

where $\mathbf{n}$ denotes additive Gaussian noise with a covariance matrix of the form $\boldsymbol{\Sigma} = \mathrm{Diag}(\sigma_1^2, \ldots, \sigma_m^2)$, whose diagonal entries $\{\sigma_i^2\}$ represent the noise variance at each respective dimension, and $\mathbf{H} \in \mathbb{R}^{m \times n}$ is the channel matrix. We assume that the elements of the unknown vector $\mathbf{x}$ are chosen independently from a finite alphabet $\mathcal{X}$. This setup represents low-resolution receivers in uplink multi-user MIMO systems, where $\mathbf{x}$ holds the symbols transmitted by the $n$ users, and $\mathbf{y}$ is the corresponding channel output, as illustrated in Fig. 1.

Figure 1: System model illustration.

The overall dynamics of the system are thus compactly expressed as:

$$\mathbf{r} = \mathrm{sign}(\mathbf{H}\mathbf{x} + \mathbf{n} - \boldsymbol{\tau}). \qquad (2)$$

In the sequel, we refer to $\boldsymbol{\Omega} = (\mathbf{H}, \boldsymbol{\Sigma})$ as the system parameters. Note that the above system model can be modified using conventional transformations to accommodate a complex-valued system model.

Our main goal is to perform the task of symbol detection, i.e., recover $\mathbf{x} \in \mathcal{X}^n$, from the collected one-bit measurements $\mathbf{r}$. We focus on blind (non-coherent) recovery, namely, the system parameters $\boldsymbol{\Omega}$, i.e., the channel matrix and the covariance of the noise, are not available to the receiver. Nonetheless, the receiver has access to a limited set of labeled samples $\{(\mathbf{x}_t, \mathbf{r}_t)\}_{t=1}^{T}$, representing, e.g., pilot transmissions. The quantization thresholds of the ADCs, i.e., the vector $\boldsymbol{\tau}$, are assumed to be fixed and known. While we do not consider the selection of $\boldsymbol{\tau}$ in the following, we discuss in the sequel how its optimization can be incorporated into the detection method.
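To make the acquisition model concrete, the following is a minimal NumPy sketch of (1)-(2); the dimensions, noise levels, and BPSK alphabet are illustrative assumptions rather than the values used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 32, 4                       # receive dimensions / users (assumed values)
H = rng.standard_normal((m, n))    # channel matrix, unknown to the receiver
sigma = 0.5 * np.ones(m)           # per-dimension noise standard deviations
tau = np.zeros(m)                  # quantization thresholds (zero here)

x = rng.choice([-1.0, 1.0], size=n)           # BPSK symbol vector
y = H @ x + sigma * rng.standard_normal(m)    # linear channel output, eq. (1)
r = np.sign(y - tau)                          # one-bit ADC output, eq. (2)
```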

II-B Maximum Likelihood Recovery

To understand the challenges associated with blind low-resolution detection, we next discuss the MLE for recovering $\mathbf{x}$ from $\mathbf{r}$. In particular, the intuitive model-based approach is to utilize the labeled data to estimate the system parameters $\boldsymbol{\Omega}$, and then to use this estimate to compute the coherent (non-blind) MLE. Therefore, to highlight the limitations of this strategy, we assume here that the system parameters are fully known at the receiver. Let

$$\mathcal{L}(\mathbf{x}; \boldsymbol{\Omega}) = \mathbf{1}^T \log \Phi\big(\mathrm{Diag}(\mathbf{r})\,\boldsymbol{\Sigma}^{-1/2}(\mathbf{H}\mathbf{x} - \boldsymbol{\tau})\big) \qquad (3)$$

represent the log-likelihood objective for a given vector of one-bit observations $\mathbf{r}$, where $\Phi(\cdot)$ is the cumulative distribution function of the standard Normal distribution applied element-wise; this expression is proven in [8683876]. The coherent MLE is then given by

$$\hat{\mathbf{x}}_{\rm ML} = \arg\max_{\mathbf{x} \in \mathcal{X}^n} \mathcal{L}(\mathbf{x}; \boldsymbol{\Omega}). \qquad (4)$$

Although the MLE in (4) has full, accurate knowledge of the parameters $\boldsymbol{\Omega}$, its computation is still challenging. The main difficulty emanates from solving the underlying optimization problem in the discrete domain, implying that the MLE requires an exhaustive search over the discrete set $\mathcal{X}^n$, whose computational complexity grows exponentially with $n$. A common strategy to tackle the discrete optimization problem in (4) is to relax the search space to be continuous. This results in the following relaxed unconstrained MLE rule:

$$\hat{\mathbf{x}}(\mathbf{r}; \boldsymbol{\Omega}) = \arg\max_{\mathbf{x} \in \mathbb{R}^n} \mathcal{L}(\mathbf{x}; \boldsymbol{\Omega}). \qquad (5)$$

The optimization problem in (5) is convex due to the log-concavity of $\Phi(\cdot)$, and thus can be solved using first-order gradient optimization. In particular, the gradient of the negative log-likelihood function with respect to the unknown vector $\mathbf{x}$ can be compactly expressed as [8683876]:

$$\nabla_{\mathbf{x}}\big({-\mathcal{L}(\mathbf{x}; \boldsymbol{\Omega})}\big) = -\tilde{\mathbf{H}}^T f\big(\tilde{\mathbf{H}}\mathbf{x} - \tilde{\boldsymbol{\tau}}\big), \qquad (6)$$

where $f(\cdot)$ is a non-linear function defined as $f(\mathbf{z}) = \Phi'(\mathbf{z}) \oslash \Phi(\mathbf{z})$, in which the operator $\oslash$ denotes the element-wise division operation, $\Phi'(\cdot)$ is the derivative of $\Phi(\cdot)$, given by the probability density function of a standard Normal distribution, $\tilde{\boldsymbol{\tau}} = \mathrm{Diag}(\mathbf{r})\,\boldsymbol{\Sigma}^{-1/2}\boldsymbol{\tau}$, and $\tilde{\mathbf{H}} = \mathrm{Diag}(\mathbf{r})\,\boldsymbol{\Sigma}^{-1/2}\mathbf{H}$ is the semi-whitened version of the channel matrix associated with the one-bit measurements.

As $\hat{\mathbf{x}}$ obtained via (5) is not guaranteed to take values in $\mathcal{X}^n$, the final estimate of the symbols is obtained by applying a projection operator $\Pi_{\mathcal{X}^n}(\cdot)$ to $\hat{\mathbf{x}}$. This operator maps the continuous input vector onto its closest lattice point in the discrete set $\mathcal{X}^n$, i.e.,

$$\Pi_{\mathcal{X}^n}(\hat{\mathbf{x}}) = \arg\min_{\mathbf{v} \in \mathcal{X}^n} \|\mathbf{v} - \hat{\mathbf{x}}\|_2. \qquad (7)$$
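As a concrete illustration of (5)-(7), the following is a hedged NumPy/SciPy sketch of the relaxed MLE computed via plain gradient ascent on (3), followed by the projection (7). The step size and iteration count are illustrative, and the element-wise ratio $\Phi'(z)/\Phi(z)$ may require a log-domain implementation for numerical safety; this is a sketch, not the released code.

```python
import numpy as np
from scipy.stats import norm

def relaxed_mle(r, H, sigma, tau, mu=0.05, iters=300):
    """Gradient ascent on the one-bit log-likelihood (3); see eq. (6)."""
    D = (r / sigma)[:, None] * H                # semi-whitened Diag(r) S^(-1/2) H
    x_hat = np.zeros(H.shape[1])                # initial point x^0
    for _ in range(iters):
        z = (r / sigma) * (H @ x_hat - tau)     # argument of Phi in (3)
        f = norm.pdf(z) / norm.cdf(z)           # f(z) = Phi'(z) / Phi(z), from (6)
        x_hat = x_hat + mu * (D.T @ f)          # ascent step on L(x; Omega)
    return x_hat

# x_det = np.sign(relaxed_mle(r, H, sigma, tau))   # projection (7) for BPSK
```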

Tackling a discrete program via continuous relaxation, as done in (5), is subject to an inherent drawback. As a case in point, one can only expect (5) to provide an accurate approximation of the true MLE if the real-valued vector $\hat{\mathbf{x}}$ is very close to the discrete-valued MLE $\hat{\mathbf{x}}_{\rm ML}$. In such a case, the MLE is obtained by projecting $\hat{\mathbf{x}}$ onto the lattice points in $\mathcal{X}^n$. However, this is not the case in many scenarios, and specifically, when the noise variance in each respective dimension is high. In other words, it is not necessarily the case that the solution of the continuous problem (5) is close to the MLE, which takes values in the discrete set $\mathcal{X}^n$. Note that utilizing the true system parameters will only lead to optimal estimates when considering the original discrete problem (4). In fact, one can no longer necessarily argue that the true system parameters are optimal choices for $\boldsymbol{\Omega}$ in the relaxed MLE. This insight, which is obtained from the computation of the coherent MLE, is used in our derivation of the blind unfolded detector in the following section.

III Proposed Methodology

In this section, we present the proposed Low Resolution Detection Network, abbreviated as LoRD-Net. We begin with a high-level description of LoRD-Net in Subsection III-A. Then, we present the unfolded architecture in Subsection III-B and discuss the training procedure in Subsection III-C. Finally, we provide a discussion in Subsection III-D.

III-A High-Level Description

As noted in the previous section, the intuitive approach to blind symbol detection is to utilize the labeled data to estimate the true system model $\boldsymbol{\Omega}$, and then to recover the symbol vector $\mathbf{x}$ from $\mathbf{r}$ using the MLE. Nonetheless, the coherent MLE (4) is computationally prohibitive, while its relaxed version in (5) may be inaccurate. Alternatively, one can seek a purely data-driven strategy, using the data to train a black-box highly-parameterized DNN for detection, requiring a massive amount of labeled samples. Consequently, to facilitate accurate detection at affordable complexity and with limited data, we design LoRD-Net via model-based deep learning [shlezinger2020model], by combining the learning of a competitive objective with deep unfolding of the relaxed MLE.

Learning a competitive objective refers to the setting of the unknown system parameters $\boldsymbol{\Omega}$. However, the goal here is not to estimate the true system parameters, but rather the ones for which the solution to the relaxed MLE coincides with the true value of $\mathbf{x}$. This system identification problem can be written as

$$\boldsymbol{\Omega}^\star = \arg\min_{\boldsymbol{\Omega}} \sum_{t=1}^{T} \big\|\mathbf{x}_t - \hat{\mathbf{x}}(\mathbf{r}_t; \boldsymbol{\Omega})\big\|_2^2, \qquad (8)$$

where $\hat{\mathbf{x}}(\mathbf{r}; \boldsymbol{\Omega})$ is the relaxed MLE (5). The optimization problem (8) yields a surrogate objective function $\mathcal{L}(\cdot\,; \boldsymbol{\Omega}^\star)$, or equivalently, a set of system parameters $\boldsymbol{\Omega}^\star$, referred to as a competitive objective to the true one. An illustration of such a competitive objective, obtained for a one-dimensional example, is depicted in Fig. 2.

Figure 2: An illustration of the relation between the optimal point of a competitive objective function (dashed blue line) and the true MLE obtained by an exact maximization of the log-likelihood objective function (solid black line) over the discrete set $\mathcal{X}^n$, as well as an approximation of the MLE obtained by maximization of the log-likelihood objective function over the continuous space, relative to the true transmitted symbol.

The main difficulty in solving (8) stems from the fact that $\hat{\mathbf{x}}(\mathbf{r}; \boldsymbol{\Omega})$ is not differentiable with respect to the system parameters $\boldsymbol{\Omega}$. We overcome this obstacle by applying a differentiable approximation of $\hat{\mathbf{x}}(\mathbf{r}; \boldsymbol{\Omega})$, or equivalently, of the $\arg\max$ operator specific to our problem. Since $\hat{\mathbf{x}}(\mathbf{r}; \boldsymbol{\Omega})$ can be computed by first-order gradient methods, we design a deep unfolded network [monga2019algorithm] to compute the relaxed MLE in a manner which is differentiable with respect to $\boldsymbol{\Omega}$. The usage of deep unfolding not only allows learning a competitive objective via (8), but also results in accurate inference with a reduced number of iterations compared to model-based first-order gradient optimization. Furthermore, the unfolded network utilizes a relatively small number of trainable parameters, thus enabling learning from small amounts of labeled samples.

III-B LoRD-Net Architecture

We now present the architecture of LoRD-Net, which maps the low-resolution observations $\mathbf{r}$ into an estimate of $\mathbf{x}$. For given system parameters $\boldsymbol{\Omega}$, whose learning is detailed in Subsection III-C based on the competitive objective rationale described above, LoRD-Net is obtained by unfolding the iterations of a first-order optimization of the relaxed MLE (5). Our derivation thus begins by formulating the first-order method that iteratively solves (5) for a given $\boldsymbol{\Omega}$.

Let $\mathcal{T}_{\mathbf{W}}(\cdot)$ be a parametrized operator defined as $\mathcal{T}_{\mathbf{W}}(\mathbf{x}) = \mathbf{x} - \mathbf{W}\,\nabla_{\mathbf{x}}\big({-\mathcal{L}(\mathbf{x}; \boldsymbol{\Omega})}\big)$, where $\mathbf{W} \in \mathbb{S}^{n}_{++}$ is a positive-definite weight matrix which constitutes the set of parameters of the operator $\mathcal{T}$. Such an operator can be used to model a first-order optimization solver by considering a composition of mappings of the form:

$$\mathcal{G}(\mathbf{r}; \boldsymbol{\theta}) = \big(\mathcal{T}_{\mathbf{W}_K} \circ \mathcal{T}_{\mathbf{W}_{K-1}} \circ \cdots \circ \mathcal{T}_{\mathbf{W}_1}\big)(\mathbf{x}^0), \qquad (9)$$

where $\mathbf{x}^0$ is an initial point, and $\boldsymbol{\theta} = \big(\boldsymbol{\Omega}, \{\mathbf{W}_k\}_{k=1}^{K}\big)$ is the set of parameters of the overall mapping $\mathcal{G}$. The mapping (9) is differentiable with respect to the system parameters $\boldsymbol{\Omega}$ and its local weights $\{\mathbf{W}_k\}$. For a fixed number of iterations $K$, the resulting function is thus differentiable with respect to the set of parameters $\boldsymbol{\theta}$ and its input (unlike the original $\arg\max$ operator). Therefore, it can now be used as a differentiable approximation of $\hat{\mathbf{x}}(\mathbf{r}; \boldsymbol{\Omega})$, which allows training (optimization) over the set of its parameters using gradient-based training algorithms and the back-propagation technique.

Following the deep unfolding framework [monga2019algorithm], the function $\mathcal{G}(\cdot\,; \boldsymbol{\theta})$ can be implemented as a $K$-layer feed-forward neural network, where the initial point $\mathbf{x}^0$ and the one-bit samples $\mathbf{r}$ constitute the input to the network, and with trainable parameters given by $\boldsymbol{\theta}$. By (6), the $k$-th layer computes:

$$\mathbf{z}^{k-1} = \tilde{\mathbf{H}}\mathbf{x}^{k-1} - \tilde{\boldsymbol{\tau}}, \qquad (10)$$
$$\mathbf{x}^{k} = \mathbf{x}^{k-1} + \mathbf{W}_k\,\tilde{\mathbf{H}}^T f\big(\mathbf{z}^{k-1}\big), \qquad (11)$$

and the overall dynamics of LoRD-Net are given by:

$$\hat{\mathbf{x}} = \mathcal{G}(\mathbf{r}; \boldsymbol{\theta}) = \big(\mathcal{T}_{\mathbf{W}_K} \circ \cdots \circ \mathcal{T}_{\mathbf{W}_1}\big)(\mathbf{x}^0). \qquad (12)$$

Each vector $\mathbf{x}^{k-1}$ in (10) represents the input to the $k$-th layer (or equivalently, the output of the previous layer), with $\mathbf{x}^0$ being the input of the entire network (which represents the initial point for the optimization task). Upon the arrival of any new one-bit measurement $\mathbf{r}$, the recovered symbols are obtained by feed-forwarding $\mathbf{r}$ through the layers of LoRD-Net. In order to obtain discrete symbols, the output of LoRD-Net is projected onto the feasible discrete set $\mathcal{X}^n$, viz.

$$\hat{\mathbf{x}}_{\rm final} = \Pi_{\mathcal{X}^n}\big(\mathcal{G}(\mathbf{r}; \boldsymbol{\theta})\big). \qquad (13)$$

An illustration of LoRD-Net is depicted in Fig. 3.
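To make the layer dynamics (10)-(12) concrete, the following is a minimal PyTorch sketch of the unfolded network. The module and variable names are our own, and the diagonal pre-conditioner parameterization $\mathbf{W}_k = \mathrm{Diag}(\boldsymbol{\ell}_k^2)$, an instance of the factorization $\mathbf{W}_k = \mathbf{L}_k\mathbf{L}_k^T$ described in Subsection III-C, is an assumption consistent with, but not identical to, the released implementation.

```python
import torch
import torch.nn as nn

class LoRDNet(nn.Module):
    def __init__(self, m, n, num_layers, sigma, tau):
        super().__init__()
        self.H = nn.Parameter(torch.randn(m, n) / m ** 0.5)  # learned surrogate channel
        self.L = nn.Parameter(torch.ones(num_layers, n))     # W_k = diag(l_k**2) >= 0
        self.register_buffer("inv_sigma", 1.0 / sigma)       # diagonal of S^(-1/2)
        self.register_buffer("tau", tau)
        self.std_normal = torch.distributions.Normal(0.0, 1.0)

    def forward(self, r):                        # r: (batch, m), entries in {-1, +1}
        x = r.new_zeros(r.shape[0], self.H.shape[1])         # initial point x^0
        D = r * self.inv_sigma                   # rows of Diag(r) S^(-1/2)
        for l in self.L:                         # one unfolded iteration per layer
            z = D * (x @ self.H.T - self.tau)    # eq. (10)
            f = self.std_normal.log_prob(z).exp() / self.std_normal.cdf(z)
            x = x + (l ** 2) * ((D * f) @ self.H)            # eq. (11)
        return x                                 # apply sign() at inference, eq. (13)
```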

We note that one can also propose an alternative architecture, derived by applying the projection operator at the output of each layer, i.e., by defining $\mathcal{T}_{\mathbf{W}}(\mathbf{x}) = \Pi_{\mathcal{X}^n}\big(\mathbf{x} - \mathbf{W}\,\nabla_{\mathbf{x}}({-\mathcal{L}(\mathbf{x}; \boldsymbol{\Omega})})\big)$. Such a setting corresponds to the unfolding of a projected gradient descent method. However, our numerical investigations have consistently shown that such an architecture suffers from the vanishing gradient problem during training and a significant degradation in performance. As a result, we implement LoRD-Net while applying the projection operator once on the output of the network, and only during inference, as discussed above.

Figure 3: An illustration of LoRD-Net, where trainable system parameters and unfolded weights are highlighted in red and green colors, respectively.

In principle, one can fix $\mathbf{W}_k = \mu\mathbf{I}$ for some $\mu > 0$, for which (12) represents $K$ steps of gradient descent with step size $\mu$. In the unfolded implementation, the weights $\{\mathbf{W}_k\}$ are tuned from data, allowing detection with fewer iterations, i.e., layers. As a result, once LoRD-Net is trained, i.e., its weight matrices and the unknown system parameters are learned from data, it is capable of carrying out fast inference, owing to its hybrid model-based/data-driven structure. Furthermore, the number of iterations $K$ is optimized to boost fast inference in the training procedure, as detailed in the following.

III-C Training Procedure

Herein, we present the training procedure for LoRD-Net. In particular, our main goal is to infer the unknown system parameters $\boldsymbol{\Omega}$ based on the rationale detailed in Subsection III-A, i.e., to obtain a competitive objective. The learned competitive objective is then used to tune the weights $\{\mathbf{W}_k\}$ of the unfolded network. Accordingly, we present a two-stage training procedure for LoRD-Net (12). Once the training of LoRD-Net is completed, it carries out symbol detection from one-bit observations without requiring knowledge of the system parameters $\boldsymbol{\Omega}$.

III-C.1 Training Stage 1 - Learning a Competitive Objective

The first stage corresponds to learning the unknown system parameters $\boldsymbol{\Omega}$. However, as formulated in (8), we do not seek to estimate the true values of the channel matrix $\mathbf{H}$ and noise covariance $\boldsymbol{\Sigma}$, but rather to learn surrogate values which facilitate accurate detection using the relaxed MLE formulation. We do this by taking advantage of two properties of LoRD-Net: The first is the differentiability of the unfolded architecture with respect to $\boldsymbol{\Omega}$, which facilitates gradient-based optimization; the second is the fact that for $\mathbf{W}_k = \mu\mathbf{I}$, LoRD-Net essentially implements $K$ steps of gradient descent with step size $\mu$ over the convex problem (5), and is thus expected to reach its maximum.

Based on the above properties, we fix a relatively large number of layers/iterations $K$ for this training stage, and fix the weights to $\mathbf{W}_k = \mu\mathbf{I}$. Under this setting, the output of LoRD-Net represents an approximation of the relaxed MLE for a given parameter $\boldsymbol{\Omega}$, denoted $\hat{\mathbf{x}}(\mathbf{r}; \boldsymbol{\Omega})$, i.e., we have that

$$\mathcal{G}(\mathbf{r}; \boldsymbol{\theta}) \approx \hat{\mathbf{x}}(\mathbf{r}; \boldsymbol{\Omega}). \qquad (14)$$

We refer to the setting $\mathbf{W}_k = \mu\mathbf{I}$ used in this stage as the basic optimization policy. Note that as the number of layers grows large, the above approximation becomes more accurate. Hence, by substituting (14) into (8) and replacing the relaxed MLE with the corresponding outputs of LoRD-Net, we formulate the loss measure of the first training stage of LoRD-Net as:

$$\ell_1(\boldsymbol{\Omega}) = \sum_{t=1}^{T} \big\|\mathbf{x}_t - \mathcal{G}(\mathbf{r}_t; \boldsymbol{\theta})\big\|_2^2. \qquad (15)$$

Owing to the differentiable nature of $\mathcal{G}$ with respect to $\boldsymbol{\Omega}$, we recover $\boldsymbol{\Omega}$ based on (15) using conventional gradient-based training, e.g., stochastic gradient descent with backpropagation, as detailed in our numerical evaluations in Section IV.

III-C.2 Training Stage 2 - Learning the Unfolded Weights

Having learned the unknown system parameters $\hat{\boldsymbol{\Omega}}$ in Stage 1, we turn to tuning the parameters of LoRD-Net, i.e., the set $\{\mathbf{W}_k\}_{k=1}^{K}$. We note that in Stage 1, the rationale was to use the basic optimization policy with a large number of layers $K$, exploiting the insight that under this setting, LoRD-Net effectively implements conventional gradient descent. However, once Stage 1 is concluded and $\hat{\boldsymbol{\Omega}}$ is learned, it is preferable to reduce the number of layers compared to that used in Stage 1, thus exploiting the ability of unfolded networks to carry out faster inference compared to their model-based iterative counterparts by learning the weights applied in each iteration [gregor2010learning, monga2019algorithm]. Consequently, the first step in this stage is to set the number of layers to a value which can potentially be smaller than that used in the first training stage, and then optimize the weights according to the following criterion:

$$\{\mathbf{W}_k^\star\} = \arg\min_{\{\mathbf{W}_k\}} \sum_{t=1}^{T} \big\|\mathbf{x}_t - \mathcal{G}(\mathbf{r}_t; \boldsymbol{\theta})\big\|_2^2\,\Big|_{\boldsymbol{\Omega} = \hat{\boldsymbol{\Omega}}}. \qquad (16)$$

Generally speaking, in order for a first-order optimizer (LoRD-Net in this case) to provide a descent direction at each iteration (layer), the pre-conditioning matrices $\{\mathbf{W}_k\}$ must be positive-semidefinite so that each iteration does not reverse the gradient direction. To incorporate this requirement into LoRD-Net training, we re-parameterize the pre-conditioning matrices by writing $\mathbf{W}_k = \mathbf{L}_k\mathbf{L}_k^T$ and performing the training over the matrices $\{\mathbf{L}_k\}$. The resulting two-stage training algorithm is summarized as Algorithm 1.

Input: Labeled data $\{(\mathbf{x}_t, \mathbf{r}_t)\}_{t=1}^{T}$
1 Stage 1 Init: Fix (large) $K$, step-size $\mu$, and weights $\mathbf{W}_k = \mu\mathbf{I}$. Initialize $\boldsymbol{\Omega}$;
2 Optimize $\boldsymbol{\Omega}$ via (15) ;   // Stage 1
3 Stage 2 Init: Fix (small) $K$. Initialize $\{\mathbf{L}_k\}$;
4 Set the trainable parameters to $\{\mathbf{W}_k = \mathbf{L}_k\mathbf{L}_k^T\}$;
5 Optimize $\{\mathbf{W}_k\}$ according to (16) ;   // Stage 2
Output: LoRD-Net parameters $\boldsymbol{\theta} = \big(\hat{\boldsymbol{\Omega}}, \{\mathbf{W}_k^\star\}\big)$
Algorithm 1 Training LoRD-Net
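The following is a hedged sketch of Algorithm 1 built on the LoRDNet module above. Layer counts, learning rates, and epoch numbers are illustrative stand-ins for the values used in the experiments.

```python
import torch

def train_lord_net(x_train, r_train, m, n, sigma, tau):
    # Stage 1: large K, weights frozen at mu*I (basic optimization policy),
    # learn the surrogate system parameters Omega (here, only H) via (15).
    net = LoRDNet(m, n, num_layers=50, sigma=sigma, tau=tau)
    net.L.data.fill_(0.1)                       # fixed step size mu (assumed)
    net.L.requires_grad_(False)
    opt1 = torch.optim.Adam([net.H], lr=1e-3)
    for _ in range(200):                        # Stage-1 epochs (assumed)
        opt1.zero_grad()
        loss = ((net(r_train) - x_train) ** 2).sum()    # loss (15)
        loss.backward()
        opt1.step()

    # Stage 2: shrink to a small K, freeze Omega, learn pre-conditioners via (16).
    small = LoRDNet(m, n, num_layers=5, sigma=sigma, tau=tau)
    small.H.data.copy_(net.H.data)
    small.H.requires_grad_(False)
    opt2 = torch.optim.Adam([small.L], lr=1e-3)
    for _ in range(100):                        # Stage-2 epochs (assumed)
        opt2.zero_grad()
        loss = ((small(r_train) - x_train) ** 2).sum()  # loss (16)
        loss.backward()
        opt2.step()
    return small
```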

When the network is properly trained, LoRD-Net is expected to carry out learned and accelerated first-order optimization, tuned to operate even in channel conditions for which such an approach does not yield the MLE for the true channel.

III-D Discussion

LoRD-Net is a data-driven detection system based on unfolding first-order gradient optimization methods, designed for low-resolution MIMO receivers operating without analog processing. Its model-awareness enables the receiver to learn to accurately infer from smaller training sets compared to conventional DNN architectures applied to such setups, as suggested, e.g., in [balevi2019one], giving rise to the possibility of tracking block-fading channel conditions via online training, as in [shlezinger2019viterbinet]. Furthermore, LoRD-Net differs from previously proposed deep unfolded MIMO receivers, as surveyed in [balatsoukas2019deep], in two key aspects: First, LoRD-Net is particularly designed for one-bit observations, being derived from the iterative optimization formulation which arises in such setups. Second, previous unfolded MIMO receivers either assumed prior knowledge of the channel parameters, as in [samuel2019learning], or alternatively, utilized external modules to directly estimate the CSI, as in [he2019model]. LoRD-Net exploits the fact that, for its unfolded relaxed convex optimization algorithm to yield the desired MLE, alternative channel parameters, which generally differ from the true $\boldsymbol{\Omega}$, should be estimated. Consequently, the training procedure of LoRD-Net does not aim to recover the true CSI, but rather the one which yields a competitive objective facilitating symbol detection, thus accounting for the overall system task.

The proposed training procedure detailed in Algorithm 1 carries out each training stage once in a sequential manner. This strategy can be extended to optimizing the system parameters $\boldsymbol{\Omega}$ and the weights $\{\mathbf{W}_k\}$ in an alternating fashion, i.e., repeating the stages multiple times, while using the $\{\mathbf{W}_k\}$ learned in Stage 2 in the Stage 1 that follows. Alternatively, the system parameters and the weights can be learned jointly in an end-to-end manner, by optimizing (16) with respect to both $\boldsymbol{\Omega}$ and $\{\mathbf{W}_k\}$ simultaneously. The main requirement for carrying out these training strategies compared to that detailed in Subsection III-C is that the same number of layers should be used when learning both $\boldsymbol{\Omega}$ and $\{\mathbf{W}_k\}$, while when these stages are carried out once sequentially, it is preferable to use a large $K$ at Stage 1 and a smaller value, which dictates the number of learned weights, at Stage 2. Furthermore, our numerical evaluations show that training once in a two-stage fashion via Algorithm 1 yields similar and sometimes even improved performance compared to learning both $\boldsymbol{\Omega}$ and $\{\mathbf{W}_k\}$ simultaneously in a one-stage manner, as well as when alternating between these two stages, as demonstrated in Section IV.

A possible extension of the training procedure is to account for ADCs with more than one bit, as well as to allow LoRD-Net to optimize the quantization thresholds in light of the overall symbol recovery task. While accounting for multi-level ADCs is a rather simple extension achieved by reformulating the objective function (3), optimizing the quantization thresholds requires modifying the overall training strategy. The challenge here is that modifying $\boldsymbol{\tau}$ results in different one-bit measurements $\mathbf{r}$. In a communication setup, in which periodic pilots are transmitted, one can envision gradual optimization of $\boldsymbol{\tau}$ between consecutive pilot sequences, using their corresponding one-bit observations to further optimize LoRD-Net. The study of LoRD-Net with multi-level ADCs and optimized thresholds is left for future work.

IV Numerical Study

In this section, we numerically evaluate LoRD-Net¹ and compare its performance with state-of-the-art model-based and data-driven methodologies. As a motivating application, we focus on the evaluation of LoRD-Net for the blind symbol detection task in one-bit MIMO wireless communications. In the following, we first detail the considered one-bit MIMO simulation settings in Subsection IV-A, after which we evaluate the receiver performance, compare LoRD-Net to alternative unfolded architectures, and numerically investigate its training procedure in Subsections IV-B, IV-C, and IV-D, respectively.

¹The source code is available at: https://github.com/skhobahi/LoRD-Net.

IV-A Simulation Setting

We consider an uplink one-bit multi-user MIMO scenario as in (2). We focus on a single cell in which a base station (BS) equipped with $m$ antenna elements serves $n$ single-antenna users, and consider two MIMO channel configurations of different dimensions. The transmitted symbols of the users, represented by the unknown vector $\mathbf{x}$, are randomized in an independent and identically distributed (i.i.d.) fashion from the BPSK constellation set $\mathcal{X} = \{-1, +1\}$. The projection mapping is thus $\Pi_{\mathcal{X}^n}(\mathbf{x}) = \mathrm{sign}(\mathbf{x})$, where the sign function is applied element-wise on the vector argument. In the sequel, we assume that while the channel matrix $\mathbf{H}$, representing the CSI, is not available at the BS, the noise statistics $\boldsymbol{\Sigma}$ are known and fixed. Accordingly, our goal is to utilize LoRD-Net to recover the transmitted symbols from the one-bit measurements. Note that the proposed methodology can carry out the task of symbol detection even when the noise statistics are unknown.

Channel Models: We evaluate LoRD-Net under two channel models: (i) i.i.d. Rayleigh fading channels, where the entries of $\mathbf{H}$ are drawn i.i.d. from a standard Normal distribution; and (ii) the COST-2100 massive MIMO channel [flordelis2019massive]. The COST-2100 channel model is a realistic geometry-based stochastic model which accounts for prominent characteristics of massive MIMO channels, and is considered an established benchmark for evaluating MIMO communication systems. We generate the channel matrices for the COST-2100 model for a narrow-band indoor scenario with closely-spaced users, where the BS is equipped with a uniform linear array (ULA) of omni-directional receive antenna elements. The one-bit ADC operation uses zero thresholds, i.e., $\boldsymbol{\tau} = \mathbf{0}$. We define the signal-to-noise ratio (SNR) as:

$$\mathrm{SNR} = \frac{\mathbb{E}\big[\|\mathbf{H}\mathbf{x}\|_2^2\big]}{\mathbb{E}\big[\|\mathbf{n}\|_2^2\big]}. \qquad (17)$$
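Assuming the SNR definition (17) reconstructed above and unit-power BPSK symbols, the per-dimension noise standard deviation for a target SNR can be set as in this small hypothetical helper:

```python
import numpy as np

def noise_std_for_snr(H, snr_db):
    """Per-dimension sigma so that E||Hx||^2 / E||n||^2 matches snr_db, per (17)."""
    signal_power = np.trace(H @ H.T)            # E||Hx||^2 for i.i.d. unit-power BPSK
    noise_power = signal_power / 10 ** (snr_db / 10)
    return np.sqrt(noise_power / H.shape[0])    # i.i.d. noise across the m dimensions
```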

Benchmark Algorithms: As LoRD-Net combines both model-based and data-driven inference, we compare its performance with state-of-the-art model-based and data-driven methodologies in a one-bit MIMO receiver scenario. In particular, we use the following benchmarking detection algorithms:

  • The model-based nML detector proposed in [choi2016near]. The nML algorithm is based on a convex relaxation of the conventional ML estimator, and requires exact knowledge of the channel parameters $\boldsymbol{\Omega}$. We fix the number of iterations of the nML algorithm to a large value, and the step-size is chosen using a grid search to further improve the performance of the nML, while the remaining parameters are those reported in [choi2016near].

  • The data-driven Deep Soft Interference Cancellation (DeepSIC) methodology proposed in [shlezinger2020deepsic], with five learned interference cancellation iterations. DeepSIC is channel-model-agnostic and can be utilized for symbol detection in non-linear settings such as low-resolution quantization setups. Unlike LoRD-Net, which is designed particularly for observations of the form (2) where $\boldsymbol{\Omega}$ is unknown, DeepSIC has prior knowledge of neither the channel model nor its parameters.

LoRD-Net Setting: The LoRD-Net receiver is implemented with a fixed number of layers $K$. Recall that the first training stage of LoRD-Net is concerned with finding a competitive objective by carrying out the training of the network over the unknown set of channel parameters $\boldsymbol{\Omega}$. Unless otherwise specified, we focus on the case where only $\mathbf{H}$ is unknown, and the covariance matrix of the noise is available, i.e., $\boldsymbol{\Omega} = \mathbf{H}$.

During the first training stage, we set $\mathbf{W}_k = \mu\mathbf{I}$, and recover $\boldsymbol{\Omega}$ based on the objective (15) using the Adam stochastic optimizer [kingma2014adam] with a constant learning rate. Next, we carry out the training of LoRD-Net during the second stage according to the objective function defined in (16), over the set of trainable parameters $\{\mathbf{L}_k\}$, using the Adam optimizer with mini-batches. We consider the learning of diagonal pre-conditioning matrices (unfolded weights) during the second training stage. The network is trained for a fixed number of epochs in each stage, with the same value of $K$ used in both stages.

IV-B Receiver Performance

Here, we evaluate the performance of the proposed LoRD-Net, comparing it to the aforementioned benchmarks and examining its dependence on the number of training samples $T$. In particular, we numerically evaluate the bit-error-rate (BER) performance versus SNR using different training set sizes $T$, for both MIMO channel configurations. DeepSIC is trained using the same datasets, while the nML receiver of [choi2016near] operates with perfect CSI, i.e., with full accurate knowledge of $\boldsymbol{\Omega}$. All data-driven receivers are trained for each SNR separately, using a dataset corresponding to that specific SNR value.

The results are depicted in Figs. 4(a) and 4(b) for the first MIMO configuration under the Rayleigh fading and COST-2100 channel models, respectively. Furthermore, the BER performance for the second configuration under both channel models is illustrated in Fig. 5(a) for the Rayleigh fading channel, and in Fig. 5(b) for the COST-2100 channel model. Based on the results presented in Figs. 4 and 5, one can observe that LoRD-Net significantly outperforms the competing model-based and data-driven algorithms and achieves improved detection performance under both simulated channels, as well as both MIMO configurations.

(a) Rayleigh fading
(b) COST-2100 massive MIMO
Figure 4: BER performance versus SNR over the first MIMO channel configuration.
(a) Rayleigh fading
(b) COST-2100 massive MIMO
Figure 5: BER performance versus SNR over the second MIMO channel configuration.

In particular, the nML algorithm, which is designed to iteratively approach the MLE using ideal CSI (prior knowledge of the channel matrix), is notably outperformed by LoRD-Net. Such gains by LoRD-Net, which learns to compute the MLE from data without requiring CSI, compared to the model-based nML algorithm, demonstrate the benefits of learning a competitive objective function combined with a relaxed deep unfolded optimization process. Specifically, the results depicted in Figs. 4-5 illustrate that one can significantly improve the receiver performance by learning a new channel matrix upon which the learned competitive objective function admits optimal points near the true symbols. The learning of the competitive objective function is possible due to the hybrid model-based/data-driven nature of LoRD-Net, and the fact that it is derived based on the unfolding of first-order optimization techniques. From a computational complexity point of view, the depicted performance of the nML algorithm in Figs. 4-5 is achieved by employing a number of first-order optimization iterations that is considerably larger than the number of layers/iterations used by LoRD-Net, exhibiting a significant reduction in the computational cost during inference as compared to the nML algorithm.

Comparing LoRD-Net to DeepSIC illustrates that LoRD-Net benefits considerably from its model-aware architecture. The fact that LoRD-Net is particularly tailored to the one-bit system model of (2) allows it to achieve improved accuracy, even when trained with small amounts of data. For instance, for the Rayleigh fading channel of the first configuration (see Fig. 4(a)), LoRD-Net achieves a given BER at a considerably lower SNR than DeepSIC trained with the same dataset. Considering Fig. 4(b), a similar behavior is observed in the COST-2100 channel. A similar performance gain for LoRD-Net can be observed in the second configuration; see Fig. 5. Furthermore, it can be observed that LoRD-Net still outperforms the DeepSIC methodology, even when trained on substantially fewer training samples. In particular, for the channel setup considered in this part, the total number of trainable parameters of LoRD-Net is very small. For comparison, DeepSIC, which uses and trains a multi-layer fully-connected network for each user at each interference cancellation iteration, consists here of far more trainable parameters. Such a reduction in the number of parameters allows for achieving substantially improved performance with much smaller training sets, as observed in Figs. 4-5. Finally, we note that the small number of trainable parameters of LoRD-Net shows its potential for online or real-time training, as proposed in [shlezinger2019viterbinet]. This can be achieved by using periodic pilots with minimal overhead on the communication, while inducing a relatively low computational burden in its periodic retraining.

So far, we have investigated the performance of the proposed LoRD-Net for scenarios with known noise statistics and unknown channel (i.e., $\boldsymbol{\Omega} = \mathbf{H}$). Next, we investigate the detection performance of LoRD-Net when both the channel and noise covariance matrices are not available, i.e., we set $\boldsymbol{\Omega} = (\mathbf{H}, \boldsymbol{\Sigma})$ and carry out the training according to the proposed two-stage methodology. Specifically, we consider the learning of a diagonally structured $\boldsymbol{\Sigma}$ in addition to the channel matrix for this scenario. Fig. 6 demonstrates the BER versus SNR performance of LoRD-Net under both channel models. The performance of LoRD-Net for the case of $\boldsymbol{\Omega} = \mathbf{H}$ is further provided for comparison purposes. Observing Fig. 6, one can readily conclude that the proposed network can successfully perform the task of symbol detection also when $\boldsymbol{\Sigma}$ is unknown. Furthermore, it can be observed that a small gain in performance is achieved for both channel models when $\boldsymbol{\Omega} = (\mathbf{H}, \boldsymbol{\Sigma})$ as compared to the case of $\boldsymbol{\Omega} = \mathbf{H}$, which is presumably due to the careful addition of more degrees of freedom in learning a competitive surrogate model.

IV-C Performance of Competing Deep Unfolded Architectures

In this part, we compare the performance of the proposed LoRD-Net with alternative deep unfolding-based architectures tailored for the problem at hand. Recall that the architecture of LoRD-Net uses trainable parameters which are shared among the different layers, as illustrated in Fig. 3. Thus, LoRD-Net is comprised of a relatively small number of trainable parameters, and uses a two-stage learning method to train the shared parameters, representing the competitive model, and the iteration-specific weights, encapsulating the first-order optimization coefficients. Nonetheless, the conventional approach for unfolding first-order optimization techniques is to over-parameterize the iterations, and then train in an end-to-end manner using the one-stage training procedure discussed earlier. Therefore, to numerically evaluate the proposed unfolding mechanism of LoRD-Net, we next compare it to two conventional unfolding-based benchmarks derived from the relaxed MLE:

  • Benchmark 1: An over-parameterized deep unfolded architecture obtained by setting the computational dynamics of the $k$-th layer as:

$$\mathbf{x}^{k} = \mathbf{x}^{k-1} + \mathbf{A}_k\,\mathrm{Diag}(\mathbf{r})\,f\big(\mathrm{Diag}(\mathbf{r})(\mathbf{B}_k\mathbf{x}^{k-1} - \boldsymbol{\tau})\big). \qquad (18)$$

    Here, $\{\mathbf{A}_k \in \mathbb{R}^{n \times m}, \mathbf{B}_k \in \mathbb{R}^{m \times n}\}$ are the trainable parameters of the $k$-th layer.

  • Benchmark 2: Here, we again use the unfolded architecture given in (18), while limiting the number of trainable parameters by constraining the rank of the learned matrices. In particular, we factorize each per-layer matrix as a product of two rank-$d$ factors, e.g., $\mathbf{A}_k = \mathbf{U}_k\mathbf{V}_k^T$ with $\mathbf{U}_k \in \mathbb{R}^{n \times d}$ and $\mathbf{V}_k \in \mathbb{R}^{m \times d}$, where the factors constitute the trainable parameters of the $k$-th layer of the unfolded network. The dimension $d$ controls the rank of the resulting weight matrices, and thus the number of trainable parameters.

Comparing (18) with the corresponding dynamics of LoRD-Net in (10)-(11), we note that the channel matrix $\mathbf{H}$, the pre-conditioning matrices $\{\mathbf{W}_k\}$, and the noise covariance matrix $\boldsymbol{\Sigma}$ are now absorbed into the per-layer trainable matrices $\mathbf{A}_k$ and $\mathbf{B}_k$. Accordingly, these unfolded benchmarks, which follow the conventional approach for unfolding optimization algorithms, are less faithful to the underlying model. These benchmarks also differ from LoRD-Net in their number of trainable parameters. In particular, Benchmark 1 with $K$ layers has $2Kmn$ trainable parameters, while Benchmark 2 has $2Kd(m+n)$ weights, which can be controlled by the setting of the hyperparameter $d$. For comparison, LoRD-Net has $mn + Kn$ trainable parameters for the case of $\boldsymbol{\Omega} = \mathbf{H}$ with diagonally structured pre-conditioning matrices, and an additional $m$ parameters for the case of $\boldsymbol{\Omega} = (\mathbf{H}, \boldsymbol{\Sigma})$ with diagonally structured pre-conditioning and noise covariance matrices. A sketch of the Benchmark 1 layer is given below.
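For concreteness, the following instantiates the Benchmark 1 layer (18) in PyTorch under the same assumptions (zero thresholds) as the LoRDNet sketch above; the per-layer matrices absorb $\mathbf{H}$, $\{\mathbf{W}_k\}$, and $\boldsymbol{\Sigma}$, and the class name is ours.

```python
import torch
import torch.nn as nn

class UnfoldedBenchmark1(nn.Module):
    def __init__(self, m, n, num_layers):
        super().__init__()
        self.A = nn.ParameterList([nn.Parameter(torch.randn(n, m) / m) for _ in range(num_layers)])
        self.B = nn.ParameterList([nn.Parameter(torch.randn(m, n) / m) for _ in range(num_layers)])
        self.std_normal = torch.distributions.Normal(0.0, 1.0)

    def forward(self, r):                        # r: (batch, m), entries in {-1, +1}
        x = r.new_zeros(r.shape[0], self.A[0].shape[0])
        for A, B in zip(self.A, self.B):         # eq. (18), with tau = 0
            z = r * (x @ B.T)
            f = self.std_normal.log_prob(z).exp() / self.std_normal.cdf(z)
            x = x + (r * f) @ A.T
        return x
```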

We evaluate the performance of the proposed LoRD-Net compared to the unfolded benchmarks in the following simulation setup. We train all the considered networks using a small dataset, while the highly-parameterized Benchmark 1 is also trained using a larger number of samples. All architectures have the same number of layers, and their performance is evaluated on the same testing dataset. The unfolded benchmarks are trained in the conventional end-to-end fashion. The channel model is a Rayleigh fading channel. For the considered scenario, LoRD-Net admits by far the smallest total number of trainable parameters, Benchmark 1 the largest, and Benchmark 2 an intermediate number owing to its rank constraint.

Fig. 7 depicts the BER versus SNR of LoRD-Net compared to the unfolded benchmarks. We observe in Fig. 7 that the proposed LoRD-Net significantly outperforms the conventional unfolding-based benchmarks, indicating the gains of the increased level of domain knowledge incorporated into the architecture of LoRD-Net and its two-stage training procedure. It is also observed that the performance of Benchmark 1 improves with more training samples. Interestingly, for a small training set, Benchmark 2, which is obtained by imposing a rank constraint on the trainable parameters of Benchmark 1, achieves improved performance over Benchmark 1, due to its notable reduction in the number of trainable parameters.

Figure 6: BER versus SNR for both channel models. The performance of the proposed LoRD-Net is provided for both scenarios of training over $\boldsymbol{\Omega} = \mathbf{H}$ (i.e., known noise statistics $\boldsymbol{\Sigma}$), and over $\boldsymbol{\Omega} = (\mathbf{H}, \boldsymbol{\Sigma})$, corresponding to unknown channel matrix and noise statistics.
Figure 7: BER versus SNR of LoRD-Net compared to the unfolded benchmarks for a Rayleigh fading channel model.

IV-D Training Analysis

In this part, we analyze the performance of the proposed two-stage training procedure described in Subsection III-C. The training aspects of LoRD-Net are numerically evaluated for the Rayleigh channel model detailed before.

Following our insight on the ability of LoRD-Net to accurately train with small datasets, we begin by evaluating the performance of LoRD-Net versus the training data size $T$. For this study, we generate training datasets of increasing size and evaluate the performance of LoRD-Net on a held-out test set. Fig. 8 depicts the BER achieved for each training size $T$ at a fixed SNR. We can observe from Fig. 8 that the performance of LoRD-Net improves as $T$ grows, where the improvements are most notable for small training sets. Interestingly, it may be concluded from Fig. 8 that LoRD-Net is capable of accurately and reliably performing the task of symbol detection without CSI with as few as ∼500 samples. The ability of LoRD-Net to train with very few samples (compared to the black-box DNN models for one-bit MIMO receivers [balevi2019one, zhang2020deep], as well as the DeepSIC architecture) stems from its incorporation of domain-knowledge in the design of the LoRD-Net architecture. This in turn leads to far fewer trainable parameters, requiring far fewer training samples for optimizing the network.

Figure 8: BER versus training size for the Rayleigh fading channel.

Next, we analyze the effect of the two-stage training methodology detailed in Algorithm 1 on the detection performance of the LoRD-Net architecture. Recall that the first training stage is concerned with finding a competitive objective function through an optimization of LoRD-Net over the unknown system parameters $\boldsymbol{\Omega}$, while the second training stage tunes the positive-definite pre-conditioning matrices $\{\mathbf{W}_k\}$ to accelerate the convergence of LoRD-Net to the optimal points of the obtained competitive objective. To numerically evaluate the training methodology, we fix the SNR, generate a training dataset and a testing dataset, and compare the performance of Algorithm 1 with two other competing training procedures:

One-Stage Training: Here, the weights $\{\mathbf{W}_k\}$ and the unknown system parameters $\boldsymbol{\Omega}$ are jointly learned in a single stage. The objective of this one-stage training procedure for LoRD-Net is

$$\min_{\boldsymbol{\Omega}, \{\mathbf{W}_k\}} \sum_{t=1}^{T} \big\|\mathbf{x}_t - \mathcal{G}(\mathbf{r}_t; \boldsymbol{\theta})\big\|_2^2. \qquad (19)$$

Alternating Training: This procedure trains the network by alternating between the two optimization problems (15) and (16) from one training epoch to the next. Here, the network is trained over a number of alternations corresponding to the same total number of training epochs. Namely, at each epoch index $e$, we update the system parameters $\boldsymbol{\Omega}$ for odd $e$, and update the weights $\{\mathbf{W}_k\}$ for even $e$.

Before we proceed with the evaluation results, we provide some useful connections to notions widely used in the deep learning literature. Generally speaking, the performance of a statistical learning framework and its training procedure is evaluated using its generalization gap and testing error. The generalization gap of a model can be defined as the difference between the training and testing errors. Specifically, a model with a smaller generalization gap and a smaller testing error is highly favourable. Furthermore, a large generalization gap may indicate that the network has over-fitted to the data, and hence does not generalize well. For two models with the same generalization gap, the one with the lower testing error is favourable. Fig. 9 depicts the BER versus the training epoch for both the training and testing datasets. We first note that the proposed two-stage training method outperforms all competing procedures and attains a significantly lower testing error. Interestingly, one can observe that the proposed methodology has successfully closed the generalization gap, as the testing and training errors are very close to each other. On the other hand, the other two training procedures admit a very large generalization gap, indicating that their utilization has resulted in an over-fitting of the network to the data. Furthermore, it can be observed from Fig. 9 that the major improvement in the detection accuracy of LoRD-Net takes place during the first training stage, i.e., when finding a competitive objective function, while only a slight improvement in the BER is achieved during the second stage.

Figure 9: BER versus the training epoch number of LoRD-Net for the Rayleigh fading channel at a fixed SNR.

The success of the proposed two-stage training procedure in closing the generalization gap compared to the one-stage training procedure is presumably due to the fact that the two-stage approach leads to an implicit regularization on the model capacity, limiting the total number of parameters used during the entire training procedure. On the contrary, the one-stage training procedure allows the neural network to use its full capacity, leading to over-fitting and a larger generalization gap, as observed in Fig. 9.

(a) Rayleigh fading
(b) COST-2100 Massive MIMO
Figure 10: BER performance of LoRD-Net after completing training stages 1 and 2 versus the layer/iteration number for (a) the Rayleigh fading channel, and (b) the COST-2100 massive MIMO channel, at a fixed SNR.

As discussed in Subsection III-C, the second training stage allows LoRD-Net to achieve fast inference, i.e., accelerated convergence to the optimal points of the competitive objective function. To illustrate this behavior, we perform a per-layer BER evaluation of LoRD-Net, exploiting the interpretable model-based nature of LoRD-Net, in which each layer represents an unfolded first-order optimization iteration, and thus its output can be used as an estimate of the transmitted symbols. Figs. 10(a) and 10(b) depict the BER versus the layer/iteration number of LoRD-Net at the completion of training stages 1 and 2, for the Rayleigh fading channel and the COST-2100 channel model, respectively. We observe in Fig. 10 that the convergence of LoRD-Net after the completion of the first training stage is slow, requiring a large number of layers/iterations to converge. Interestingly, we note from Fig. 10 that the second training stage indeed results in an acceleration of the convergence of LoRD-Net via learning the best set of pre-conditioning matrices for the problem at hand in an end-to-end manner. In particular, after the completion of the second training stage, LoRD-Net can accurately and reliably recover the symbols using only a few layers. This observation hints that one can consider further truncation of LoRD-Net after training to reduce the computational complexity while maintaining its superior performance.

In order to quantify the quality of the learned competitive objective in closing the gap between the discrete optimization problem and its continuous version, we further provide the per-iteration performance of the nML algorithm and of a LoRD-Net variant, both operating with perfect CSI. For this scenario, LoRD-Net utilizes the true $\mathbf{H}$, and is thus optimized only over the weights $\{\mathbf{W}_k\}$ while employing the exact channel model $\boldsymbol{\Omega}$. It is observed from Figs. 10(a)-10(b) that learning a new surrogate model for the continuous optimization problem at hand is indeed highly beneficial, and admits far superior performance in recovering the transmitted symbols. The analysis provided in Fig. 10 further supports the rationale behind the proposed two-stage training methodology, and the fact that the second training stage results in an acceleration of the underlying first-order optimization solver (i.e., achieving a much faster descent per step) upon which the layers of LoRD-Net are based.

V Conclusion

In this work, we introduced LoRD-Net, a hybrid data-driven and model-based deep architecture for blind symbol detection from one-bit observations. The proposed methodology is based on the unfolding of first-order optimization iterations for the recovery of the MLE. We proposed a two-stage training procedure incorporating the learning of a competitive objective function, for which the unfolded network yields an accurate recovery of the transmitted symbols from one-bit noisy measurements. In particular, owing to its model-based nature, LoRD-Net has far fewer trainable parameters compared to its data-driven counterparts, and can be trained with very few training samples. Our numerical results demonstrate that the proposed LoRD-Net architecture outperforms state-of-the-art model-based and data-driven symbol detectors in multi-user one-bit MIMO systems. We also numerically illustrated the benefits of the proposed two-stage training procedure, which allows training with small training sets and fast inference, owing to its interpretable model-aware nature.

References