I Introduction
One key feature of new generations of cellular networks is their efficient use of frequency band and energy. To achieve this goal, they use various techniques such as waterfilling, appropriate precoding, and beamforming. Most of these techniques require the Channel State Information (CSI) to be available at the transmitter side (CSIT). In Time Division Duplexing (TDD) systems, UpLink (UL) and DownLink (DL) frequencies are equal, so we can exploit channel reciprocity and simply infer the DL channel by observing the UL channel. In Frequency Division Duplexing (FDD) systems, however, the DL and UL channels occupy different frequencies, so channel reciprocity cannot be used to infer the DL channel. The most common solution is for the user (receiver) to first measure (estimate) the DL channel and then feed this information back to the transmitter. This solution has two major disadvantages: delay and overhead. If the delay is greater than the coherence time, the actual DL channel differs from what has been fed back by the user. In addition, in new generations of mobile networks the transmitter has a large number of antennas. For example, for a fourth-generation transmitter with 64 antennas, the need to learn the DL channel (pilot transmission and feedback data) consumes a large portion of the transmitter's traffic [1]. This overhead is very large and is one of the major challenges of LTE networks [1]. These challenges have such an important effect on network performance that, despite some significant advantages of FDD systems such as continuous transmission [2], TDD has attracted more attention in recent years.
To eliminate the need for feedback (and thus its associated overhead and delay), several studies aim to infer the DL channel by observing the UL channel in FDD systems. The DL-CSI estimation methods in [3, 4, 5] are based on the assumption that the difference between the dominant angle of arrival (AOA) in UL and the dominant angle of departure (AOD) in DL is small, and that the directional properties of UL and DL are correlated. For example, [6] shows, through extensive measurements, that with probability of about 81 percent this difference is smaller than 4.5 degrees. Therefore, given the dominant AOA in UL, the dominant AOD in DL can be obtained and used for purposes such as beamforming.
References [7, 8, 9, 10] rely on the covariance matrix, exploiting the slow variation of the channel matrix. In [7], a transformation matrix is used to convert the UL covariance matrix to the DL covariance matrix. The method of [9] is based on the concept of dictionary learning and has two phases: training and exploitation. In the training phase, a dictionary of corresponding DL and UL covariance matrices is built (by changing the user location, a dataset of different input and output pairs is constructed). In the exploitation phase, given an observed UL covariance matrix, the DL covariance matrix is constructed by interpolating the stored dictionary with various methods.
In [11, 12, 13], by taking the multipath structure of the channel into account, the paths of the signal are extracted independently of frequency, so the channel response can be inferred in any desired frequency band. For example, in [11], the authors consider four parameters for every path (path attenuation, path length, an independent phase shift modeling reflection, and the angle of arrival of the path) and estimate these parameters using the UL-CSI. The resulting model is then used for prediction of the DL-CSI.
AOA-based methods are often not usable when an accurate channel response is required and are often used only for beamforming. Path-extraction-based methods can obtain the accurate channel response at any desired frequency, but they are often based on assumptions that may not hold in practice. For example, path attenuation is considered independent of frequency [11]. This assumption is only true if the difference between the DL and UL frequencies is small. In [13], limited feedback is used to account for the frequency dependence of path attenuation, and it is verified that deriving the DL channel under the assumption of frequency-independent path attenuation is not very accurate. Covariance-based algorithms also depend on environmental factors such as the correlation between antennas. [14] showed that correlation-based methods are not appropriate when the antenna correlation is poor.
In recent years, artificial intelligence has revolutionized human life, so much so that some call it the fourth industrial revolution. One of the leading areas of artificial intelligence and machine learning is deep learning, which has been very successful in fields such as machine vision, speech recognition, and object detection, and in some cases has even exceeded human performance [15]. Deep learning has also been used in physical layer communications [16, 17, 18, 19, 20, 21]. [16] considered the physical-layer communication system as an autoencoder and designed an end-to-end system that optimizes the transmitter and receiver simultaneously in one process. [17] used a variational GAN to capture a stochastic model of the channel and learn its probability density function (PDF). In [18], the authors used an adversarial network to model the channel input-output conditional probability. In [20], a super-resolution network cascaded with a denoising autoencoder is used to estimate the channel response based on some known pilots. CsiNet, introduced in [21] to perform limited CSI feedback in FDD systems, encodes the channel response at one side (user) and decodes the received feedback with a decoder at the other side (base-station). Motivated by such applications, in this paper we propose a method based on deep neural networks that predicts the DL-CSI from past UL-CSI measurements. The use of deep networks enables us to expand the search space of the environment propagation model (beyond the current mathematical models) and therefore capture more of how the UL-CSI should be transferred to the DL-CSI.
In essence, the core idea of our scheme is that the way the channel affects the transmitted signal (regardless of whether it is UL or DL) is related to the structure of the environment the signal propagates in (e.g., the objects in the environment, the materials they are made of, their shapes). Since both DL and UL channels share the same propagation environment (assuming, of course, no sudden change in the environment), we use a data-driven approach to extract environment information from the UL channel response into a latent domain, and then transfer the derived environment information from the latent domain to the DL channel, Fig. 1.
To achieve this goal, we use two types of deep networks: Convolutional Neural Networks (CNNs) and a specific type of Generative Adversarial Network (GAN) called the Boundary Equilibrium GAN (BEGAN), which is based on the Mean Square Error (MSE). For training and testing we use simulated data of the Extended Vehicular A (EVA) and Extended Typical Urban (ETU) models. Our results verify the effectiveness of our schemes.
It is also worth mentioning that to fully characterize a DL channel of a multiple-input multiple-output (MIMO) system, we should characterize a four-dimensional space, i.e., we should find the channel effect (on both amplitude and phase of the signal) between 1) each transmit antenna and 2) each receive antenna, for 3) each of the subcarriers in our frequency range and 4) each time slot. Most previous studies investigated the prediction of the DL matrix in terms of the MIMO channel matrix, and their aim was not to determine the channel effect in the time-frequency domain. In this work, instead of taking a high-level view of the transmitter and receiver antennas and giving one value for each pair, we focus on one transmit-receive antenna pair and predict the DL channel over a block of time and frequency. The results can be further extended to the MIMO case, but we do not discuss that in this paper.
The rest of this paper is organized as follows. In Section II, we describe CNNs and GANs as the tools we use to predict the DL channel. Section III provides a detailed discussion of the prediction problem. Section IV contains two approaches for solving the DL prediction problem: the direct approach and the generative approach. In Section V, we explain the implementation details of the networks and provide simulation results. Section VI concludes the paper.
II Background
In the following subsections, we briefly discuss the two special types of deep neural networks used in this paper to predict the DL channel.
II-A Convolutional Neural Networks
One of the interesting neural network structures used widely in the artificial intelligence (AI) community is the Convolutional Neural Network (CNN). CNNs can have many hidden layers, usually of three types: convolutional layers, pooling layers, and fully connected layers. The CNN is a powerful tool, especially for analyzing 2D data such as images. This is mainly due to the structure of the convolutional layer, which computes its output by convolving filter (kernel) weights with the input image (data). The value of each point in the output image equals the cross-correlation of the filter with the corresponding area of the input image [22]. After the convolution operation, an activation function is applied, and its output is passed to the next layer as input.
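To make the convolution operation concrete, the following NumPy sketch computes a "valid" 2D cross-correlation by hand (an illustrative toy, not the network used in this paper):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """2D cross-correlation (the 'convolution' used in CNN layers),
    'valid' mode: the kernel stays fully inside the image."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output value is the sum of the element-wise product of
            # the kernel with the corresponding patch of the input.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((2, 2))  # a simple summing filter
print(conv2d_valid(image, kernel).shape)  # (3, 3)
```

In a real convolutional layer, many such kernels are learned, and an activation function is applied to each output before it is passed to the next layer.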
II-B Generative Adversarial Networks
Generative adversarial networks (GANs) are one of the most powerful generative models for capturing a data distribution [23]. They are based on game theory and consist of two networks, a generator and a discriminator, which are trained simultaneously. Given a noise vector z as input (typically drawn from a normal distribution), the generator tries to create images similar to real ones, while the discriminator tries to distinguish generated images from real ones. Training a GAN is a two-player minimax game: the generator tries to maximize the error probability of the discriminator (i.e., to make the discriminator assign a high probability of being real to generated images), while the discriminator tries to minimize the probability of being real that it assigns to generated images. GANs are hard to train, and non-convergence or instability is their main problem [24]. Many different structures have recently been proposed for new implementations of GANs. In the original GAN, the output of the discriminator is a number between 0 and 1, representing the probability that the discriminator's input image is in fact a real image (not a generated one). Such a discriminator is the most common type in the GAN literature. The Energy-based GAN (EBGAN) [25] was the first to use an autoencoder as the discriminator. In EBGAN, the discriminator's objective is to maximize the reconstruction error of generated images while minimizing it for real ones. The EBGAN generator's structure is similar to the decoder part of the discriminator. Using an autoencoder as the discriminator makes training easier, faster, and more stable. Boundary Equilibrium GANs (BEGANs) [26] are an improved version of EBGANs that use the same structure, but BEGANs aim to match the autoencoder loss distribution instead of matching the data distribution directly. To train such networks, the following equilibrium condition is maintained:
E[L(G(z))] = γ · E[L(x)]   (1)

where L(x) and L(G(z)) are the autoencoder reconstruction losses when the discriminator receives a real image x and a generated image G(z), respectively, and E[·] denotes expectation. In (1), the diversity ratio γ ∈ [0, 1] controls the diversity of the generated images: lower values of γ lead the generator to create less diverse images.
In BEGAN, L_D and L_G denote the discriminator and generator loss (objective) functions, respectively, and are defined as

L_D = L(x) − k_t · L(G(z))
L_G = L(G(z))                                   (2)
k_{t+1} = k_t + λ_k · (γ · L(x) − L(G(z)))

where γ·L(x) − L(G(z)) is the difference between the reconstruction loss of real images, scaled by the parameter γ, and the reconstruction loss of generated images. The variable k_t is introduced to maintain (1); based on proportional control theory, [26] suggests updating k_t using the last equation in (2), where λ_k is its learning rate. Visual inspection is typically the only way to determine convergence in GANs, but [26] also defines a convergence measure as

M_global = L(x) + |γ · L(x) − L(G(z))|   (3)
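The interplay of (1)-(3) can be sketched numerically. The snippet below evaluates one BEGAN bookkeeping step with hypothetical loss values; the function and variable names are ours, chosen for illustration:

```python
import numpy as np

# Illustrative sketch of the BEGAN balance terms. In practice L_real and
# L_fake are the discriminator-autoencoder reconstruction losses of a real
# and a generated batch; here they are hypothetical scalars.
gamma = 0.5       # diversity ratio in eq. (1)
lambda_k = 0.001  # learning rate for k_t

def began_step(L_real, L_fake, k_t):
    # Discriminator and generator objectives, eq. (2)
    loss_D = L_real - k_t * L_fake
    loss_G = L_fake
    # Proportional-control update of k_t, kept in [0, 1]
    k_next = np.clip(k_t + lambda_k * (gamma * L_real - L_fake), 0.0, 1.0)
    # Convergence measure, eq. (3)
    M_global = L_real + abs(gamma * L_real - L_fake)
    return loss_D, loss_G, k_next, M_global

loss_D, loss_G, k_t, M = began_step(L_real=0.8, L_fake=0.3, k_t=0.0)
```

When the generated-image loss exceeds γ times the real-image loss, k_t shrinks and the discriminator focuses more on real images, and vice versa.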
III Problem Definition and Formulation
Consider a base-station (transmitter) and a user (receiver) in the network. To increase the network spectral efficiency (through techniques such as waterfilling and, in the case of multiple antennas, beamforming), the base-station needs to know the DL-CSI. When a user sends its data on the UL channel, for example using OFDM, it allocates some of its subcarriers and time slots to pilot transmission. Using these pilots, the base-station can estimate the UL-CSI; but in an FDD system, UL-DL CSI reciprocity does not hold. One way to obtain the DL-CSI is to first send pilots in the downlink and, after the user estimates the DL-CSI, have it send the DL-CSI back over the feedback link. Such a scheme incurs high overhead. To eliminate the feedback, we must find a way to derive the DL-CSI from the UL-CSI, which is the only information about the environment available at the base-station.
To better describe the problem, consider a block of time-frequency between a pair of transmitter-receiver antennas, Fig. 2. Assuming a grid over this block, knowing the channel state information is equivalent to knowing the effect of the channel (on both the amplitude and phase of the transmitted signal) over each cell of the grid (i.e., a complex value for each grid cell). This block consists of two main portions, UL and DL: a) the first rows (subcarriers) of the frame are assigned to UL, and b) the next rows (subcarriers) are assigned to DL. The columns of the frame represent different time slots (how the channel effect changes over time). With this structure, the problem at hand is that we have the UL-CSI over the UL subcarriers and past time slots (part 1) and want to predict the DL-CSI over the DL subcarriers and the next time slots (part 2). It is worth mentioning that, to make the model realistic and causal, we only use past UL-CSI for DL-CSI prediction (and not the UL-CSI measured in the same time slots as the DL-CSI).
To solve the DL prediction problem, most previous studies start from a mathematical channel model for the environment. For example, the multipath channel model is defined as

h(f) = Σ_{p=1}^{P} a_p · e^{−j2πf·d_p/c} · e^{jφ_p}   (4)

where h(f) is the channel response at a particular frequency f. In (4), it is assumed that there are P distinct paths in the environment, where a_p is the path attenuation, d_p is the path length, c is the speed of light, and φ_p is a frequency-independent phase shift that captures the reflection and attenuation of the signal along that path.
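A minimal sketch of evaluating such a multipath model for a hypothetical set of paths (the carrier frequencies and path parameters below are illustrative, not taken from the paper):

```python
import numpy as np

c = 3e8  # speed of light (m/s)

def channel_response(freq_hz, a, d, phi):
    """Multipath channel response: sum over paths of
    attenuation * propagation phase * frequency-independent phase shift."""
    a, d, phi = map(np.asarray, (a, d, phi))
    return np.sum(a * np.exp(-1j * 2 * np.pi * freq_hz * d / c)
                    * np.exp(1j * phi))

# Two hypothetical paths evaluated at illustrative UL and DL carriers.
a = [1.0, 0.4]        # path attenuations
d = [120.0, 310.0]    # path lengths (m)
phi = [0.0, 1.2]      # frequency-independent phase shifts
h_ul = channel_response(1.92e9, a, d, phi)
h_dl = channel_response(2.11e9, a, d, phi)
```

Because the propagation phase depends on frequency, h_ul and h_dl differ even though the paths are the same; the model-based methods exploit exactly this shared path structure.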
In machine learning terminology, the common approach is to first consider a parametric model for the environment and then use the UL-CSI to estimate the parameters of that model. Given the resulting complete model, the DL-CSI can be predicted.
An incorrect assumption about the parametric model and/or an incorrect derivation of the parameters both lead to the loss of some part of the UL-CSI information and, consequently, to low accuracy of the DL-CSI prediction. Furthermore, estimating parametric models often requires simplifying assumptions that may not hold in some cases. For example, as mentioned in Section I, the path attenuation in (4) is assumed to be the same for UL and DL, but some studies [13] suggest that this assumption is not always correct.
To avoid forcing an incorrect latent domain and the problem of simplifying assumptions, in this work we do not use any specific parametric model; instead, we use data-driven approaches to discover the underlying structure of the data without any prior model assumption. More details of the proposed scheme are presented in Section IV.
IV Proposed Scheme
In this section, we first explain how the CSI information can be considered as an image, and then present the two approaches proposed for DL channel prediction.
IV-A CSI as an Image
Looking back at Fig. 2, the CSI is a 2D complex matrix whose dimensions are the number of subcarriers and the number of time slots in the time-frequency block.
Recently, many advanced techniques have been proposed for analyzing image data using neural networks. Image data are in fact 2D real matrices with one or more channels; for example, color images are 2D images with 3 channels (red, green, and blue components). To use image-based techniques in this work, we treat the 2D CSI matrix as an image. For example, in Fig. 3, a heat map of the CSI absolute values is plotted for a sample FDD frame.
If we use only absolute values, the phase information is lost. To solve this problem, we consider the complex-valued CSI matrix as a real-valued matrix with two channels. There are two choices for mapping the complex values to two image channels: put the absolute values in the first channel and the phase values in the second, or put the real parts in the first channel and the imaginary parts in the second. We selected the second approach to prevent the phase-wrapping problem that may affect the phase information.
So, in the rest of this paper, wherever we use the term "image", it refers to the CSI matrix treated as an image with the real parts in the first channel and the imaginary parts in the second channel.
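The complex-to-two-channel mapping can be sketched as follows (the function names are ours; the 72x14 frame size matches the LTE frames used in this paper):

```python
import numpy as np

def csi_to_image(H):
    """Stack real and imaginary parts of a complex CSI matrix as a
    2-channel real-valued 'image' (subcarriers x time slots x 2)."""
    return np.stack([H.real, H.imag], axis=-1)

def image_to_csi(img):
    """Inverse mapping: recover the complex CSI matrix."""
    return img[..., 0] + 1j * img[..., 1]

H = np.random.randn(72, 14) + 1j * np.random.randn(72, 14)  # a full frame
img = csi_to_image(H)             # shape (72, 14, 2)
assert np.allclose(image_to_csi(img), H)  # phase information is preserved
```

Unlike an absolute-value/phase representation, this mapping is lossless and avoids the 2π phase-wrapping discontinuity.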
IV-B Uplink to Downlink Knowledge Transfer
In this paper, to extract environment information from the UL and then transfer the derived knowledge back to the DL domain, two approaches based on deep neural networks are introduced: the direct approach and the generative approach. In the following subsections we explain each of these two approaches in detail.
IV-B1 Direct Approach
As mentioned before, the UL-to-DL transfer has two steps: first, encode the environment information from the UL-CSI into a latent domain; second, transfer and decode the derived latent-domain model into the DL-CSI. In the direct approach, we use a single deep network to accomplish both steps in one process (Fig. 4).
In this approach, we feed the UL-CSI over the UL subcarriers and past time slots (part 1 in Fig. 2) as the input, and the network tries to predict the DL-CSI over the DL subcarriers and the next time slots (part 2 in Fig. 2) as the output.
As discussed in Section II, CNNs are among the most successful tools for analyzing image data, so in this work we design a specific convolutional neural network to implement the direct approach and use the designed model to predict the DL-CSI. The details of the network structure are discussed in Section V.
IV-B2 Generative Approach
We still have the same desired input and output: considering Fig. 2, part 1 is the input and part 2 is the output. The difference is that we do not directly learn the UL-to-DL relation; instead, we consider the whole time-frequency block as the image that we want to learn, i.e., we want to learn the joint distribution between the different pixels of the complete CSI image. Knowing the joint distribution of the whole frame, we also know the joint distribution of the UL and DL sections (part 1 and part 2). Using the UL-DL joint distribution, we can predict the DL-CSI when we have the UL-CSI. Clearly, capturing such a joint distribution is a very complicated task (considering the size of the CSI matrix).
As briefly described in Section II, researchers in the AI field have recently proposed the GAN structure as a very successful tool for estimating the joint distribution of input data, especially images. After training a GAN with a set of images, it can generate images that are very similar to the real ones. One interesting application of GANs is image completion: given a corrupted image (e.g., one with a missing part), the GAN tries to find the missing part. Several schemes have been proposed for image completion. Their core idea is that the GAN first generates an image that resembles the corrupted image (with respect to some metric) and then uses the generated image to predict the missing part.
Motivated by the success of GANs and image completion schemes (and since we are able to consider the CSI matrix as an image) we should be able to use similar network structure to find the joint distribution of the CSI and then use that model to predict DLCSI from ULCSI.
The steps of our proposed scheme can be summarized as:

Training Phase: First, train a GAN with CSI images of the whole time-frequency block. After training is complete, the generator network is capable of taking a random vector as input and creating images that are very similar to real CSI images, Fig. 5.
It is worth mentioning that, among the different types of GANs, we first selected the most common structure, called the Deep Convolutional Generative Adversarial Network (DCGAN) [27]. Although DCGANs are able to produce images similar to our CSI images, during the completion phase we did not get the desired results, and the MSE of the prediction was relatively high. Additionally, given a UL-CSI frame, different initializations of the input vector produced very different predictions of the DL-CSI.
To solve this problem, in this work we used BEGAN (described in Section II-B). As discussed there, BEGANs are designed based on the MSE and have better convergence properties.

Completion Phase:
In this step we want to predict the DL-CSI. The idea is to treat the time-frequency block that contains only the UL-CSI as a corrupted image, and then use GAN-based image completion algorithms to complete the missing part (the DL-CSI). As we treat the prediction task as completing a corrupted image, we call the prediction phase the completion phase, Fig. 6.
More precisely, in the completion phase, the vector z (the input of the generator) is initialized randomly. Then we update z using gradient descent so that the generated image and the corrupted image become more similar (a loss function is reduced). After several iterations, the generated image is taken as the complete real image, and the desired output (the DL-CSI) is derived from it. In this work, we used two different loss functions and tried image completion with both.
Contextual Loss

The contextual loss is defined as the distance between the known part of the image and its corresponding part in the generated image. If we define the mask M as

M_{ij} = 1 if pixel (i, j) belongs to the known (UL) part, and 0 otherwise,   (5)

then the contextual loss is

L_contextual(z) = ||M ⊙ (G(z) − x)||_1   (6)

where x is the image that we want to complete, G(z) is the generated image, and ⊙ denotes element-wise multiplication.
Contextual Loss + Perceptual Loss

Reference [28] first used such a loss function for image completion. If we use only the contextual loss, the final completed image may look artificial (i.e., have a different structure compared to real data), so the discriminator loss of the generated image is used as a new term in the total loss; it is called the perceptual loss because it measures a sense of being real. So, in BEGAN,

L_perceptual(z) = L(G(z))   (7)

and

L_total(z) = L_contextual(z) + λ · L_perceptual(z)   (8)

where λ is a hyperparameter controlling how much emphasis is put on the perceptual loss with respect to the contextual loss during gradient descent. Its default value in BEGAN is 0.01.
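The completion objective can be sketched with toy stand-ins for the trained networks. Here a linear map replaces the BEGAN generator, a subspace projection replaces the discriminator's autoencoder, and the gradient over z is taken numerically; a real implementation would backpropagate through the trained BEGAN:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3))                  # toy generator: G(z) = W z
U, _ = np.linalg.qr(rng.standard_normal((8, 4)))
P = U @ U.T                                      # toy autoencoder: D(g) = P g
x = rng.standard_normal(8)                       # corrupted image (flattened)
mask = np.array([1, 1, 1, 1, 0, 0, 0, 0.])       # eq. (5): only UL part known

def total_loss(z, lam=0.01):
    g = W @ z
    contextual = np.sum(np.abs(mask * (g - x)))  # eq. (6): L1 on known part
    perceptual = np.sum((g - P @ g) ** 2)        # eq. (7): reconstruction loss
    return contextual + lam * perceptual         # eq. (8)

z = rng.standard_normal(3)
start = total_loss(z)
for _ in range(300):                             # numerical-gradient descent on z
    grad = np.array([(total_loss(z + 1e-5 * e) - total_loss(z - 1e-5 * e)) / 2e-5
                     for e in np.eye(3)])
    z -= 0.01 * grad
assert total_loss(z) < start                     # z moves toward the known data
```

Only z is optimized; the generator stays fixed, and the missing (DL) part of the final G(z) is read off as the prediction.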
V Implementation and Simulation Results
In the following, we discuss the details of the implementation and the simulation results. The source code of the implementation can be found at https://github.com/safarisadegh/UL2DL.
V-A Dataset Generation
To evaluate the performance of the proposed schemes, we used the Vienna LTE-A downlink link-level simulator [29] to simulate multipath fading channels. Two 3GPP fading models were simulated for a single-input single-output (SISO) channel: Extended Vehicular A (EVA) and Extended Typical Urban (ETU), with a speed of 50 km/h to take the Doppler effect into account. Our simulated time-frequency frames had size 72x14 (72 subcarriers over 14 time slots, equivalent to 6 resource blocks in a 1 ms subframe), based on the standard LTE FDD frame size. For the simulations, we selected the first 36 subcarriers over the first 7 time slots as the UL channel, and the second 36 subcarriers over the second 7 time slots as the DL channel.
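The split just described can be sketched as a simple slicing of each simulated 72x14 frame into the UL input and the DL target:

```python
import numpy as np

# One simulated complex LTE frame: 72 subcarriers x 14 time slots.
frame = np.random.randn(72, 14) + 1j * np.random.randn(72, 14)

ul = frame[:36, :7]   # part 1: first 36 subcarriers, first 7 slots (input)
dl = frame[36:, 7:]   # part 2: last 36 subcarriers, last 7 slots (target)

assert ul.shape == (36, 7) and dl.shape == (36, 7)
```

Note that only the past time slots of the UL band are used as input, which keeps the predictor causal.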
We generated 40K independent frames (35K for training and 5K for testing). Samples of the simulated frames are shown in Fig. 7, with their real and imaginary parts shown in two separate subfigures. For convenience, in the rest of the paper we show only the absolute values of the frames.
V-B Network Structure
In this section, we explain the network structures used for DL-CSI prediction in the direct and generative approaches.
V-B1 Direct Approach
As mentioned before, a CNN is used to implement the direct approach. The designed network contains only convolutional layers (no pooling or fully connected layers), which results in lower training and testing complexity. We aimed to design the network as simply as possible, so it has only 5 hidden layers. In total, it has about 12K learnable parameters, which is very small compared to the few million parameters of typical deep networks. The same activation function is used in all layers of the network except the output layer (an alternative activation function was also tested, but the chosen one resulted in better predictions). The network structure is shown in Fig. 8.
We used the Xavier method [30] to initialize the network parameters and the Adam optimizer [31] for optimization. Except for the first two layers, which use symmetric padding, we used zero padding in all other layers.
V-B2 Generative Approach
To adapt the network structure to our image size (72x14), we used the main BEGAN [26] structure with some modifications. The network structure is shown in Fig. 9. The CSI values were also normalized to their maximum value, and during training a zero-mean normal noise with decaying variance was added to the input values to improve network regularization. Other settings are similar to [26]. During the training phase, we also faced the mode collapse problem, in which the generator produces only one or a limited set of images. It is a common problem in training GANs and, as mentioned in [26], can also be seen in BEGANs. Reference [26] suggests that decreasing the initial learning rate can help the network recover from mode collapse, but in our case it did not help. The solution we used was weight sharing between the generator and the decoder part of the discriminator.
V-C Simulation Results
V-C1 Direct Approach
After training our CNN on the EVA and ETU datasets, we used it to predict the DL-CSI. Some samples of the predicted DL-CSI and their corresponding ground truth (matrices of size 36x7) are shown in Fig. 10. The actual and predicted DL-CSI are depicted as surface (solid face colors) and meshgrid plots, respectively.
Despite the good prediction quality, as can be seen in Fig. 10, we sometimes have relatively high errors at the edge subcarriers; this is due to the structure of the convolutional layers. One way to correct the edges is to train the network on larger images and then take the middle block.
To see the performance of the predicted DL-CSI, consider a pair of a transmitter (base-station) and a receiver (user). Also assume that the user and the signaling need to be simple, so the user is not able to estimate the channel and feed it back to the base-station. In such a setting, if the base-station wants to send data to the user, it needs to pre-compensate the transmitted data.
To perform the pre-compensation, the base-station needs to know the DL-CSI, but since there is no feedback, it must predict it. The procedure is therefore that the transmitter first measures the UL-CSI and then, using the proposed scheme, predicts the DL channel state in the next time slots. Having a prediction of the DL channel, it can pre-compensate the signal.
To examine such a setting, we simulated many channel realizations (both UL and DL channels). For each case, we assume that the base-station knows only the UL channel and uses that information to predict the DL channel. Assuming the base-station wants to send a QPSK-modulated signal, before transmission it divides the signal by its prediction of the DL channel. The pre-compensated signal is then transmitted through the downlink (and thus is multiplied by the actual realization of the DL channel). Therefore, if we have a good prediction of the downlink, the two effects cancel each other out. Constellations of the received symbols are shown in Fig. 11 for the EVA and ETU channel models. As can be seen, the received constellations are well concentrated around the QPSK points, verifying high prediction accuracy.
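The pre-compensation experiment can be sketched as follows; the flat per-symbol channel realizations and the 5% prediction-error model below are hypothetical, used only to show why a good DL prediction keeps the QPSK constellation tight:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Unit-energy QPSK symbols.
bits = rng.integers(0, 2, (n, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

h_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # actual DL channel
h_pred = h_true * (1 + 0.05 * rng.standard_normal(n))          # imperfect prediction

# Divide by the prediction before transmission; the downlink then multiplies
# by the actual channel. A perfect prediction cancels exactly.
received = (qpsk / h_pred) * h_true

evm = np.mean(np.abs(received - qpsk) ** 2)  # residual error vs. ideal QPSK
```

With a 5% relative prediction error, the residual error stays small and the received points cluster around the four QPSK constellation points.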
V-C2 Generative Approach
We repeat the above studies to see the performance of the second proposed scheme.
First, we trained BEGAN on the EVA and ETU datasets to generate images resembling complete time-frequency CSI blocks. Some BEGAN-generated images are shown in Fig. 12. Note that these are images of size 72x14 (the whole time-frequency block), not just the DL-CSI.
During the completion phase, the trained BEGAN is used to find a complete CSI image that matches the corrupted CSI image (where the UL-CSI is known and the other parts are missing) in the known parts. The DL-CSI part of the resulting generated CSI image is then taken as the DL-CSI. Some completion examples using the contextual loss are shown in Fig. 13 (the different losses do not have a notable visual difference, so we do not include them here; the numerical results are reported at the end). In Fig. 13, the actual time-frequency block of size 72x14 is depicted as a surface (solid face colors) and the generated image is shown as a meshgrid. The DL-CSI corresponds to subcarriers 36 to 72 and time slots 7 to 14.
We have also tested the consistency of the DL-CSI prediction: we fixed the UL-CSI part and then executed the image completion algorithm with different initializations of the vector z to produce the complete image. Recall that one of the main problems we faced with the DCGAN structure was the large difference between completed images (for a fixed corrupted image) under different initializations of z. As seen in Fig. 14, this problem is solved by using BEGAN. The ground truth is shown as a surface and the generated images are depicted as meshgrids (there are five generated images, but as they are very close, they are not easily distinguishable).
To see the performance of this approach, we followed the same procedure as in the direct approach and simulated the constellation map of the received signal when performing signal pre-compensation using the predicted DL-CSI. The resulting constellations are shown in Fig. 15 for the EVA and ETU channel models.
V-C3 Comparative Results
In this last section, we present comparative results on the performance of the different proposed DL-CSI estimation methods. As the comparison metric, we use the Normalized MSE (NMSE), defined as
NMSE = E[ ||H − Ĥ||_F^2 / ||H||_F^2 ]   (9)

where H is the ground-truth DL-CSI, Ĥ is the predicted value, and ||·||_F denotes the Frobenius norm.
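A minimal NumPy implementation of this NMSE metric, averaged over a batch of predicted DL-CSI matrices (the function name and batch shape are ours):

```python
import numpy as np

def nmse(H_true, H_pred):
    """Normalized MSE, eq. (9): per-matrix squared Frobenius error divided
    by the squared Frobenius norm of the ground truth, averaged over the batch."""
    err = np.linalg.norm(H_true - H_pred, axis=(-2, -1)) ** 2
    ref = np.linalg.norm(H_true, axis=(-2, -1)) ** 2
    return np.mean(err / ref)

# Batch of 5 complex 36x7 DL-CSI matrices.
H = np.random.randn(5, 36, 7) + 1j * np.random.randn(5, 36, 7)
print(nmse(H, H))  # 0.0 for a perfect prediction
```

A trivial all-zero prediction gives an NMSE of 1, which makes the values in Table I easy to interpret.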
Table I: NMSE of DL-CSI prediction

Dataset | CNN    | BEGAN (Contextual) | BEGAN (Contextual+Perceptual)
EVA     | 0.0102 | 0.0638             | 0.0632
ETU     | 0.0376 | 0.0297             | 0.0308
As can be seen in Table I, the performance of the CNN (direct approach) is better than BEGAN (generative approach) on the EVA dataset, while on the ETU dataset BEGAN has relatively better results. Within BEGAN, using only the contextual loss gives a better result on the ETU dataset, while it performs slightly worse than the contextual+perceptual loss on the EVA dataset (on both datasets, however, their performances are close).
VI Conclusion
In this paper, we have proposed two data-driven approaches to predict the DL-CSI from the UL-CSI in FDD systems: the direct approach and the generative approach. Both approaches try to use the UL-CSI to determine a latent model that represents the environment propagation properties; the latent model is then used to predict the DL-CSI. To determine the latent model, we used a Convolutional Neural Network (CNN) and a Generative Adversarial Network (GAN) architecture for the direct and generative approaches, respectively. Our simulation results on the EVA and ETU channel models demonstrate the efficiency of both approaches for UL-to-DL prediction.
References
 [1] H. Ji, Y. Kim, J. Lee, E. Onggosanusi, Y. Nam, J. Zhang, B. Lee, and B. Shim, “Overview of full-dimension MIMO in LTE-Advanced Pro,” IEEE Communications Magazine, vol. 55, no. 2, pp. 176–184, 2017.
 [2] P. W. Chan, E. S. Lo, R. R. Wang, E. K. Au, V. K. Lau, R. S. Cheng, W. H. Mow, R. D. Murch, and K. B. Letaief, “The evolution path of 4G networks: FDD or TDD?,” IEEE Communications Magazine, vol. 44, no. 12, pp. 42–50, 2006.
 [3] H. Almosa, R. Shafin, S. Mosleh, Z. Zhou, Y. Li, J. Zhang, and L. Liu, “Downlink channel estimation and precoding for FDD 3D Massive MIMO/FD-MIMO systems,” in Wireless and Optical Communication Conference (WOCC), 2017 26th, pp. 1–4, IEEE, 2017.
 [4] N. Palleit and T. Weber, “Prediction of frequency selective SIMO channels,” in Personal Indoor and Mobile Radio Communications (PIMRC), 2011 IEEE 22nd International Symposium on, pp. 1428–1432, IEEE, 2011.
 [5] K. I. Pedersen, P. E. Mogensen, and F. Frederiksen, “Joint directional properties of uplink and downlink channel in mobile communications,” Electronics Letters, vol. 35, no. 16, pp. 1311–1312, 1999.
 [6] K. Hugl, K. Kalliola, and J. Laurila, “Spatial reciprocity of uplink and downlink radio channels in FDD systems,” Proc. COST 273 Technical Document TD (02), vol. 66, p. 7, 2002.
 [7] A. A. Esswie, M. El-Absi, O. A. Dobre, S. Ikki, and T. Kaiser, “A novel FDD massive MIMO system based on downlink spatial channel estimation without CSIT,” in Communications (ICC), 2017 IEEE International Conference on, pp. 1–6, IEEE, 2017.
 [8] H. Xie, F. Gao, S. Jin, J. Fang, and Y.-C. Liang, “Channel estimation for TDD/FDD massive MIMO systems with channel covariance computing,” IEEE Transactions on Wireless Communications, vol. 17, no. 6, pp. 4206–4218, 2018.
 [9] A. Decurninge, M. Guillaud, and D. T. Slock, “Channel covariance estimation in massive MIMO frequency division duplex systems,” in Globecom Workshops (GC Wkshps), 2015 IEEE, pp. 1–6, IEEE, 2015.
 [10] T. Asté, P. Forster, L. Fety, and S. Mayrargue, “Downlink beamforming avoiding DOA estimation for cellular mobile communications,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 6, p. 3313, IEEE, 1998.
 [11] D. Vasisht, S. Kumar, H. Rahul, and D. Katabi, “Eliminating channel feedback in next-generation cellular networks,” in Proceedings of the 2016 ACM SIGCOMM Conference, pp. 398–411, ACM, 2016.
 [12] D. Hu and L. He, “Channel Estimation for FDD Massive MIMO OFDM Systems,” in Vehicular Technology Conference (VTC-Fall), 2017 IEEE 86th, pp. 1–5, IEEE, 2017.
 [13] Y. Han, T.-H. Hsu, C.-K. Wen, K.-K. Wong, and S. Jin, “Efficient downlink channel reconstruction for FDD transmission systems,” in Wireless and Optical Communication Conference (WOCC), 2018 27th, pp. 1–5, IEEE, 2018.
 [14] Y. Han, J. Ni, and G. Du, “The potential approaches to achieve channel reciprocity in FDD system with frequency correction algorithms,” in Communications and Networking in China (CHINACOM), 2010 5th International ICST Conference on, pp. 1–5, IEEE, 2010.
 [15] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, p. 436, 2015.
 [16] T. O’Shea and J. Hoydis, “An introduction to deep learning for the physical layer,” IEEE Transactions on Cognitive Communications and Networking, vol. 3, no. 4, pp. 563–575, 2017.
 [17] T. J. O’Shea, T. Roy, and N. West, “Approximating the Void: Learning Stochastic Channel Models from Observation with Variational Generative Adversarial Networks,” arXiv preprint arXiv:1805.06350, 2018.
 [18] T. J. O’Shea, T. Roy, N. West, and B. C. Hilburn, “Physical Layer Communications System Design Over-the-Air Using Adversarial Networks,” arXiv preprint arXiv:1803.03145, 2018.
 [19] Z. Qin, H. Ye, G. Y. Li, and B.-H. F. Juang, “Deep learning in physical layer communications,” arXiv preprint arXiv:1807.11713, 2018.
 [20] M. Soltani, A. Mirzaei, V. Pourahmadi, and H. Sheikhzadeh, “Deep LearningBased Channel Estimation,” arXiv preprint arXiv:1810.05893, 2018.
 [21] C.-K. Wen, W.-T. Shih, and S. Jin, “Deep Learning for Massive MIMO CSI Feedback,” IEEE Wireless Communications Letters, 2018.
 [22] V. Dumoulin and F. Visin, “A guide to convolution arithmetic for deep learning,” arXiv preprint arXiv:1603.07285, 2016.
 [23] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
 [24] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved techniques for training gans,” in Advances in Neural Information Processing Systems, pp. 2234–2242, 2016.
 [25] J. Zhao, M. Mathieu, and Y. LeCun, “Energy-based generative adversarial network,” arXiv preprint arXiv:1609.03126, 2016.
 [26] D. Berthelot, T. Schumm, and L. Metz, “BEGAN: boundary equilibrium generative adversarial networks,” arXiv preprint arXiv:1703.10717, 2017.
 [27] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434, 2015.

 [28] R. A. Yeh, C. Chen, T.-Y. Lim, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do, “Semantic Image Inpainting with Deep Generative Models,” in CVPR, vol. 2, p. 4, 2017.
 [29] C. Mehlführer, J. C. Ikuno, M. Šimko, S. Schwarz, M. Wrulich, and M. Rupp, “The Vienna LTE simulators: Enabling reproducibility in wireless communications research,” EURASIP Journal on Advances in Signal Processing, vol. 2011, no. 1, p. 29, 2011.
 [30] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256, 2010.
 [31] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.