Much research has been devoted to PolSAR image classification, and breakthroughs have benefited from the development and application of deep convolutional neural networks (DCNNs). PolSAR data are usually expressed by coherency or covariance matrices, which contain amplitude and phase information in complex-number form. However, a general real-valued DCNN loses significant phase information when it is applied to interpret PolSAR data directly.
Zhou et al. convert a complex-valued coherency or covariance matrix into a normalized 6-D real-valued vector for PolSAR data classification, thereby discarding the important phase information. Instead of converting complex numbers directly into real numbers, other strategies have also been introduced. Besides extending the coherency matrix to the rotation domain, Chen et al. take the null angle and roll-invariant polarimetric features as input to extract ample polarimetric features. Liu et al. propose a novel polarimetric scattering coding method to gain more polarimetric features for classification. However, all of these operations remain in the real number domain.
Instead, to make full use of the information in PolSAR data, some complex-valued DNN models have been proposed. Inspired by the theory of complex-valued convolutional neural networks (CV-CNN), Zhang et al. first applied a CV-CNN to PolSAR data classification and achieved great success. Besides retaining phase information, CV-CNNs have the strengths of faster learning and convergence.
In addition, deep learning is a data-driven approach, yet labeled samples are extremely scarce for PolSAR data. Thus, unsupervised or semi-supervised networks, such as the deep convolutional autoencoder, have been used for PolSAR classification. Meanwhile, the generative adversarial network (GAN) is able to augment data: it learns the underlying distribution of actual data and generates fake data with the same distribution. With successful applications in many fields (e.g., natural image generation and neural dialogue generation), the GAN architecture has received increasing attention in recent years. To further address the shortage of labeled data, it is advisable to combine the GAN architecture with semi-supervised learning. Therefore, in this paper, we propose a complex-valued GAN framework.
Our novel model has three advantages: 1) the complex-valued neural network complies with the physical mechanism of complex numbers and retains both the amplitude and phase information of PolSAR data; 2) the GAN extended to the complex domain can generate additional PolSAR samples whose distribution is similar to that of actual samples, and these extra samples improve classification performance; 3) besides labeled data, unlabeled data are also used to update the model parameters through semi-supervised learning, improving network performance to a certain extent.
2 Semi-Supervised Complex-Valued GAN
2.1 Network Architecture
The data generated by a general real-valued GAN differ from PolSAR data in both features and distribution. Therefore, we extend the real-valued GAN to the complex number domain and propose a complex-valued GAN. Figure 1 illustrates the framework of our model, which is composed of a Complex-valued Generator and a Complex-valued Discriminator. The framework consists of complex-valued full connection, complex-valued deconvolution, complex-valued convolution, complex-valued activation functions, and complex-valued batch normalization, represented by "CFC", "CDeConv", "CConv", "CA", and "CBN", respectively. In addition, the complex-valued network makes full use of the amplitude and phase features of PolSAR data.
In the Complex-valued Generator, after a series of complex-valued operations, two randomly generated vectors, shown as the green and blue blocks, are transformed into a complex-valued matrix with the same shape and distribution as PolSAR data. In the Complex-valued Discriminator, complex-valued operations extract complete complex-valued features, which come in real/imaginary pairs. We then concatenate the real and imaginary parts of the last feature in the real domain for final classification. During training, generated fake data together with labeled and unlabeled actual data are used to alternately train this complex-valued GAN by semi-supervised learning, until the network can effectively identify the authenticity of input data and achieve correct classification.
2.2 Complex-Valued Operation Mask
To simplify the calculation, we express a complex number in algebraic form, in which the real and imaginary parts are one-dimensional real numbers. We use $z_1 = a + b\mathrm{i}$ and $z_2 = c + d\mathrm{i}$ to denote two complex numbers; multiplication and addition are then defined as follows:
$$z_1 z_2 = (ac - bd) + (ad + bc)\mathrm{i}, \qquad z_1 + z_2 = (a + c) + (b + d)\mathrm{i}.$$
To specify the complex-valued operations in detail, a complex-valued operation mask is proposed, as shown in Figure 2, where the green and blue blocks represent the real and imaginary parts, respectively. Under this mask, the input data $(x_r, x_i)$, the weight $(w_r, w_i)$, and the output data $(y_r, y_i)$ each consist of a real part and an imaginary part, with
$$y_r = x_r w_r - x_i w_i, \qquad y_i = x_r w_i + x_i w_r.$$
Therefore, one complex-valued operation can be decomposed into four traditional real multiplications, one addition, and one subtraction. Each complex-valued operation in our network complies with this mask. Keeping the same expression and physical mechanism for the data and the network parameters favors obtaining complete data features for classification.
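As an illustration, the decomposition above can be sketched in NumPy; `complex_op` is a hypothetical helper name, and the same pattern applies elementwise inside complex-valued convolution or fully connected layers:

```python
import numpy as np

def complex_op(x_r, x_i, w_r, w_i):
    """Multiply (x_r + i*x_i) by (w_r + i*w_i) using four real
    multiplications, one subtraction, and one addition."""
    y_r = x_r * w_r - x_i * w_i  # real part
    y_i = x_r * w_i + x_i * w_r  # imaginary part
    return y_r, y_i

# Cross-check against NumPy's native complex arithmetic
x = np.array([1.0 + 2.0j, 3.0 - 1.0j])
w = np.array([0.5 - 0.5j, 2.0 + 1.0j])
y_r, y_i = complex_op(x.real, x.imag, w.real, w.imag)
assert np.allclose(y_r + 1j * y_i, x * w)
```

In a complex-valued convolution layer the same decomposition holds with the elementwise products replaced by real-valued convolutions.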
2.3 Complex-Valued Batch Normalization
Batch normalization has been widely used in deep neural networks to unify data and accelerate convergence. In addition, complex-valued batch normalization can stabilize the performance of GANs. However, scarce training samples and small batch sizes restrict the effect of batch normalization.
To address this issue, a novel batch normalization is proposed in this paper. The per-batch expectation and covariance matrices are replaced by continuously updated average expectation and covariance matrices, so that they retain information from all samples seen during training. The following formulation shows the normalization of the $t$th batch $x_t$:
$$\tilde{x}_t = \bar{V}_t^{-\frac{1}{2}} \left( x_t - \bar{E}_t \right),$$
where $\bar{E}_t$ and $\bar{V}_t$ represent the average expectation and covariance matrix accumulated from the 1st to the $t$th batch, computed as follows:
$$\bar{E}_t = (1 - \lambda)\,\bar{E}_{t-1} + \lambda\, E[x_t], \qquad \bar{V}_t = (1 - \lambda)\,\bar{V}_{t-1} + \lambda\, V[x_t],$$
where $\lambda$ denotes the length of the state remembered and is equal to $1/t$. The square root of a $2 \times 2$ matrix is computed as
$$\bar{V}_t^{\frac{1}{2}} = \frac{\bar{V}_t + \sqrt{\det(\bar{V}_t)}\, I}{\sqrt{\operatorname{tr}(\bar{V}_t) + 2\sqrt{\det(\bar{V}_t)}}}.$$
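For a $2 \times 2$ symmetric positive-definite matrix, a standard closed form for the square root is $(V + \sqrt{\det V}\, I)/\sqrt{\operatorname{tr} V + 2\sqrt{\det V}}$, a consequence of the Cayley-Hamilton theorem; a small NumPy check (function name is illustrative):

```python
import numpy as np

def sqrtm_2x2(V):
    """Closed-form square root of a 2x2 symmetric positive-definite matrix:
    V^(1/2) = (V + sqrt(det V) * I) / sqrt(tr V + 2 * sqrt(det V))."""
    s = np.sqrt(np.linalg.det(V))
    return (V + s * np.eye(2)) / np.sqrt(np.trace(V) + 2.0 * s)

V = np.array([[2.0, 0.5],
              [0.5, 1.0]])
S = sqrtm_2x2(V)
assert np.allclose(S @ S, V)  # S is indeed a square root of V
```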
This operation translates the data to zero mean and unit variance. Ultimately, complex-valued batch normalization is expressed as
$$\mathrm{CBN}(x_t) = \gamma\, \tilde{x}_t + \beta,$$
where $\gamma$ and $\beta$ are two learnable parameters that reconstruct the distribution.
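A minimal NumPy sketch of one step of this running-average normalization, treating each sample's real/imaginary parts as a 2-vector; the dict-based state, `lam` (playing the role of $1/t$), and the function name are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def complex_batch_norm(x_r, x_i, state, lam, gamma, beta):
    """One step of running-average complex batch normalization.
    x_r, x_i: real/imaginary parts of a batch, each of shape (N,).
    state: dict holding the running mean (2,) and covariance (2, 2).
    lam: weight of the current batch in the running statistics.
    gamma: 2x2 scaling matrix; beta: length-2 shift (both learnable)."""
    x = np.stack([x_r, x_i], axis=1)                  # (N, 2)
    mean = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-5 * np.eye(2)  # regularized covariance
    # Update running statistics so they summarize all batches seen so far.
    state["mean"] = (1 - lam) * state["mean"] + lam * mean
    state["cov"] = (1 - lam) * state["cov"] + lam * cov
    # Whiten with the inverse square root of the running 2x2 covariance,
    # using the closed form (V + sqrt(det V) I) / sqrt(tr V + 2 sqrt(det V)).
    s = np.sqrt(np.linalg.det(state["cov"]))
    sqrt_cov = (state["cov"] + s * np.eye(2)) / np.sqrt(np.trace(state["cov"]) + 2.0 * s)
    x_white = (x - state["mean"]) @ np.linalg.inv(sqrt_cov).T
    # Learnable affine transform restores representational capacity.
    y = x_white @ gamma.T + beta
    return y[:, 0], y[:, 1], state
```

With `gamma` set to the identity and `beta` to zero, the normalized batch has approximately zero mean and identity covariance across its real/imaginary components.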
2.4 Semi-Supervised Learning
In this complex-valued GAN, to further exploit the features of unlabeled data, we optimize the network by semi-supervised learning with a softmax classifier. The output of the discriminator (D) is a $(K+1)$-dimensional vector $p$, where $p_1$ to $p_K$ are the probabilities of the $K$ classes and $p_{K+1}$ is the probability of the input image being fake. To optimize the generator (G) and discriminator (D), we define the loss function as follows:
$$L = L_{labeled} + L_{unlabeled} + L_{generated},$$
where $L_{labeled}$, $L_{unlabeled}$, and $L_{generated}$ represent the classification losses of labeled, unlabeled, and generated samples, respectively. The classification losses of labeled and generated samples are easily acquired. However, the classification loss of unlabeled samples is not easy to express because the ground truth is not explicit. To handle this problem, the output probability of the softmax is operated as follows:
$$D(x) = \frac{p_{\max}}{p_{\max} + p_{K+1}},$$
where $p_{\max}$ denotes the maximum value among $p_1, \ldots, p_K$, and logistic regression is utilized as a binary classifier. When the output approaches 1, the probability that the input is real increases accordingly, so the authenticity of the data is discriminated. By this deduction, unlabeled data can also be used to update our network model.
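As a rough sketch, assuming the discriminator's softmax output has shape (N, K+1) and the real/fake score is formed from the largest class probability against the fake probability (one possible reading of the scheme above; the function and variable names are illustrative):

```python
import numpy as np

def semi_supervised_losses(p_labeled, y, p_unlabeled, p_generated):
    """Sum of the three discriminator losses for a (K+1)-way softmax output p,
    where p[:, :K] are the K class probabilities and p[:, K] is the fake
    probability. p_*: arrays of shape (N, K+1); y: integer labels in [0, K)."""
    eps = 1e-12
    K = p_labeled.shape[1] - 1
    # Labeled samples: cross-entropy on the true class.
    l_labeled = -np.mean(np.log(p_labeled[np.arange(len(y)), y] + eps))
    # Unlabeled samples: should be judged real, i.e. D(x) -> 1.
    p_max = p_unlabeled[:, :K].max(axis=1)
    d_real = p_max / (p_max + p_unlabeled[:, K] + eps)
    l_unlabeled = -np.mean(np.log(d_real + eps))
    # Generated samples: should be judged fake, i.e. assigned class K+1.
    l_generated = -np.mean(np.log(p_generated[:, K] + eps))
    return l_labeled + l_unlabeled + l_generated
```

The generator would be trained against the opposite objective on its generated batch.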
3 Experiments
In our experiments, two benchmark data sets, Flevoland and San Francisco, are used. To verify the effectiveness of our method, our model is compared with a complex-valued convolutional neural network (CV-CNN) and a real-valued convolutional neural network (RV-CNN), both of which have configurations similar to our Complex-valued Discriminator. The overall accuracy (OA), average accuracy (AA), and Kappa coefficient are used to measure the performance of all methods.
3.1 Experiments on Standard Data Set
We use the coherency matrix T, a conjugate-symmetric complex-valued matrix that follows the complex Wishart distribution, to express all information of the corresponding pixel in a PolSAR image. In the Flevoland data, 0.2%, 0.5%, 0.8%, 1.0%, 1.2%, 1.5%, 1.8%, 2.0%, 3.0%, and 5.0% of the labeled data in each of the 15 categories are randomly selected as training data, with the remaining labeled data used for testing. In addition, 10% of the unlabeled samples are used to train our semi-supervised complex-valued GAN. In the San Francisco data, we randomly choose 10, 20, 30, 50, 80, 100, 120, 150, 200, and 300 labeled samples in each of the 5 categories for training, and 10% of the data, whether labeled or not, as actual samples.
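Since T is Hermitian, its six upper-triangular entries carry all the information of a pixel; a small sketch of extracting them as complex-valued input channels (the helper name and channel ordering are illustrative assumptions, not necessarily the paper's exact layout):

```python
import numpy as np

def coherency_features(T):
    """Extract the six independent upper-triangular entries of a 3x3
    Hermitian coherency matrix T; the lower triangle is their conjugate
    and carries no extra information. Returns a length-6 complex vector."""
    assert np.allclose(T, T.conj().T), "T must be Hermitian"
    return T[np.triu_indices(3)]

# A toy Hermitian matrix standing in for one PolSAR pixel
A = np.array([[1, 2j, 1 + 1j], [0, 3, -2j], [0, 0, 2]], dtype=complex)
T = np.triu(A) + np.triu(A, 1).conj().T
f = coherency_features(T)
assert f.shape == (6,)
```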
The parameters of all experiments are set as follows: the patch size is , the learning rate is 0.0005, and the optimization method is Adam with momentum parameters $\beta_1$ and $\beta_2$. Figures 3 and 4 show how OA, AA, and Kappa change with the sampling ratio on the two data sets. In the Flevoland data, the results verify the superiority of our network with few labeled samples; this advantage is especially obvious when fewer than 3.0% of the samples are used for training. The same advantage appears in the San Francisco data, especially when the number of training samples per class is below 50. To exhibit the contribution of our model on each category, Table 1 lists the test accuracies on the Flevoland data with a 0.8% sampling ratio and on the San Francisco data with 10 labeled training samples per class. In the Flevoland data, accuracies generally improve across categories, especially for the fifteenth category, which has the fewest training samples and achieves accuracy increases of 65.1% and 33.17% over the real-valued and complex-valued neural networks, respectively. In the San Francisco data, the complex-valued GAN further improves classification accuracy over both baselines, especially for Developed, Low-Density Urban, and High-Density Urban, with increases of 44.7%, 220.9%, and 187.4%, respectively.
3.2 Generated Data Analysis
To analyze the effectiveness of our complex-valued GAN, we examine the similarity of actual and generated data in both appearance and distribution. Taking the Flevoland data as an example, we randomly select 100 samples and plot pseudo-color images of the real parts of the diagonal elements of T, as shown in Figure 5. The generated data clearly bear a high similarity to the actual data. Based on the known distribution of the T matrix, we further compare the statistics of actual and generated data in Figure 6. For the actual data, the real- and imaginary-part histograms of two elements of T are shown in (a1)-(a4); (b1)-(b4) show the corresponding histograms of the generated data. The generated data again exhibit a high similarity to the actual data.
4 Conclusion
In this paper, a complex-valued GAN is proposed to classify PolSAR data. Nearly all operations are extended to the complex number field, so the model obeys the physical meaning of PolSAR data and preserves complete phase and amplitude features. To the best of our knowledge, this is the first time that complex-valued data are generated by a network, and the generated data are similar to actual complex-valued data in both appearance and distribution. The complex-valued GAN is alternately trained with generated, labeled, and unlabeled data by semi-supervised learning. By exploiting the features of unlabeled and generated samples, our complex-valued semi-supervised GAN clearly outperforms the other models, especially when labeled samples are insufficient. It opens up a new way to address the problem of scarce complex-valued samples.
-  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton, "Imagenet classification with deep convolutional neural networks," in NIPS, 2012, pp. 1097–1105.
-  Yu Zhou, Haipeng Wang, Feng Xu, and Ya-Qiu Jin, “Polarimetric sar image classification using deep convolutional neural networks,” IEEE Geoscience and Remote Sensing Lett., vol. 13, no. 12, pp. 1935–1939, 2016.
-  Si-Wei Chen and Chen-Song Tao, “Polsar image classification using polarimetric-feature-driven deep convolutional neural network,” IEEE Geoscience and Remote Sensing Lett., vol. 15, no. 4, pp. 627–631, 2018.
-  Xu Liu, Licheng Jiao, Xu Tang, Qigong Sun, and Dan Zhang, “Polarimetric convolutional network for polsar image classification,” IEEE Trans. Geosci. Remote Sens., 2018.
-  Nitzan Guberman, “On complex valued convolutional neural networks,” arXiv preprint arXiv:1602.09046, 2016.
-  Zhimian Zhang, Haipeng Wang, Feng Xu, and Ya Qiu Jin, “Complex-valued convolutional neural network and its application in polarimetric sar image classification,” IEEE Trans. Geosci. Remote Sens., vol. PP, no. 99, pp. 1–12, 2017.
-  T Nitta, “On the critical points of the complex-valued neural network,” in Neural Information Processing, 2002. ICONIP’02. Proceedings of the 9th International Conference on. IEEE, 2002, vol. 3, pp. 1099–1103.
-  Jie Geng, Jianchao Fan, Hongyu Wang, Xiaorui Ma, Baoming Li, and Fuliang Chen, “High-resolution sar image classification via deep convolutional autoencoders,” IEEE Geoscience and Remote Sensing Lett., vol. 12, no. 11, pp. 2351–2355, 2015.
-  Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, “Generative adversarial nets,” in NIPS, 2014, pp. 2672–2680.
-  Alec Radford, Luke Metz, and Soumith Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint:1511.06434, 2015.
-  Jiwei Li, Will Monroe, Tianlin Shi, Sébastien Jean, Alan Ritter, and Dan Jurafsky, “Adversarial learning for neural dialogue generation,” arXiv preprint:1701.06547, 2017.
-  Nathaniel R Goodman, "Statistical analysis based on a certain multivariate complex gaussian distribution (an introduction)," The Annals of Mathematical Statistics, vol. 34, no. 1, pp. 152–177, 1963.