Inference in the complex domain is a fundamental task in both signal processing 
and machine learning. Among the approaches proposed over the years, complex-valued neural networks (CVNNs) are attracting significant interest [3, 4, 5, 6], as they promise to extend the recent breakthroughs of (real-valued) deep learning to complex-valued problems, such as forecasting and control of complex signals. Working in the complex domain, however, poses a range of unique problems arising from the properties of complex algebra. Foremost among them is the design of complex activation functions [3, 7]. Several works end up using naive split formulations, wherein the real and imaginary parts of the activation are processed independently, at a cost in terms of expressiveness.
In [5] we proposed a different approach, in which the activation functions are learned directly in the complex domain via a simple mono-dimensional parameterization. The idea, based on the concept of kernel activation functions (KAFs) originally developed for the real domain in [9], is to model each function as an independent one-dimensional kernel model, whose mixing weights are adapted through back-propagation, while the dictionary of the kernel expansion is fixed in advance by sampling the complex plane. Despite the empirical performance shown in [5] on multiple benchmark problems, in this paper we argue that the expressiveness of each KAF, as defined in [5], is still limited when working in the complex domain. In particular, it was recently shown that the standard formulation of complex-valued kernel methods (which is also adopted in the KAF) is insufficient to model a large class of signals, because more than a single kernel is needed to capture the statistics of a complex signal [10, 11]. This observation leads to the concept of pseudo-kernels and to widely linear kernel methods.
Contribution of the paper: we combine the ideas of [5] and [10] and propose a widely linear KAF (WL-KAF) model, a non-parametric activation function defined directly in the complex domain with no constraints on its expressiveness (as opposed to [5]). We experiment with different choices for the kernel and pseudo-kernel, showing consistent improvements on a series of image classification benchmarks in the complex domain, with higher accuracy and faster convergence during optimization.
Organization of the paper: in Sections 2 and 3 we recall the formulation of CVNNs and complex-valued activation functions. Section 4 describes the proposed WL-KAF. Then, we empirically validate its performance in Section 5, before concluding in Section 6 with some remarks on future lines of research.
2 Complex-valued neural networks
A CVNN is defined analogously to its real-valued counterpart as the composition of $L$ layers:

$$f(\mathbf{x}) = (g_L \circ g_{L-1} \circ \cdots \circ g_1)(\mathbf{x}) \qquad (1)$$

where $\mathbf{x}$ is the input to the network. Each layer is composed of an adaptable linear projection followed by an element-wise nonlinearity $g(\cdot)$:

$$g_l(\mathbf{h}) = g(\mathbf{W}_l \mathbf{h} + \mathbf{b}_l) \qquad (2)$$
where $\mathbf{W}_l$ and $\mathbf{b}_l$ are a matrix and a vector that contain (complex-valued) adaptable parameters. While we focus on feedforward networks, we note that by replacing (2) with more elaborate formulations one can obtain complex equivalents of other types of NNs, e.g., convolutional or recurrent networks [3, 4, 6]. Given training pairs $\{(\mathbf{x}_i, y_i)\}_{i=1}^N$, we train the network by minimizing a regularized loss:

$$J(\mathbf{w}) = \sum_{i=1}^{N} l(y_i, f(\mathbf{x}_i)) + \lambda \cdot r(\mathbf{w}) \qquad (3)$$
where all adaptable parameters are collected in $\mathbf{w}$, $l(\cdot, \cdot)$ is a loss function, $r(\cdot)$ a regularization term, and $\lambda$ a real-valued scalar (chosen by the user) weighting the regularization. An example of complex loss is the squared one:

$$l(y, \hat{y}) = \lvert y - \hat{y} \rvert^2 \qquad (4)$$
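As a minimal sketch of these definitions, the forward pass of one layer (2) and the complex squared loss can be written in a few lines of NumPy. The split-tanh nonlinearity below is only a placeholder (it is one of the activations discussed in the next section), and all shapes are illustrative:

```python
import numpy as np

def layer(h, W, b):
    # One CVNN layer as in (2): a complex linear projection followed by
    # an element-wise nonlinearity (a placeholder split-tanh here).
    z = W @ h + b
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def squared_loss(y, y_hat):
    # Complex squared loss: the squared modulus of the error.
    return np.abs(y - y_hat) ** 2

# Toy forward pass with random complex parameters (illustrative shapes).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
b = rng.standard_normal(4) + 1j * rng.standard_normal(4)
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
h = layer(x, W, b)
print(h.shape)                        # (4,)
print(squared_loss(1 + 1j, 1 - 1j))  # 4.0
```

Note that the loss is real-valued even though its arguments are complex, which is what allows standard gradient-based minimization.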
3 Complex-valued activation functions
As we stated in the introduction, the design of $g(\cdot)$ in the complex domain is more challenging than in the real-valued case, mostly due to Liouville's theorem, which states that any bounded function that is analytic over the whole complex plane must be constant. (We only consider the choice of $g(\cdot)$ for the hidden layers, while the choice of the activation function in the output layer depends on the task; see also Section 5.) It is common, for example, to work in a split fashion:
$$g(s) = g_R(\operatorname{Re}(s)) + i\, g_R(\operatorname{Im}(s)) \qquad (5)$$

where $s$ is a single (scalar) activation, $\operatorname{Re}(s)$ and $\operatorname{Im}(s)$ are the real and imaginary components of $s$, and $g_R(\cdot)$ is a generic real-valued activation function. Alternative approaches involve phase-amplitude functions acting only on the magnitude of the activations, e.g.:

$$g(s) = \tilde{g}(\lvert s \rvert)\, \exp(i \angle s) \qquad (6)$$
where $\tilde{g}(\cdot)$ is a real-valued squashing function and $\angle s$ is the phase of $s$. As mentioned in Section 1, other authors have also proposed the use of fully complex trigonometric functions, or different variants of the ReLU (commonly used in the real-valued case). We refer to [5] for a more general overview of the topic. Generally speaking, none of these approaches clearly outperforms the others in practice, making this an open research field.
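The two families above are easy to contrast in code. Below, the split activation applies a real nonlinearity to each part independently, while the phase-amplitude activation squashes only the magnitude and provably preserves the phase (tanh is used here as one common squashing choice):

```python
import numpy as np

def split_activation(z, g=np.tanh):
    # Split formulation: a real-valued g applied independently
    # to the real and imaginary parts of the activation.
    return g(z.real) + 1j * g(z.imag)

def phase_amplitude_activation(z):
    # Phase-amplitude formulation: squash only the magnitude |z|,
    # leaving the phase of z untouched.
    return np.tanh(np.abs(z)) * np.exp(1j * np.angle(z))

z = np.array([3 + 4j, -1 - 1j])
out = phase_amplitude_activation(z)
print(np.allclose(np.angle(out), np.angle(z)))  # True: phases preserved
```

The split version, in contrast, generally changes the phase, which is one source of the expressiveness concerns raised in the introduction.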
3.1 Kernel activation functions
In [5] we proposed to alleviate the problem of designing complex activation functions by learning their shape directly in the complex domain. To this end, we model each activation function (separately for every neuron) with a small number of complex-valued adaptable parameters, representing the linear coefficients in a kernel-based expansion. To introduce the model, we start by sampling the complex plane uniformly around the origin, with a resolution chosen by the user, as shown pictorially in Fig. 1. The resulting $D$ elements $d_1, \ldots, d_D$ form our dictionary. Given this fixed dictionary, a kernel activation function (KAF) in the complex domain is defined as follows ([5] also considers a split version of the standard KAF; we focus here on the fully complex extension):

$$g(s) = \sum_{n=1}^{D} \gamma_n\, \kappa(s, d_n) \qquad (7)$$
where $\kappa(\cdot, \cdot)$ is a valid kernel function over complex inputs, $\boldsymbol{\kappa}(s)$ is a column vector containing the kernel values computed between $s$ and the dictionary elements, and the parameters $\gamma_1, \ldots, \gamma_D$ are adapted independently for every neuron, together with the linear weights in (2), via standard back-propagation. Fixing the dictionary in advance allows for an extremely efficient (vectorized) implementation of (7).
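A vectorized KAF evaluation can be sketched as follows. The grid limits, the resolution, and the Gaussian kernel on $\lvert s - d \rvert$ are illustrative assumptions for this sketch; the paper's actual kernel choices over complex RKHSs are discussed next:

```python
import numpy as np

def make_dictionary(lim=2.0, points_per_axis=5):
    # Fixed dictionary: a uniform grid over the complex plane,
    # sampled once and shared by all neurons.
    axis = np.linspace(-lim, lim, points_per_axis)
    re, im = np.meshgrid(axis, axis)
    return (re + 1j * im).ravel()      # D = points_per_axis ** 2 elements

def kaf(s, gamma_mix, d, bandwidth=1.0):
    # Vectorized KAF: kernel values between every activation in s and
    # every dictionary element, mixed by the adaptable coefficients.
    # (Illustrative Gaussian kernel on |s - d|.)
    K = np.exp(-bandwidth * np.abs(s[:, None] - d[None, :]) ** 2)
    return K @ gamma_mix               # one output per activation

d = make_dictionary()
rng = np.random.default_rng(0)
gamma_mix = rng.standard_normal(d.size) + 1j * rng.standard_normal(d.size)
s = np.array([0.5 + 0.2j, -1.0j, 0.0])
print(kaf(s, gamma_mix, d).shape)  # (3,)
```

Because the dictionary is fixed, the kernel matrix computation is a single broadcasted operation, which is what makes the per-neuron functions cheap to evaluate and differentiate.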
The choice of $\kappa(\cdot, \cdot)$ can leverage a large body of literature on complex reproducing kernel Hilbert spaces [16, 17]. In particular, in [5] we performed experiments with a complex-valued extension of the classical Gaussian kernel (8), where $\gamma$ is a bandwidth hyper-parameter, and with the independent kernel (9) proposed in [10].
4 Proposed widely linear KAF
The key motivation for this paper is that the model in (7) is limited in the kind of complex-valued functions it can approximate, an observation first made in [10]. To see this, note that one can express the complex function $g(s)$ in terms of a kernel method with two outputs, namely the real part $\operatorname{Re}(g(s))$ and the imaginary part $\operatorname{Im}(g(s))$. According to the theory of vector-valued kernel methods [18], the corresponding kernel is now matrix-valued, and the output can be written in terms of the resulting kernel blocks (11). In this formulation, we have four column vectors of kernel values, corresponding to the four blocks of the matrix-valued kernel, and two sets of linear weights, one for the real output and one for the imaginary output. Substituting (7) into (11) shows that (7) forces rigid constraints tying the two sets of weights together, limiting the expressiveness of the overall model. A solution to this is the adoption of widely linear kernel methods [10, 11].
Following this observation, we propose an extension of the complex-valued KAF adopting widely linear kernels, which we term the widely linear KAF (WL-KAF):

$$g(s) = \sum_{n=1}^{D} \left[ \gamma_n\, \kappa(s, d_n) + \bar{\gamma}_n\, \tilde{\kappa}(s, d_n) \right] \qquad (12)$$

where $\tilde{\kappa}(\cdot, \cdot)$ is called the ‘pseudo-kernel’ and $\bar{\gamma}_n$ is the complex conjugate of $\gamma_n$. The model in (12) does not impose the previously discussed limitations, and it can be shown to remove the constraints forced by (7).
Depending on the choice of the kernel and pseudo-kernel, the resulting model has greater expressiveness than the standard one. In the context of KAFs and CVNNs, the model has two additional properties in its favor. Firstly, as we will see shortly, since the dictionary is fixed, the kernel and pseudo-kernel can generally share a large amount of computation, making the modification extremely cheap in terms of speed. Secondly, the use of widely linear models does not increase the number of adaptable parameters, since in our case we only adapt the mixing coefficients $\gamma_1, \ldots, \gamma_D$. Following [10], in the experiments we consider two different choices for the kernel and pseudo-kernel.
Case 1: if we assume that the real and imaginary parts of the output are independent, the off-diagonal blocks in (11) cancel, and the kernel and pseudo-kernel simplify accordingly. In this case we use (10) with two separate bandwidth parameters for the real and the imaginary parts. More specifically, both bandwidths in our experiments are initialized following the rule of thumb from [9], but are subsequently adapted via back-propagation independently for every neuron.
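Under the Case-1 independence assumption, a WL-KAF forward pass could be sketched as below. The specific kernel/pseudo-kernel pair (sum and difference of two real Gaussian kernels on the real and imaginary parts) is a common construction in the widely linear kernel literature and is assumed here for illustration; the exact forms used in the paper may differ:

```python
import numpy as np

def real_gauss(a, b, gamma):
    # Real Gaussian kernel between real-valued inputs a and b.
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def wl_kaf_case1(s, gamma_mix, d, gamma_r=1.0, gamma_i=1.0):
    # WL-KAF sketch under Case 1: kernel and pseudo-kernel built from
    # two real Gaussian kernels on the real and imaginary parts, each
    # with its own (independently adaptable) bandwidth. The mixing
    # weights and their conjugates are shared, so the number of
    # adaptable parameters matches the standard KAF.
    K_r = real_gauss(s.real, d.real, gamma_r)
    K_i = real_gauss(s.imag, d.imag, gamma_i)
    kernel = K_r + K_i                  # assumed kernel form
    pseudo = K_r - K_i                  # assumed pseudo-kernel form
    return kernel @ gamma_mix + pseudo @ np.conj(gamma_mix)

rng = np.random.default_rng(0)
d = rng.uniform(-2, 2, 25) + 1j * rng.uniform(-2, 2, 25)
gamma_mix = rng.standard_normal(25) + 1j * rng.standard_normal(25)
s = np.array([0.1 + 0.3j, -0.5 - 0.5j])
print(wl_kaf_case1(s, gamma_mix, d).shape)  # (2,)
```

Note how both the kernel and the pseudo-kernel reuse the matrices K_r and K_i, which is the shared-computation property mentioned above.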
Case 2: when the real and imaginary parts are not assumed independent, we can exploit the theory of separable kernels and mixed-effect regularizers introduced for vector-valued kernels [18]. In our case we obtain, for a hyper-parameter chosen by the user, a kernel and a pseudo-kernel expressed as combinations of component kernels, all of which are real-valued in output. As before, one can exploit different Gaussian kernels as in (10), letting the different bandwidths adapt via back-propagation.
5 Experimental evaluation
Table 1: Test accuracy (mean and standard deviation) for the complex-valued image classification benchmarks (see main discussion for the preprocessing phase); the compared models include the proposed WL-KAF (Case 1) and the proposed WL-KAF (Case 2). The first two rows are taken from [5]. The best results for each dataset are highlighted in bold.
We evaluate the two proposed WL-KAFs on a series of complex-valued image classification benchmarks extended from [5]. We consider four problems:
MNIST (http://yann.lecun.com/exdb/mnist/), composed of images belonging to ten digit classes.
Fashion MNIST (F-MNIST) [19]: a variant of MNIST where the classes are clothing items, with the same dimensionality and size as MNIST.
Extended MNIST (EMNIST) [20]: we use the ‘Digits’ extension, comprising several hundred thousand images of handwritten digits.
Latin OCR [21]: a multi-class OCR problem of handwritten Latin characters extracted from manuscripts of the Vatican Secret Archives.
To convert these to complex-valued problems, we adopt the procedure from [5]: we preprocess each image with a fast Fourier transform (FFT), rank the coefficients of the FFT in terms of significance (by considering their mean absolute value), and keep only the most significant coefficients as input to the models.
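The preprocessing pipeline can be sketched as follows. The number of retained coefficients `k` and the choice of ranking by mean absolute value over the batch are illustrative assumptions, since the exact values are not stated in this excerpt:

```python
import numpy as np

def fft_features(images, k=100):
    # Convert a batch of real images to complex feature vectors:
    # 1) take the 2D FFT of each image,
    # 2) rank coefficients by mean absolute value across the batch,
    # 3) keep only the k most significant coefficients.
    F = np.fft.fft2(images)                    # (N, H, W), complex
    flat = F.reshape(F.shape[0], -1)           # (N, H*W)
    scores = np.abs(flat).mean(axis=0)         # significance per coefficient
    top = np.argsort(scores)[::-1][:k]         # indices of the top-k coeffs
    return flat[:, top]                        # (N, k), complex features

X = np.random.default_rng(0).random((8, 28, 28))
Z = fft_features(X, k=100)
print(Z.shape, Z.dtype)  # (8, 100) complex128
```

The resulting inputs are genuinely complex-valued, which is what makes these otherwise real-valued image benchmarks suitable for testing CVNNs.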
The results in [5] are taken as a baseline, to which we add two CVNNs with the same topology as in [5] (three hidden layers), exploiting the proposed WL-KAFs. We build the dictionary by sampling a fixed number of equispaced points on each axis of the complex plane. For Case 2 in (18), we use the Gaussian kernel in (10) for both kernels, and select the mixed-effect hyper-parameter as described in Section 4. As stated before, in all cases the kernel bandwidth in (10) is initialized with the rule of thumb from [9] and then adapted independently for every kernel via back-propagation. The KAFs are applied only to the intermediate layers, while the output of the last linear projection is fed to a softmax-like function to compute the class probabilities.
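Since the network output is complex while class probabilities must be real, one natural ‘softmax-like’ choice is a standard softmax applied to the magnitudes of the outputs. The magnitude-based form below is an assumption for illustration, as the excerpt does not spell out the exact function:

```python
import numpy as np

def complex_softmax(z):
    # Map a complex output vector to real class probabilities by
    # applying a numerically stabilized softmax to the magnitudes |z|.
    m = np.abs(z)
    e = np.exp(m - m.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

p = complex_softmax(np.array([1 + 1j, 0.0, 2 - 1j]))
print(np.isclose(p.sum(), 1.0))  # True: a valid probability vector
```

Any monotone real statistic of the outputs (magnitude, squared magnitude, real part) could play the same role; the magnitude is simply the most common choice.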
We minimize a regularized cross-entropy over the training data, where the amount of regularization is found through grid search as in [5]. We use a variant of the Adagrad algorithm on random mini-batches of images to perform the optimization. We further employ an early stopping procedure, halting the optimization whenever the accuracy computed over the validation split of the dataset has not improved for a fixed number of iterations.
The results of the experiments are provided in Table 1. “Real-valued NN” is a NN with the same dimensionality as the others, but treating the real and imaginary parts of the input vector as separate inputs. “KAF” is the KAF in (7) using the independent kernel in (9). As can be seen, CVNNs with the proposed WL-KAFs achieve superior performance in all cases, without introducing additional parameters compared to the standard complex-valued KAF. This increase in performance also translates into faster convergence, an example of which (on the Latin OCR dataset) is shown in Fig. 2.
6 Conclusions
In this paper we proposed a new model for learning activation functions for complex-valued neural networks. The model extends the idea of kernel activation functions (KAFs) by incorporating recent ideas from the field of widely linear kernel methods. Compared to the standard KAF, the widely linear KAF does not require additional trainable parameters while possessing increased flexibility. On a set of complex-valued image classification benchmarks, it achieves better accuracy on all problems while at the same time converging faster during optimization. Future work will consider a formal analysis of the generalization properties of the proposed KAFs, and their evaluation on more elaborate complex-valued benchmarks. For the latter, we plan a more comprehensive evaluation of kernels over complex spaces, along with the definition of proper strategies for selecting complex hyper-parameters (e.g., complex-valued learning rates in the optimization procedure).
-  P. J. Schreier and L. L. Scharf, Statistical signal processing of complex-valued data: the theory of improper and noncircular signals, Cambridge University Press, 2010.
-  A. Hirose, Complex-valued neural networks: theories and applications, vol. 5, World Scientific, 2003.
-  N. Guberman, “On complex valued convolutional neural networks,” arXiv preprint arXiv:1602.09046, 2016.
-  C. Trabelsi, O. Bilaniuk, D. Serdyuk, S. Subramanian, J. F. Santos, S. Mehri, N. Rostamzadeh, Y. Bengio, and C. J. Pal, “Deep complex networks,” 35th International Conference on Machine Learning (ICML), 2018.
-  S. Scardapane, S. Van Vaerenbergh, A. Hussain, and A. Uncini, “Complex-valued neural networks with non-parametric activation functions,” IEEE Transactions on Emerging Topics in Computational Intelligence, 2019, in press.
-  I. Shafran, T. Bagby, and R. J. Skerry-Ryan, “Complex evolution recurrent neural networks (ceRNNs),” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 5854–5858.
-  M. Arjovsky, A. Shah, and Y. Bengio, “Unitary evolution recurrent neural networks,” in 33rd International Conference on Machine Learning (ICML), 2016, pp. 1120–1128.
-  H. Leung and S. Haykin, “The complex backpropagation algorithm,” IEEE Transactions on Signal Processing, vol. 39, no. 9, pp. 2101–2104, 1991.
-  S. Scardapane, S. Van Vaerenbergh, S. Totaro, and A. Uncini, “Kafnets: kernel-based non-parametric activation functions for neural networks,” Neural Networks, vol. 110, pp. 19–32, 2019.
-  R. Boloix-Tortosa, J. J. Murillo-Fuentes, I. Santos, and F. Pérez-Cruz, “Widely linear complex-valued kernel methods for regression,” IEEE Transactions on Signal Processing, vol. 65, no. 19, pp. 5240–5248, 2017.
-  R. Boloix-Tortosa, J. J. Murillo-Fuentes, F. J. Payán-Somet, and F. Pérez-Cruz, “Complex Gaussian processes for regression,” IEEE Transactions on Neural Networks and Learning Systems, 2018.
-  T. Kim and T. Adalı, “Approximation by fully complex multilayer perceptrons,” Neural Computation, vol. 15, no. 7, pp. 1641–1666, 2003.
-  K. Kreutz-Delgado, “The complex gradient operator and the CR-calculus,” arXiv preprint arXiv:0906.4835, 2009.
-  T. Nitta, “An extension of the back-propagation algorithm to complex numbers,” Neural Networks, vol. 10, no. 8, pp. 1391–1415, 1997.
-  G. M. Georgiou and C. Koutsougeras, “Complex domain backpropagation,” IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, no. 5, pp. 330–334, 1992.
-  I. Steinwart, D. Hush, and C. Scovel, “An explicit description of the reproducing kernel Hilbert spaces of Gaussian RBF kernels,” IEEE Transactions on Information Theory, vol. 52, no. 10, pp. 4635–4643, 2006.
-  P. Bouboulis and S. Theodoridis, “Extension of Wirtinger’s calculus to reproducing kernel Hilbert spaces and the complex kernel LMS,” IEEE Transactions on Signal Processing, vol. 59, no. 3, pp. 964–978, 2011.
-  M. A. Alvarez, L. Rosasco, and N. D. Lawrence, “Kernels for vector-valued functions: A review,” Foundations and Trends® in Machine Learning, vol. 4, no. 3, pp. 195–266, 2012.
-  H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms,” arXiv preprint arXiv:1708.07747, 2017.
-  G. Cohen, S. Afshar, J. Tapson, and A. van Schaik, “EMNIST: an extension of MNIST to handwritten letters,” arXiv preprint arXiv:1702.05373, 2017.
-  D. Firmani, P. Merialdo, E. Nieddu, and S. Scardapane, “In Codice Ratio: OCR of handwritten Latin documents using deep convolutional networks,” in 11th International Workshop on Artificial Intelligence for Cultural Heritage (AI*CH 2017), CEUR Workshop Proceedings, 2017, pp. 9–16.
-  P. Bouboulis, S. Theodoridis, C. Mavroforakis, and L. Evaggelatou-Dalla, “Complex support vector machines for regression and quaternary classification,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 6, pp. 1260–1274, 2015.
-  H. Zhang and D. P. Mandic, “Is a complex-valued stepsize advantageous in complex-valued gradient learning algorithms?,” IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 12, pp. 2730–2735, 2016.