1 Introduction
Innovative engineering constantly looks for smart solutions that can be deployed in the field for both civil and military applications and, at the same time, aims at creating adequate instruments to support developers throughout the development process so that correct software can be deployed. Modern technological solutions imply a vast use of sensors that monitor an equipped area and collect data, which is then mined and analyzed for specific purposes. Classic examples are smart buildings and smart cities [1, 2].
Sensor integration across multiple platforms can generate vast amounts of data that need to be analyzed in real time, both by algorithmic means and by human operators. The nature of this information is unpredictable a priori, given that sensors are likely to encounter both naturally variable conditions in the field and disinformation attempts targeting the network protocols.
This information needs to be transmitted through a distributed combat cloud with variable but limited bandwidth available at each node. Furthermore, the protocol has to be resistant to multiple node failures.
The scaling of the information distribution also benefits from a pure feedforward nature, since the need for bidirectional communication scales poorly with the likely network latency and information loss, both of which are considerable in practical scenarios [3, 4]. This requirement puts our desired adaptive system into the wider framework of recent highly scalable feedforward algorithms that have been inspired by biology [5].
2 Linear sensor encodings
Linear encoding of sensor information has a disadvantage in that it cannot make certain optimizations, such as highly efficient Huffman-like encodings on the bit level. On the other hand, it is very robust when it encodes continuous data, since it is isometric. This means that we will not see large disruptions in the sample distance, which makes linear encodings highly suitable for later machine learning analysis and human observation. This isometry also makes the encoding resistant to noisy data transfers, which is essential in order to achieve efficient network scaling of real-time data.
The advantage of a possible nonlinear encoding is further diminished if we consider uncertainty in our data distribution estimate. A small error in our knowledge can cause a large inefficiency in the encoding and large losses for lossy compression. For linear encodings all these aspects are limited, especially considering the easy use of regularization methods.
The advantage of linear encodings is that they possess a particular series of useful properties. To start with, if our hidden layer $Y$ forms an orthonormal basis of the input layer we can represent the encoding as:

$$\sigma_x^2 = \sum_i \sigma_{y_i}^2 + \varepsilon^2 \quad (1)$$

Here $\sigma_x^2$ is the variance in the input space, $\sigma_{y_i}^2$ is the variance of each component $y_i$ of $Y$ and $\varepsilon^2$ is the squared error of the encoding. This is obvious if we add the excluded variables and consider a single data point:

$$x_j = \sum_i y_{ij} w_i + \varepsilon_j \quad (2)$$

and

$$\|x_j\|^2 = \sum_i y_{ij}^2 + \|\varepsilon_j\|^2 \quad (3)$$

where $\varepsilon_j$ is the error for data point $j$. Summing both sides and dividing by the number of data points $N$ we get:

$$\sigma_x^2 = \sum_i \sigma_{y_i}^2 + \varepsilon^2 \quad (4)$$
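The decomposition in equation (4) is easy to check numerically. The sketch below (NumPy; the random data and the 3-dimensional hidden layer are arbitrary illustrative choices, not from the text) projects zero-mean data onto an orthonormal basis and verifies that the input variance splits exactly into transmitted variance plus squared reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean data: 200 samples in 5 dimensions, with unequal scales.
X = rng.normal(size=(5, 200)) * np.array([[3.0], [2.0], [1.0], [0.5], [0.1]])
X -= X.mean(axis=1, keepdims=True)

# An orthonormal basis of a 3-dimensional subspace (stand-in for the hidden layer Y).
W, _ = np.linalg.qr(rng.normal(size=(5, 3)))   # columns are orthonormal
Y = W.T @ X                                    # encoded components
X_hat = W @ Y                                  # linear reconstruction

var_x = np.mean(np.sum(X**2, axis=0))          # total input variance
var_y = np.mean(np.sum(Y**2, axis=0))          # summed component variances
err = np.mean(np.sum((X - X_hat)**2, axis=0))  # squared reconstruction error

assert np.isclose(var_x, var_y + err)          # equation (4) holds exactly
```

Because the residual is orthogonal to the encoded subspace, the identity holds to machine precision, not just approximately.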
3 PCA in networks
The problem of encoding in node networks is usually considered from the perspective of neural networks. We will keep this terminology to retain the vocabulary predominant in the literature. A recent review of current algorithms for performing principal component analysis (PCA) in a node network or neural network is [6]. We will proceed here with deriving PCA in linear neural networks using a new simple notation that we will later use to illustrate the new algorithms.

Assume inputs are normalized so that they have zero mean. In this case, each output can be described as $y_j = w^T x_j$, where $x_j$ is the input vector, $w$ is the weight vector of the neuron and $j$ is the index of the input in the training data. The outputs form a basis of the input space and if $w_i^T w_j = 0$ for all $i \neq j$ and $\|w_i\| = 1$ for all $i$, then the basis is orthonormal.

Let us first consider the simple case of a single neuron. We would like to maximize the variance on training data $\sigma_y^2 = \frac{1}{N} \sum_j y_j^2$, where we define $y_j = w^T x_j$, given an input matrix $X$ formed by listing all the presented inputs column-wise, with the constraint $\|w\| = 1$. Expanding:
$$\sigma_y^2 = \frac{1}{N} w^T X X^T w = w^T C w \quad (5)$$

where $C = \frac{1}{N} X X^T$ is the correlation matrix of our data, using the assumption that inputs have zero mean. The derivative is given by

$$\frac{\partial \sigma_y^2}{\partial w} = 2 C w \quad (6)$$
Note that the vector above describes the gradient of the variance in weight space. Taking a step of fixed length along the positive direction of this gradient derives the Hebb rule:

$$\Delta w = \eta \frac{\partial \sigma_y^2}{\partial w} = 2 \eta C w \quad (7)$$

$$\Delta w = \frac{2\eta}{N} \sum_j y_j x_j \quad (8)$$
Since we have no restrictions on the length of our weight vector, this will always have a component in the positive direction of $w$. This unlimited growth of the weight vector is easily limited by normalizing the weight vector after each step, dividing by its length: $w \leftarrow w / \|w\|$. If we thus restrict our weight vector to unit length and note that $C$ is a positive semidefinite matrix, we end up with a semidefinite programming problem:

$$\max_w \; w^T C w \quad (9)$$

subject to

$$w^T w = 1 \quad (10)$$
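A minimal numerical sketch of the normalized Hebb rule (NumPy; the data dimensions, learning rate and step count are arbitrary illustrative choices) shows the weight vector converging to the largest principal component:

```python
import numpy as np

rng = np.random.default_rng(1)

# Zero-mean training data and its correlation matrix C.
X = rng.normal(size=(4, 500)) * np.array([[2.0], [1.0], [0.5], [0.25]])
X -= X.mean(axis=1, keepdims=True)
C = X @ X.T / X.shape[1]

w = rng.normal(size=4)
eta = 0.1
for _ in range(500):
    w = w + eta * (C @ w)       # Hebb step: always grows the weight vector
    w = w / np.linalg.norm(w)   # renormalize to unit length after each step

# w aligns with the largest eigenvector (first principal component) of C.
top = np.linalg.eigh(C)[1][:, -1]
assert abs(w @ top) > 0.99
```

With the normalization this is exactly power iteration on $C$, which is why convergence to the dominant eigenvector is guaranteed from almost every starting point.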
It is thus guaranteed, except if we start at an eigenvector, that gradient ascent converges to the global maximum, i.e. the largest principal component. An alternative to weight normalization is to subtract the $\hat{w}$ component of the gradient explicitly, where $\hat{w}$ is the unit vector in the direction of $w$. In this case we would calculate:

$$\Delta w = \eta \left( C w - (\hat{w}^T C w) \hat{w} \right) \quad (11)$$
For a step-based gradient ascent we cannot assume $\|w\|$ will be kept constant in the step direction. We can instead use the closely related

$$\Delta w = \eta \left( C w - (w^T C w) w \right) \quad (12)$$

The difference is that this overcompensates for the $w$ component if $\|w\| > 1$ and undercompensates if $\|w\| < 1$. This essentially means that $\|w\|$ will converge towards 1.
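Update rule (12) can be simulated directly. The batch sketch below (NumPy; parameters are illustrative) shows that the norm of $w$ drifts to 1 without any explicit normalization while $w$ aligns with the first principal component:

```python
import numpy as np

rng = np.random.default_rng(2)

# Zero-mean data and its correlation matrix C.
X = rng.normal(size=(4, 1000)) * np.array([[2.0], [1.0], [0.5], [0.25]])
X -= X.mean(axis=1, keepdims=True)
C = X @ X.T / X.shape[1]

w = 0.1 * rng.normal(size=4)    # start far from unit length
eta = 0.05
for _ in range(2000):
    # Oja-style step: Delta w = eta * (C w - (w^T C w) w)
    w = w + eta * (C @ w - (w @ C @ w) * w)

top = np.linalg.eigh(C)[1][:, -1]
assert np.isclose(np.linalg.norm(w), 1.0, atol=1e-3)  # ||w|| converges to 1
assert abs(w @ top) > 0.99                            # first principal component
```

At the fixed point $Cw = (w^T C w) w$, so $w$ is an eigenvector and the self-consistency of the subtracted term forces $\|w\| = 1$, matching the argument in the text.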
The constrained optimum can be characterized directly. The gradients of the target and of the constraint are

$$\frac{\partial}{\partial w} \left( w^T C w \right) = 2 C w \quad (13)$$

$$\frac{\partial}{\partial w} \left( w^T w \right) = 2 w \quad (14)$$

The derivative orthogonal to the constraint can be calculated as follows:

$$\Delta w_{\perp} = 2 C w - 2 (w^T C w) w \quad (15)$$

This means that we have an optimum if

$$C w = (w^T C w) w \quad (16)$$

Since $w^T C w$ is a scalar, $w$ is an eigenvector of $C$ with eigenvalue $\lambda = w^T C w$. This learning algorithm is equivalent to Oja's rule [7].
3.1 Generalized Hebbian Algorithm
The idea behind the generalized Hebbian algorithm (GHA) [8] is as follows:

1. Use Oja's rule to get $w_i$

2. Use deflation to remove variance along $w_i$

3. $i := i + 1$

4. Go to step 1
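These four steps can be sketched compactly in batch form (NumPy; `oja()` reuses rule (12) from above, and all sizes, rates and step counts are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Zero-mean data and its correlation matrix.
X = rng.normal(size=(4, 1000)) * np.array([[3.0], [2.0], [1.0], [0.5]])
X -= X.mean(axis=1, keepdims=True)
C0 = X @ X.T / X.shape[1]

def oja(C, eta=0.02, steps=3000):
    """Batch Oja's rule: converges to the top eigenvector of C."""
    w = 0.1 * rng.normal(size=C.shape[0])
    for _ in range(steps):
        w = w + eta * (C @ w - (w @ C @ w) * w)
    return w

C = C0.copy()
ws = []
for _ in range(3):
    w = oja(C)                              # step 1: top remaining component
    C = C - (w @ C @ w) * np.outer(w, w)    # step 2: deflation removes its variance
    ws.append(w)                            # steps 3-4: advance index, repeat

# Components are orthogonal and come out in order of decreasing variance.
assert abs(ws[0] @ ws[1]) < 1e-2 and abs(ws[0] @ ws[2]) < 1e-2
assert ws[0] @ C0 @ ws[0] > ws[1] @ C0 @ ws[1] > ws[2] @ C0 @ ws[2]
```

The deflation step subtracts $\lambda_i w_i w_i^T$ from the correlation matrix, which is exactly the projection argument made in the next paragraph.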
Subtraction of the $w_i$ dimension projects the space into the subspace spanned by the remaining principal components. The target function remains positive for all eigenvectors not eliminated by this projection, while $\sigma_y^2 = 0$ in the eliminated direction $w_i$. Repeating the algorithm after this step guarantees that we will get the largest remaining component at each step. The GHA requires several steps to calculate the smaller components and uses a specialized architecture. The signal needs to pass through $i$ neurons in order to calculate the $i$-th principal component, and the GHA uses two different types of neurons to achieve this.
We define information as the variance of the transmitted signal and seek encodings that attempt to maximize the transmitted information. In other words, the total variance transmitted by a linear transform is equal to the variance of the data projected onto a subspace of the original input space. The variance in this subspace plus the squared error of our reconstruction is equal to the variance of the input.

Summarizing, minimizing the reconstruction error of our encoding is equivalent to maximizing the variance of the output. This is complementary to, and not antagonistic with, the concept of sparse encodings disentangling the factors of variation [9].
3.2 Distributed PCA
Principal component analysis is the optimal linear encoding for minimizing the reconstruction error, but it still leaves room for improvement. Can we do better? In PCA, as much information as possible is put in each consecutive component. This leaves the encoding vulnerable to the loss of a node or neuron, potentially losing a majority of the information as a result.

The PCA subspace remains the optimal subspace in this sense regardless of the vectors chosen to span it. Thus, any rotation of the orthonormal basis is also an optimal linear encoding.
Theorem 3.1
There exists an encoding of the PCA space such that the information along each component is equivalent, $\sigma_{y_i}^2 = \sigma_{y_j}^2$ for all $i, j$. This encoding minimizes the maximum possible error of any combination of lost components.
Proof
Starting from the eigenvectors $w_i$, we can rotate any pair of vectors in the plane spanned by these vectors. As long as orthogonality is preserved, the sum of the variance in the dimensions spanned by these vectors is constant. Expressed as an average:

$$\bar{\sigma}^2 = \frac{1}{n} \sum_i \sigma_{y_i}^2 \quad (17)$$

Now, if not all variances are identical, there has to exist a pair of indices $i$ and $j$ such that $\sigma_{y_i}^2 > \bar{\sigma}^2 > \sigma_{y_j}^2$. We can then find a rotation in the plane spanned by these vectors such that $\sigma_{y_j}^2 = \bar{\sigma}^2$. This simple algorithm can be repeated until $\sigma_{y_i}^2 = \bar{\sigma}^2$ for all $i$.
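The proof is constructive, and the pair rotations translate directly into code. A sketch (NumPy; bisection on the rotation angle is used in place of a closed-form angle, and the diagonal test matrix is an arbitrary example in which the PCA basis is simply the identity):

```python
import numpy as np

def equalize(W, C, tol=1e-9):
    """Rotate an orthonormal basis (columns of W) so that the variance w^T C w
    is the same along every column; subspace and total variance are unchanged."""
    W = W.copy()
    n = W.shape[1]
    target = np.trace(W.T @ C @ W) / n          # average variance, rotation-invariant
    for _ in range(n):                          # at most n - 1 pair rotations needed
        v = np.array([w @ C @ w for w in W.T])
        i, j = int(np.argmax(v)), int(np.argmin(v))
        if v[i] - v[j] < tol:
            break
        wi, wj = W[:, i].copy(), W[:, j].copy()
        lo, hi = 0.0, np.pi / 2                 # rotate w_j toward w_i until its
        for _ in range(60):                     # variance hits the average (bisection)
            t = 0.5 * (lo + hi)
            wj_new = np.cos(t) * wj + np.sin(t) * wi
            lo, hi = (t, hi) if wj_new @ C @ wj_new < target else (lo, t)
        t = 0.5 * (lo + hi)
        W[:, j] = np.cos(t) * wj + np.sin(t) * wi
        W[:, i] = np.cos(t) * wi - np.sin(t) * wj
    return W

C = np.diag([4.0, 2.0, 1.0, 0.5])               # example correlation matrix
W = equalize(np.eye(4), C)
v = np.diag(W.T @ C @ W)
assert np.allclose(v, v.mean(), atol=1e-6)      # equal information per component
assert np.allclose(W.T @ W, np.eye(4), atol=1e-9)  # basis stays orthonormal
```

Each rotation pins one column exactly at the average variance, and pinned columns are never selected again, so the loop terminates after at most $n - 1$ rotations.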
In matrix form this can be formulated as:

$$W' = R W \quad (18)$$

where $R$ is a rotation matrix and the rows of $W$ are the eigenvectors. Orthonormal basis:

$$R R^T = I \quad (19)$$

$$W' W'^T = R W W^T R^T = R R^T = I \quad (20)$$

$$\operatorname{tr}(W' C W'^T) = \operatorname{tr}(W C W^T) = n \bar{\sigma}^2 \quad (21)$$

$$\operatorname{diag}(W' C W'^T) = \bar{\sigma}^2 \mathbf{1} \quad (22)$$
This seems like a promising candidate for a robust linear encoding, and future work will further explore the possibility of calculating these encodings using Hebbian algorithms. For the moment, we will instead focus on the eigenvectors of the correlation matrix used in regular PCA.
3.3 Simple Hebbian PCA
We propose a new method for calculating the PCA encoding in a single time step, using a single weight matrix $W$.

For use in distributed transmission systems an ideal algorithm should process only local and explicitly transmitted information, in terms of the inputs $x$ and the outputs $y$ of its neighbors. In other words, each node possesses knowledge about its neighbors' transmitted signal, but not their weights or other information. The Simple Hebbian PCA is described in pseudocode in Algorithm 1.
3.3.1 Convergence property
The first principal component can be calculated as $w_1 = \arg\max_{\|w\|=1} w^T C w$. This step is equivalent to Oja's algorithm.

Let $n$ be the index of the largest eigenvector calculated so far. The known eigenvectors $w_1, \ldots, w_n$ of the correlation matrix $C$ have corresponding eigenvalues $\lambda_1, \ldots, \lambda_n$. We can now calculate component $n + 1$.
Lemma 1
$$f(w) = w^T C w - \sum_{i=1}^{n} \frac{(w^T C w_i)^2}{\lambda_i} \quad (23)$$

has, for $\|w\| = 1$, a maximum at $w = w_{n+1}$, where $w_{n+1} \perp w_i$ for all $i \leq n$ and $f(w_{n+1}) = \lambda_{n+1}$.
Proof
We have an optimum if the gradient lies in the direction of the constraint $w^T w = 1$, i.e.

$$\frac{\partial f}{\partial w} = k w \quad (24)$$

for some constant $k$. The gradient is

$$\frac{\partial f}{\partial w} = 2 C w - \sum_{i=1}^{n} \frac{2 (w^T C w_i)}{\lambda_i} C w_i = 2 \left( C - \sum_{i=1}^{n} \lambda_i w_i w_i^T \right) w \quad (25)$$

which further simplifies to

$$\frac{\partial f}{\partial w} = 2 C' w \quad (26)$$

where we define $C'$ as the resulting matrix of the above parenthesis. To reach an optimum we seek

$$C' w = k' w \quad (27)$$

where $k'$ is some scalar.
$C$ is symmetric and real. Hence, its eigenvectors span the input space. $\sum_{i=1}^{n} \lambda_i w_i w_i^T$ is a sum of symmetric matrices. Consequently $C'$ is symmetric with the same number of orthogonal eigenvectors. As we see in equations 28 and 29, every eigenvector $w_j$ of $C$ is an eigenvector of $C'$, with eigenvalue $0$ if $j \leq n$ and $\lambda_j$ if $j > n$:

$$C' w_j = C w_j - \lambda_j w_j = 0, \quad j \leq n \quad (28)$$

$$C' w_j = C w_j = \lambda_j w_j, \quad j > n \quad (29)$$

Since the $\lambda_j$ are ordered by definition, $\lambda_{n+1}$ is the largest eigenvalue of $C'$.
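This eigenvalue structure is straightforward to confirm numerically; a small NumPy sketch (the random positive-definite $C$ and the choice $n = 2$ are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(5)

A = rng.normal(size=(4, 4))
C = A @ A.T                             # random symmetric positive-definite C
lam, V = np.linalg.eigh(C)
lam, V = lam[::-1], V[:, ::-1]          # order lambda_1 >= lambda_2 >= ...

n = 2                                   # suppose w_1..w_n are already known
Cp = C - sum(lam[i] * np.outer(V[:, i], V[:, i]) for i in range(n))

# Eigenvalue 0 for the removed directions, lambda_j for the rest,
# so lambda_{n+1} is the largest eigenvalue of C'.
for j in range(4):
    expected = 0.0 if j < n else lam[j]
    assert np.allclose(Cp @ V[:, j], expected * V[:, j])
assert np.isclose(np.linalg.eigvalsh(Cp).max(), lam[n])
```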
$C'$ is symmetric with non-negative eigenvalues. As a result $C'$ is positive semidefinite. For this reason the maximization problem

$$\max_w \; w^T C' w \quad (30)$$

subject to

$$w^T w = 1 \quad (31)$$

forms another convex optimization problem and gradient ascent will reach the global optimum, except if we start our ascent exactly at one of the other eigenvectors $w_j$, $j \neq n + 1$. For random starting vectors the probability of this is zero.
The projection of the gradient onto the surface created by the weight normalization follows $C' w - (w^T C' w) w$, i.e. even for steps not in the actual direction of the unconstrained gradient, the step lies in a direction of positive gradient.
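Putting the pieces together, successive components can be recovered by gradient ascent on the target in equation (23), renormalizing the weight vector after each step. The batch sketch below (NumPy) uses the full correlation matrix for clarity, whereas the distributed algorithm described in the text works from per-node signals; sizes, rates and step counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# Zero-mean data and its correlation matrix.
X = rng.normal(size=(4, 2000)) * np.array([[3.0], [2.0], [1.0], [0.5]])
X -= X.mean(axis=1, keepdims=True)
C = X @ X.T / X.shape[1]

def next_component(C, known, eta=0.02, steps=4000):
    """Ascend f(w) = w^T C w - sum_i (w^T C w_i)^2 / lambda_i with ||w|| = 1."""
    w = rng.normal(size=C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(steps):
        g = 2 * C @ w                           # gradient of the variance term
        for wi, li in known:
            g -= (2.0 / li) * (w @ C @ wi) * (C @ wi)  # penalty for known components
        w += eta * g
        w /= np.linalg.norm(w)                  # weight normalization
    return w, w @ C @ w                         # component and its variance

known = []
for _ in range(3):
    w, lam = next_component(C, known)
    known.append((w, lam))                      # lambda_i estimated as w_i^T C w_i

# Recovered components match the leading eigenvectors of C, in order.
evecs = np.linalg.eigh(C)[1]
for k, (w, lam) in enumerate(known):
    assert abs(w @ evecs[:, -1 - k]) > 0.98
```

Unlike the GHA sketch earlier, nothing is subtracted from $C$ itself: each node only needs the transmitted signals $w_i^T x$ of the previously converged nodes, which is what makes the rule suitable for the distributed setting.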
This algorithm has some degree of similarity to several existing algorithms, namely the Rubner-Tavan PCA algorithm [10], the APEX algorithm [11] and their symmetric relatives [12]. In contrast to these, we only require learning of a single set of weights per node and avoid the weight set for connections within each layer.
4 Conclusions
We have proposed a new algorithm, Simple Hebbian PCA, and proved that it is able to calculate the PCA in a distributed fashion across nodes. It simplifies existing network structures by removing intralayer weights, essentially cutting the number of weights that need to be trained in half.

This means that the proposed algorithm has an architecture that can be used to organize information flow with a minimum of communication overhead in distributed networks. It automatically adjusts itself in real time so that the transmitted data covers the optimal subspace for reconstructing the original sensory data and is reasonably resistant to data corruption.
In future work we will provide empirical results on the convergence properties. We also seek to derive symmetric versions of our algorithm that use the same learning rule for each node, or in an alternative formulation, that use symmetric intralayer connections.

Eventually we also strive toward arguing for biological analogies of the proposed communication protocol as a way of transmitting information in biological neural networks.
References
 [1] K. Khanda, D. Salikhov, K. Gusmanov, M. Mazzara, and N. Mavridis, “Microservice-based IoT for smart buildings,” in 31st International Conference on Advanced Information Networking and Applications Workshops, AINA 2017 Workshops, Taipei, Taiwan, March 27-29, 2017, pp. 302–308, 2017.
 [2] D. Salikhov, K. Khanda, K. Gusmanov, M. Mazzara, and N. Mavridis, “Jolie good buildings: Internet of things for smart building infrastructure supporting concurrent apps utilizing distributed microservices,” in Selected Papers of the First International Scientific Conference Convergent Cognitive Information Technologies (Convergent 2016), pp. 48–53, 2016.
 [3] T. Soyata, R. Muraleedharan, J. Langdon, C. Funai, S. Ames, M. Kwon, and W. Heinzelman, “COMBAT: mobile-cloud-based compute/communications infrastructure for battlefield applications,” vol. 8403, pp. 84030K–84030K–13, 2012.
 [4] C. Kruger and G. P. Hancke, “Implementing the internet of things vision in industrial wireless sensor networks,” in Industrial Informatics (INDIN), 2014 12th IEEE International Conference on, pp. 627–632, IEEE, 2014.
 [5] L. Johard and E. Ruffaldi, “A connectionist actor-critic algorithm for faster learning and biological plausibility,” in 2014 IEEE International Conference on Robotics and Automation, ICRA 2014, Hong Kong, China, May 31 - June 7, 2014, pp. 3903–3909, IEEE, 2014.
 [6] J. Qiu, H. Wang, J. Lu, B. Zhang, and K.-L. Du, “Neural network implementations for PCA and its extensions,” vol. 2012, 2012.
 [7] E. Oja, “Simplified neuron model as a principal component analyzer,” Journal of mathematical biology, vol. 15, no. 3, pp. 267–273, 1982.
 [8] T. D. Sanger, “Optimal unsupervised learning in a single-layer linear feedforward neural network,” Neural Networks, vol. 2, no. 6, pp. 459–473, 1989.
 [9] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, pp. 1798–1828, Aug. 2013.
 [10] J. Rubner and P. Tavan, “A self-organizing network for principal-component analysis,” EPL (Europhysics Letters), vol. 10, no. 7, p. 693, 1989.
 [11] S. Kung and K. Diamantaras, “A neural network learning algorithm for adaptive principal component extraction (apex),” in International Conference on Acoustics, Speech, and Signal Processing, pp. 861–864, IEEE, 1990.
 [12] C. Pehlevan, T. Hu, and D. B. Chklovskii, “A Hebbian/anti-Hebbian neural network for linear subspace learning: A derivation from multidimensional scaling of streaming data,” Neural Computation, 2015.