SUSI: Supervised Self-Organizing Maps for Regression and Classification in Python

03/26/2019, by Felix M. Riese, et al.

In many research fields, the sizes of the existing datasets vary widely. Hence, there is a need for machine learning techniques which are well-suited for these different datasets. One possible technique is the self-organizing map (SOM), a type of artificial neural network which is, so far, weakly represented in the field of machine learning. The SOM's unique characteristic is the neighborhood relationship of the output neurons. This relationship improves the ability to generalize on small datasets. SOMs are mostly applied in unsupervised learning, and few studies focus on using SOMs as a supervised learning approach. Furthermore, no appropriate SOM package that meets machine learning standards is available in the widely used programming language Python. In this paper, we introduce the freely available SUpervised Self-organIzing maps (SUSI) Python package, which performs supervised regression and classification. The implementation of SUSI is described with respect to the underlying mathematics. Then, we present a first evaluation of the SOM for regression and classification datasets from two different domains of geospatial image analysis. Despite the early stage of its development, the SUSI framework performs well and is characterized by only small performance differences between the training and the test datasets. A comparison of the SUSI framework with existing Python and R packages demonstrates its importance. In future work, the SUSI framework will be extended, optimized and upgraded, e.g. with tools to better understand and visualize the input data as well as to handle missing and incomplete data.


1 Introduction

With increasing computing power and increasing amounts of data, artificial neural networks (ANN) have become a standard tool for regression and classification tasks. Feed-forward neural networks and convolutional neural networks (CNN) are the most common types of ANN in current research. According to the no free lunch theorem by Wolpert and Macready (1995), a variety of possible tools is necessary to be able to adapt to new tasks.

One underrepresented type of ANN is the self-organizing map (SOM). The SOM was introduced by Kohonen (1982, 1990, 1995, 2013). It is a shallow ANN architecture consisting of an input layer and a 2-dimensional (2D) grid as output layer. The latter is fully connected to the input layer. In addition, the neurons on the output grid are interconnected with each other through a neighborhood relationship. Changes to the weights of one output neuron also affect the neurons in its neighborhood. This unique characteristic decreases overfitting on the training dataset. Further, the 2D output grid visualizes the results of the SOM comprehensibly. Such a plain visualization does not exist for the majority of ANNs.

In the following, we give a brief overview of the various SOM applications in different fields of research. Most SOMs are applied in unsupervised learning like clustering, visualization and dimensionality reduction. A good overview of SOMs as unsupervised learners and their applications in the research field of water resources is presented by Kalteh et al. (2008). SOMs are also applied to the scientific field of maritime environment research (Lobo, 2009). One major application of SOMs is clustering data (Vesanto and Alhoniemi, 2000) and cluster-wise regression (Muruzábal et al., 2012).

SOMs can be combined with other machine learning techniques. For example, the output of the unsupervised SOM is used by Hsu et al. (2009) as input for a support vector regressor to forecast stock prices. Hsu et al. (2002) present the combination of SOMs with linear regression in hydrology. SOMs can also be used for data fusion, e.g. for plant disease detection (Moshou et al., 2005). Hagenbuchner and Tsoi (2005) add majority voting to SOMs for the application as a supervised classifier. The combination of SOMs and nearest-neighbor classification is shown by Ji (2000) for the classification of land use. Additional combinations of unsupervised SOMs and supervised algorithms used for classification are presented by Martinez et al. (2001); Zaccarelli et al. (2003); Zhong et al. (2006); Fessant et al. (2001). One example of the application of SOMs to solve non-linear regression tasks is presented by Hecht et al. (2015) in the field of robotics.

Two of the most popular programming languages for machine learning applications are Python and R. Programming frameworks like scikit-learn (Pedregosa et al., 2011) in Python have simplified the application of existing machine learning techniques considerably. While in the programming language R the kohonen package (Wehrens and Kruisselbrink, 2018) provides a standardized framework for SOMs, in Python no such standard SOM package exists yet.

In this paper, we introduce the Python package SUpervised Self-organIzing maps (SUSI) framework for regression and classification. It is the first Python package that provides unsupervised and supervised SOM algorithms for easy usage. The SUSI framework is freely available on GitHub (Riese, 2019). This ensures maintainability and the opportunity for future upgrades by the authors as well as any member of the community. The implementation was briefly introduced in Riese and Keller (2018b) with respect to the regression of soil moisture (Keller et al., 2018b) and the estimation of water quality parameters (Keller et al., 2018a). The main contributions of this paper are:

  • the implementation of the SUSI framework including the combination of an unsupervised SOM and a supervised SOM for regression and classification tasks that is able to perform on small as well as on large datasets without significant overfitting,

  • the mathematical description of all implemented processes of the SUSI framework with a consistent variable naming convention for the unsupervised SOM in Section 2 and the supervised SOM in Section 3,

  • a first evaluation of the regression and classification capabilities of the SUSI framework in Sections 4.1 and 4.2,

  • a detailed comparison of the SUSI framework with existing Python and R packages based on a list of requirements in Section 4.3 and

  • an outlook into the future of the SUSI framework in Section 5.

2 SUSI part 1: unsupervised learning

In the following, we describe the architecture and mathematics of the unsupervised part of the SUSI framework. The grid of a SOM, the map, can be implemented in different topologies. In this paper, we use the simplest topology: a 2D rectangular grid consisting of $n_{row} \times n_{col}$ nodes. This grid is fully connected to the input layer, as each node is connected to all input features via weights. The variable naming conventions of the SUSI framework are given in Table 1. The training process of the unsupervised SOM is illustrated in Figure 1 and consists of the following steps:

  1. Initialize the SOM.

  2. Get random input datapoint.

  3. Find best matching unit (BMU) (cf. Section 2.1).

  4. Calculate learning rate (cf. Section 2.2) and neighborhood function (cf. Section 2.3).

  5. Calculate neighborhood distance weight matrix (cf. Section 2.4).

  6. Modify SOM weight matrix (cf. Section 2.5).

  7. Repeat from step 2 until the maximum number of iterations is reached.
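The seven steps above can be condensed into a minimal, self-contained toy implementation in plain Python. This is an illustrative sketch only, not the SUSI code itself: the grid size, the linearly decreasing schedules (cf. Section 2.2) and the random toy dataset are arbitrary choices.

```python
import math
import random

random.seed(0)

# Toy dataset: N = 50 datapoints with n = 2 features each, values in [0, 1].
data = [[random.random(), random.random()] for _ in range(50)]

n_rows, n_cols, t_max = 5, 5, 200
alpha_0, sigma_0 = 0.5, 2.0

# Step 1: initialize the SOM weights randomly.
weights = {(r, c): [random.random(), random.random()]
           for r in range(n_rows) for c in range(n_cols)}

def euclidean(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

for t in range(1, t_max + 1):
    x = random.choice(data)                                           # step 2
    bmu = min(weights, key=lambda node: euclidean(weights[node], x))  # step 3
    alpha = alpha_0 * (1 - t / t_max)        # step 4: learning rate
    sigma = sigma_0 * (1 - t / t_max) + 1e-9 #         neighborhood function
    for node, w in weights.items():          # steps 5 and 6: update all nodes
        d_grid = euclidean(node, bmu)        # distance on the SOM grid
        h = math.exp(-d_grid ** 2 / (2 * sigma ** 2))  # neighborhood weight
        for i in range(len(w)):
            w[i] += alpha * h * (x[i] - w[i])
    # step 7: repeat until t_max is reached

# After training, each datapoint maps to its BMU on the 5x5 grid.
bmus = [min(weights, key=lambda n_: euclidean(weights[n_], x)) for x in data]
```

Because each update is a convex combination of the old weight and the datapoint, the trained weights stay within the value range of the data.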

Variable: Description
$n$: Number of features of a datapoint
$N$: Number of datapoints
$n_{row}$: Number of rows on the SOM grid
$n_{col}$: Number of columns on the SOM grid
$t$: Number of the current iteration
$t_{max}$: Number of maximum iterations, $t \leq t_{max}$
$x(t)$: Datapoint at iteration $t$ with $x(t) \in \mathbb{R}^n$
$y(t)$: Label of datapoint $x(t)$
$c(t)$: Best matching unit (BMU) of datapoint $x(t)$ with its position on the SOM grid
$\alpha(t)$: Function of the learning rate
$\alpha_0$: Start value of the learning rate
$\sigma(t)$: Neighborhood function
$\sigma_0$: Start value of the neighborhood function
$\sigma_{end}$: End value of the neighborhood function
$h_{c,i}(t)$: Neighborhood distance weight between BMU $c(t)$ and SOM node $i$
$w_i(t)$: Weight of node $i$ at iteration $t$ with $w_i(t) \in \mathbb{R}^n$
Table 1: Variable naming conventions of the SUSI framework.
Figure 1: Flowchart of the unsupervised SOM algorithm resulting in the trained unsupervised SOM (orange).

The initialization approach of a SOM mainly affects the speed of its training phase. The SOM weights of the SUSI framework are initialized randomly at this stage of development. Attik et al. (2005) and Akinduko et al. (2016), for example, propose more sophisticated initialization approaches such as applying a principal component analysis. In the following subsections, the training of an unsupervised SOM is described in detail.

2.1 Finding the best matching unit

During the search for the best matching unit (BMU), the input datapoint is compared to all weights on the SOM grid. The SOM node whose weight is closest to the input datapoint according to the chosen distance metric is the BMU. Several distance metrics can be applied. The most common distance metric is the Euclidean distance, defined as

$d(x, w) = \sqrt{\sum_{i=1}^{n} (x_i - w_i)^2}$ (1)

with $n$ as the dimension of the vectors $x, w$. Another possible choice is the Manhattan distance, which is defined as the sum of the absolute distances per element:

$d(x, w) = \sum_{i=1}^{n} |x_i - w_i|$ (2)

The Tanimoto distance as a third option is defined as the distance or dissimilarity between two boolean (binary: $x_i, w_i \in \{0, 1\}$) vectors $x, w$:

$d(x, w) = \frac{R}{c_{TT} + c_{FF} + R} \quad \text{with} \quad R = 2\,(c_{TF} + c_{FT})$ (3)

with $c_{ij}$ as the number of occurrences of $x_k = i$ and $w_k = j$ for all elements $k$, as defined in Jones et al. (2001). The Mahalanobis distance between two 1-dimensional vectors $x, w$ is defined as

$d(x, w) = \sqrt{(x - w)^T\, V^{-1}\, (x - w)}$ (4)

with $V$ as the covariance matrix of the two vectors. The default distance metric of the SUSI framework is the Euclidean distance defined in Equation 1.
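As an illustration, the BMU search with the Euclidean and Manhattan distances of Equations 1 and 2 can be sketched in a few lines of plain Python; the 2x2 grid and its weight values are hypothetical:

```python
import math

def euclidean(x, w):
    # Equation 1: square root of the summed squared differences
    return math.sqrt(sum((xi - wi) ** 2 for xi, wi in zip(x, w)))

def manhattan(x, w):
    # Equation 2: sum of the absolute differences per element
    return sum(abs(xi - wi) for xi, wi in zip(x, w))

# Hypothetical 2x2 SOM grid: node position -> weight vector
weights = {(0, 0): [0.1, 0.2], (0, 1): [0.9, 0.8],
           (1, 0): [0.4, 0.4], (1, 1): [0.7, 0.1]}

def find_bmu(x, weights, dist=euclidean):
    # The BMU is the node whose weight vector is closest to x.
    return min(weights, key=lambda node: dist(x, weights[node]))

bmu = find_bmu([0.85, 0.75], weights)  # -> (0, 1)
```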

2.2 Learning rate

For a faster convergence and to prevent oscillations, decreasing learning rates are often implemented in ANNs. The learning rate of the SOM training is a function that decreases from a value with increasing number of iterations. In general, there is an infinite number of possible functions for the learning rate. In the following, we present several functions implemented into the SUSI framework. In Natita et al. (2016), different learning rates for SOMs are introduced:

(5)
(6)
(7)

In de Sá et al. (2012), the following learning rate was applied:

(8)

The implementation of Barreto and Araújo (2004) includes not only a start value for the learning rate but also an end value :

(9)

In Figure 2, some examples for the behaviour of the functions are plotted. The default learning rate function of the SUSI framework is set according to Equation 9.
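Two of these schedules can be sketched directly, here the linear decrease of Equation 6 and the start-to-end decrease of Equation 9; the start value 0.5 and end value 0.05 are illustrative assumptions, not values from the paper:

```python
def lr_linear(t, t_max, a0=0.5):
    # Linearly decreasing learning rate (cf. Equation 6)
    return a0 * (1 - t / t_max)

def lr_power(t, t_max, a0=0.5, a_end=0.05):
    # Decrease from a start to an end value (cf. Equation 9)
    return a0 * (a_end / a0) ** (t / t_max)

t_max = 1000
start, end = lr_power(0, t_max), lr_power(t_max, t_max)
# start equals a0, end equals a_end; values in between decrease monotonically.
```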

Figure 2: Comparison of different choices for the functional behavior of decreasing rates, which are implemented as learning rate and neighborhood function, depending on the number of iterations $t$.

2.3 Neighborhood function

Similar to the learning rate, the neighborhood function is monotonically decreasing. We include the following three widely-used functions in the SUSI framework. Equivalent to Equation 6, the neighborhood function in Matsushita and Nishio (2010) is defined as

(10)

with as initial value of the neighborhood function. In de Sá et al. (2012), the neighborhood function is implemented as

(11)

equivalent to Equation 8. In Barreto and Araújo (2004), the neighborhood function is defined similarly to Equation 9 as

(12)

The default neighborhood function of the SUSI framework is set according to Equation 10.

2.4 Neighborhood distance weight

The neighborhood distance weight is a function of the number of iterations and the distance between the BMU and every other node on the SOM grid. The distance between the BMU and node is defined as the Euclidean distance (cf. Equation 1) on the map grid. In this paper, we give two examples for neighborhood distance weights. Matsushita and Nishio (2010) use a Pseudo-Gaussian neighborhood distance weight. The weight between the BMU and the node on the SOM grid is defined as

(13)

with the neighborhood function from Equation 10 and the Euclidean distance on the SOM grid. This definition of a neighborhood distance weight is the default setting of the SUSI framework. Another possible neighborhood distance weight is the Mexican Hat (Kohonen, 1995) defined as

(14)

again with neighborhood function of Equation 10 and Euclidean distance on the SOM grid. The implications of the chosen neighborhood distance weight definitions on the SOM are investigated in e.g. Ritter et al. (1992) and Horowitz and Alvarez (1995).
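Both weight functions can be sketched as plain Python helpers; the grid distance and the value of $\sigma$ below are illustrative assumptions:

```python
import math

def gaussian_weight(d, sigma):
    # Pseudo-Gaussian neighborhood distance weight (cf. Equation 13)
    return math.exp(-d ** 2 / (2 * sigma ** 2))

def mexican_hat_weight(d, sigma):
    # Mexican hat (cf. Equation 14); becomes negative for nodes far from the BMU
    return (1 - d ** 2 / sigma ** 2) * math.exp(-d ** 2 / (2 * sigma ** 2))

# The weight is 1 at the BMU itself (d = 0) and decays with grid distance.
w0 = gaussian_weight(0.0, sigma=1.5)
w2 = gaussian_weight(2.0, sigma=1.5)
```

The sign change of the Mexican hat pushes distant nodes away from the datapoint instead of merely updating them less, which is the qualitative difference between the two definitions.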

2.5 Adapting weights

The two most commonly used approaches to adapt the SOM weights are the online and the batch mode. The weights of the SOM are adapted based on the learning rate and the neighborhood distance weight. The online mode is described in detail in Kohonen (2013). After each iteration, all weights of the SOM are adapted to the current datapoint as follows:

(15)

with neighborhood function , learning rate , weight vector of node at iteration . In the batch mode (Kohonen and Somervuo, 2002; Matsushita and Nishio, 2010), the whole dataset consisting of datapoints is used in every iteration. Each weight is adapted as follows:

(16)

with the neighborhood function from Equation 13 and the weight vector of node at iteration . In this first stage of development, the SUSI package provides only the online algorithm.
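A single online step of Equation 15 can be sketched as follows; the tiny two-node grid, the learning rate and $\sigma$ are illustrative assumptions:

```python
import math

def gaussian(d, sigma):
    # Neighborhood distance weight (cf. Equation 13)
    return math.exp(-d ** 2 / (2 * sigma ** 2))

def online_update(weights, x, bmu, alpha, sigma):
    # One online step (cf. Equation 15) applied to every node on the grid.
    for node, w in weights.items():
        d = math.dist(node, bmu)   # Euclidean distance on the SOM grid
        h = gaussian(d, sigma)     # neighborhood distance weight
        for i in range(len(w)):
            w[i] += alpha * h * (x[i] - w[i])

weights = {(0, 0): [0.0, 0.0], (0, 1): [1.0, 1.0]}
online_update(weights, x=[1.0, 0.0], bmu=(0, 0), alpha=0.5, sigma=1.0)
# The BMU (0, 0) moves halfway towards x; its neighbor (0, 1) moves less.
```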

2.6 Trained unsupervised SOM

After reaching the maximum number of iterations , the unsupervised SOM is fully trained. New datapoints can be allocated to their respective BMU which will be used in Section 3. Note that not every node on the SOM grid has to be linked to a datapoint from the training dataset, since there can be more SOM nodes than datapoints.

3 SUSI part 2: supervised learning

To apply the SUSI framework for solving supervised regression or classification tasks, we attach a second SOM to the unsupervised SOM. The flowchart of the second, supervised SOM is illustrated in Figure 3. The two SOMs differ with respect to the dimension of the weights and their estimation algorithm. The weights of the unsupervised SOM have the same dimension as the input data. Thus, adapting these weights often changes the BMU for each input datapoint. In contrast, the weights of the supervised SOM have the same dimension as the target variable of the respective task. One has to distinguish between two cases: regression and classification. In the regression case, the weights are one-dimensional and contain a continuous number. In the classification case, the weights contain a class. By combining the unsupervised and the supervised SOM, the former is used to select the BMU for each datapoint while the latter links the selected BMU to a specific estimation. In the following, we describe the different implementations for regression and classification tasks.

3.1 Regression

The implementation of the regression SOM is described in Riese and Keller (2018b) using the example of the soil-moisture regression based on hyperspectral data. The training of the regression SOM proceeds analogously to the unsupervised SOM: first, the SOM is initialized randomly. Again, it iterates randomly through the dataset (cf. Step 1). In each iteration, the BMU is found for the current datapoint based on the trained unsupervised SOM (cf. Steps 2, 3). The BMUs do not change for the datapoints during the training since the unsupervised SOM is fully trained. Then, the neighborhood function, the learning rate and the neighborhood distance weight matrix are calculated similarly to the algorithm of the unsupervised SOM (cf. Steps 4, 5). Finally, the weights are adapted to the label of the input datapoint (cf. Step 6).

Figure 3: Flowchart of the algorithms for the regression SOM (black and cyan) and the classification SOM (black and blue). The "Trained unsupervised SOM" (orange) is the result of the unsupervised SOM algorithm illustrated in Figure 1.

In the case of the regression SOM, the label is a continuous value and the weights of the regression SOM can be modified similarly to the process described in Section 2.5. After the training (and in the case of a 1-dimensional target variable), the regression SOM consists of a map with a continuous distribution of the regression target variable. To apply the trained regression SOM to a new dataset, the BMUs need to be found by the unsupervised SOM. For each datapoint in the new dataset, the estimated output value of the SUSI framework is the weight of the found BMU on the regression SOM. The regression SOM is illustrated in Figure 3.

3.2 Classification

In the case of a classification task, the labels are discrete. In contrast to the commonly used majority voting approach (cf. Hagenbuchner and Tsoi, 2005), we have implemented a training process similar to the adaptation approach of the unsupervised SOM (cf. Section 2.5):

  1. Initialize the classification SOM.

  2. Get random input datapoint with label.

  3. Find BMU based on trained unsupervised SOM.

  4. Calculate learning rate and neighborhood function.

  5. Calculate neighborhood distance weight.

  6. Calculate class-change probability matrix.

  7. Modify classification SOM weight matrix.

  8. Repeat from step 2 until the maximum number of iterations is reached.

The classification SOM is illustrated in Figure 3. The initialization in step 1 contains a simple majority vote: each node is assigned to the class representing the majority of datapoints allocated to the respective node. Steps 2 to 5 are implemented similarly to the regression SOM in Section 3.1. To modify the discrete weights of the classification SOM, we introduce the class-change probability in step 6. In the regression SOM, the SOM nodes around the BMU are adapted to the current datapoint with a certain probability depending on the learning rate and the neighborhood distance weight. Since the labels are discrete in a classification task, this process needs to be adapted. In the following, we explain our proposed adaptation.

For datasets with imbalanced class distributions, meaning datasets with significantly different numbers of datapoints per class, we provide the possibility to re-weight the dataset. The optional class weight is defined as

$w_{class}(y) = \frac{N}{N_{classes} \cdot N_y}$ (17)

with the number of datapoints $N$, the number of datapoints $N_y$ of class $y$ and the number of classes $N_{classes}$. Similar to Equation 15, we define a term that affects the modification of the SOM weights. Since the modifications need to be discrete, we work with probabilities. The probability for a class change of node $i$ with BMU $c(t)$ of the datapoint $x(t)$ with label $y(t)$ is defined as

$P_{c,i}(t) = w_{class}(y(t)) \cdot \alpha(t) \cdot h_{c,i}(t)$ (18)

with the class weight $w_{class}(y(t))$ (cf. Equation 17), the learning rate $\alpha(t)$ (cf. Section 2.2) and the neighborhood distance weight $h_{c,i}(t)$ (cf. Section 2.4). To decide if a node changes its assigned class, a binary decision rule is created based on this probability. A simple fixed threshold would lead to a static SOM after a certain number of iterations. Therefore, we include randomization in the decision process. For every node $i$ in every iteration, a random number $u_i(t)$ is generated which is uniformly distributed between 0 and 1. The modification of the weights is then implemented based on the class-change probability defined in Equation 18 as follows:

$w_i(t+1) = \begin{cases} y(t) & \text{if } u_i(t) < P_{c,i}(t) \\ w_i(t) & \text{otherwise} \end{cases}$ (19)

with the label $y(t)$ linked to the datapoint $x(t)$ of the current iteration $t$. After the maximum number of iterations is reached, the classification SOM is fully trained. Then, every node on the SOM grid is assigned to one class of the dataset. To apply the classification SOM to new data, the BMU needs to be found for each datapoint with the unsupervised SOM. This process is similar to the one in the trained regression SOM. The estimation of the classification SOM for a datapoint is equivalent to the weight of the neuron in the classification SOM at the position of the selected BMU.
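The randomized class-change rule of Equations 17 to 19 can be sketched for a single node; the class counts, learning rate and neighborhood weight below are illustrative assumptions:

```python
import random

random.seed(42)

def class_weight(n_total, n_class, n_classes):
    # Optional re-weighting for imbalanced datasets (cf. Equation 17)
    return n_total / (n_class * n_classes)

def maybe_change_class(node_label, y, w_class, alpha, h):
    # Class-change probability for one node (cf. Equation 18)
    p_change = w_class * alpha * h
    # Randomized binary decision instead of a fixed threshold (cf. Equation 19)
    u = random.uniform(0.0, 1.0)
    return y if u < p_change else node_label

# Hypothetical example: a node labeled 0 near the BMU of a datapoint with y = 1,
# in a 2-class dataset of 100 datapoints where class 1 has only 25 members.
new_label = maybe_change_class(node_label=0, y=1,
                               w_class=class_weight(100, 25, 2),
                               alpha=0.4, h=0.9)
```

Nodes close to the BMU (large $h$) and datapoints of rare classes (large class weight) thus change labels with higher probability, while the randomization keeps the map from freezing.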

4 Evaluation

For a first evaluation of the regression and classification capabilities of the introduced SUSI framework, we rely on two datasets from different domains of geospatial image analysis. The regression is evaluated in Section 4.1 with a hyperspectral dataset on the target variable soil moisture. The evaluation of the classification SOM in Section 4.2 is performed on the freely available Salinas valley dataset for land cover classification from hyperspectral data. The results of the two different SOM applications are compared against a random forest (RF) estimator (Breiman, 2001). Finally, the SUSI package is compared to existing SOM packages in the programming languages Python and R in Section 4.3.

4.1 Regression of soil moisture

The performance of the regression SOM is evaluated on the soil-moisture dataset measured during a field campaign and published in Riese and Keller (2018a). A similar evaluation is published in Riese and Keller (2018b) with a preceding version of the SOM algorithm and code. The dataset consists of 679 datapoints collected by a Cubert UHD 285 camera. Each datapoint consists of 125 hyperspectral bands. A soil moisture sensor measured the reference values. For the validation of the estimation performance and the generalization capabilities of the SOM, the dataset is randomly divided into a training and a test subset. The training of the estimator is performed on the training subset and the evaluation is performed on the test subset.

The regression SOM is set up with the default parameters, with the exception of the grid size and the number of iterations. The unsupervised and the supervised SOM are each trained with the same number of iterations. These hyperparameters can be further optimized depending on the applied dataset. The RF regressor is set up with the scikit-learn default hyperparameters (cf. Pedregosa et al., 2011). For the evaluation, we choose the coefficient of determination $R^2$.

The regression SOM achieves a test score which implies that it is able to generalize on this dataset. Interestingly, the result for the training subset is only marginally better. In comparison, the RF regressor achieves better scores on the dataset. To conclude, the SOM seems to be robust against overfitting. In this case, the training score could function as an out-of-bag estimate for the dataset, similar to Breiman (2001). When dealing with small datasets, the SOM provides the advantage of not necessitating a split of the dataset.

In Figure 4(a), the distribution of the BMUs of the soil-moisture dataset is shown. No clear maximum exists; rather, a random and uniform distribution is recognizable. Figure 4(a) further illustrates that, despite the fact that the dataset is smaller than the number of SOM nodes, the training takes advantage of the whole SOM grid. The spread over the whole SOM grid makes generalization possible. The continuous regression output for each SOM node is presented in Figure 4(b). Although the SOM nodes outnumber the input datapoints, each SOM node is linked to a soil moisture value.

Figure 4: Regression SOM distributions of (a) the BMUs of the dataset and (b) the regression output calculated for each node.

4.2 Classification of land cover

The Salinas valley dataset (http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes) is a freely available land cover dataset collected by the 224-band AVIRIS sensor in California. Of the 224 bands, the 20 water absorption bands are discarded, namely bands 108-112, 154-167 and 224. The dataset contains reference data of 16 classes including vegetation classes and bare soil. Compared to the dataset used in the evaluation of the regression SOM in Section 4.1, this dataset is considered a large dataset. We apply a 5-fold cross-validation on this dataset for the evaluation of the classification SOM. The evaluation results are the average results over all five cross-validation combinations.

Similar to Section 4.1, the default SOM hyperparameters are used except for the grid size and the number of iterations. The hyperparameters of the classification SOM can be further optimized. The RF classifier is set up with the scikit-learn default hyperparameters (cf. Pedregosa et al., 2011). For the evaluation, we choose the metrics overall accuracy (OA), average accuracy (AA) and Cohen's kappa coefficient $\kappa$. The OA is defined as the ratio between the number of correctly classified datapoints and the size of the dataset. The AA is the sum of the recall of each class divided by the number of classes, with the recall of a class being the number of correctly classified instances (datapoints) of that class divided by the total number of instances of that class. Cohen's kappa coefficient is defined as

$\kappa = \frac{p_o - p_e}{1 - p_e}$ (20)

with the observed agreement $p_o$ (equivalent to the OA) and the hypothetical probability of chance agreement $p_e$.
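Equation 20 can be computed directly from a confusion matrix, with $p_e$ derived from the row and column marginals; the 2-class confusion matrix below is a hypothetical example:

```python
def cohens_kappa(confusion):
    # Cohen's kappa (cf. Equation 20): (p_o - p_e) / (1 - p_e)
    n = sum(sum(row) for row in confusion)
    # p_o: observed agreement, i.e. the overall accuracy (OA)
    p_o = sum(confusion[i][i] for i in range(len(confusion))) / n
    # p_e: chance agreement from row and column marginals
    p_e = sum(
        (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
        for i in range(len(confusion))
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical confusion matrix (rows: reference, columns: estimate)
kappa = cohens_kappa([[45, 5], [10, 40]])  # p_o = 0.85, p_e = 0.5 -> kappa = 0.7
```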

The classification results of the complete dataset are shown in Figure 5. The RF classifier performs significantly better than the classification SOM, which has not been fully optimized. However, analogously to the regression (cf. Section 4.1), the OA, AA and $\kappa$ results of the classification SOM on the training and test subsets are similar, while the RF classifier shows overfitting on the training subset.

In Figure 6(a), the distribution of the BMUs of the dataset is illustrated. Although the dataset is much larger compared to Figure 4(a), not all nodes are linked to a datapoint, while some nodes are linked to a significant number of datapoints. The distribution of the classification output of the SOM is shown in Figure 6(b). Nodes assigned to the same class are closer together on the SOM grid due to the inclusion of the neighborhood during the training process.

Figure 5: Map of the reference data (left) and the classification result of the classification SOM (center) and the RF classifier (right) on the Salinas Valley dataset. The white area is ignored.
Figure 6: Classification SOM distributions of (a) the BMUs of the dataset and (b) the classes linked to each node as output of the classification SOM.

4.3 Comparison of SUSI and other packages

In the following section, we compare the SUSI framework with existing software packages: the Python packages SOMPY (Moosavi et al., 2018), SimpSOM (Comitani, 2018), MiniSom (Vettigli, 2019) and TensorFlow SOM (Gorman, 2018), as well as the R kohonen package (Wehrens and Kruisselbrink, 2018). All of these packages are freely available, regularly maintained (in the last year) and include unsupervised clustering. Table 2 illustrates this comparison. So far, no supervised SOM package for Python is available that matches the defined requirements (cf. Table 2). The fact that the unsupervised SOM packages are all maintained regularly implies a significant interest in Python SOM packages. Overall, the SUSI package is well suited for easy use and a variety of applications.

SUSI SOMPY SimpSOM MiniSom TensorFlow SOM kohonen
Simple (scikit-learn) syntax
Comprehensive paper or documentation
Well documented and structured code
Unsupervised clustering
Supervised regression
Supervised classification
Simple installation (e.g. Pypi)
GPU support
Programming language Python 3 Python 2 Python 3 Python 3 Python 3 R
Table 2: Comparison of the SUSI package with existing SOM packages. All packages are freely available and regularly maintained.

5 Conclusion and outlook

SOMs are applied in a variety of research areas. In this paper, we introduce the SUpervised Self-organIzing maps (SUSI) package in Python. It provides unsupervised and supervised SOM algorithms for free and easy usage. The mathematical description of the package is presented in Sections 2 and 3. We demonstrate first regression and classification results in Sections 4.1 and 4.2. Overall, the performance of the SUSI package is satisfactory, taking into account that further optimization is possible. The regression is performed on a small dataset while the classification SOM is applied to a large dataset. The application to these two datasets illustrates the ability of the SUSI framework to perform on differently sized datasets. Although the RF regressor and classifier perform better in the given tasks, the SOM performance metrics of the training and the test subsets differ only slightly. This shows the robustness of the SUSI framework. Further, the performance metric based on the training dataset could function as an out-of-bag estimate for the dataset. This implies that a dataset does not have to be split, which improves training especially on small datasets. Finally, we compare the SUSI framework against different existing SOM frameworks in Python and R with respect to e.g. features, documentation and availability. We conclude that there is a significant interest in a standardized SOM package in Python, which is covered by the SUSI framework.

In the future, the SUSI package will be extended, optimized and upgraded. The handling of missing and incomplete data, as described in Hagenbuchner and Tsoi (2005), is one example of a possible new extension. In addition, the 2D SOM grid visualizes the results of the SOM and therefore helps users to better understand the underlying dataset. This ability to learn from underlying datasets can be extended as described e.g. by Hsu et al. (2002). Furthermore, we will present applications on new datasets as well as share best practices to make the SUSI framework as valuable as possible for its users.

References

  • Akinduko et al. (2016) Akinduko, A. A., Mirkes, E. M. and Gorban, A. N., 2016. SOM: stochastic initialization versus principal components. Inf. Sci. 364-365, pp. 213–221.
  • Attik et al. (2005) Attik, M., Bougrain, L. and Alexandre, F., 2005. Self-organizing map initialization. In: W. Duch, J. Kacprzyk, E. Oja and S. Zadrożny (eds), Artificial Neural Networks: Biological Inspirations – ICANN 2005, Vol. 3696, Springer, Berlin, Heidelberg, pp. 357–362.
  • Barreto and Araújo (2004) Barreto, G. A. and Araújo, A. F. R., 2004. Identification and control of dynamical systems using the self-organizing map. IEEE Transactions on Neural Networks 15(5), pp. 1244–1259.
  • Breiman (2001) Breiman, L., 2001. Random forests. Machine Learning 45(1), pp. 5–32.
  • Comitani (2018) Comitani, F., 2018. SimpSOM: a lightweight implementation of Kohonen Self-Organising Maps. https://github.com/fcomitani/SimpSOM. Release 1.3.3.
  • de Sá et al. (2012) de Sá, J. A. S., da Rocha, B. R. P., Almeida, A. and Souza, J. R., 2012. Recurrent self-organizing map for severe weather patterns recognition. In: Recurrent Neural Networks and Soft Computing, IntechOpen, Rijeka, chapter 8, pp. 151–174.
  • Fessant et al. (2001) Fessant, F., Aknin, P., Oukhellou, L. and Midenet, S., 2001. Comparison of supervised self-organizing maps using Euclidian or Mahalanobis distance in classification context. In: J. Mira and A. Prieto (eds), Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence, Vol. 2084, Springer, Berlin, Heidelberg, pp. 637–644.
  • Gorman (2018) Gorman, C., 2018. A multi-gpu implementation of the self-organizing map in TensorFlow. https://github.com/cgorman/tensorflow-som. Commit of 7 November 2018.
  • Hagenbuchner and Tsoi (2005) Hagenbuchner, M. and Tsoi, A. C., 2005. A supervised training algorithm for self-organizing maps for structures. Pattern Recognition Letters 26(12), pp. 1874–1884.
  • Hecht et al. (2015) Hecht, T., Lefort, M. and Gepperth, A., 2015. Using self-organizing maps for regression: the importance of the output function. In: European Symposium on Artificial Neural Networks (ESANN), Bruges, Belgium, p. 1–6.
  • Horowitz and Alvarez (1995) Horowitz, R. and Alvarez, L., 1995. Convergence properties of self-organizing neural networks. In: Proceedings of 1995 American Control Conference-ACC’95, Vol. 2, IEEE, pp. 1339–1344.
  • Hsu et al. (2002) Hsu, K., Gupta, H. V., Gao, X., Sorooshian, S. and Imam, B., 2002. Self-organizing linear output map (SOLO): an artificial neural network suitable for hydrologic modeling and analysis. Water Resources Research 38(12), pp. 1–17.
  • Hsu et al. (2009) Hsu, S.-H., Hsieh, J. P.-A., Chih, T.-C. and Hsu, K.-C., 2009. A two-stage architecture for stock price forecasting by integrating self-organizing map and support vector regression. Expert Systems with Applications 36(4), pp. 7947–7951.
  • Ji (2000) Ji, C. Y., 2000. Land-use classification of remotely sensed data using Kohonen self-organizing feature map neural networks. Photogrammetric Engineering & Remote Sensing 66(12), pp. 1451–1460.
  • Jones et al. (2001) Jones, E., Oliphant, T., Peterson, P. et al., 2001. SciPy: Open source scientific tools for Python.
  • Kalteh et al. (2008) Kalteh, A., Hjorth, P. and Berndtsson, R., 2008. Review of the self-organizing map (SOM) approach in water resources: analysis, modelling and application. Environmental Modelling & Software 23(7), pp. 835–845.
  • Keller et al. (2018a) Keller, S., Maier, P. M., Riese, F. M., Norra, S., Holbach, A., Börsig, N., Wilhelms, A., Moldaenke, C., Zaake, A. and Hinz, S., 2018a. Hyperspectral data and machine learning for estimating CDOM, chlorophyll a, diatoms, green algae, and turbidity. International Journal of Environmental Research and Public Health 15(9), p. 1881.
  • Keller et al. (2018b) Keller, S., Riese, F. M., Stötzer, J., Maier, P. M. and Hinz, S., 2018b. Developing a machine learning framework for estimating soil moisture with VNIR hyperspectral data. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-1, pp. 101–108.
  • Kohonen (1982) Kohonen, T., 1982. Self-organized formation of topologically correct feature maps. Biological Cybernetics 43(1), pp. 59–69.
  • Kohonen (1990) Kohonen, T., 1990. The self-organizing map. Proceedings of the IEEE 78(9), pp. 1464–1480.
  • Kohonen (1995) Kohonen, T., 1995. Self-Organizing Maps. Springer Series in Information Sciences, Vol. 30, Springer, Berlin, Heidelberg.
  • Kohonen (2013) Kohonen, T., 2013. Essentials of the self-organizing map. Neural Networks 37, pp. 52–65.
  • Kohonen and Somervuo (2002) Kohonen, T. and Somervuo, P., 2002. How to make large self-organizing maps for nonvectorial data. Neural Networks 15(8-9), pp. 945–952.
  • Lobo (2009) Lobo, V. J. A. S., 2009. Application of self-organizing maps to the maritime environment. In: V. V. Popovich, C. Claramunt, M. Schrenk and K. V. Korolenko (eds), Information Fusion and Geographic Information Systems, Springer, Berlin, Heidelberg, pp. 19–36.
  • Martinez et al. (2001) Martinez, P., Gualtieri, J., Aguilar, P., Plaza, A., Pérez, R. and Preciado, J., 2001. Hyperspectral image classification using a self-organizing map. In: Proceedings of the Tenth JPL Airborne Earth Science Workshop, Vol. 10, pp. 267–274.
  • Matsushita and Nishio (2010) Matsushita, H. and Nishio, Y., 2010. Batch-learning self-organizing map with weighted connections avoiding false-neighbor effects. In: The 2010 International Joint Conference on Neural Networks (IJCNN), pp. 1–6.
  • Moosavi et al. (2018) Moosavi, V., Packmann, S. and Vallés, I., 2018. SOMPY: A Python Library for Self Organizing Map (SOM). https://github.com/sevamoo/SOMPY. Commit of 4 March 2019.
  • Moshou et al. (2005) Moshou, D., Bravo, C., Oberti, R., West, J., Bodria, L., McCartney, A. and Ramon, H., 2005. Plant disease detection based on data fusion of hyper-spectral and multi-spectral fluorescence imaging using Kohonen maps. Real-Time Imaging 11, pp. 75–83.
  • Muruzábal et al. (2012) Muruzábal, J., Vidaurre, D. and Sánchez, J., 2012. SOMwise regression: a new clusterwise regression method. Neural Computing and Applications 21, pp. 1229–1241.
  • Natita et al. (2016) Natita, W., Wiboonsak, W. and Dusadee, S., 2016. Appropriate Learning Rate and Neighborhood Function of Self-organizing Map (SOM) for Specific Humidity Pattern Classification over Southern Thailand. International Journal of Modeling and Optimization 6, pp. 61–65.
  • Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M. and Duchesnay, E., 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12, pp. 2825–2830.
  • Riese (2019) Riese, F. M., 2019. SUSI: SUpervised Self-organIzing maps in Python. doi.org/10.5281/zenodo.2609130.
  • Riese and Keller (2018a) Riese, F. M. and Keller, S., 2018a. Hyperspectral benchmark dataset on soil moisture. doi.org/10.5281/zenodo.1227836.
  • Riese and Keller (2018b) Riese, F. M. and Keller, S., 2018b. Introducing a Framework of Self-Organizing Maps for Regression of Soil Moisture with Hyperspectral Data. In: IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, pp. 6151–6154.
  • Ritter et al. (1992) Ritter, H., Martinetz, T., Schulten, K., Barsky, D., Tesch, M. and Kates, R., 1992. Neural computation and self-organizing maps: an introduction. Addison-Wesley Reading, MA.
  • Vesanto and Alhoniemi (2000) Vesanto, J. and Alhoniemi, E., 2000. Clustering of the self-organizing map. IEEE Transactions on Neural Networks 11(3), pp. 586–600.
  • Vettigli (2019) Vettigli, G., 2019. MiniSom: minimalistic and NumPy-based implementation of the Self Organizing Map. https://github.com/JustGlowing/minisom. Release 2.1.5.
  • Wehrens and Kruisselbrink (2018) Wehrens, R. and Kruisselbrink, J., 2018. Flexible self-organizing maps in kohonen 3.0. Journal of Statistical Software 87(7), pp. 1–18.
  • Wolpert and Macready (1995) Wolpert, D. H. and Macready, W. G., 1995. No free lunch theorems for search. Technical Report SFI-TR-95-02-010, Santa Fe Institute.
  • Zaccarelli et al. (2003) Zaccarelli, N., Zurlini, G., Rizzo, G., Blasi, E. and Palazzo, M., 2003. Spectral Self-Organizing Map for hyperspectral image classification. World Scientific Publishing Company Incorporated, pp. 218–223.
  • Zhong et al. (2006) Zhong, Y., Zhang, L., Huang, B. and Li, P., 2006. An unsupervised artificial immune classifier for multi/hyperspectral remote sensing imagery. IEEE Transactions on Geoscience and Remote Sensing 44, pp. 420–431.