
A Novel Chaos Theory Inspired Neuronal Architecture

by Harikrishnan N B, et al.

The practical success of widely used machine learning (ML) and deep learning (DL) algorithms in the Artificial Intelligence (AI) community owes to the availability of large datasets for training and huge computational resources. Despite the enormous practical success of AI, these algorithms are only loosely inspired by the biological brain and do not mimic any of the fundamental properties of neurons in the brain, one such property being the chaotic firing of biological neurons. This motivates us to develop a novel neuronal architecture where the individual neurons are intrinsically chaotic in nature. By making use of the topological transitivity property of chaos, our neuronal network is able to perform classification tasks with very few training samples. For the MNIST dataset, with as little as 0.1 % of the total training data, our method outperforms ML and matches DL in classification accuracy for up to 7 training samples/class. For the Iris dataset, our accuracy is comparable with ML algorithms, and even with just two training samples/class, we report an accuracy as high as 95.8 %. This work highlights the effectiveness of chaos and its properties for learning and paves the way for chaos-inspired neuronal architectures that closely mimic the chaotic nature of neurons in the brain.



I Introduction

Next to the universe, the human brain is the most complex and least understood system. Brain science is said to be in its Faraday stage [1], which means that our current understanding of the working of the brain is still very primitive. It has been estimated that the human brain has approximately 86 billion neurons [2], which interact with each other to form a highly complex system. Neurons are inherently nonlinear and have been found to exhibit chaos [3], [4]. An interesting, counter-intuitive property of networks of neurons in the brain is their ability to learn in the presence of enormous amounts of noise and neural interference [5]. Inspired by the biological brain, researchers have developed artificial intelligence systems that use learning algorithms, such as Deep Learning (DL) and Machine Learning (ML), which loosely mimic the human brain.

DL and ML algorithms have a wide variety of practical applications in computer vision, natural language processing, speech processing [6], cyber-security [7], medical diagnosis [8], etc. However, these algorithms do not use essential properties of the human brain. One such property is the presence of chaotic neurons [3], [4]. Even though Artificial Neural Networks (ANNs) are biologically inspired, none of their varied architectures have neurons that are intrinsically chaotic, although certain types of neural networks have been shown to exhibit chaotic dynamics (for example, Recurrent Neural Networks [9]). Chaotic regimes with a wide range of behaviours are beneficial for the brain to quickly adapt to changing conditions [3]. There is also some evidence that weak chaos is good for learning [10]. Inspired by these studies, in this work we explore whether chaos can be useful in learning algorithms.

There have been previous attempts at developing novel biologically inspired learning architectures. A recent study [11] proposes a learning architecture that uses a mathematical model of the olfactory network of the moth, trained to read MNIST [12]. The same study [11] also highlights learning from limited data samples. In another interesting study [13], the authors propose a novel compression based neural architecture for memory encoding and decoding that uses a 1D chaotic map known as the Generalized Luröth Series (GLS) [14]. GLS coding, a generalized form of Arithmetic coding [15], is used for memory encoding in their work.

In this work, we propose for the first time a novel neuronal architecture of GLS neurons and train it for a classification task using a unique property of chaos known as Topological Transitivity (TT). This research is a first step towards building a more realistic brain-inspired learning architecture. Here, chaos is used at the level of individual neurons. As we shall demonstrate, one of the key benefits of our proposed method is its superior performance in the low training sample regime.

The paper is organized as follows. We introduce the novel architecture in Section II along with a detailed description of the topological transitivity based classification algorithm. This is followed by experiments and results in Section III. We conclude by highlighting the unique advantages of our method and mentioning some possible future research directions in Section IV.

Fig. 1: The proposed chaotic GLS neural network architecture. The input layer consists of chaotic GLS neurons, each with the same initial normalized membrane potential (a real number between 0 and 1). The normalized stimuli are input to the network. Each GLS neuron fires (chaotically) until its membrane potential is in the neighbourhood of its stimulus. The firing time of the corresponding GLS neuron is the topological transitivity based extracted feature.

II The Proposed Architecture

The basic diagram of the proposed neural architecture is provided in Figure 1. It comprises an input layer of GLS neurons. The GLS neuron is a 1D chaotic map, which we shall describe shortly. The GLS neurons get activated in the presence of a stimulus (input data), which results in a chaotic firing pattern. Each GLS neuron in the input layer continues to fire chaotically until its amplitude matches that of the stimulus, at which point it stops firing. In the model provided in Figure 1, each stimulus is normalized and assumed to be a real number between 0 and 1. Each GLS neuron has an initial normalized membrane potential (a real number between 0 and 1), which is also the initial value of the chaotic map. In general, each GLS neuron can have a different initial normalized membrane potential, though in this work we assume that they are all the same. The GLS neurons have a refractory period of 1 millisecond, which means that the inter-firing interval is 1 ms (from the instant they are presented with a stimulus). When a GLS neuron encounters a stimulus, it starts firing chaotically until it matches the amplitude of the stimulus, i.e., until the normalized membrane potential reaches an $\epsilon$-neighbourhood of the stimulus value, at which time it stops firing. The time duration for which a GLS neuron is active is defined as its firing time. The firing of each GLS neuron is guaranteed to halt (as soon as its membrane potential reaches the neighbourhood of its stimulus) owing to the property of Topological Transitivity, which is defined below.

Definition 1 (Topological Transitivity): Given a map $T: X \to X$, $T$ is said to be topologically transitive on $X$ if for every pair of non-empty open sets $U$ and $V$ in $X$, there exist a non-negative integer $n$ and a point $x \in U$ such that $T^n(x) \in V$.

In our setting, $T$ is the 1D GLS chaotic map with $X = [0,1)$. We take $U$ to be a small open interval containing the initial membrane potential, and $V$ to be the $\epsilon$-neighbourhood of the stimulus. It follows from the above definition that there exist a non-negative integer $n$ and a point $x \in U$ such that $T^n(x) \in V$; we take $x$ to be the initial membrane potential itself. Certain values of $x$ in $U$ may not work, but we can always find an $x$ that works.
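Topological transitivity can be illustrated numerically. The minimal sketch below uses the logistic map (an illustrative chaotic map, not the GLS map of this paper) and checks that an orbit starting inside one open interval eventually visits another; the intervals and starting point are arbitrary choices.

```python
# Numerical illustration of topological transitivity: an orbit starting in an
# open interval U eventually enters another open interval V under a chaotic map.
# The logistic map here is a stand-in example, not the paper's GLS map.

def logistic(x):
    """Logistic map x -> 4x(1-x), a standard chaotic 1D map on [0, 1]."""
    return 4.0 * x * (1.0 - x)

def first_entry_time(x0, v_lo, v_hi, max_iter=100000):
    """Number of iterations for the orbit of x0 to enter (v_lo, v_hi),
    or None if this does not happen within max_iter iterations."""
    x = x0
    for n in range(max_iter):
        if v_lo < x < v_hi:
            return n
        x = logistic(x)
    return None

# Take a point in U = (0.12, 0.13) and V = (0.8, 0.9): the orbit reaches V
# after finitely many iterations, as transitivity suggests.
print(first_entry_time(0.1234, 0.8, 0.9))  # a small non-negative integer
```

The same halting argument underlies the GLS neuron: the iteration count at which the orbit first enters the target neighbourhood is exactly the firing time used later as a feature.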

II-A GLS Neuron: Chaotic map

The GLS neuron [16] is a 1D chaotic map on $[0,1)$, defined piecewise on two subintervals whose widths are set by a skew parameter $b \in (0,1)$. The value of $b$ chosen in our study is a hyperparameter (see Section II-C). Figure 2 shows the GLS map.

Fig. 2: The first return map of the Generalized Luröth Series (GLS) [16], [15]. GLS is a chaotic 1D map that exhibits topological transitivity.
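A GLS neuron's firing mechanism can be sketched in a few lines. This is a minimal illustration under assumptions: we use the skew-binary form of the GLS map, and the values b = 0.47, q = 0.34 (initial membrane potential) and eps = 0.01 are illustrative, not the values chosen in the paper.

```python
# Sketch of a chaotic GLS neuron. Assumptions: skew-binary GLS branch form,
# and illustrative parameter values b = 0.47, q = 0.34, eps = 0.01.

def gls_map(x, b=0.47):
    """One iteration of a skew-binary GLS map on [0, 1)."""
    return x / b if x < b else (x - b) / (1 - b)

def firing_time(stimulus, q=0.34, b=0.47, eps=0.01, max_iter=100000):
    """Iterate from the initial membrane potential q until the neuron's value
    enters the eps-neighbourhood of the stimulus; the iteration count is the
    neuron's firing time (the TT based feature)."""
    x = q
    for n in range(max_iter):
        if abs(x - stimulus) < eps:
            return n
        x = gls_map(x, b)
    raise RuntimeError("orbit did not reach the neighbourhood; raise max_iter or eps")

print(firing_time(0.71))  # a finite non-negative integer
```

Topological transitivity is what guarantees that this loop halts: the chaotic orbit from q eventually enters any open neighbourhood of the stimulus.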

II-B Topological Transitivity (TT) based classification algorithm

Let $X$ be a matrix in which each row represents a distinct data instance and the columns represent the different features. If the data instances are images, then each row represents a vectorized image, with each entry representing a pixel value. In our study, we have normalized the values of the matrix $X$ to lie in $[0,1]$ (for a non-constant matrix $X$, normalization is achieved by computing $(X - \min(X))/(\max(X) - \min(X))$ entrywise; a constant matrix is normalized to all ones).
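The normalization step above is standard min-max scaling and can be sketched as follows (a pure-Python illustration of the formula, including the constant-matrix convention):

```python
# Min-max normalization of a data matrix to [0, 1], as described above.
# A constant matrix (max == min) is normalized to all ones by convention.

def normalize(X):
    flat = [v for row in X for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [[1.0] * len(row) for row in X]
    return [[(v - lo) / (hi - lo) for v in row] for row in X]

X = [[2.0, 4.0], [6.0, 8.0]]
print(normalize(X))  # → [[0.0, 0.333...], [0.666..., 1.0]]
```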

There are three main steps in the TT based classification algorithm.

  • TT based feature extraction

    - Algorithm 2 represents the TT based feature extraction.

    Let $I_i = (x_i - \epsilon, x_i + \epsilon)$ be the $\epsilon$-neighbourhood of the stimulus $x_i$, where $\epsilon > 0$. Let $N_i$ be the firing time of the $i$-th GLS neuron when subjected to the normalized stimulus $x_i$. This is nothing but the time, or equivalently the number of iterations of the GLS map $T$, required to reach the interval $I_i$ starting from the initial membrane potential $q$. In other words, $N_i$ is the smallest non-negative integer such that $T^{N_i}(q) \in I_i$ for the first time. The GLS neuron stops firing as soon as this condition is satisfied, and we call $N_i$ the TT based feature.

  • Training - Algorithm 1 represents the TT based training step. Let us assume there are $s$ classes $C_1, C_2, \ldots, C_s$ with labels $1, 2, \ldots, s$ respectively, and let the data belonging to these classes be denoted $X^1, X^2, \ldots, X^s$ respectively. For simplicity, let us assume that $X^1, \ldots, X^s$ are distinct matrices of the same size. The TT based feature extraction step is applied to each $X^k$ separately to yield feature matrices $Y^1, \ldots, Y^s$; each $Y^k$ has the same size as $X^k$, since TT based feature extraction is applied to each stimulus. The average across the rows of each $Y^k$ is computed as follows:

    $$M^k = \frac{1}{r} \sum_{j=1}^{r} Y^k_j, \quad k = 1, 2, \ldots, s,$$

    where $r$ is the number of rows of $Y^k$ and $Y^k_j$ denotes its $j$-th row.

    These row-vectors $M^1, \ldots, M^s$ are termed representation vectors for the classes. Each vector $M^k$ is the average internal representation of all the stimuli corresponding to class $C_k$. This is biologically analogous to internal representations of experiences induced by storing memory traces corresponding to distinct classes in the brain.

  • Testing - Algorithm 3 represents the testing step. Let the normalized test data be a matrix denoted $Z$, with $j$-th test data instance (row) $z_j$. TT based feature extraction is performed for each of the test data instances. Let the resultant TT based feature matrix be denoted $W$, with $j$-th row $w_j$. We compute the cosine similarity of each $w_j$ individually with each of the $s$ representation vectors $M^1, \ldots, M^s$ obtained in the training step:

    $$\cos(\theta_k) = \frac{w_j \cdot M^k}{\|w_j\| \, \|M^k\|}, \quad k = 1, 2, \ldots, s,$$

    where $\|\cdot\|$ is the norm of a row-vector and $w_j \cdot M^k$ is the dot product between the row-vectors $w_j$ and $M^k$. From these scalar values we find the index $k^*$ that corresponds to the maximum cosine similarity between $w_j$ and the representation vectors:

    $$k^* = \arg\max_{k} \cos(\theta_k).$$

    The index $k^*$ is then assigned as the class label for $z_j$. This step is repeated until each instance of the test data is assigned a class label.
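The three steps can be sketched end to end as follows. This is a minimal self-contained illustration, not the paper's tuned implementation: the skew-binary GLS branch form and the values b = 0.47, q = 0.34 and eps = 0.01 are assumptions made for the sketch.

```python
import math

# End-to-end sketch of TT based classification. All parameter values are
# illustrative assumptions (skew-binary GLS map, b = 0.47, q = 0.34, eps = 0.01).

B, Q, EPS = 0.47, 0.34, 0.01

def gls_map(x):
    return x / B if x < B else (x - B) / (1 - B)

def firing_time(stimulus, max_iter=100000):
    # TT based feature: iterations from Q until the EPS-neighbourhood of the stimulus.
    x = Q
    for n in range(max_iter):
        if abs(x - stimulus) < EPS:
            return n
        x = gls_map(x)
    raise RuntimeError("orbit did not reach the neighbourhood")

def extract(X):
    # Feature extraction: one firing time per normalized stimulus.
    return [[firing_time(v) for v in row] for row in X]

def representation_vector(Y):
    # Training: average the feature rows of one class.
    r = len(Y)
    return [sum(col) / r for col in zip(*Y)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def train(class_data):
    # class_data: list of per-class matrices (rows = normalized instances).
    return [representation_vector(extract(X)) for X in class_data]

def classify(z, reps):
    # Testing: label = 1-based index of the most similar representation vector.
    w = [firing_time(v) for v in z]
    sims = [cosine(w, m) for m in reps]
    return sims.index(max(sims)) + 1

# Tiny two-class example with normalized values in [0, 1):
reps = train([[[0.1, 0.2], [0.15, 0.25]],   # class 1
              [[0.8, 0.9], [0.85, 0.95]]])  # class 2
print(classify([0.12, 0.22], reps))
```

Note that training only averages firing times per class, so it is cheap; all the work is in the chaotic feature extraction.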

Fig. 3: Illustration of topological transitivity based feature extraction (Algorithm 2) for an example. Starting from the initial normalized membrane potential, the GLS neuron takes a certain number of iterations (its firing time) to reach the $\epsilon$-neighbourhood of the $i$-th stimulus.

Example: We explain the aforementioned steps with the help of an example. For simplicity, let us assume a binary classification problem with two classes $C_1$ and $C_2$ with class labels 1 and 2 respectively. Let the input data be a matrix $X$ with four rows, where the first two rows of $X$ are the data belonging to class 1 and the remaining two rows are the data belonging to class 2. The input layer of the proposed neuronal architecture (Figure 1) consists of one GLS neuron per feature. The initial membrane potential of each of these neurons is set to the same value $q$. As an example, consider the first row of $X$. The first step is to normalize the data; after normalization, the first row yields stimulus values $x_1, x_2, x_3, x_4$, each in $[0,1)$. These are the stimuli to the input layer of the GLS neural network (for the first four GLS neurons). Each stimulus initiates the firing of the corresponding GLS neuron; let the resulting firing times be $N_1, N_2, N_3, N_4$ milliseconds. As depicted in Figure 3, it takes $N_i$ iterations to reach the $\epsilon$-neighbourhood $(x_i - \epsilon, x_i + \epsilon)$ of the $i$-th stimulus. Similarly, firing times are obtained for the GLS neurons for the remaining three rows of $X$. This completes the TT based feature extraction step.

At the beginning of the training step, the TT based features extracted from the four rows of $X$ are arranged as a matrix of firing times, with the first two rows corresponding to class 1 and the last two rows to class 2. In the training step, we compute the two representation vectors $M^1$ and $M^2$ corresponding to the two classes by averaging the rows of each class's feature matrix.

Once the representation vectors are computed, we are ready to perform testing on unseen data. Assume that the test data is a matrix $Z$ with two rows, each of which must be classified as class 1 or class 2. We first normalize $Z$ so that it contains only real values between 0 and 1. The TT based features are then extracted for $Z$ by recording the firing times of the GLS neurons, yielding a feature matrix $W$. In order to classify the first row of $Z$ (and hence the first row $w_1$ of $W$), we compute the cosine similarity between $w_1$ and the two representation vectors $M^1$ and $M^2$ independently, and find the maximum of these two values. If, say, the similarity with $M^2$ is the larger of the two, the label assigned to the first row of $Z$ is 2. We repeat this for the second row of $Z$. In this way, the unseen test data is classified using the representation vectors.

II-C Hyperparameters

The hyperparameters used in this method are as follows:

  1. Map and its properties: In the proposed algorithm, we used a 1D GLS chaotic map for the neuron. In general, other chaotic maps (such as the logistic map) that satisfy the topological transitivity property can also be used. For the GLS map used in the proposed method (Figure 2), the skew parameter $b$ is another hyperparameter.

  2. The initial normalized membrane potential $q$, which is also the initial value for the chaotic map, is another hyperparameter. This initial value can be different for each GLS neuron, though in our work we have chosen the same $q$ for all the neurons.

The above hyperparameters can be tuned to further improve the performance.

III Experiments and Results

Learning from limited samples is a challenging problem in the AI community. We evaluate the performance of our proposed TT based classification on MNIST [12] and Iris data [17] with limited training data samples. A brief description of these datasets is given below.

III-A Datasets

III-A1 MNIST

This dataset consists of handwritten digits from 0 to 9 stored as digitized 8-bit grayscale images of 28 pixels × 28 pixels, with a total of 60,000 images for training and 10,000 images for testing. This is a 10-class classification task, i.e., the goal is to automatically classify these images into the ten classes corresponding to the digits 0 to 9. In our study, we performed independent trials of training with only a small number of data samples per class (randomly chosen from the available training images). For each trial, we tested our algorithm on unseen test images.

III-A2 Iris data

This dataset consists of 4 attributes of 3 types (classes) of Iris plants (namely Setosa, Versicolour and Virginica). The 4 attributes are: sepal length (cm), sepal width (cm), petal length (cm) and petal width (cm). There are 50 data samples per class, and this is a 3-class classification problem. In this study, we performed independent trials of training with a small number of randomly chosen data samples per class. For each trial, we tested with unseen, randomly chosen data samples.

III-B Comparative performance evaluation of the proposed method with other methods

We compare our method with existing algorithms in the literature: Decision Tree (DT), Support Vector Machine (SVM), K-Nearest Neighbour (KNN) and a 2-layer neural network. The DT, SVM and KNN classifiers are implemented using Scikit-learn [18] with its default parameters. We used the gini criterion for the DT classifier and the radial basis function (RBF) kernel for SVM based classification; for KNN, Scikit-learn's default number of nearest neighbours was used. We used the Keras [19] package for the implementation of the 2-layer neural network, with 784 neurons in the input layer, one hidden layer, and 10 neurons in the output layer for the MNIST classification task. For the Iris data classification task, we used 4 neurons in the input layer, one hidden layer, and 3 neurons in the output layer.

The comparative performance of the TT based method and the ML methods for MNIST and Iris data is provided in Figures 4 and 5 respectively. From these results, we make the following observations:

  • The proposed method shows consistent performance in the low training sample regime for both datasets.

  • For the MNIST dataset, our method outperforms the classical ML techniques (SVM, KNN, and DT). When compared with DL (the 2-layer neural network), our method closely matches its accuracy for up to 7 training samples/class.

  • For the Iris dataset, our method has the best performance when trained with very few samples/class. DL (the 2-layer neural network) gave the least accuracy throughout the low training sample regime.

Fig. 4: Comparative performance evaluation of the TT based method with DT, SVM, KNN and DL (2-layer neural network) for the MNIST dataset in the low training sample regime.

Fig. 5: Comparative performance evaluation of the TT based method with DT, SVM, KNN and DL (2-layer neural network) for the Iris dataset in the low training sample regime.

IV Conclusions and Future Research Directions

As evident from the results, TT based classification gives consistent performance in the low sample regime compared to classical ML/DL techniques. This method can be particularly useful when the number of available training samples is small. As the size of the training data increases, conventional ML/DL methods outperform our method. A direction for future research is to investigate whether the TT based method can be combined with ML/DL methods to yield a superior hybrid algorithm.

A significant advantage of the proposed method is that it need not be re-trained from scratch when a new class (with new data samples) is added: the representation vectors of all the existing classes do not change, and only the representation vector for the new class needs to be computed. In contrast, such a scenario would require complete re-designing and re-training in the case of ML and DL.
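This incremental property can be made concrete with a small sketch: adding a class only appends one representation vector, computed from that class's feature rows, while the existing vectors are untouched. The feature rows below are hypothetical stand-ins for TT based features, not actual data from the paper.

```python
# Sketch: adding a new class appends one representation vector; the existing
# class representations are untouched. Feature rows are hypothetical stand-ins
# for TT based features (firing times).

def representation_vector(feature_rows):
    r = len(feature_rows)
    return [sum(col) / r for col in zip(*feature_rows)]

# Two existing classes, each represented by the average of its feature rows.
reps = {1: representation_vector([[10, 4], [12, 6]]),
        2: representation_vector([[3, 20], [5, 22]])}

before = dict(reps)
reps[3] = representation_vector([[7, 7], [9, 9]])  # new class: one new vector

assert all(reps[k] == before[k] for k in before)  # old classes unchanged
print(reps[3])  # → [8.0, 8.0]
```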

Our method has fewer hyperparameters than ML/DL methods. Note that we have not performed hyperparameter tuning in this work, since we are dealing with very few training samples. Future work could involve using other chaotic maps (such as the logistic map) and designing a network with multiple layers of chaotic neurons. We expect that such modifications can further increase the accuracy.

To conclude, we have for the first time proposed a novel chaos based neural architecture which makes use of the property of topological transitivity. In our architecture, the non-linearity and chaos are intrinsic to the neuron, unlike in conventional ANNs. Earlier research ([20] and [21]) highlights the presence of neurons in the hippocampus (of the rat's brain) which are sensitive to a particular point in space. In a similar vein, our method proposes temporally sensitive neurons: in the proposed model, the firing time the chaotic GLS neuron requires to match the stimulus is a discriminating feature used to distinguish different classes. Thus, our research is an initial step towards employing chaos (and its fascinating properties) in an intrinsic fashion to design novel learning architectures and algorithms that are inspired by the biological brain.


H.N.B. thanks “The University of Trans-Disciplinary Health Sciences and Technology (TDU)” for permitting this research as part of the PhD program. The authors gratefully acknowledge the financial support of Tata Trusts.


  • [1] Vilayanur S Ramachandran, Sandra Blakeslee, and Neil Shah. Phantoms in the brain: Probing the mysteries of the human mind. William Morrow New York, 1998.
  • [2] Frederico AC Azevedo, Ludmila RB Carvalho, Lea T Grinberg, José Marcelo Farfel, Renata EL Ferretti, Renata EP Leite, Wilson Jacob Filho, Roberto Lent, and Suzana Herculano-Houzel. Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain. Journal of Comparative Neurology, 513(5):532–541, 2009.
  • [3] Philippe Faure and Henri Korn. Is there chaos in the brain? i. concepts of nonlinear dynamics and methods of investigation. Comptes Rendus de l’Académie des Sciences-Series III-Sciences de la Vie, 324(9):773–793, 2001.
  • [4] Henri Korn and Philippe Faure. Is there chaos in the brain? ii. experimental evidence and related models. Comptes rendus biologies, 326(9):787–840, 2003.
  • [5] Gabriela Czanner, Sridevi V Sarma, Demba Ba, Uri T Eden, Wei Wu, Emad Eskandar, Hubert H Lim, Simona Temereanca, Wendy A Suzuki, and Emery N Brown. Measuring the signal-to-noise ratio of a neuron. Proceedings of the National Academy of Sciences, 112(23):7141–7146, 2015.
  • [6] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, speech and signal processing (icassp), 2013 ieee international conference on, pages 6645–6649. IEEE, 2013.
  • [7] NB Harikrishnan, R Vinayakumar, and KP Soman. A machine learning approach towards phishing email detection. In Proceedings of the Anti-Phishing Pilot at ACM International Workshop on Security and Privacy Analytics (IWSPA AP). CEUR-WS. org, volume 2013, pages 455–468, 2018.
  • [8] Yiming Ding, Jae Ho Sohn, Michael G Kawczynski, Hari Trivedi, Roy Harnish, Nathaniel W Jenkins, Dmytro Lituiev, Timothy P Copeland, Mariam S Aboian, Carina Mari Aparici, et al. A deep learning model to predict a diagnosis of alzheimer disease by using 18f-fdg pet of the brain. Radiology, 290(2):456–464, 2018.
  • [9] A Zerroug, L Terrissa, and A Faure. Chaotic dynamical behavior of recurrent neural network. Annu. Rev. Chaos Theory Bifurc. Dyn. Syst, 4:55–66, 2013.
  • [10] JC Sprott. Is chaos good for learning? Nonlinear dynamics, psychology, and life sciences, 17(2):223–232, 2013.
  • [11] Charles B Delahunt and J Nathan Kutz. Putting a bug in ML: The moth olfactory network learns to read MNIST. arXiv preprint arXiv:1802.05405, 2018.
  • [12] Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010.
  • [13] Aditi Kathpalia and Nithin Nagaraj. A novel compression based neuronal architecture for memory encoding. In Proceedings of the 20th International Conference on Distributed Computing and Networking, pages 365–370. ACM, 2019.
  • [14] Nithin Nagaraj. Novel applications of chaos theory to coding and cryptography. PhD thesis, NIAS, 2008.
  • [15] Nithin Nagaraj, Prabhakar G Vaidya, and Kishor G Bhat. Arithmetic coding as a non-linear dynamical system. Communications in Nonlinear Science and Numerical Simulation, 14(4):1013–1020, 2009.
  • [16] Karma Dajani and Cor Kraaikamp. Ergodic theory of numbers. Number 29. Cambridge University Press, 2002.
  • [17] Catherine L Blake and Christopher J Merz. UCI repository of machine learning databases, 1998.
  • [18] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
  • [19] François Chollet et al. Keras, 2015.
  • [20] John O’Keefe. Place units in the hippocampus of the freely moving rat. Experimental neurology, 51(1):78–109, 1976.
  • [21] John O’Keefe and Jonathan Dostrovsky. The hippocampus as a spatial map: preliminary evidence from unit activity in the freely-moving rat. Brain research, 1971.