Tactile Hallucinations on Artificial Skin Induced by Homeostasis in a Deep Boltzmann Machine

06/25/2019, by Michael Deistler et al.

Perceptual hallucinations are present in neurological and psychiatric disorders as well as in amputees. While hallucinations can be drug-induced, it has been described that they can even be provoked in healthy subjects. Understanding their manifestation could thus unveil how the brain processes sensory information and might provide evidence for the generative nature of perception. In this work, we investigate the generation of tactile hallucinations on biologically inspired, artificial skin. To model tactile hallucinations, we apply homeostasis, a change in the excitability of neurons during sensory deprivation, in a Deep Boltzmann Machine (DBM). We find that homeostasis prompts hallucinations of previously learned patterns on the artificial skin in the absence of sensory input. Moreover, we show that homeostasis is capable of inducing the formation of meaningful latent representations in a DBM and that it significantly increases the quality of the reconstruction of these latent states. Through this, our work provides a possible explanation for the nature of tactile hallucinations and highlights homeostatic processes as a potential underlying mechanism.


I Introduction

Hallucinations have been observed in several sensory systems, yet the underlying mechanisms are still poorly understood. In the somatosensory system, different kinds of hallucinations can emerge. In patients with an amputated arm or leg, phantom limb sensations have been described in the position of the missing limb [1, 2]. Tactile hallucinations have also been described in people with neuropsychiatric diseases, most notably schizophrenia or Parkinson’s disease [3, 4]. In order to treat these diseases, a clear understanding of the underlying mechanisms is required. The clinical heterogeneity of hallucinations and the potential variety of their underlying causes present both a great challenge and an opportunity from the computational modelling point of view [5]. Developing mathematical models that are able to explain these effects provides answers to how the brain perceives the world. In fact, it has recently been shown that hallucinations can even be provoked in healthy subjects [6] by means of sensory contingency conditioning.

Previous modelling works have focused on auditory [7] and visual hallucinations [8]. However, only a few models [9, 10] have properly addressed tactile hallucinations, despite their implications for prosthetics [11] and embodiment disorders.

Here, we provide such a framework inspired by recent work on the visual system. Therein, the emergence of visual hallucinations in visually impaired individuals, known as the Charles Bonnet syndrome (CBS) [12], was modelled by applying homeostasis in a Deep Boltzmann Machine (DBM) [8]. Homeostasis is the adjustment of the excitability of neurons in response to reduced activity. This model provides a possible explanation for several observations made in people with CBS. As CBS is frequently compared to phantom limb sensations and other kinds of tactile hallucinations [13, 12], we here extend the existing approach to tactile hallucinations using a previously developed artificial skin [14]. Our neuro-inspired framework provides a possible explanation for the emergence of tactile hallucinations and gives insights into the neurological processes at play in neuropsychiatric diseases such as Parkinson’s or schizophrenia.

We first explain our approach, which uses a DBM to model tactile hallucinations (section II). Secondly, the experimental setup and the usage of the artificial skin for creating tactile patterns and visualizing hallucinations are presented in section III. Thirdly, we show our main results, where homeostasis in a DBM leads to tactile hallucinations on an artificial skin (section IV). We conclude with a discussion of the neuroscientific relevance of this work and its potential impact on medical and engineering applications (section V).

II Proposed model

II-A Modelling tactile hallucinations

Few works in the literature have proposed computational models for hallucinations [5, 15]. Auditory hallucinations of schizophrenic patients [7, 16] and visual hallucinations in the Charles Bonnet syndrome [8] were addressed by means of neural networks. Furthermore, perceptual hallucinations in a Bayesian framework were discussed in [17] under the free-energy principle and in [15] based on the circular inference hypothesis.

In order to model tactile hallucinations, this work leverages a Deep Boltzmann Machine (DBM), pre-trained with a Deep Belief Network (DBN) (see appendix). The overall setup of our model is depicted in Fig. 1. The artificial skin cells provide the input to the visible layer of the DBM, which is then trained on those patterns. We will show that the DBM learns hidden representations of the input data and that these representations are encoded in the hidden layers of the network. We claim that, through homeostasis, the network will, even in the absence of sensory input, produce meaningful latent representations corresponding to previously learned patterns, i.e. a hallucination pattern. As shown in Fig. 1, homeostasis is modeled as an increase (or decrease) of the bias values of the network. The colored LEDs of the skin were used to visualize both the input and the output of the DBM.

Fig. 1: Deep Boltzmann Machine (DBM) for modelling tactile hallucinations. Each of the 18 artificial skin cells is connected to an input neuron. Input neurons are connected by receptive fields to the hidden layers, with sigmoidal activation functions and binary neurons. The deepest hidden layer also provides inputs to the first hidden layer. Homeostasis is modeled as a change of the biases.

II-B Receptive Fields

Cortical connectivity has been observed to be hierarchical and sparse [18, 19]. To model this, we use a limited connectivity between the layers of the DBM, where each neuron is connected only to neurons in the same or neighbouring columns (comparable to cortical columns), as depicted in Fig. 1. In order to allow for the same number of input connections for all neurons in a certain layer, we also consider a ring-like connectivity. Here, every neuron is again connected only to neurons in the same or neighbouring columns. Unlike in the first case, the columns are now ordered in a ring-like, circular, structure. We refer to the two described cases as linear and circular.
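As an illustration, the two connectivity schemes can be expressed as binary masks over the network weights. The sketch below assumes one neuron per column; all function and variable names are our own choices, not the paper's:

```python
import numpy as np

def connectivity_mask(n_columns, circular=False):
    """Binary mask restricting each neuron's inputs to its own and the
    neighbouring columns. With circular=True, the columns are ordered in
    a ring, so every neuron keeps the same number of input connections."""
    mask = np.zeros((n_columns, n_columns), dtype=int)
    for i in range(n_columns):
        for d in (-1, 0, 1):
            j = i + d
            if circular:
                mask[i, j % n_columns] = 1   # wrap around the ring
            elif 0 <= j < n_columns:
                mask[i, j] = 1               # clip at the ends
    return mask
```

In the linear case, border columns have fewer connections; the circular case removes that asymmetry.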

II-C Homeostasis

At the heart of our approach lies the change of excitability of neurons during sensory deprivation. On the single-neuron scale, this process is termed homeostasis. Its timescale is on the order of hours or days [20, 21, 22]. In general, homeostatic mechanisms decrease the excitability of highly active neurons and increase the excitability of inactive neurons. Homeostasis is thus often considered to underlie stable network function and to prevent runaway network excitability in Hebbian cell assemblies [23]. Here, we make use of homeostatic mechanisms to evoke hallucinations corresponding to learned patterns. We do this by measuring the average activity of each neuron when presented with patterns from the training dataset. This average ā_i is then considered to be the healthy activity of neuron i. When a zeros vector is presented as input to the network, the activity a_i of the neuron will deviate from this baseline. In our model, homeostatic mechanisms will increase (or decrease) the bias b_i of neuron i in order to regain a healthy activity level:

Δb_i = λ (ā_i − a_i)   (1)

where λ is the adaptation rate, set to 0.01 for all neurons. Note that, due to the symmetry of the DBM, homeostatic mechanisms can both increase or decrease the bias in response to blank input.
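The bias update of Eq. (1) is a one-liner; the sketch below uses our own function and variable names:

```python
import numpy as np

def homeostatic_update(bias, activity, baseline, rate=0.01):
    """One homeostatic step (Eq. 1): nudge each neuron's bias so that its
    activity moves back toward the healthy baseline. Inactive neurons get
    a larger bias, overly active neurons a smaller one."""
    return bias + rate * (baseline - activity)
```

Applied to a silent neuron (activity 0, baseline 0.5) the bias rises; applied to a saturated one (activity 1) it falls.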

II-D Decoding the hidden state

In order to evaluate the hidden states of the DBM, we need a method to decode the internal states of the hidden layers and to infer whether the hidden representations are mere noise or whether they correspond to a pattern. Instead of training a classifier on the states of the hidden units, we use the DBM as its own decoder [8].

In this process, we pass the hidden state through the network towards the visible layer in a single feedforward pass. While connections in a DBM are usually bidirectional, in this feedforward pass all neurons in a layer receive input only from the adjacent deeper layer. Therefore, the total input to a neuron decreases due to the lack of input from the adjacent shallower layer. We compensate for this by multiplying the weights by a factor of two. The weights attached to the visible layer, however, are not multiplied by two, as the visible units always receive input from one side only. It is important to note that we apply this process only to decode the internal states of the network for inspection; it is hence not required to be biologically plausible. To draw a comparison to biological neural networks, this process would correspond to matching measured neuronal activity (e.g. from local field potentials) to the prevalent external stimulus.
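A minimal sketch of this decoding pass; the layer ordering, shapes, deterministic rounding, and all names are our assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode(top_state, weights, biases):
    """Single top-down pass from the deepest hidden state to the visible
    layer. weights[k] maps layer k+1 down to layer k; biases[k] belongs
    to layer k. Intermediate weights are doubled to compensate for the
    missing input from the shallower side; the visible weights are not."""
    state = top_state
    for k in reversed(range(len(weights))):
        factor = 1.0 if k == 0 else 2.0          # no doubling at the visible layer
        prob = sigmoid(factor * (weights[k] @ state) + biases[k])
        state = (prob > 0.5).astype(float)       # deterministic rounding for inspection
    return state
```
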

II-E Evaluation measure

In order to verify that the hidden representations correspond to patterns that come from the same distribution as the dataset, we analyzed the quality of the reconstructions by means of the Dice coefficient

Dice(A, B) = 2 |A ∩ B| / (|A| + |B|)   (2)

where A and B are two binary patterns and |·| denotes the cardinality of a pattern (or, in other words, the number of values that are one).

Thus, given the patterns x in the dataset X and the pattern y to be evaluated, we define the performance of the network as the maximal Dice coefficient between y and any of the patterns in the dataset:

P(y) = max_{x ∈ X} Dice(y, x)   (3)

Intuitively, if the performance of the network is one, its output (i.e. the decoded sample or the hallucination) corresponds to one of the patterns. If the performance is zero, the output has no overlap with any of the patterns in the training dataset.
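Both measures are straightforward to compute for binary patterns; a sketch with hypothetical names:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary patterns (Eq. 2)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    # Two all-zero patterns are treated as identical.
    return 2.0 * (a & b).sum() / denom if denom else 1.0

def performance(pattern, dataset):
    """Maximal Dice coefficient between a pattern and any training pattern (Eq. 3)."""
    return max(dice(pattern, x) for x in dataset)
```
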

III Artificial Skin and Experimental Setup

Fig. 2: Biologically inspired multi-modal skin [14, 24]. The skin is composed of hexagonally shaped skin cells. Each skin cell employs four different kinds of sensors: three capacitive force sensors to sense contact, one proximity sensor to sense pre-contact, a 3D acceleration sensor to sense vibrations, and a temperature sensor. These skin cells communicate with each other, create a self-organizing communication network, and form skin patches.

We work with an artificial skin [14, 24] consisting of 18 skin cells, which are ordered in a 3 × 6 hexagonal grid (Fig. 1). Each of these cells is equipped with, among others, three normal force sensors (Fig. 2).

For the pattern acquisition, we take the maximal recorded value of the three force sensors per cell. For further robustness, we cluster five time steps into one round. If a certain cell exceeds its force threshold within the round, it is considered to be on in this interval. Each time step is 250 ms long, which means that one round corresponds to 1.25 seconds.

When recording, we accept a pattern for the dataset only if at least two cells are on in the round. This prevents having a vast amount of either empty or noisy data in the dataset.
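The acquisition procedure above can be sketched as follows, using the threshold (0.012), round length (5 time steps), and minimum cell count (2) from Table I; the array layout and names are our assumptions:

```python
import numpy as np

def acquire_pattern(force, threshold=0.012, min_cells=2):
    """Turn raw force readings for one round into a binary pattern.
    force has shape (time_steps, n_cells, 3): three force sensors per
    cell. A cell is 'on' if its per-cell maximum exceeds the threshold
    in any time step of the round; rounds with fewer than min_cells
    active cells are rejected (returned as None)."""
    per_cell = force.max(axis=2)                 # max over the 3 sensors
    on = (per_cell > threshold).any(axis=0)      # on if exceeded in any step
    return on.astype(int) if on.sum() >= min_cells else None
```
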

When the measured value does not exceed the force threshold of 0.012, the LED lights up in green. If the threshold is exceeded, the LED of the respective artificial skin cell lights up in blue.

The skin connection is bidirectional. Thus, the skin also displays patterns extracted from the output of the DBM and shows them for three time steps, with each time step again corresponding to 250 ms.

To model the tactile hallucinations presented in the result section IV, a proof-of-concept dataset was created, consisting of three patterns with one triangle each, where a triangle is made up of three cells. The size of the patterns was selected for visual comprehension.

Fig. 3: Tactile patterns for training. (a) Input procedure. (b) Examples of training patterns.

For reproducibility, the hyperparameters of the proposed method, their meaning, their options, and a recommended default value are given in Table I.

Hyperparameter        Options   Value
MAX_FORCE             [0, 1]    0.012
MIN_NUMBER_OF_CELLS   —         2
COMBINE_ITER          —         5
DISPLAY_DURATION      —         3
TABLE I: Overview of the hyperparameters for the artificial skin

IV Results

IV-A Training and sampling from the DBM

For the results described below, we used a DBM with three layers, each consisting of 18 neurons. During training, samples from the network were evaluated by calculating the Dice-coefficient-based evaluation measure (see section II-E). We pre-trained the DBM as a Deep Belief Network, so the training process of a three-layer DBM has three phases. In the first phase, the weights between the first and the second layer are pre-trained. After 2000 iterations, a performance of around 0.85 is reached (Fig. 4, left). In the second phase, only the weights between the second and the third layer are pre-trained, taking the latent representations in the first hidden layer as training data. Again, the performance reaches a value of around 0.85, so samples from the entire DBM after pre-training already reach an accuracy of around 0.85. In the last phase, the overall DBM is trained, refining the network to a performance of 0.97. We apply early stopping to ensure that the network does not collapse onto only one of the patterns.
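The building block of the layer-wise pre-training phases is a restricted Boltzmann machine trained with contrastive divergence. The following is a generic sketch of a single CD-1 update, not the paper's exact training schedule; all names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, a, b, v0, lr=0.1):
    """One CD-1 update for an RBM with visible biases a, hidden biases b,
    and weights W (shape: n_visible x n_hidden), given one training
    pattern v0. Weights and biases are updated in place."""
    h0 = sigmoid(v0 @ W + b)                     # positive phase
    h_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h_sample @ W.T + a)             # one-step reconstruction
    h1 = sigmoid(v1 @ W + b)                     # negative phase
    W += lr * (np.outer(v0, h0) - np.outer(v1, h1))
    a += lr * (v0 - v1)
    b += lr * (h0 - h1)
    return W, a, b
```

Repeating this update on training patterns drives the reconstruction toward the data, which is what the pre-training accuracy curves in Fig. 4 track.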

Fig. 4: Training accuracy curves as measured by the performance measure (section II-E), averaged over ten trials (solid line: mean; shaded area: standard deviation). First, the two layers are trained separately (left and middle). Then, the pre-trained weights are transferred to a DBM. After that, the DBM is trained (right).

IV-B Decoding the latent representations

Once the DBM has been trained, it is able to create samples from the training data. To inspect whether the DBM also forms meaningful latent representations, we decode the hidden states of the trained DBM using the decoding procedure described in section II-D. The decoding method was tested in three scenarios. As in the training, the quality of the decoding was evaluated using the measure . In the first scenario, the DBM is clamped to one of the patterns. In this case, the hidden representations in the second hidden layer clearly represent the data the network is clamped to. Thus, the decoded states mostly represent one of the training patterns, as shown in figure 5. The performance for circular connectivity is about 0.87 in this case.

As a second scenario, the input pattern was corrupted by turning two of the three cells off. The DBM is then, again, clamped to this input and the deepest layer is decoded. By corrupting the pattern, the performance of the decoded pattern dropped to about 0.64 for circular connectivity.

Lastly, fully empty patterns were fed to the network. As a result, the performance drops even further, to only around 0.50 for circular connectivity.

Table II shows the performance of the model for the different scenarios. For further analysis, we also define the measure Δ_d, the difference in performance between the presentation of training patterns and of the zeros input (see Fig. 6):

Δ_d = P_pattern − P_zeros   (4)
Fig. 5: Decoded states when clamping to one of the training patterns for circular connectivity. The measure of performance is 0.87.
Fig. 6: Visualization of the measures Δ_d and Δ_h. In the training phase of the DBM (corresponding to Fig. 4, right), the accuracy of the decoding goes up to around 0.87. Then, we present the network with blank tactile patterns. The drop in decoding accuracy is termed Δ_d. After that, homeostatic processes start. The gain in performance through homeostasis is termed Δ_h.
Receptive field   circular   linear
P_pattern         0.87       0.83
P_corrupted       0.58       0.50
P_zeros           0.50       0.42
Δ_d               0.37       0.41
P_homeostasis     0.72       0.62
Δ_h               0.22       0.20
TABLE II: Accuracy of the decoded reconstructions of the hidden layer representations. All values are the mean performance over ten trials.

IV-C Homeostasis causes hallucinations

We have shown that presenting the network with a corrupted or blank input degrades the reconstruction quality. Now we show that, when introducing the homeostatic processes described in section II-C, the performance of the network increases even when it is presented with a blank pattern, and meaningful latent states are retrieved.

For this, we let the homeostatic process run for 2000 time steps. In each time step, the activity of every neuron is measured and the biases are adjusted such that the activity moves back toward the baseline level, with an adaptation rate of 0.01. After every time step, the performance of the network is measured. The results of this process for the two types of connectivity are shown in Fig. 7. The performance at the beginning corresponds to the performance of the network when presented with a blank input. Then, in the early stages of the homeostatic process, a strong increase in performance is visible. After around 1200 time steps, the performance saturates.
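A toy, single-neuron version of this process illustrates the convergence: under blank input the neuron's activity sits at sigmoid(bias), and 2000 homeostatic steps with adaptation rate 0.01 (the values used above) pull the activity back toward its baseline. The one-neuron setup and the baseline value are simplifying assumptions of ours:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

baseline = 0.8   # healthy activity measured on training patterns (assumed value)
bias = 0.0       # starting bias; activity sigmoid(0) = 0.5 under blank input
rate = 0.01      # adaptation rate from Sec. II-C

for _ in range(2000):
    activity = sigmoid(bias)                  # activity under blank input
    bias += rate * (baseline - activity)      # Eq. (1)

final_activity = sigmoid(bias)                # close to the 0.8 baseline
```

The bias grows until the activity is restored, mirroring how the network-level performance in Fig. 7 recovers and then saturates.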

We define the overall improvement , visualized in figure 6, as:

(5)

The final performance after 2000 steps of homeostasis is presented in the second-to-last row of Table II. For both types of investigated connectivity, the homeostatic process leads to a significant increase in accuracy. Fig. 7 shows the increase in performance from the initial state to the state after the homeostatic process.

Figure 8 then shows a sample trace for circular connectivity. Additionally, sample reconstructions are shown. While, before applying homeostatic mechanisms, the reconstructions are strongly perturbed, the decoded states at the final point clearly correspond to the trained patterns. Still, some of the decoded states correspond to corrupted versions of patterns.

We further analyzed what affects the gain in performance through homeostasis. We found that there is a relation between the drop in performance Δ_d and the gain through homeostasis Δ_h. This is shown in Fig. 9 for both connectivity types. When the performance decreased strongly, the homeostatic mechanism leads to a strong increase in performance afterwards (correlation coefficients for the two connectivity types: r = 0.71 for circular and r = 0.84 for linear). Thus, the difference in the quality of the reconstruction is particularly strong if the network suffered heavily from the blanked input pattern.

Fig. 7: Increase in reconstruction performance through homeostasis. Reconstruction quality over time, averaged over ten trials corresponding to ten independently trained DBMs. After each time step, the biases are updated and the activities get closer to the baseline level. The two traces correspond to the two types of connectivity, namely circular (orange) and linear (blue). In both cases, the homeostatic processes clearly increase the quality of the reconstructions. The shaded area represents the standard deviation.
Fig. 8: Demonstration of the reconstructed patterns along a sample trace during homeostasis for circular connectivity. In the beginning, the reconstruction quality is rather poor and no reconstructed pattern corresponds to a training pattern. After applying homeostasis, several patterns correspond to the training samples.
Fig. 9: Correlation between Δ_d and Δ_h. On the x-axis, the difference in reconstruction performance between clamping to a training pattern and to a blank pattern (Δ_d) is shown. On the y-axis, the gain in performance through homeostasis (Δ_h) is shown. The two traces correspond to the two types of connectivity, namely circular (orange) and linear (blue). In both cases, there is a clear correlation (correlation coefficients: r = 0.71 for circular, r = 0.84 for linear). Each dot represents a separately trained DBM and is itself an average over 100 reconstruction samples of that DBM. The solid line is the least-squares regression line. The shaded area represents the standard deviation.

V Discussion

We suggest that tactile hallucinations can emerge as a consequence of homeostasis after an ill-formed input from the afferent pathway or other brain regions (corresponding to corrupted patterns) or through the complete lack of input (corresponding to blank patterns). Our results show that, when the network is clamped to one of the training patterns, the quality of the reconstruction is high, whereas it is low if the network is clamped to a corrupted or zeros pattern. Homeostatic mechanisms, modeled as an adaptation of the bias of the network, strongly increased the reconstruction quality. Thus, while homeostasis could be involved in the proper reconstruction or adaptation of sensory input when partial information is available, abnormal connectivity or dysfunctional homeostatic processes could induce tactile hallucinations.

V-A Medical applications

Our results showed that homeostasis is a possible mechanism to induce hallucinations. In order to verify this approach, biological experiments would be required. A possible way to tackle this would be to match the timescales of homeostatic processes and the onset of hallucinations. In the phantom limb phenomenon, the onset has been reported to happen within the first 24 hours after amputation for half of the patients and within a week for another 25 % of the patients [25]. This nicely fits the timescale of homeostatic mechanisms. Even though recent work has argued that there are fast homeostatic mechanisms on the timescale of seconds to minutes [23], most experimental work describes homeostasis to happen on a timescale of hours to weeks [20, 21, 22].

If experimental work were to provide further evidence for our hypothesis, it would be conceivable to make use of our approach in a medical application. In cases where patients have unpleasant tactile hallucinations, attenuating the homeostatic mechanism could delay the onset of the hallucinations. Instead of letting the neurons recover their previous baseline activity, medical interventions could keep the activity levels in the corresponding brain region low. This intervention could happen in several ways, for example through drugs or targeted electrical stimulation.

V-B Technical applications

The presented neuro-inspired framework can inspire technical systems beyond medical applications. DBMs have frequently been used as a method for feature extraction in classification tasks, where the features in the hidden layer are fed into a classifier [26]. In a robotics setting, homeostasis could then be used as an adaptive artificial intelligence mechanism. A general problem in artificial intelligence is domain adaptation: a network performs badly in a domain different from the one it was trained on. In this work, we showed that keeping the learned weights constant and changing only the biases, even in a non-selective way, can recover the representations formed in the latent layers. Thus, even in non-optimal conditions, proper features could be selected by an adaptive DBM. As an example, a robot could be trained to perform a certain task under daylight conditions. When tested in a dim-light condition, we would expect it to perform worse. In that case, we could apply homeostasis to the DBM to recover the baseline activity that the classifier is familiar with. Furthermore, the generative nature of the proposed approach is suitable for flexible body perception and adaptation in robotics, as an alternative to methods based on predictive coding differential equations [27].

V-C Technical challenges and limitations

While our work provides a general framework for tactile hallucinations, it makes several, partly biologically implausible assumptions. Firstly, all our patterns were binary, while biological skin receptors can have graded firing rates. A common approach is to either use tuning curves at the input layer to encode non-binary sensory information or to code the activity into a population of neurons representing a single skin cell [28, 29]. Secondly, while humans have millions of somatosensory receptors, we use only a small skin patch. To further increase the role of the spatial structure, a larger skin patch could be mounted on a robotic limb to allow for a natural interaction with the robot. As, in most cases, such interaction covers only a small fraction of all active skin cells, we used sparse patterns (see Fig. 3). In comparison to dense patterns, these sparser patterns seemed to stabilize the performance of the network and of the homeostatic mechanisms. Besides, only by pre-training the DBM with a DBN was the system able to learn proper latent space representations. Conversely, training the RBM directly with k-step contrastive divergence was highly unstable and produced reasonable results only in a fraction of all trials.

Lastly, our approach has only used a small dataset. This does not resemble biology, where the input patterns are samples from a diverse distribution. This issue could generally be resolved by collecting more samples from a larger skin patch, as described in the paragraph above. In this case, the training data would then also come from actual interactions with a robot, allowing for diversity and realism in the dataset.

Further analysis should be performed on the spatial structure of the skin cells. In the two cases, using circular or linear receptive fields, the input matrix is linearized. In somatosensation, however, the input patterns have a highly ordered spatial structure. The effects of this spatial structure could be investigated by applying two-dimensional receptive fields or learning the spatial connectivity structure [30]. These receptive fields would lead to localized processing of the input data. Our work has already shown that constraining the connectivity has effects on the quality of the decoding and the strength of the hallucinations (Fig. 7). When moving towards more and more realistic tactile patterns, local processing and constrained connectivity could thus play a major role.

VI Conclusion

We presented a framework for modelling tactile hallucinations in an artificial skin. Overall, this work shows a possible role of homeostatic processes in the emergence of hallucinations. While homeostasis has been linked to many crucial parts of network function, we here showed that it could lead to the formation of meaningful latent representations without actual input. On the one hand, our model allows computational neuroscientists to investigate the potential role of different parameters such as the severity of the neurological disease (by modifying the corrupted input pattern). On the other hand, this framework also allows for more targeted experiments in order to elucidate the emergence of hallucinations.

References

  • [1] P. V. Rabins, “The genesis of phantom (deenervation) hallucinations: An hypothesis,” International journal of geriatric psychiatry, vol. 9, no. 10, pp. 775–777, 1994.
  • [2] L. Pang, “Hallucinations experienced by visually impaired: Charles bonnet syndrome,” Optometry and Vision Science, vol. 93, no. 12, p. 1466, 2016.
  • [3] G. Fénelon, S. Thobois, A.-M. Bonnet, E. Broussolle, and F. Tison, “Tactile hallucinations in parkinson’s disease,” Journal of neurology, vol. 249, no. 12, pp. 1699–1703, 2002.
  • [4] K. Mueser, A. Bellack, and E. Brady, “Hallucinations in schizophrenia,” Acta Psychiatrica Scandinavica, vol. 82, no. 1, pp. 26–29, 1990.
  • [5] P. Lanillos, D. Oliva, A. Philippsen, Y. Yamashita, Y. Nagai, and G. Cheng, “A review on neural network models of schizophrenia and autism spectrum disorder,” arXiv preprint arXiv:1906.10015, 2019.
  • [6] A. R. Powers, C. Mathys, and P. Corlett, “Pavlovian conditioning–induced hallucinations result from overweighting of perceptual priors,” Science, vol. 357, no. 6351, pp. 596–600, 2017.
  • [7] R. E. Hoffman and T. H. McGlashan, “Book review: Neural network models of schizophrenia,” The Neuroscientist, vol. 7, no. 5, pp. 441–454, 2001.
  • [8] P. Series, D. P. Reichert, and A. J. Storkey, “Hallucinations in charles bonnet syndrome induced by homeostasis: a deep boltzmann machine model,” in Advances in Neural Information Processing Systems, 2010, pp. 2020–2028.
  • [9] M. Spitzer, P. Böhler, M. Weisbrod, and U. Kischka, “A neural network model of phantom limbs,” Biological cybernetics, vol. 72, no. 3, pp. 197–206, 1995.
  • [10] H. Brown, R. A. Adams, I. Parees, M. Edwards, and K. Friston, “Active inference, sensory attenuation and illusions,” Cognitive processing, vol. 14, no. 4, pp. 411–427, 2013.
  • [11] K. J. Boström, M. H. De Lussanet, T. Weiss, C. Puta, and H. Wagner, “A computational model unifies apparently contradictory findings concerning phantom pain,” Scientific reports, vol. 4, p. 5298, 2014.
  • [12] G. J. Menon, I. Rahman, S. J. Menon, and G. N. Dutton, “Complex visual hallucinations in the visually impaired: the charles bonnet syndrome,” Survey of ophthalmology, vol. 48, no. 1, pp. 58–72, 2003.
  • [13] G. Schultz and R. Melzack, “The charles bonnet syndrome:‘phantom visual images’,” Perception, vol. 20, no. 6, pp. 809–825, 1991.
  • [14] P. Mittendorfer and G. Cheng, “Humanoid multimodal tactile-sensing modules,” IEEE Transactions on Robotics, vol. 27, no. 3, pp. 401–410, 2011.
  • [15] R. Jardri and S. Denève, “Computational models of hallucinations,” in The neuroscience of hallucinations.   Springer, 2013, pp. 289–313.
  • [16] E. Ruppin, J. A. Reggia, and D. Horn, “A neural model of delusions and hallucinations in schizophrenia,” in Advances in Neural Information Processing Systems, 1995, pp. 149–156.
  • [17] R. A. Adams, K. E. Stephan, H. R. Brown, C. D. Frith, and K. J. Friston, “The computational anatomy of psychosis,” Frontiers in psychiatry, vol. 4, p. 47, 2013.
  • [18] Y. Iwamura, “Hierarchical somatosensory processing,” Current opinion in neurobiology, vol. 8, no. 4, pp. 522–528, 1998.
  • [19] D. J. Felleman and D. C. Van Essen, “Distributed hierarchical processing in the primate cerebral cortex,” Cerebral cortex (New York, NY: 1991), vol. 1, no. 1, pp. 1–47, 1991.
  • [20] G. G. Turrigiano, “The self-tuning neuron: synaptic scaling of excitatory synapses,” Cell, vol. 135, no. 3, pp. 422–435, 2008.
  • [21] ——, “The dialectic of hebb and homeostasis,” Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 372, no. 1715, p. 20160258, 2017.
  • [22] A. J. Watt and N. S. Desai, “Homeostatic plasticity and stdp: keeping a neuron’s cool in a fluctuating world,” Frontiers in synaptic neuroscience, vol. 2, p. 5, 2010.
  • [23] F. Zenke, G. Hennequin, and W. Gerstner, “Synaptic plasticity in neural networks needs homeostasis with a fast rate detector,” PLoS computational biology, vol. 9, no. 11, p. e1003330, 2013.
  • [24] F. Bergner, E. Dean-Leon, and G. Cheng, “Event-based signaling for large-scale artificial robotic skin - realization and performance evaluation,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016, pp. 4918–4924.
  • [25] S. R. Weeks, V. C. Anderson-Barnes, and J. W. Tsao, “Phantom limb pain: theories and therapies,” The neurologist, vol. 16, no. 5, pp. 277–286, 2010.
  • [26] A. de Andrade, “A comparison of neural network training methods for text classification.”
  • [27] P. Lanillos and G. Cheng, “Adaptive robot body learning and estimation through predictive coding,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018.
  • [28] E. Salinas and L. Abbott, “Vector reconstruction from firing rates,” Journal of computational neuroscience, vol. 1, no. 1-2, pp. 89–107, 1994.
  • [29] O. Franzén, D. Kenshalo, and G. Essick, “Neural population encoding of touch intensity,” in Information processing in the somatosensory system.   Springer, 1991, pp. 71–80.
  • [30] M. Hoffmann, Z. Straka, I. Farkaš, M. Vavrečka, and G. Metta, “Robotic homunculus: Learning of artificial skin representation in a humanoid robot motivated by primary somatosensory cortex,” IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 2, pp. 163–176, 2017.
  • [31] P. Smolensky, “Information processing in dynamical systems: Foundations of harmony theory,” COLORADO UNIV AT BOULDER DEPT OF COMPUTER SCIENCE, Tech. Rep., 1986.
  • [32] G. E. Hinton, “Training products of experts by minimizing contrastive divergence,” Neural computation, vol. 14, no. 8, pp. 1771–1800, 2002.
  • [33] N. Caporale and Y. Dan, “Spike timing–dependent plasticity: a hebbian learning rule,” Annu. Rev. Neurosci., vol. 31, pp. 25–46, 2008.
  • [34] P. Földiak, “Forming sparse representations by local anti-hebbian learning,” Biological cybernetics, vol. 64, no. 2, pp. 165–170, 1990.