Training Spiking Neural Networks for Cognitive Tasks: A Versatile Framework Compatible with Various Temporal Codes

09/02/2017 ∙ by Chaofei Hong, et al.

Conventional modeling approaches have struggled to connect the increasingly detailed neural network structures and dynamics recorded in experiments with the brain's diverse functionalities. In a complementary approach, studies have demonstrated that spiking neural networks can be trained to perform simple functions using supervised learning. Here, we introduce a modified SpikeProp learning algorithm that achieves better learning stability across different activity states. In addition, we show that biologically realistic features, such as lateral connections and sparse activity, can be included in the network. We demonstrate the versatility of this framework by implementing three well-known temporal codes for different types of cognitive tasks: MNIST digit recognition, spatial coordinate transformation, and motor sequence generation. Moreover, we find that several characteristic features evolve alongside task training, such as selective activity, excitatory-inhibitory balance, and weak pairwise correlation. The coincidence between these self-evolved features and experimentally observed ones indicates their importance for brain function. Our results suggest a unified setting in which diverse cognitive computations and mechanisms can be studied.
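For context, the core idea of the original SpikeProp algorithm (Bohte et al.) that this paper modifies is a chain rule through the output spike time: at a threshold crossing, a small weight change shifts the first-spike time by roughly -ε(t_out - t_i)/(du/dt). The sketch below trains a single spike-response neuron to fire at a target time using that textbook rule only; all kernel choices, names, and numbers are illustrative assumptions, not the paper's modified algorithm.

```python
import numpy as np

# Minimal SpikeProp-style sketch. All values below are illustrative
# toy numbers, not parameters from the paper.

TAU = 3.0  # PSP time constant

def eps(s, tau=TAU):
    # spike response kernel: eps(s) = (s/tau) * exp(1 - s/tau) for s > 0
    return np.where(s > 0, (s / tau) * np.exp(1.0 - s / tau), 0.0)

def deps(s, tau=TAU):
    # time derivative of the kernel, needed for du/dt at the crossing
    return np.where(s > 0, (1.0 / tau) * np.exp(1.0 - s / tau) * (1.0 - s / tau), 0.0)

def first_spike_time(w, t_in, theta=1.0, t_max=10.0, dt=0.01):
    # scan u(t) = sum_i w_i * eps(t - t_i) for the first threshold crossing;
    # return the crossing time and du/dt there (None, None if no spike fires)
    ts = np.arange(0.0, t_max, dt)
    u = eps(ts[:, None] - t_in[None, :]) @ w
    idx = np.argmax(u >= theta)
    if u[idx] < theta:
        return None, None
    t_s = ts[idx]
    return t_s, deps(t_s - t_in) @ w

t_in = np.array([0.0, 1.0, 2.0])   # presynaptic spike times
w = np.array([0.5, 0.5, 0.5])      # initial synaptic weights
t_target = 3.0                     # desired first-spike time of the output
lr = 0.01

for _ in range(100):
    t_out, dudt = first_spike_time(w, t_in)
    if t_out is None:
        break
    # SpikeProp's key step: at the threshold crossing,
    #   dt_out/dw_i = -eps(t_out - t_i) / (du/dt),
    # so gradient descent on E = (t_out - t_target)^2 / 2 gives:
    grad = (t_out - t_target) * (-eps(t_out - t_in) / dudt)
    w -= lr * grad
```

The division by du/dt is the known weak point of this rule: when the membrane potential grazes the threshold, du/dt is small and the update blows up, which is the kind of instability across activity states that motivates modified variants such as the one introduced here.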


