References
- Barrett et al. (2018) Barrett, D. G., Hill, F., Santoro, A., Morcos, A. S., and Lillicrap, T. (2018). Measuring abstract reasoning in neural networks. arXiv preprint arXiv:1807.04225.
- Bellec et al. (2018) Bellec, G., Salaj, D., Subramoney, A., Legenstein, R., and Maass, W. (2018). Long short-term memory and learning-to-learn in networks of spiking neurons. In 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), arXiv preprint arXiv:1803.09574.
- Brea and Gerstner (2016) Brea, J. and Gerstner, W. (2016). Does computational neuroscience need new synaptic learning paradigms? Current Opinion in Behavioral Sciences, 11:61–66.
- Buzsáki (2006) Buzsáki, G. (2006). Rhythms of the Brain. Oxford University Press.
- Buzzell et al. (2017) Buzzell, G. A., Richards, J. E., White, L. K., Barker, T. V., Pine, D. S., and Fox, N. A. (2017). Development of the error-monitoring system from ages 9–35: Unique insight provided by MRI-constrained source localization of EEG. NeuroImage, 157:13–26.
- Clopath et al. (2010) Clopath, C., Büsing, L., Vasilaki, E., and Gerstner, W. (2010). Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nature neuroscience, 13(3):344.
- Czarnecki et al. (2017) Czarnecki, W. M., Świrszcz, G., Jaderberg, M., Osindero, S., Vinyals, O., and Kavukcuoglu, K. (2017). Understanding synthetic gradients and decoupled neural interfaces. arXiv preprint arXiv:1703.00522.
- D’Angelo et al. (2016) D’Angelo, E., Mapelli, L., Casellato, C., Garrido, J. A., Luque, N., Monaco, J., Prestori, F., Pedrocchi, A., and Ros, E. (2016). Distributed circuit plasticity: new clues for the cerebellar mechanisms of learning. The Cerebellum, 15(2):139–151.
- Davies et al. (2018) Davies, M., Srinivasa, N., Lin, T.-H., Chinya, G., Cao, Y., Choday, S. H., Dimou, G., Joshi, P., Imam, N., Jain, S., et al. (2018). Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro, 38(1):82–99.
- Duan et al. (2016) Duan, Y., Schulman, J., Chen, X., Bartlett, P. L., Sutskever, I., and Abbeel, P. (2016). RL2: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779.
- Engelhard et al. (2018) Engelhard, B., Finkelstein, J., Cox, J., Fleming, W., Jang, H. J., Ornelas, S., Koay, S. A., Thiberge, S., Daw, N., Tank, D., et al. (2018). Specialized and spatially organized coding of sensory, motor, and cognitive variables in midbrain dopamine neurons. bioRxiv, page 456194.
- Frémaux and Gerstner (2016) Frémaux, N. and Gerstner, W. (2016). Neuromodulated spike-timing-dependent plasticity, and theory of three-factor learning rules. Frontiers in neural circuits, 9:85.
- Furber et al. (2014) Furber, S. B., Galluppi, F., Temple, S., and Plana, L. A. (2014). The SpiNNaker project. Proceedings of the IEEE, 102(5):652–665.
- Gehring et al. (1993) Gehring, W. J., Goss, B., Coles, M. G., Meyer, D. E., and Donchin, E. (1993). A neural system for error detection and compensation. Psychological science, 4(6):385–390.
- Gerstner et al. (2018) Gerstner, W., Lehmann, M., Liakoni, V., Corneil, D., and Brea, J. (2018). Eligibility traces and plasticity on behavioral time scales: Experimental support of neoHebbian three-factor learning rules. arXiv preprint arXiv:1801.05219.
- Glass et al. (1999) Glass, J., Smith, A., and Halberstadt, A. K. (1999). Heterogeneous acoustic measurements and multiple classifiers for speech recognition.
- Graves and Schmidhuber (2005) Graves, A. and Schmidhuber, J. (2005). Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5-6):602–610.
- Graves et al. (2014) Graves, A., Wayne, G., and Danihelka, I. (2014). Neural Turing machines. arXiv preprint arXiv:1410.5401.
- Hochreiter and Schmidhuber (1997) Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8):1735–1780.
- Hosp et al. (2011) Hosp, J. A., Pekanovic, A., Rioult-Pedotti, M. S., and Luft, A. R. (2011). Dopaminergic projections from midbrain to primary motor cortex mediate motor skill learning. The Journal of Neuroscience, 31(7):2481–2487.
- Jaderberg et al. (2016) Jaderberg, M., Czarnecki, W. M., Osindero, S., Vinyals, O., Graves, A., Silver, D., and Kavukcuoglu, K. (2016). Decoupled neural interfaces using synthetic gradients. arXiv preprint arXiv:1608.05343.
- Kaiser et al. (2018) Kaiser, J., Mostafa, H., and Neftci, E. (2018). Synaptic plasticity dynamics for deep continuous local learning. arXiv preprint arXiv:1811.10766.
- Kandel et al. (2000) Kandel, E. R., Schwartz, J. H., and Jessell, T. M. (2000). Principles of neural science, volume 4. McGraw-Hill, New York.
- Kingma and Ba (2014) Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- Lake et al. (2017) Lake, B. M., Ullman, T. D., Tenenbaum, J. B., and Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40.
- LeCun et al. (2015) LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature, 521(7553):436–444.
- Lillicrap et al. (2016) Lillicrap, T. P., Cownden, D., Tweed, D. B., and Akerman, C. J. (2016). Random synaptic feedback weights support error backpropagation for deep learning. Nature communications, 7:13276.
- Lorente de Nó (1938) Lorente de Nó, R. (1938). Architectonics and structure of the cerebral cortex. Physiology of the nervous system, pages 291–330.
- MacLean et al. (2015) MacLean, S. J., Hassall, C. D., Ishigami, Y., Krigolson, O. E., and Eskes, G. A. (2015). Using brain potentials to understand prism adaptation: the error-related negativity and the P300. Frontiers in human neuroscience, 9:335.
- Nayebi et al. (2018) Nayebi, A., Bear, D., Kubilius, J., Kar, K., Ganguli, S., Sussillo, D., DiCarlo, J. J., and Yamins, D. L. (2018). Task-driven convolutional recurrent models of the visual system. In Advances in Neural Information Processing Systems, pages 5291–5302.
- Nevian and Sakmann (2006) Nevian, T. and Sakmann, B. (2006). Spine Ca2+ signaling in spike-timing-dependent plasticity. Journal of Neuroscience, 26(43):11001–11013.
- Ngezahayo et al. (2000) Ngezahayo, A., Schachner, M., and Artola, A. (2000). Synaptic activity modulates the induction of bidirectional synaptic changes in adult mouse hippocampus. Journal of Neuroscience, 20(7):2451–2458.
- Nicola and Clopath (2017) Nicola, W. and Clopath, C. (2017). Supervised learning in spiking neural networks with FORCE training. Nature communications, 8(1):2208.
- Nøkland (2016) Nøkland, A. (2016). Direct feedback alignment provides learning in deep neural networks. In Advances in neural information processing systems, pages 1037–1045.
- Pi et al. (2013) Pi, H., Hangya, B., Kvitsiani, D., Sanders, J. I., Huang, Z. J., and Kepecs, A. (2013). Cortical interneurons that specialize in disinhibitory control. Nature, 503(7477):521–524.
- Samadi et al. (2017) Samadi, A., Lillicrap, T. P., and Tweed, D. B. (2017). Deep learning with dynamic spiking neurons and fixed feedback weights. Neural computation, 29(3):578–602.
- Schemmel et al. (2010) Schemmel, J., Brüderle, D., Grübl, A., Hock, M., Meier, K., and Millner, S. (2010). A wafer-scale neuromorphic hardware system for large-scale neural modeling. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems (ISCAS), pages 1947–1950. IEEE.
- Sjöström et al. (2001) Sjöström, P. J., Turrigiano, G. G., and Nelson, S. B. (2001). Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron, 32(6):1149–1164.
- Sugihara et al. (2016) Sugihara, H., Chen, N., and Sur, M. (2016). Cell-specific modulation of plasticity and cortical state by cholinergic inputs to the visual cortex. Journal of Physiology-Paris, 110(1-2):37–43.
- Sutton and Barto (1998) Sutton, R. S. and Barto, A. G. (1998). Reinforcement learning: An introduction. MIT Press, Cambridge, MA.
- Wang et al. (2016) Wang, J. X., Kurth-Nelson, Z., Tirumala, D., Soyer, H., Leibo, J. Z., Munos, R., Blundell, C., Kumaran, D., and Botvinick, M. (2016). Learning to reinforcement learn. arXiv preprint arXiv:1611.05763.
- Wang et al. (2018) Wang, Z., Joshi, S., Savel'ev, S., Song, W., Midya, R., Li, Y., Rao, M., Yan, P., Asapu, S., Zhuo, Y., et al. (2018). Fully memristive neural networks for pattern classification with unsupervised learning. Nature Electronics, 1(2):137.
- Werbos (1990) Werbos, P. J. (1990). Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550–1560.
- Williams and Zipser (1989) Williams, R. J. and Zipser, D. (1989). A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270–280.
- Yang et al. (2017) Yang, Y., Yin, M., Yu, Z., Wang, Z., Zhang, T., Cai, Y., Lu, W. D., and Huang, R. (2017). Multifunctional nanoionic devices enabling simultaneous heterosynaptic plasticity and efficient in-memory Boolean logic. Advanced Electronic Materials, 3(7):1700032.
- Zenke and Ganguli (2018) Zenke, F. and Ganguli, S. (2018). SuperSpike: Supervised learning in multilayer spiking neural networks. Neural computation, 30(6):1514–1541.