Deep learning research landscape roadmap in a nutshell: past, present and future – Towards deep cortical learning

07/30/2019, by Aras R. Dargazany, et al.

The past, present and future of deep learning are presented in this work. Given this landscape roadmap, we predict that deep cortical learning will be the convergence of deep learning and cortical learning, ultimately building an artificial cortical column.


1 Past: Deep learning inspirations

The deep learning horizon, landscape and research roadmap are presented in a nutshell in figure 1.

Figure 1: Deep learning research landscape & roadmap: past, present, future. The future is highlighted as deep cortical learning.

The historical development and timeline of deep learning and neural networks is separately illustrated in figure 2.

Figure 2: Neural nets origin, timeline & history made by Favio Vazquez

The origin of neural nets [WR17] has been thoroughly reviewed in terms of the evolutionary history of deep learning models. Vernon Mountcastle's discovery of cortical columns in the somatosensory cortex [Mou97] was a breakthrough in brain science. The big bang was Hubel & Wiesel's discovery of simple cells and complex cells in the visual cortex [HW59], for which they won the Nobel Prize in 1981. This work was heavily founded on Mountcastle's discovery of cortical columns in the somatosensory cortex [Mou97]. After Hubel & Wiesel's discovery, Fukushima proposed a pattern recognition architecture based on the simple-cell and complex-cell discovery, known as the NeoCognitron [FM82]. In this work, a deep neural network was built by repeatedly stacking a simple-cell layer and a complex-cell layer. In the 1980s, and perhaps a bit earlier, backpropagation was proposed by multiple people, but the first time it was well explained and applied to learning neural nets was by Rumelhart, Hinton and Williams in 1986 [RHW86].
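
As a rough modern analogue (not Fukushima's original formulation), the repeated stacking of simple-cell and complex-cell layers in the NeoCognitron can be sketched as alternating convolution and pooling operations. The kernel, image size and NumPy implementation below are hypothetical and purely illustrative:

```python
import numpy as np

def simple_cell_layer(image, kernel):
    """'S-layer': slide a local feature detector (a plain 2-D correlation)
    over the image, followed by rectification."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)

def complex_cell_layer(fmap, pool=2):
    """'C-layer': max-pool local responses so the output tolerates small shifts."""
    h, w = fmap.shape
    h, w = h - h % pool, w - w % pool
    return fmap[:h, :w].reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.random((28, 28))                            # toy "image"
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # hypothetical oriented-edge detector
h1 = complex_cell_layer(simple_cell_layer(x, edge_kernel))
h2 = complex_cell_layer(simple_cell_layer(h1, edge_kernel))
print(h1.shape, h2.shape)                           # feature maps shrink layer by layer
```

Each simple-cell layer detects a local feature, and each complex-cell layer pools over a neighborhood so that the response tolerates small shifts of the input.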

2 Present: Deep learning by LeCun, Bengio and Hinton

Convolutional nets were invented by LeCun [LBD89], which led to the deep learning conspiracy started by the three founding fathers of the field: LeCun, Bengio and Hinton [LBH15]. The main hype in deep learning happened in 2012, when the state-of-the-art error rates on ImageNet classification and the TIMIT speech recognition task were dramatically reduced using an end-to-end deep convolutional network [KSH12] and a deep belief net [HDY12].

The power of deep learning is scalability and the ability to learn in an end-to-end fashion. In this sense, deep learning architectures are capable of learning big datasets such as ImageNet [KSH12, GDG17] and TIMIT using multiple GPUs in an end-to-end fashion, meaning directly from raw inputs all the way to the desired outputs. AlexNet [KSH12] used two GPUs for ImageNet classification, a very big dataset of almost 1.5 million images (trained on 224x224 crops). Goyal et al. (including Kaiming He) [GDG17] proposed a highly scalable approach for training on ImageNet using 256 GPUs in roughly an hour, which shows an amazingly powerful approach based on stochastic gradient descent for applying a big cluster of GPUs to huge datasets. Many application domains have been revolutionized by deep learning architectures, such as image classification [KSH12], machine translation [WSC16, JSL16], speech recognition [HDY12], and robotics [MKS15].
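
The core idea behind such large-minibatch, multi-GPU training can be sketched in a few lines: every worker computes the gradient on its shard of the minibatch, the gradients are averaged (an all-reduce across GPUs in a real cluster), and one synchronous SGD step is applied. The toy regression problem and all names below are hypothetical; this is not the implementation from [GDG17]:

```python
import numpy as np

# Toy linear-regression problem; data and sizes are hypothetical.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -3.0])
X = rng.normal(size=(256, 2))
y = X @ w_true + 0.01 * rng.normal(size=256)

def shard_gradient(w, Xb, yb):
    """Mean-squared-error gradient computed on one worker's shard."""
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)

n_workers, lr = 8, 0.1                 # 8 simulated "GPUs"
w = np.zeros(2)
for step in range(200):
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    grads = [shard_gradient(w, Xb, yb) for Xb, yb in shards]
    g = np.mean(grads, axis=0)         # the all-reduce: average the shard gradients
    w -= lr * g                        # one synchronous SGD step
print(w)                               # converges toward w_true
```

Because the averaged shard gradients equal the full-minibatch gradient, adding workers scales the effective minibatch size, which is why [GDG17] pairs this scheme with a linear scaling of the learning rate and a warm-up schedule.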

The Nobel Prize in Physiology or Medicine 2014 was awarded to John O’Keefe, May-Britt Moser and Edvard I. Moser “for their discoveries of cells that constitute a positioning system in the brain” [Bur14]. This cognitive-neuroscience work shed light on how the world is represented within the brain. Hinton’s capsule network [SFH17] and Hawkins’ cortical learning algorithm [HAD11] are highly inspired by this Nobel-prize-winning work [Bur14].

3 Future: Brain-plausible deep learning & cortical learning algorithms

The main direction and inclination in deep learning for the future is the ability to bridge the gap between the cortical architecture and deep learning architectures, specifically convolutional nets. In this quest, Hinton proposed the capsule network [SFH17] as an effort to get rid of pooling layers and replace them with capsules, which are highly inspired by cortical mini-columns within cortical columns and layers, and which include the location or pose information of parts.
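
A key ingredient from [SFH17] is the "squash" nonlinearity, which lets a capsule's output vector encode the pose of a part in its direction and the probability that the part is present in its length. A minimal sketch, with toy values, assuming the formulation in the paper:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Capsule squash: keep the direction of s (the pose of the detected part)
    and compress its length into [0, 1) (the probability the part is present)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

s = np.array([0.5, -1.0, 2.0, 0.0, 0.3, 0.1, -0.7, 1.2])  # hypothetical 8-D capsule output
v = squash(s)
print(np.linalg.norm(v))   # length is squashed below 1
```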

Another important quest in deep learning is understanding the biological root of learning in our brain, specifically in our cortex. Backpropagation is not biologically inspired or plausible. Hinton and the other founding fathers of deep learning have been trying to understand how backprop might be biologically feasible in the brain. Feedback alignment [LCTA16] and spike-timing-dependent plasticity (STDP)-based backprop [BSR18] are some of the works by Timothy Lillicrap, Blake Richards, and Hinton that model backprop biologically, based on the pyramidal neuron in the cortex.
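
The flavor of feedback alignment [LCTA16] can be conveyed with a toy example: the error is sent backward through a fixed random matrix instead of the transpose of the forward weights, and learning still works because the forward weights come to align with the random feedback. The network, data and hyperparameters below are hypothetical, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                   # toy inputs
y = np.sin(X @ rng.normal(size=(10, 1)))         # toy targets

W1 = 0.1 * rng.normal(size=(10, 32))             # forward weights, layer 1
W2 = 0.1 * rng.normal(size=(32, 1))              # forward weights, layer 2
B = rng.normal(size=(1, 32))                     # fixed random feedback matrix
lr = 0.05
for step in range(2000):
    h = np.tanh(X @ W1)                          # forward pass
    y_hat = h @ W2
    e = y_hat - y                                # output error
    dW2 = h.T @ e / len(X)
    dh = (e @ B) * (1.0 - h ** 2)                # exact backprop would use e @ W2.T here
    dW1 = X.T @ dh / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2
print(float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)))  # training error drops
```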

In the far future, the main goal should be the merger of two largely independent quests to model the cortical structure of our brain: the first is heavily targeted by the big and active deep learning community; the second is pursued independently and neuroscientifically by Numenta and Jeff Hawkins [HAD11]. They argue that the cortical structure of our neocortex is the main source of our intelligence, and that to build a truly intelligent machine we should be able to reconstruct the cortex; to do so, we should first focus more on the cortex and understand what it is made of.

4 Finale: Deep cortical learning as the merge of deep learning and cortical learning

By merging deep learning and cortical learning, a much more focused and detailed architecture, named deep cortical learning, might be created. We might be able to understand and reconstruct the cortical structure with much more accuracy, get a better idea of what true intelligence is, and see how artificial general intelligence (AGI) might be reproduced. Deep cortical learning might be the algorithm behind one cortical column in the neocortex.

References