Geoffrey Hinton
Geoffrey Everest Hinton is an English-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks. Since 2013 he has divided his time between Google and the University of Toronto. Hinton co-authored a highly cited 1986 paper with David E. Rumelhart and Ronald J. Williams that popularized the backpropagation algorithm for training multi-layer neural networks. Some regard him as a leading figure in the deep learning community, and some call him “the Godfather of Deep Learning.” AlexNet, which achieved a dramatic image-recognition milestone in the ImageNet 2012 challenge, was designed by his student Alex Krizhevsky. Together with Yoshua Bengio and Yann LeCun, Hinton received the 2018 Turing Award for their work on deep learning.
After his PhD, he worked at the University of Sussex, the University of California, San Diego, and Carnegie Mellon University. He was the founding director of the Gatsby Computational Neuroscience Unit at University College London and is presently a professor of computer science at the University of Toronto. He holds a Canada Research Chair in Machine Learning and is an advisor for the Learning in Machines & Brains program at the Canadian Institute for Advanced Research. In 2012 Hinton taught a free online course on neural networks on the Coursera educational platform. He joined Google in March 2013 when his company, DNNresearch Inc., was acquired, and plans to “split his time between his university research and his work at Google.” Hinton’s research investigates ways of using neural networks for machine learning, memory, perception, and symbol processing.

While Hinton was a professor at Carnegie Mellon, he, David E. Rumelhart, and Ronald J. Williams applied the backpropagation algorithm to multi-layer neural networks. Their experiments showed that such networks can learn useful internal representations of data. Although this work was important in popularizing backpropagation, it was not the first to propose the approach: reverse-mode automatic differentiation, of which backpropagation is a special case, was proposed by Seppo Linnainmaa in 1970, and Paul Werbos proposed using it to train neural networks in 1974. During the same period, Hinton co-invented Boltzmann machines with David Ackley and Terry Sejnowski. His other contributions to neural network research include distributed representations, time-delay neural networks, mixtures of experts, Helmholtz machines, and products of experts. In 2007 Hinton co-authored an unsupervised learning paper entitled “Unsupervised learning of image transformations.” Hinton’s articles in Scientific American in September 1992 and October 1993 offer an accessible introduction to his research.
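The core idea of the 1986 result — a multi-layer network learning useful internal representations by propagating an error signal backwards through its layers — can be illustrated with a minimal sketch. The network size, learning rate, and the XOR task below are illustrative choices for this sketch, not details taken from the paper itself.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR is not linearly separable, so a hidden layer is required.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]

H = 3  # number of hidden units (illustrative choice)
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # 2 inputs + bias
w_o = [random.uniform(-1, 1) for _ in range(H + 1)]                  # H inputs + bias
lr = 0.5

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(sum(w_o[j] * h[j] for j in range(H)) + w_o[H])
    return h, o

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(X, Y)) / len(X)

err_before = mse()
for _ in range(10000):
    for x, y in zip(X, Y):
        h, o = forward(x)
        # error signal at the output, through the sigmoid derivative o*(1-o)
        delta_o = (o - y) * o * (1 - o)
        # back-propagate: each hidden unit receives its share of the output error
        delta_h = [delta_o * w_o[j] * h[j] * (1 - h[j]) for j in range(H)]
        # gradient-descent weight updates
        for j in range(H):
            w_o[j] -= lr * delta_o * h[j]
            w_h[j][0] -= lr * delta_h[j] * x[0]
            w_h[j][1] -= lr * delta_h[j] * x[1]
            w_h[j][2] -= lr * delta_h[j]
        w_o[H] -= lr * delta_o
err_after = mse()
```

After training, the squared error on XOR typically falls well below its initial value: the hidden units have learned an internal representation that makes the two classes separable for the output unit, which is exactly the behaviour the 1986 experiments demonstrated.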
In October and November 2017, Hinton published two open-access research papers on capsule neural networks, which, according to Hinton, are finally “something that works well.”
Hinton received a bachelor’s degree in experimental psychology from King’s College, Cambridge, in 1970. In 1978 he received a PhD in Artificial Intelligence from the University of Edinburgh, under the supervision of Christopher Longuet-Higgins.
Hinton moved from the US to Canada in part because of disillusionment with the politics of the Reagan era and disapproval of military funding of artificial intelligence. Regarding the existential risk posed by artificial intelligence, Hinton typically declines to make predictions more than five years into the future, noting that exponential progress makes the uncertainty too great. However, in an informal November 2015 conversation with the noted AI-risk theorist Nick Bostrom, overheard by journalist Raffi Khatchadourian, he reportedly said that he did not expect general AI in the near future. Hinton said, “I believe that political systems will use it to terrorize people,” and indicated that, in terms of a dichotomy Bostrom had previously introduced — between those who think managing the existential risk of artificial intelligence is probably hopeless and those who think it is easy enough to be resolved automatically — he places himself in the hopeless camp. He added that “the truth is that the prospect of discovery is too sweet,” echoing J. Robert Oppenheimer’s remark when asked why he had pursued his research on the Manhattan Project. According to the same report, Hinton does not categorically rule out humans controlling an artificial superintelligence.
In 2001, Hinton received an honorary doctorate from the University of Edinburgh. He was the 2005 recipient of the IJCAI Award for Research Excellence, a lifetime-achievement award. He was also awarded the 2011 Gerhard Herzberg Canada Gold Medal for Science and Engineering, and received an honorary doctorate from the Université de Sherbrooke in 2013. In 2016 he was elected a foreign member of the National Academy of Engineering “for contributions to the theory and practice of artificial neural networks and their application to speech recognition and computer vision,” and received the IEEE/RSE Wolfson James Clerk Maxwell Award “for his pioneering and highly influential work” to endow machines with the ability to learn.
Hinton is the great-great-grandson of the logician George Boole, whose work eventually became one of the foundations of modern computer science, and of the surgeon and author James Hinton, father of the mathematician Charles Howard Hinton. Hinton’s middle name comes from another relative, George Everest. He is the nephew of the economist Colin Clark. He lost his first wife to ovarian cancer in 1994.
In 1998, Hinton was elected a Fellow of the Royal Society. In 2001 he became the first winner of the Rumelhart Prize. His certificate of election for the Royal Society reads: