Metaphors We Learn By

11/11/2022
by Roland Memisevic, et al.

Gradient-based learning using error back-propagation ("backprop") is a well-known contributor to much of the recent progress in AI. A less obvious, but arguably equally important, ingredient is parameter sharing, best known from convolutional networks. In this essay we relate parameter sharing ("weight sharing") to analogy making and to the theory of cognitive metaphor. We discuss how recurrent and auto-regressive models can be thought of as extending analogy making from static features to dynamic skills and procedures. We also discuss corollaries of this perspective, for example, how it can challenge the currently entrenched dichotomy between connectionist and "classic" rule-based views of computation.
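To make the notion of parameter sharing concrete, here is a minimal PyTorch sketch (not from the essay, just an illustrative assumption about the standard setting): a convolution reuses one small kernel at every spatial position, and a recurrent cell reuses one weight matrix at every time step, which is the sense in which recurrence extends sharing from static features to procedures unfolding in time.

```python
import torch
import torch.nn as nn

# Spatial sharing: a 1-D convolution with a single 3-tap kernel.
conv = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=3, bias=False)
x = torch.randn(1, 1, 10)            # (batch, channels, length)
y = conv(x)                          # the same 3 weights are applied at every position
print(conv.weight.numel(), y.shape)  # 3 parameters, output of length 8

# Temporal sharing: an RNN cell whose weights are reused at every step.
cell = nn.RNNCell(input_size=4, hidden_size=8)
h = torch.zeros(1, 8)
for t in range(5):                   # one shared set of weights across all 5 steps
    h = cell(torch.randn(1, 4), h)
print(h.shape)                       # (1, 8)
```

In both cases the number of learned parameters is independent of the size of the input (its length or the number of time steps), because the same weights are re-applied across positions or steps.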


