Transforming task representations to allow deep learning models to perform novel tasks

05/08/2020
by Andrew K. Lampinen, et al.

An important aspect of intelligence is the ability to adapt to a novel task without any direct experience (zero-shot), based on its relationship to previous tasks. Humans can exhibit this cognitive flexibility. By contrast, deep-learning models that achieve superhuman performance on specific tasks generally fail to adapt to even slight task alterations. To address this, we propose a general computational framework for adapting to novel tasks based on their relationship to prior tasks. We begin by learning vector representations of tasks. To adapt to new tasks, we propose meta-mappings: higher-order tasks that transform basic task representations. We demonstrate this framework across a wide variety of tasks and computational paradigms, ranging from regression to image classification and reinforcement learning. We compare to both human adaptability and language-based approaches to zero-shot learning. Across these domains, meta-mapping is successful, often achieving 80-90% performance without any data, even on a novel task that directly contradicts prior experience. We further show that using a meta-mapped representation as a starting point can dramatically accelerate subsequent learning on a new task, substantially reducing learning time and cumulative error. Our results provide insight into a possible computational basis of intelligent adaptability, and offer a possible framework for modeling cognitive flexibility and building more flexible artificial intelligence.
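To make the core idea concrete, here is a minimal sketch (not the authors' code) of the meta-mapping setup: a basic-task network is conditioned on a learned task embedding, and a meta-mapping network transforms one task embedding into another so the model can attempt a related task zero-shot. All class names, dimensions, and the concatenation-based conditioning are illustrative assumptions; the paper's actual architecture differs in detail.

```python
import torch
import torch.nn as nn

EMB_DIM = 64  # assumed size of the task-representation vectors


class TaskConditionedModel(nn.Module):
    """Performs a basic task, conditioned on a task embedding."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim + EMB_DIM, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, x: torch.Tensor, task_emb: torch.Tensor) -> torch.Tensor:
        # Condition on the task by concatenating its representation
        # to every input example.
        emb = task_emb.expand(x.shape[0], -1)
        return self.body(torch.cat([x, emb], dim=-1))


class MetaMapping(nn.Module):
    """A higher-order task: maps one task embedding to another
    (e.g. "negate the targets" in a regression domain)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMB_DIM, 128), nn.ReLU(),
            nn.Linear(128, EMB_DIM),
        )

    def forward(self, task_emb: torch.Tensor) -> torch.Tensor:
        return self.net(task_emb)


# Zero-shot adaptation: transform a learned task embedding with a
# (trained) meta-mapping, then run the task network with the result,
# using no data from the new task.
model = TaskConditionedModel(in_dim=8, out_dim=1)
negate = MetaMapping()  # hypothetical trained meta-mapping
task_emb = torch.randn(1, EMB_DIM)    # embedding of a known task
new_task_emb = negate(task_emb)       # embedding for the novel task
x = torch.randn(32, 8)
y_zero_shot = model(x, new_task_emb)  # predictions without new-task data
```

In this sketch, the transformed embedding can also serve as an initialization that is then fine-tuned on the new task, which is the mechanism the abstract credits with reducing learning time and cumulative error.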
