
Embodiment dictates learnability in neural controllers

by Joshua Powers, et al.

Catastrophic forgetting continues to severely restrict the learnability of controllers suitable for multiple task environments. Efforts to combat catastrophic forgetting reported in the literature to date have focused on how control systems can be updated more rapidly, hastening their adjustment from good initial settings to new environments, or, more conservatively, on suppressing their tendency to overfit to any one environment. In robotics, the environment includes the robot's own body: its shape and material properties, and how its actuators and sensors are distributed along its mechanical structure. Here we demonstrate for the first time how one such design decision (sensor placement) can alter the landscape of the loss function itself, either expanding or shrinking the weight manifolds containing suitable controllers for each individual task. This increases or decreases the probability that these manifolds overlap across tasks, and thus reduces or induces the potential for catastrophic forgetting.
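The core idea can be illustrated with a toy sketch (not the paper's method): give each task a loss over a one-dimensional weight, let a hypothetical "sensor placement" parameter `s` control how flat each task's loss basin is, and measure how much the tasks' near-optimal weight sets overlap. Sharper basins yield disjoint per-task solution sets (forgetting is likely when training sequentially); flatter basins widen the sets until a single weight setting can serve both tasks.

```python
import numpy as np

# 1-D weight grid standing in for a controller's weight space.
w = np.linspace(-3.0, 3.0, 2001)

def near_optimal_set(loss, tol=0.05):
    """Boolean mask of weights whose loss is within tol of the minimum."""
    return loss <= loss.min() + tol

def overlap_fraction(s):
    """Fraction of the weight grid that is near-optimal for BOTH toy tasks.

    s is a hypothetical embodiment parameter (e.g. sensor placement):
    larger s flattens each task's loss basin, widening its solution set.
    """
    loss_a = ((w - 1.0) ** 2) / s   # task A: minimum near w = +1
    loss_b = ((w + 1.0) ** 2) / s   # task B: minimum near w = -1
    good_a = near_optimal_set(loss_a)
    good_b = near_optimal_set(loss_b)
    return (good_a & good_b).mean()

# Sharp basins (small s): the two near-optimal sets do not intersect.
# Flat basins (large s): the sets widen and overlap, so one weight
# vector can solve both tasks without interference.
sharp = overlap_fraction(0.5)
flat = overlap_fraction(50.0)
```

Here the body parameter only rescales the loss curvature; in the paper, sensor placement reshapes the loss landscape itself, but the consequence is the same: the geometry of the per-task solution manifolds, not the learning rule, decides whether overlap (and hence interference-free multitask learning) is possible.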



