A sparse code for neuro-dynamic programming and optimal control

06/22/2020
by P. N. Loxley, et al.

Sparse codes have been suggested to offer certain computational advantages over other neural representations of sensory data. To explore this viewpoint, a sparse code is used to represent natural images in an optimal control task solved with neuro-dynamic programming, and its computational properties are investigated. The central finding is that the sparse-code properties of over-completeness and decorrelation lead to important advantages for neuro-dynamic programming. The sparse code is found to maximise the memory capacity of a linear network by transforming the design matrix of the least-squares problem to one of full rank. It also improves the conditioning of the Hessian matrix of the least-squares problem, thereby increasing the speed at which the network weights are learned when inputs are correlated, as in the case of natural images. When many tasks are learned sequentially, the sparse code makes a linear network less prone to "forgetting" previously learned tasks (catastrophic forgetting) by reducing the chance that different tasks overlap and interfere. An over-complete sparse code is found to remain approximately decorrelated, allowing it to increase memory capacity efficiently beyond what is possible for a complete code. A 2.25-times over-complete sparse code is shown to at least double memory capacity compared with a complete sparse code. This is used in a partitioned representation to avoid catastrophic forgetting, allowing a large number of tasks to be learned sequentially and yielding a cost-to-go function approximator for each partition.
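The rank and memory-capacity claims above can be illustrated with a small toy sketch. This is a minimal example assuming NumPy, and it uses a random over-complete dictionary with hard top-k thresholding as a crude stand-in for the learned sparse code described in the paper (it is not the authors' encoder): correlated inputs confined to a low-dimensional subspace give a rank-deficient design matrix that a linear read-out cannot fit exactly, while a 2.25-times over-complete sparse encoding of the same inputs typically lifts the rank of the design matrix and reduces the least-squares residual.

    # Toy sketch (not code from the paper): rank-deficient design matrix from
    # correlated inputs vs. a top-k sparse encoding over an over-complete
    # random dictionary, used here only as a stand-in for a learned sparse code.
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_dim = 50, 40

    # Correlated inputs confined to a 10-dimensional subspace (rank-deficient).
    basis = rng.standard_normal((10, n_dim))
    X_raw = rng.standard_normal((n_samples, 10)) @ basis

    # 2.25x over-complete random dictionary and hard top-k sparsification.
    n_code, k = int(2.25 * n_dim), 10
    D = rng.standard_normal((n_dim, n_code))
    D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms
    A = X_raw @ D                                   # coefficients before thresholding
    small = np.argsort(np.abs(A), axis=1)[:, :-k]   # indices of all but the top k
    X_code = A.copy()
    np.put_along_axis(X_code, small, 0.0, axis=1)   # zero all but the k largest

    # Arbitrary targets to memorise (one per sample).
    y = rng.standard_normal(n_samples)

    def fit_residual(X, y):
        """Rank of the design matrix and residual of the least-squares fit."""
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.linalg.matrix_rank(X), np.linalg.norm(X @ w - y)

    print("raw inputs : rank %d, residual %.3f" % fit_residual(X_raw, y))
    print("sparse code: rank %d, residual %.3f" % fit_residual(X_code, y))

In this toy setting the raw design matrix has rank 10, so the linear read-out leaves a large residual on random targets, whereas the nonlinearity of the top-k selection typically raises the rank of the coded design matrix towards the number of samples, shrinking the residual; this is only an informal illustration of the full-rank argument stated in the abstract.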
