Supervised Generative Reconstruction: An Efficient Way To Flexibly Store and Recognize Patterns

12/13/2011
by Tsvi Achler

Matching animal-like flexibility in recognition, and the ability to quickly incorporate new information, remains difficult; these limits have yet to be adequately addressed in neural models and recognition algorithms. This work proposes a configuration for recognition that maintains the same function as conventional algorithms while avoiding their combinatorial problems. Feedforward recognition algorithms, such as classical artificial neural networks and machine learning algorithms, are known to be subject to catastrophic interference and forgetting: modifying or learning new information (associations between patterns and labels) causes loss of previously learned information. I demonstrate, using mathematical analysis, how supervised generative models with feedforward and feedback connections can emulate feedforward algorithms yet avoid catastrophic interference and forgetting. Learned information in generative models is stored in a more intuitive form that represents the fixed points, or solutions, of the network, and moreover displays difficulties similar to those seen in cognitive phenomena. The brain-like capabilities and limits associated with generative models suggest the brain may perform recognition and store information using a similar approach. Given the central role of recognition, progress in understanding the underlying principles may reveal significant insight into how to better study, and integrate with, the brain.
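The abstract does not give the update equations, so the following is only a minimal sketch of the kind of supervised generative reconstruction loop described in Achler's related work on regulatory feedback: stored patterns serve directly as generative weights, recognition is an iterative feedback/feedforward settling toward a fixed point where the active classes reconstruct the input, and storing a new class is a single row append that leaves previously learned rows untouched (hence no catastrophic interference). The function names, variable names, and exact update rule here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def recognize(x, W, n_iter=100, eps=1e-9):
    """Iteratively infer class activations y such that the stored
    patterns (rows of W) reconstruct the input x.

    x : (d,) non-negative input pattern
    W : (k, d) matrix with one stored pattern per row (the fixed points)
    """
    k = W.shape[0]
    y = np.ones(k) / k               # start from uniform class activations
    n = W.sum(axis=1) + eps          # per-class normalizer (number/weight of inputs)
    for _ in range(n_iter):
        recon = y @ W + eps          # feedback step: current reconstruction of x
        q = x / recon                # input-wise mismatch (gain) signal
        y = y * (W @ q) / n          # feedforward step: re-weight each class
    return y

# Storing a new pattern is a one-row append -- no retraining of old rows:
W = np.array([[1., 1., 0., 0.],     # stored pattern for label A
              [0., 0., 1., 1.]])    # stored pattern for label B
W = np.vstack([W, [1., 0., 0., 1.]])  # add label C without touching A or B

print(recognize(np.array([1., 1., 0., 0.]), W))  # activation concentrates on A
```

Under these assumed dynamics the learned information is exactly the pattern matrix W, which is why it is "stored in a more intuitive form": each row is itself a fixed point of the network rather than a set of discriminative weights entangled across classes.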
