Multiple-Modality Associative Memory: A Framework for Learning

07/11/2022
by Rodrigo Simas, et al.

Drawing from memory the face of a friend you have not seen in years is a difficult task. However, if you happen to cross paths, you would easily recognize each other. Biological memory is equipped with an impressive compression algorithm that can store the essential and then infer the details to match perception. Willshaw's model of associative memory is a likely candidate for a computational model of this brain function, but its application to real-world data is hindered by the so-called Sparse Coding Problem. Using a recently proposed sparse encoding prescription [31], which maps visual patterns into binary feature maps, we were able to analyze the behavior of the Willshaw Network (WN) on real-world data and gain key insights into the strengths of the model. To further enhance the capabilities of the WN, we propose the Multiple-Modality architecture. In this new setting, the memory stores several modalities (e.g., visual or textual) simultaneously. After training, the model can infer missing modalities when only a subset is perceived, thus serving as a flexible framework for learning tasks. We evaluated the model on the MNIST dataset. By storing both the images and labels as modalities, we were able to successfully perform pattern completion, classification, and generation with a single model.
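To make the storage and retrieval mechanism concrete, here is a minimal sketch of the classical Willshaw rule applied to two toy "multimodal" patterns (a few image bits concatenated with a one-hot label). The pattern sizes, sparsity, and threshold choice are illustrative assumptions, not the paper's actual configuration; the binary-OR storage rule and sum-then-threshold retrieval are the standard Willshaw operations.

```python
import numpy as np

def store(patterns):
    """Willshaw storage: binary weights are the OR of the patterns' outer products."""
    n = patterns.shape[1]
    W = np.zeros((n, n), dtype=np.uint8)
    for z in patterns:
        W |= np.outer(z, z).astype(np.uint8)
    return W

def complete(W, cue):
    """Retrieve a full pattern from a partial cue by thresholding dendritic sums."""
    sums = W @ cue            # how many active cue units feed each neuron
    theta = cue.sum()         # fire only if connected to every active cue bit
    return (sums >= theta).astype(np.uint8)

# Two toy patterns: first 8 bits are a sparse "image" code, last 3 a one-hot label.
z1 = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0], dtype=np.uint8)
z2 = np.array([0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0], dtype=np.uint8)
W = store(np.stack([z1, z2]))

cue = z1.copy()
cue[8:] = 0                   # present the image modality only, label unknown
out = complete(W, cue)
print(out[8:])                # recovered label one-hot: [1 0 0]
```

Because the two stored patterns share no active units, the cue's dendritic sums single out exactly the units of the original pattern, so the missing label modality is completed without crosstalk; with denser or overlapping codes, spurious units can fire, which is precisely the capacity trade-off the sparse encoding is meant to manage.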


