Learning Representations from Temporally Smooth Data

12/12/2020
by Shima Rahimi Moghaddam, et al.

Events in the real world are correlated across nearby points in time, and we must learn from this temporally smooth data. However, when neural networks are trained to categorize or reconstruct single items, the common practice is to randomize the order of training items. What are the effects of temporally smooth training data on the efficiency of learning? We first tested the effects of smoothness in training data on incremental learning in feedforward nets and found that smoother data slowed learning. Moreover, sampling so as to minimize temporal smoothness produced more efficient learning than sampling randomly. If smoothness generally impairs incremental learning, then how can networks be modified to benefit from smoothness in the training data? We hypothesized that two simple brain-inspired mechanisms, leaky memory in activation units and memory-gating, could enable networks to rapidly extract useful representations from smooth data. Across all levels of data smoothness, these brain-inspired architectures achieved more efficient category learning than feedforward networks. This advantage persisted, even when leaky memory networks with gating were trained on smooth data and tested on randomly-ordered data. Finally, we investigated how these brain-inspired mechanisms altered the internal representations learned by the networks. We found that networks with multi-scale leaky memory and memory-gating could learn internal representations that un-mixed data sources which vary on fast and slow timescales across training samples. Altogether, we identified simple mechanisms enabling neural networks to learn more quickly from temporally smooth data, and to generate internal representations that separate timescales in the training signal.
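The abstract does not spell out the mechanisms mathematically, but "leaky memory in activation units" is commonly implemented as exponential integration of past activations, and "memory-gating" as flushing that memory when the input changes abruptly. The sketch below illustrates that interpretation; the function name, the decay parameter `lam`, and the change-magnitude gate are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def leaky_memory(inputs, lam=0.5, gate_threshold=None):
    """Leaky integration over a temporal sequence of input vectors.

    h_t = lam * h_{t-1} + (1 - lam) * x_t   (exponential leaky memory)

    If gate_threshold is set, the memory is reset whenever the input
    jumps by more than that amount -- a crude event-boundary gate, so
    stale context from the previous "event" does not blend into the
    next one. (Interpretation assumed from the abstract, not verified.)
    """
    h = np.zeros(inputs.shape[1])
    outputs = []
    prev = None
    for x in inputs:
        if gate_threshold is not None and prev is not None:
            if np.linalg.norm(x - prev) > gate_threshold:
                h = np.zeros_like(h)  # gate: flush memory at the boundary
        h = lam * h + (1 - lam) * x   # leak old state, integrate new input
        outputs.append(h.copy())
        prev = x
    return np.array(outputs)

# On a constant input the memory converges to the input value;
# on a sudden jump, the gate discards the old context entirely.
smooth = leaky_memory(np.ones((5, 1)), lam=0.5)
jump = leaky_memory(np.array([[1.], [1.], [1.], [10.]]),
                    lam=0.5, gate_threshold=5.0)
```

Stacking such units with different `lam` values gives the multi-scale variant the abstract describes: fast-decaying units track fast-varying sources while slow-decaying units track slow ones, which is how the network can separate timescales in the training signal.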


