Unsupervised Transfer Learning for Spatiotemporal Predictive Networks

by Zhiyu Yao et al.

This paper explores a new research problem of unsupervised transfer learning across multiple spatiotemporal prediction tasks. Unlike most existing transfer learning methods, which focus on bridging the discrepancy between supervised tasks, we study how to transfer knowledge from a zoo of models learned without supervision to another predictive network. Our motivation is that models from different sources are expected to understand complex spatiotemporal dynamics from different perspectives, and can therefore effectively supplement the new task even when that task has sufficient training samples. Technically, we propose a differentiable framework named transferable memory. It adaptively distills knowledge from a bank of memory states of multiple pretrained RNNs and applies it to the target network via a novel recurrent structure called the Transferable Memory Unit (TMU). Compared with finetuning, our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and benefits the target task even from less relevant pretext tasks.
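The adaptive distillation step described above can be pictured as an attention-style read over the bank of source memory states: the target network's state queries the bank, and the source memories are fused by similarity-based weights. The sketch below is purely illustrative and is not the paper's actual TMU; the function names, the dot-product scoring, and the softmax gating are all assumptions standing in for the learned distillation mechanism.

```python
import math


def softmax(scores):
    """Numerically stable softmax over a list of floats."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def transferable_memory_read(query, memory_bank):
    """Illustrative read over a bank of source-RNN memory states.

    query:       the target network's hidden state (list of floats).
    memory_bank: memory states from pretrained source RNNs,
                 one list of floats per source model.

    Scores each source memory by dot-product similarity to the query
    (a stand-in for the paper's learned gates), then returns the
    softmax-weighted combination of the bank plus the weights used.
    """
    scores = [sum(q * m for q, m in zip(query, mem)) for mem in memory_bank]
    weights = softmax(scores)
    dim = len(query)
    fused = [
        sum(w * mem[i] for w, mem in zip(weights, memory_bank))
        for i in range(dim)
    ]
    return fused, weights


# Example: a 2-d query attending over two source memories.
fused, weights = transferable_memory_read(
    [1.0, 0.0],            # target hidden state
    [[1.0, 0.0],           # memory state from source model A
     [0.0, 1.0]],          # memory state from source model B
)
```

Because the weights form a convex combination, the fused state always lies within the span of the source memories; in the paper this aggregation is learned end-to-end together with the target predictive network rather than fixed as a dot-product rule.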




Related papers:

- Transfer Learning Approaches for Knowledge Discovery in Grid-based Geo-Spatiotemporal Data
- ModeRNN: Harnessing Spatiotemporal Mode Collapse in Unsupervised Predictive Learning
- Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning
- Zoo-Tuning: Adaptive Transfer from a Zoo of Models
- Using Multi-task and Transfer Learning to Solve Working Memory Tasks
- Blissful Ignorance: Anti-Transfer Learning for Task Invariance