
Learning task-agnostic representation via toddler-inspired learning

by Kwanyoung Park, et al.

One of the inherent limitations of current AI systems, stemming from passive learning mechanisms (e.g., supervised learning), is that they perform well on labeled datasets but cannot deduce knowledge on their own. To tackle this problem, we draw inspiration from a highly intentional learning system that learns via action: the toddler. Inspired by the toddler's learning procedure, we design an interactive agent that learns and stores a task-agnostic visual representation while exploring and interacting with objects in a virtual environment. Experimental results show that the obtained representation transfers to various vision tasks such as image classification, object localization, and distance estimation. Specifically, the proposed model achieved 100%, noticeably better than the autoencoder-based model (99.7%) and comparable with supervised models (100%).


