Universal representations: The missing link between faces, text, planktons, and cat breeds

01/25/2017
by Hakan Bilen, et al.

With the advent of large labelled datasets and high-capacity models, the performance of machine vision systems has been improving rapidly. However, the technology still has major limitations, starting from the fact that different vision problems are still solved by different models, each trained from scratch or fine-tuned on the target data. The human visual system, in stark contrast, learns a universal representation for vision early in an individual's life. This representation works well for an enormous variety of vision problems with little or no change, and has the major advantage of requiring little training data to solve any of them.
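The abstract does not spell out an architecture, but the idea lends itself to a concrete sketch: share all convolutional weights across domains and keep only lightweight, domain-specific components, such as per-domain batch normalisation and a per-domain classifier head, one common way this has been instantiated in the multi-domain literature. The names and sizes below (UniversalNet, DomainSpecificBN, the layer widths and class counts) are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch: a single shared convolutional trunk whose batch-norm
# parameters and final classifier are the only per-domain components.
# All names, layer sizes, and domain counts here are illustrative.
import torch
import torch.nn as nn

class DomainSpecificBN(nn.Module):
    """One BatchNorm2d per domain; every other weight in the net is shared."""
    def __init__(self, num_features, num_domains):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_domains)
        )

    def forward(self, x, domain):
        return self.bns[domain](x)

class UniversalNet(nn.Module):
    def __init__(self, num_domains, classes_per_domain):
        super().__init__()
        # Shared convolutional filters: trained once, reused by every domain.
        self.conv1 = nn.Conv2d(3, 64, 3, padding=1)
        self.conv2 = nn.Conv2d(64, 128, 3, padding=1)
        # Domain-specific normalisation: the cheap, per-task adaptation.
        self.bn1 = DomainSpecificBN(64, num_domains)
        self.bn2 = DomainSpecificBN(128, num_domains)
        self.pool = nn.AdaptiveAvgPool2d(1)
        # One lightweight linear head per domain (faces, text, plankton, ...).
        self.heads = nn.ModuleList(
            nn.Linear(128, c) for c in classes_per_domain
        )

    def forward(self, x, domain):
        x = torch.relu(self.bn1(self.conv1(x), domain))
        x = torch.relu(self.bn2(self.conv2(x), domain))
        x = self.pool(x).flatten(1)
        return self.heads[domain](x)

# Usage: the same shared trunk serves every task; only BN + head differ,
# so adapting to a new domain touches very few parameters.
net = UniversalNet(num_domains=4, classes_per_domain=[100, 36, 50, 37])
logits = net(torch.randn(8, 3, 64, 64), domain=2)
```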

Related research

05/22/2023 · Contextualising Implicit Representations for Semantic Tasks
02/09/2023 · Knowledge is a Region in Weight Space for Fine-tuned Language Models
06/14/2022 · Self-Supervision on Images and Text Reduces Reliance on Visual Shortcut Features
10/24/2020 · ReadOnce Transformers: Reusable Representations of Text for Transformers
11/08/2019 · Ruminating Word Representations with Random Noised Masker
09/26/2022 · Self-supervised similarity models based on well-logging data
02/13/2018 · Challenging Images For Minds and Machines
