Distributional Modeling on a Diet: One-shot Word Learning from Text Only

04/14/2017
by Su Wang, et al.

We test whether distributional models can perform one-shot learning of definitional properties from text alone. Using Bayesian models, we find that first learning overarching structure in the known data (regularities in textual contexts and in properties) helps one-shot learning, and that individual context items can be highly informative. Our experiments show that the model can learn properties from a single exposure when given an informative utterance.
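To make the idea concrete, here is a rough sketch of Bayesian one-shot property learning from a single utterance. Everything in it is assumed for illustration: the toy lexicon, the property names, and the naive-Bayes-style scoring are not the paper's actual model, only a minimal analogue of "learn regularities from known words, then update from one informative context".

```python
from collections import Counter

# "Known data": words with definitional properties and example context words
# (a hypothetical toy lexicon, not the paper's training data).
known = {
    "dog":   {"props": {"animate", "pet"},  "contexts": ["barked", "fed", "walked"]},
    "cat":   {"props": {"animate", "pet"},  "contexts": ["purred", "fed"]},
    "apple": {"props": {"edible", "fruit"}, "contexts": ["ate", "picked"]},
    "pear":  {"props": {"edible", "fruit"}, "contexts": ["ate", "ripened"]},
}

# Learn overarching structure: how often each context word co-occurs
# with each property across the known words.
prop_counts = Counter()
ctx_given_prop = Counter()
for entry in known.values():
    for p in entry["props"]:
        prop_counts[p] += 1
        for c in entry["contexts"]:
            ctx_given_prop[(c, p)] += 1

def one_shot(context_words, alpha=0.5):
    """Posterior over properties for a novel word after one utterance."""
    total = sum(prop_counts.values())
    scores = {}
    for p, n in prop_counts.items():
        score = n / total  # prior P(property) from the known data
        for c in context_words:
            # smoothed likelihood P(context word | property)
            score *= (ctx_given_prop[(c, p)] + alpha) / (n + alpha * len(prop_counts))
        scores[p] = score
    z = sum(scores.values())
    return {p: s / z for p, s in scores.items()}

# A single informative utterance about a novel word: "we ate the wug".
posterior = one_shot(["ate"])
# "edible" and "fruit" now dominate "animate" and "pet".
```

The point of the sketch is the division of labor in the abstract: the co-occurrence counts play the role of structure learned from known data, and a single informative context item ("ate") is enough to shift the posterior sharply.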


research · 05/12/2022
Using dependency parsing for few-shot learning in distributional semantics
In this work, we explore the novel idea of employing dependency parsing ...

research · 09/19/2020
Few-shot learning using pre-training and shots, enriched by pre-trained samples
We use the EMNIST dataset of handwritten digits to test a simple approac...

research · 07/25/2021
Will Multi-modal Data Improves Few-shot Learning?
Most few-shot learning models utilize only one modality of data. We woul...

research · 11/22/2018
Self Paced Adversarial Training for Multimodal Few-shot Learning
State-of-the-art deep learning algorithms yield remarkable results in ma...

research · 06/07/2021
One-shot learning of paired associations by a reservoir computing model with Hebbian plasticity
One-shot learning can be achieved by algorithms and animals, but how the...

research · 07/09/2020
Wandering Within a World: Online Contextualized Few-Shot Learning
We aim to bridge the gap between typical human and machine-learning envi...

research · 11/22/2017
Unleashing the Potential of CNNs for Interpretable Few-Shot Learning
Convolutional neural networks (CNNs) have been generally acknowledged as...
