Which one is the dax? Achieving mutual exclusivity with neural networks

04/08/2020
by Kristina Gulordava, et al.

Learning words is a challenge for children and neural networks alike, but what they struggle with can differ. When prompted with novel words, children tend to associate them with unfamiliar referents, a behavior taken to reflect a propensity toward mutual exclusivity. In this study, we investigate whether, and under which circumstances, neural models can exhibit analogous behavior. To this end, we evaluate cross-situational neural models on novel items with distractors, contrasting the interaction between different word-learning and referent-selection strategies. We find that, as long as they bring about competition between words, constraints in both learning and referent selection can improve success in tasks with novel words and referents. For neural network research, our findings clarify which modeling options enhance performance in tasks where mutual exclusivity is advantageous. For cognitive research, they highlight latent interactions between word learning, referent-selection mechanisms, and the structure of stimuli.
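To illustrate the referent-selection strategy the abstract describes, here is a minimal, hypothetical sketch (not the authors' actual model): a learner holds word-object association scores, and for a novel word, competition from familiar words pushes selection toward the referent least claimed by known vocabulary.

```python
# Hypothetical mutual-exclusivity-style referent selection.
# `assoc` maps (word, referent) pairs to learned association strengths.

def me_select(word, referents, assoc):
    """Pick the best referent for `word`, penalizing referents
    that familiar words already claim (competition between words)."""
    def score(ref):
        direct = assoc.get((word, ref), 0.0)
        # How strongly *other* words already claim this referent.
        competition = max(
            (s for (w, r), s in assoc.items() if r == ref and w != word),
            default=0.0,
        )
        return direct - competition
    return max(referents, key=score)

# Familiar pairs: "ball" and "cup" are known; "dax" is novel.
assoc = {("ball", "BALL"): 0.9, ("cup", "CUP"): 0.8}
referents = ["BALL", "CUP", "DAX_OBJECT"]

print(me_select("dax", referents, assoc))  # -> DAX_OBJECT
```

Because "dax" has no direct associations, the competition penalty from "ball" and "cup" leaves the unfamiliar object as the winning referent, mirroring the mutual-exclusivity behavior observed in children.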

