Efficient Induction of Language Models Via Probabilistic Concept Formation

12/22/2022
by   Christopher J. MacLellan, et al.

This paper presents a novel approach to the acquisition of language models from corpora. The framework builds on Cobweb, an early system for constructing taxonomic hierarchies of probabilistic concepts that used a tabular, attribute-value encoding of training cases and concepts, making it unsuitable for sequential input like language. In response, we explore three new extensions to Cobweb – the Word, Leaf, and Path variants. These systems encode each training case as an anchor word and surrounding context words, and they store probabilistic descriptions of concepts as distributions over anchor and context information. As in the original Cobweb, a performance element sorts a new instance downward through the hierarchy and uses the final node to predict missing features. Learning is interleaved with performance, updating concept probabilities and hierarchy structure as classification occurs. Thus, the new approaches process training cases in an incremental, online manner that is very different from most methods for statistical language learning. We examine how well the three variants place synonyms together and keep homonyms apart, their ability to recall synonyms as a function of training set size, and their training efficiency. Finally, we discuss related work on incremental learning and directions for further research.
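To make the encoding concrete, the following is a minimal, hypothetical sketch of the core loop the abstract describes: each training case is an anchor word plus context words, concepts store word-count distributions, new instances are sorted to the best-matching concept (updating its counts incrementally), and a missing anchor is predicted from the chosen concept's distribution. This is a one-level flattening, not the actual Cobweb algorithm — the real system builds a taxonomic hierarchy and uses a category-utility measure, whereas the smoothed log-likelihood scoring and the `new_leaf_threshold` parameter here are illustrative assumptions.

```python
import math
from collections import Counter

class Concept:
    """A probabilistic concept: count distributions over anchor and context words."""
    def __init__(self):
        self.anchors = Counter()
        self.context = Counter()

    def update(self, anchor, context):
        # Learning is interleaved with classification: just increment counts.
        self.anchors[anchor] += 1
        self.context.update(context)

    def score(self, context):
        # Smoothed log-probability of the context words under this concept.
        total = sum(self.context.values()) + 1
        return sum(math.log((self.context[w] + 1) / (total + 1)) for w in context)

class FlatCobwebSketch:
    """One-level sketch: instances sort to the best-fitting concept or start a new one.
    (The real Cobweb sorts downward through a hierarchy instead.)"""
    def __init__(self, new_leaf_threshold=-1.2):  # per-context-word log-prob cutoff (assumed)
        self.leaves = []
        self.threshold = new_leaf_threshold

    def ifit(self, anchor, context):
        # Incremental fit: find the best-matching concept, else create a new one.
        best, best_score = None, float("-inf")
        for leaf in self.leaves:
            s = leaf.score(context)
            if s > best_score:
                best, best_score = leaf, s
        if best is None or best_score < self.threshold * len(context):
            best = Concept()
            self.leaves.append(best)
        best.update(anchor, context)
        return best

    def predict_anchor(self, context):
        # Performance element: use the best-matching concept to fill the missing anchor.
        best = max(self.leaves, key=lambda leaf: leaf.score(context))
        return best.anchors.most_common(1)[0][0]
```

For example, after incrementally fitting `("dog", ["barked", "loudly"])` twice and `("cat", ["meowed", "softly"])` once, `predict_anchor(["barked", "loudly"])` recovers `"dog"` from the first concept's anchor distribution, illustrating how classification and prediction share one pass over the hierarchy.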
