Exploring Category Structure with Contextual Language Models and Lexical Semantic Networks

02/14/2023
by Joseph Renner, et al.

Recent work on predicting category structure with distributional models, using either static word embeddings (Heyman and Heyman, 2019) or contextualized language models (CLMs) (Misra et al., 2021), reports low correlations with human ratings, thus calling into question the plausibility of these models as accounts of human semantic memory. In this work, we revisit this question by testing a wider array of methods for probing CLMs to predict typicality scores. First, our experiments, using BERT (Devlin et al., 2018), show the importance of choosing the right type of CLM probe, as our best BERT-based typicality prediction methods improve substantially over previous work. Second, our results highlight the importance of polysemy in this task: our best results are obtained when a disambiguation mechanism is used. Finally, additional experiments reveal that Information Content-based similarity measures over WordNet (Miller, 1995), also endowed with disambiguation, match the performance of the best BERT-based method, and in fact capture complementary information that can be combined with BERT to achieve enhanced typicality predictions.
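As a concrete illustration of the kind of CLM probe discussed in the abstract, below is a minimal Python sketch of a cloze-style typicality probe in the spirit of Misra et al. (2021), assuming the HuggingFace transformers library: the probability BERT's masked-language-model head assigns to a category name in a simple taxonomic frame serves as a proxy typicality score for an exemplar. This is not the paper's exact method; the sentence frame, model checkpoint, and helper name are illustrative choices.

import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def typicality_score(exemplar: str, category: str) -> float:
    """P(category | 'A {exemplar} is a [MASK].') under BERT's MLM head.

    Assumes the category name is a single WordPiece token (e.g. 'bird').
    """
    text = f"A {exemplar} is a {tokenizer.mask_token}."
    inputs = tokenizer(text, return_tensors="pt")
    # Locate the [MASK] position in the tokenized input.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**inputs).logits
    # Probability distribution over the vocabulary at the mask slot.
    probs = torch.softmax(logits[0, mask_pos], dim=-1)
    return probs[tokenizer.convert_tokens_to_ids(category)].item()

# Higher scores should track human typicality ratings,
# e.g. a robin is rated a more typical bird than a penguin.
print(typicality_score("robin", "bird"))
print(typicality_score("penguin", "bird"))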
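The Information Content-based WordNet similarities mentioned above can likewise be sketched with NLTK, which ships corpus-derived IC files and IC-based measures such as Resnik and Lin similarity. The hand-picked synsets below stand in for the paper's disambiguation mechanism, which is not reproduced here; this is a sketch of one common IC-based measure, not the paper's exact setup.

import nltk
nltk.download("wordnet")      # one-time resource downloads
nltk.download("wordnet_ic")

from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

# Information Content counts estimated from the Brown corpus.
brown_ic = wordnet_ic.ic("ic-brown.dat")

def ic_similarity(exemplar_synset: str, category_synset: str) -> float:
    """Lin similarity: 2 * IC(least common subsumer) / (IC(s1) + IC(s2))."""
    e = wn.synset(exemplar_synset)
    c = wn.synset(category_synset)
    return e.lin_similarity(c, brown_ic)

# Compare exemplars of the category "bird" (senses chosen by hand
# in place of an automatic disambiguation step):
print(ic_similarity("robin.n.01", "bird.n.01"))
print(ic_similarity("penguin.n.01", "bird.n.01"))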

Related research

05/02/2020
BERT-kNN: Adding a kNN Search Component to Pretrained Language Models for Better QA
Khandelwal et al. (2020) show that a k-nearest-neighbor (kNN) component ...

10/27/2020
Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias
Contextualized word embeddings have been replacing standard embeddings a...

05/02/2020
Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese
We examine a methodology using neural language models (LMs) for analyzin...

11/14/2022
SPE: Symmetrical Prompt Enhancement for Fact Probing
Pretrained language models (PLMs) have been shown to accumulate factual ...

10/06/2020
Exploring BERT's Sensitivity to Lexical Cues using Tests from Semantic Priming
Models trained to estimate word probabilities in context have become ubi...

10/14/2021
P-Adapters: Robustly Extracting Factual Information from Language Models with Diverse Prompts
Recent work (e.g. LAMA (Petroni et al., 2019)) has found that the qualit...
