Do PLMs Know and Understand Ontological Knowledge?

09/12/2023
by Weiqi Wu, et al.

Ontological knowledge, which comprises classes and properties and their relationships, is integral to world knowledge. It is therefore important to explore whether Pretrained Language Models (PLMs) know and understand such knowledge. However, existing PLM-probing studies focus mainly on factual knowledge and lack a systematic probing of ontological knowledge. In this paper, we probe whether PLMs store ontological knowledge and whether they have a semantic understanding of that knowledge rather than a rote memorization of its surface form. To probe whether PLMs know ontological knowledge, we investigate how well they memorize: (1) types of entities; (2) hierarchical relationships among classes and properties, e.g., Person is a subclass of Animal and Member of Sports Team is a subproperty of Member of; (3) domain and range constraints of properties, e.g., the subject of Member of Sports Team should be a Person and the object should be a Sports Team. To further probe whether PLMs truly understand ontological knowledge beyond memorization, we comprehensively study whether they can reliably perform logical reasoning with given knowledge according to ontological entailment rules. Our probing results show that PLMs can memorize certain ontological knowledge and utilize implicit knowledge in reasoning. However, both memorization and reasoning performance are less than perfect, indicating incomplete knowledge and understanding.
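To make the probing setup concrete, the sketch below shows how a cloze-style probe of ontological knowledge can be run against a masked language model with Hugging Face Transformers. It is an illustrative sketch of the general technique rather than the paper's actual evaluation harness; the bert-base-uncased checkpoint, the prompt templates, and the candidate answer sets are assumptions made for this example.

```python
# A minimal sketch (not the paper's released probing harness) of a
# cloze-style probe for ontological knowledge using a masked LM.
# The templates and candidate answers below are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"  # assumed checkpoint for the example
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def mask_probe(template, candidates):
    """Rank single-token candidates for the [MASK] slot in `template`."""
    inputs = tokenizer(template, return_tensors="pt")
    # Locate the (single) mask position in the tokenized input.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_logits = logits[0, mask_pos.item()]
    scores = []
    for cand in candidates:
        ids = tokenizer(cand, add_special_tokens=False).input_ids
        if len(ids) != 1:  # this sketch only handles single-token candidates
            continue
        scores.append((cand, mask_logits[ids[0]].item()))
    return sorted(scores, key=lambda x: x[1], reverse=True)

# Memorization probe: a subclass relationship (cf. "Person is a subclass of Animal").
print(mask_probe("Person is a subclass of [MASK].", ["animal", "plant", "city"]))

# Memorization probe: the domain constraint of a property.
print(mask_probe("The subject of member of sports team should be a [MASK].",
                 ["person", "book", "river"]))

# Reasoning-style probe: state a premise in-context and test whether the model
# applies the entailed domain constraint rather than recalling a stored fact.
print(mask_probe("Alice is a member of a sports team, so Alice is a [MASK].",
                 ["person", "book", "river"]))
```

Scoring a fixed set of single-token candidates, rather than decoding freely, keeps the comparison between correct and incorrect answers controlled; multi-token class names would need per-template handling that this sketch omits.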


Related research

Can Pretrained Language Models (Yet) Reason Deductively? (10/12/2022)
Acquiring factual knowledge with Pretrained Language Models (PLMs) has a...

Teaching Pretrained Models with Commonsense Reasoning: A Preliminary KB-Based Approach (09/20/2019)
Recently, pretrained language models (e.g., BERT) have achieved great su...

ALL Dolphins Are Intelligent and SOME Are Friendly: Probing BERT for Nouns' Semantic Properties and their Prototypicality (10/12/2021)
Large scale language models encode rich commonsense knowledge acquired t...

KMIR: A Benchmark for Evaluating Knowledge Memorization, Identification and Reasoning Abilities of Language Models (02/28/2022)
Previous works show the great potential of pre-trained language models (...

VER: Learning Natural Language Representations for Verbalizing Entities and Relations (11/20/2022)
Entities and relationships between entities are vital in the real world....

SPE: Symmetrical Prompt Enhancement for Fact Probing (11/14/2022)
Pretrained language models (PLMs) have been shown to accumulate factual ...

What Knowledge is Needed to Solve the RTE5 Textual Entailment Challenge? (06/10/2018)
This document gives a knowledge-oriented analysis of about 20 interestin...
