Learning Permutation-Invariant Embeddings for Description Logic Concepts

03/03/2023
by   Caglar Demir, et al.

Concept learning deals with learning description logic concepts from background knowledge and input examples. The goal is to learn a concept that covers all positive examples while covering no negative examples. This non-trivial task is often formulated as a search problem within an infinite quasi-ordered concept space. Although state-of-the-art models have been successfully applied to tackle this problem, their large-scale application has been severely hindered by excessive exploration that incurs impractical runtimes. Here, we propose a remedy for this limitation. We reformulate the learning problem as a multi-label classification problem and propose a neural embedding model (NERO) that learns permutation-invariant embeddings for sets of examples, tailored towards predicting the F_1 scores of pre-selected description logic concepts. By ranking these concepts in descending order of predicted score, a possible goal concept can be detected within a few retrieval operations, i.e., without excessive exploration. Importantly, the top-ranked concepts can be used to start the search procedure of state-of-the-art symbolic models in multiple advantageous regions of the concept space, rather than at the most general concept ⊤. Our experiments on 5 benchmark datasets with 770 learning problems firmly suggest that NERO significantly (p-value < 1%) outperforms the state-of-the-art models with respect to the F_1 score, the number of explored concepts, and the total runtime. We provide an open-source implementation of our approach.
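The abstract outlines NERO's core mechanism: encode the sets of positive and negative examples with a permutation-invariant function, then predict an F_1 score for every pre-selected concept in a single forward pass. The sketch below illustrates that idea in PyTorch; the class name, the mean-pooling encoder, and all dimensions are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class Nero(nn.Module):
    """Permutation-invariant set encoder that predicts F_1 scores for
    num_concepts pre-selected description logic concepts.
    Hypothetical sketch: names and dimensions are illustrative."""

    def __init__(self, num_individuals: int, embed_dim: int, num_concepts: int):
        super().__init__()
        # One embedding per individual in the knowledge base.
        self.emb = nn.Embedding(num_individuals, embed_dim)
        # Scoring head: maps pooled positive/negative set embeddings
        # to one predicted F_1 score per pre-selected concept.
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, num_concepts),
            nn.Sigmoid(),  # F_1 scores lie in [0, 1]
        )

    def forward(self, pos: torch.Tensor, neg: torch.Tensor) -> torch.Tensor:
        # Mean pooling over each example set makes the output invariant
        # to the order (permutation) of the input examples.
        p = self.emb(pos).mean(dim=1)
        n = self.emb(neg).mean(dim=1)
        return self.head(torch.cat([p, n], dim=-1))

# Usage: rank the pre-selected concepts for one learning problem.
model = Nero(num_individuals=1000, embed_dim=32, num_concepts=200)
pos = torch.randint(0, 1000, (1, 5))  # indices of positive examples
neg = torch.randint(0, 1000, (1, 5))  # indices of negative examples
scores = model(pos, neg)              # shape: (1, 200)
top10 = scores.argsort(dim=-1, descending=True)[:, :10]
```

Under this reading, sorting the predicted scores yields the ranked candidate concepts, and the top-ranked ones can seed a symbolic search in several promising regions of the concept space at once.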
