Distributional Inclusion Hypothesis and Quantifications: Probing Hypernymy in Functional Distributional Semantics

09/15/2023
by Chun Hei Lo, et al.

Functional Distributional Semantics (FDS) models the meaning of words with truth-conditional functions. This provides a natural representation for hypernymy, but no guarantee that hypernymy is learnt when FDS models are trained on a corpus. We demonstrate that FDS models learn hypernymy when a corpus strictly follows the Distributional Inclusion Hypothesis (DIH). We further introduce a training objective that allows FDS to handle simple universal quantifications, thus enabling hypernymy learning under the reverse of the DIH. Experimental results on both synthetic and real data sets confirm our hypotheses and the effectiveness of the proposed objective.
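To make the Distributional Inclusion Hypothesis concrete: the DIH states that if X is a hyponym of Y, then the contexts in which X occurs are a subset of the contexts in which Y occurs. The sketch below illustrates this set-inclusion check on toy data; the context sets are illustrative assumptions, not the paper's corpus.

```python
# Minimal sketch of the Distributional Inclusion Hypothesis (DIH):
# a hyponym's observed contexts should be contained in its hypernym's.
# The toy context sets below are hypothetical, for illustration only.

def satisfies_dih(hyponym_contexts, hypernym_contexts):
    """True iff every context of the hyponym also occurs with the hypernym."""
    return set(hyponym_contexts) <= set(hypernym_contexts)

# Hypothetical context words for "dog" (hyponym) and "animal" (hypernym).
dog_contexts = {"barks", "runs", "eats"}
animal_contexts = {"barks", "runs", "eats", "flies", "swims"}

print(satisfies_dih(dog_contexts, animal_contexts))   # True: DIH holds
print(satisfies_dih(animal_contexts, dog_contexts))   # False: reverse fails
```

Under a corpus that strictly satisfies this inclusion property, the paper shows FDS models learn the hyponym–hypernym direction; the reverse pattern is what motivates their quantification-based objective.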
