Pre-trained Language Models as Symbolic Reasoners over Knowledge?

06/18/2020
by Nora Kassner, et al.

How can pre-trained language models (PLMs) learn factual knowledge from the training set? We investigate the two most important mechanisms: reasoning and memorization. Prior work has attempted to quantify the number of facts PLMs learn, but we present, using synthetic data, the first study that establishes a causal relation between facts present in training and facts learned by the PLM. For reasoning, we show that PLMs learn to apply some symbolic reasoning rules, but they struggle in particular with two-hop reasoning. For memorization, we identify schema conformity (facts systematically supported by other facts) and frequency as key factors for its success.
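The "two-hop reasoning" the abstract refers to is the composition of two stored facts into a new one. As a rough illustration of that kind of synthetic setup (not the paper's actual construction), here is a minimal Python sketch; the entities, the `parent_of`/`grandparent_of` relations, and the composition rule are all invented for illustration. The idea is that a model trained only on the one-hop premise facts is then probed for the held-out two-hop conclusions.

```python
# Hypothetical sketch of a synthetic two-hop reasoning setup.
# Facts are (subject, relation, object) triples; a rule composes
# two relations into a third. All names below are invented.

import random

def make_facts(entities, relation, n):
    """Sample up to n random (subject, relation, object) triples."""
    return {(random.choice(entities), relation, random.choice(entities))
            for _ in range(n)}

entities = [f"e{i}" for i in range(50)]
parent_of = make_facts(entities, "parent_of", 100)

# Two-hop rule: parent_of(x, y) and parent_of(y, z) => grandparent_of(x, z).
# A model trained only on the one-hop facts would be queried for these
# conclusions to test whether it applies the rule.
grandparent_of = {
    (x, "grandparent_of", z)
    for (x, _, y1) in parent_of
    for (y2, _, z) in parent_of
    if y1 == y2
}

print(f"{len(parent_of)} training facts, "
      f"{len(grandparent_of)} two-hop conclusions held out for evaluation")
```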
