Using Language Models For Knowledge Acquisition in Natural Language Reasoning Problems

04/04/2023
by Fangzhen Lin, et al.

For a natural language problem that requires some non-trivial reasoning to solve, there are at least two ways to approach it using a large language model (LLM). One is to ask the model to solve the problem directly. The other is to use the model to extract the facts from the problem text and then use a theorem prover to solve it. In this note, we compare the two methods using ChatGPT and GPT-4 on a series of logic word puzzles, and conclude that the latter is the right approach.
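The contrast between the two pipelines can be sketched in code. The sketch below is illustrative only: `query_llm` is a hypothetical placeholder for whatever LLM API is used (ChatGPT / GPT-4 in the paper), and the theorem-prover step is stood in for by the Z3 SMT solver with a deliberately tiny fact vocabulary; the paper's actual prover and fact format may differ.

```python
# Minimal sketch of the two approaches, assuming a hypothetical query_llm()
# helper and using Z3 as a stand-in theorem prover.
from z3 import Solver, Bool, Implies, Not, sat


def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call (e.g. ChatGPT or GPT-4); not implemented here."""
    raise NotImplementedError


def solve_directly(puzzle: str) -> str:
    # Approach 1: ask the LLM to reason through the puzzle end to end.
    return query_llm(f"Solve this logic puzzle and state the answer:\n{puzzle}")


def solve_via_prover(puzzle: str) -> str:
    # Approach 2: use the LLM only for knowledge acquisition (fact extraction),
    # then hand the extracted facts to a prover. Here we pretend the LLM
    # returns facts in a tiny fixed vocabulary: "X implies Y" or "not X".
    facts = query_llm(
        "List the logical facts in this puzzle, one per line, "
        f"using only the forms 'X implies Y' or 'not X':\n{puzzle}"
    ).splitlines()

    solver = Solver()
    atoms = {}

    def atom(name):
        # Create (or reuse) a propositional variable for each named atom.
        return atoms.setdefault(name, Bool(name))

    for fact in facts:
        tokens = fact.strip().split()
        if len(tokens) == 3 and tokens[1] == "implies":
            solver.add(Implies(atom(tokens[0]), atom(tokens[2])))
        elif len(tokens) == 2 and tokens[0] == "not":
            solver.add(Not(atom(tokens[1])))

    if solver.check() == sat:
        return str(solver.model())
    return "unsatisfiable"
```

The division of labor in the second function is the point of the comparison: the LLM is trusted only to translate text into formal facts, while the correctness of the reasoning itself rests with the prover.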
