MoT: Pre-thinking and Recalling Enable ChatGPT to Self-Improve with Memory-of-Thoughts

05/09/2023
by Xiaonan Li et al.

Large Language Models (LLMs) have shown impressive abilities on a wide range of tasks. However, fundamentally improving them depends on high-quality datasets or computationally expensive fine-tuning. In contrast, humans can easily improve themselves through thinking and memory, without external resources. In this paper, we propose MoT, a framework that lets an LLM self-improve through a Memory of Thoughts, without annotated datasets or parameter updates. The framework consists of two stages: (1) before the test stage, the LLM pre-thinks on an unlabeled dataset and saves its high-confidence thoughts as external memory; (2) during inference, given a test question, the LLM recalls relevant memory to help itself reason and answer. Experimental results show that the proposed framework helps ChatGPT significantly improve its abilities in math reasoning, commonsense reasoning, factual reasoning, and natural language inference. Further analyses show that each component contributes critically to these improvements.
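The two stages described above can be sketched in code. This is a minimal illustration, not the paper's implementation: the `llm` callable is a hypothetical stand-in for a real model API, confidence is approximated by self-consistency (majority vote over sampled answers), and recall uses naive word overlap where a real system would likely use embedding similarity.

```python
from collections import Counter

def pre_think(llm, questions, n_samples=5, threshold=0.6):
    """Stage 1 (sketch): sample several chain-of-thought answers per
    unlabeled question; keep only high-confidence ones, where confidence
    is approximated by majority-vote agreement (self-consistency)."""
    memory = []
    for q in questions:
        # llm is assumed to return a (thought, answer) pair per call
        samples = [llm(q) for _ in range(n_samples)]
        votes = Counter(ans for _, ans in samples)
        best_answer, count = votes.most_common(1)[0]
        if count / n_samples >= threshold:  # high-confidence only
            thought = next(t for t, a in samples if a == best_answer)
            memory.append({"question": q, "thought": thought,
                           "answer": best_answer})
    return memory

def recall(memory, test_question, k=2):
    """Stage 2 (sketch): retrieve the k most relevant memory entries,
    scored here by word overlap with the test question."""
    def overlap(entry):
        return len(set(entry["question"].lower().split())
                   & set(test_question.lower().split()))
    return sorted(memory, key=overlap, reverse=True)[:k]

def answer_with_memory(llm, memory, test_question, k=2):
    """Prepend recalled thoughts as demonstrations, then query the LLM."""
    demos = recall(memory, test_question, k)
    prompt = "".join(
        f"Q: {m['question']}\nThought: {m['thought']}\nA: {m['answer']}\n\n"
        for m in demos
    )
    return llm(prompt + f"Q: {test_question}\n")
```

The key design point is that both stages require no labels and no parameter updates: the filter in `pre_think` substitutes answer agreement for ground-truth supervision, and `recall` turns the saved thoughts into in-context demonstrations at test time.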


Related research:
- 10/20/2022 — Large Language Models Can Self-Improve
- 09/16/2021 — Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings
- 12/31/2022 — Rethinking with Retrieval: Faithful Large Language Model Inference
- 04/22/2023 — Dialectical language model evaluation: An initial appraisal of the commonsense spatial reasoning abilities of LLMs
- 12/06/2021 — Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention
- 05/01/2023 — Learning to Reason and Memorize with Self-Notes
