Federated Prompting and Chain-of-Thought Reasoning for Improving LLMs Answering

04/27/2023
by Xiangyang Liu, et al.

We investigate how to enhance the accuracy of answers to frequently asked questions posed by distributed users of cloud-based Large Language Models (LLMs). Our study focuses on a typical situation in which users ask similar queries that involve identical mathematical reasoning steps and problem-solving procedures. Because zero-shot prompting of LLMs with standalone questions yields unsatisfactory accuracy, we propose to improve answers to distributed synonymous questions using Self-Consistency (SC) and Chain-of-Thought (CoT) techniques. Specifically, we first retrieve synonymous questions from a crowd-sourced database to create a federated question pool. We call federated synonymous questions with the same or different parameters SP-questions or DP-questions, respectively, and refer to the corresponding methods as Fed-SP-SC and Fed-DP-CoT. Both methods generate significantly more accurate answers for all user queries without requiring sophisticated model tuning. Through extensive experiments, we demonstrate that the proposed methods substantially improve answer accuracy by fully exploiting the synonymous nature of the questions and the consistency of the answers.
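The abstract describes the pipeline only at a high level. The following minimal Python sketch shows one plausible reading of the two methods: Fed-SP-SC takes a majority (self-consistency) vote over the LLM's answers to the pooled same-parameter questions, while Fed-DP-CoT reuses CoT answers to previously answered different-parameter questions as pseudo few-shot exemplars for a new query. Every name here (fed_sp_sc, fed_dp_cot_prompt, mock_llm, the answer-extraction heuristic) is an illustrative assumption, not the authors' released code.

```python
from collections import Counter
from typing import Callable, Iterable


def fed_sp_sc(
    questions: Iterable[str],
    query_llm: Callable[[str], str],
    extract_answer: Callable[[str], str],
) -> str:
    """Fed-SP-SC sketch: query the LLM with every federated SP-question
    (same parameters, different phrasings) and keep the answer that the
    majority of completions agree on (self-consistency)."""
    answers = [extract_answer(query_llm(q)) for q in questions]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner


def fed_dp_cot_prompt(answered: list[tuple[str, str]], new_question: str) -> str:
    """Fed-DP-CoT sketch: treat CoT answers to already-answered
    DP-questions (same template, different parameters) as pseudo
    few-shot exemplars prepended to the new query."""
    exemplars = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in answered)
    return f"{exemplars}\n\nQ: {new_question}\nA:"


if __name__ == "__main__":
    # A mock LLM stands in for a cloud API so the sketch runs offline.
    def mock_llm(prompt: str) -> str:
        return "So the answer is 42."

    sp_questions = [
        "Tom has 40 apples and buys 2 more. How many does he have?",
        "If Tom holds 40 apples and gets 2 extra, what is his total?",
        "Tom's 40 apples plus 2 new ones make how many apples?",
    ]
    # Crude answer extraction: take the last token and strip punctuation.
    extract = lambda completion: completion.rstrip(".").split()[-1]
    print(fed_sp_sc(sp_questions, mock_llm, extract))  # -> 42
```

In this reading, Fed-SP-SC needs no labels at all because agreement across synonymous questions substitutes for ground truth, while Fed-DP-CoT trades that guarantee for broader coverage by treating earlier CoT outputs as (possibly noisy) pseudo-labels.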

