Ladder-of-Thought: Using Knowledge as Steps to Elevate Stance Detection

08/31/2023
by Kairui Hu, et al.

Stance detection aims to identify the attitude expressed in a document towards a given target. Techniques such as Chain-of-Thought (CoT) prompting have advanced this task by enhancing a model's reasoning capabilities through the derivation of intermediate rationales. However, CoT relies primarily on a model's pre-trained internal knowledge during reasoning, neglecting valuable external information that is previously unknown to the model. This omission, especially within the unsupervised reasoning process, can affect the model's overall performance. Moreover, while CoT enhances Large Language Models (LLMs), smaller LMs, though operationally efficient, struggle to deliver nuanced reasoning. In response to these identified gaps, we introduce the Ladder-of-Thought (LoT) for the stance detection task. Constructed through a dual-phase Progressive Optimization Framework, LoT directs the small LM to assimilate high-quality external knowledge, refining the intermediate rationales it produces. These bolstered rationales subsequently serve as the foundation for more precise predictions, akin to how a ladder facilitates reaching elevated goals. LoT achieves a balance between efficiency and performance. Our empirical evaluations underscore LoT's efficacy, marking a 16% improvement on the stance detection task.
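The dual-phase framework described above can be illustrated with a minimal sketch. The function names, prompts, and retrieval step below are hypothetical placeholders, not the authors' released code: phase one grounds the small LM's intermediate rationale in retrieved external knowledge, and phase two uses that refined rationale as the "ladder step" for the final stance label.

```python
# Minimal sketch of the two-phase Ladder-of-Thought idea (hypothetical names,
# not the authors' implementation). `small_lm` is any callable mapping a prompt
# string to generated text, e.g. a fine-tuned FLAN-T5-style model.
from typing import Callable, List


def retrieve_knowledge(document: str, target: str) -> List[str]:
    """Placeholder for an external-knowledge retriever (e.g. a Wikipedia search).
    Returns background passages about the target that the LM may not know."""
    return [f"Background facts about {target} (swap in a real retriever here)."]


def phase1_generate_rationale(small_lm: Callable[[str], str],
                              document: str, target: str) -> str:
    """Phase 1: ground the intermediate rationale in retrieved knowledge.
    In the paper this phase also involves training the small LM on high-quality
    knowledge-augmented rationales; only inference is sketched here."""
    knowledge = "\n".join(retrieve_knowledge(document, target))
    prompt = (
        f"Background knowledge:\n{knowledge}\n\n"
        f"Text: {document}\n"
        f"Target: {target}\n"
        "Explain, step by step, what attitude the text expresses toward the target."
    )
    return small_lm(prompt)


def phase2_predict_stance(small_lm: Callable[[str], str],
                          document: str, target: str, rationale: str) -> str:
    """Phase 2: condition the final prediction on the refined rationale."""
    prompt = (
        f"Text: {document}\n"
        f"Target: {target}\n"
        f"Rationale: {rationale}\n"
        "Answer with one label: FAVOR, AGAINST, or NONE."
    )
    return small_lm(prompt).strip()


def ladder_of_thought(small_lm: Callable[[str], str],
                      document: str, target: str) -> str:
    rationale = phase1_generate_rationale(small_lm, document, target)
    return phase2_predict_stance(small_lm, document, target, rationale)


if __name__ == "__main__":
    # Dummy LM for demonstration only; substitute a fine-tuned small LM.
    echo_lm = lambda prompt: "FAVOR"
    print(ladder_of_thought(echo_lm, "Vaccines have saved millions of lives.", "vaccination"))
```

The key design choice mirrored here is that the rationale is produced with external knowledge in the prompt rather than from the model's internal knowledge alone, which is what distinguishes the sketch from plain CoT prompting.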


Related research

05/26/2023 · MultiTool-CoT: GPT-3 Can Use Multiple External Tools with Chain of Thought Prompting
Large language models (LLMs) have achieved impressive performance on var...

08/18/2023 · Enhancing Reasoning Capabilities of Large Language Models: A Graph-Based Verification Approach
Large Language Models (LLMs) have showcased impressive reasoning capabil...

05/26/2023 · Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models
With the widespread use of large language models (LLMs) in NLP tasks, re...

04/04/2023 · REFINER: Reasoning Feedback on Intermediate Representations
Language models (LMs) have recently shown remarkable performance on reas...

08/25/2023 · Knowledge-Driven CoT: Exploring Faithful Reasoning in LLMs for Knowledge-intensive Question Answering
Equipped with Chain-of-Thought (CoT), Large language models (LLMs) have ...

09/14/2023 · Tree of Uncertain Thoughts Reasoning for Large Language Models
While the recently introduced Tree of Thoughts (ToT) has heralded advanc...

08/11/2023 · Thinking Like an Expert: Multimodal Hypergraph-of-Thought (HoT) Reasoning to boost Foundation Modals
Reasoning ability is one of the most crucial capabilities of a foundatio...
