PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine

08/23/2023
by Chenrui Zhang, et al.

As an effective tool for eliciting the power of Large Language Models (LLMs), prompting has recently demonstrated unprecedented abilities across a variety of complex tasks. To further improve performance, prompt ensembling has attracted substantial interest as a way to tackle the hallucination and instability of LLMs. However, existing methods usually adopt a two-stage paradigm, which requires a pre-prepared set of prompts built with substantial manual effort and cannot perform directed optimization for different weak learners. In this paper, we propose a simple, universal, and automatic method named PREFER (Prompt Ensemble learning via Feedback-Reflect-Refine) to address the stated limitations. Specifically, given that weak learners are supposed to focus on hard examples during boosting, PREFER builds a feedback mechanism for reflecting on the inadequacies of existing weak learners. Based on this feedback, the LLM is required to automatically synthesize new prompts for iterative refinement. Moreover, to enhance the stability of prompt evaluation, we propose a novel prompt bagging method involving forward and backward thinking, which is superior to majority voting and benefits both feedback and weight calculation in boosting. Extensive experiments demonstrate that PREFER achieves state-of-the-art performance on multiple types of tasks by a significant margin. We have made our code publicly available.
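The boosting loop described above can be sketched in a few lines. This is a minimal illustration, not PREFER's actual implementation: the `llm` stub, the AdaBoost-style weight update, and the reflection prompt wording are all assumptions made for the sake of a runnable example.

```python
import math

def llm(prompt: str) -> str:
    # Stand-in for a real LLM call; returns a fixed answer for this demo.
    return "yes"

def evaluate(prompt: str, examples, weights):
    """Weighted error of one prompt (weak learner) over the dataset."""
    return sum(w for (x, y), w in zip(examples, weights)
               if llm(prompt + "\n" + x).strip() != y)

def boost(examples, rounds=3):
    n = len(examples)
    weights = [1.0 / n] * n
    ensemble = []                    # (prompt, alpha) pairs
    prompt = "Answer yes or no."     # illustrative seed prompt
    for _ in range(rounds):
        err = min(max(evaluate(prompt, examples, weights), 1e-6), 1 - 1e-6)
        alpha = 0.5 * math.log((1 - err) / err)   # AdaBoost-style learner weight
        ensemble.append((prompt, alpha))
        # Feedback: up-weight the examples the current prompt got wrong.
        for i, (x, y) in enumerate(examples):
            wrong = llm(prompt + "\n" + x).strip() != y
            weights[i] *= math.exp(alpha if wrong else -alpha)
        total = sum(weights)
        weights = [w / total for w in weights]
        # Reflect and refine: ask the LLM to synthesize a new prompt
        # targeting the currently hard (up-weighted) examples.
        hard = [x for (x, y), w in zip(examples, weights) if w > 1.0 / n]
        prompt = llm("Reflect on these hard examples and write an improved "
                     "prompt:\n" + "\n".join(hard))
    return ensemble
```

The final ensemble predicts by alpha-weighted voting over its prompts; the paper additionally stabilizes each prompt's evaluation via bagging with forward and backward thinking, which this sketch omits.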

Related research

Language models are weak learners (06/25/2023)
A central notion in practical and theoretical machine learning is that o...

Online Multiclass Boosting with Bandit Feedback (10/11/2018)
We present online boosting algorithms for multiclass classification with...

Timely Feedback in Unstructured Cybersecurity Exercises (12/26/2017)
Cyber defence exercises are intensive, hands-on learning events for team...

Few-shot Classification via Ensemble Learning with Multi-Order Statistics (04/30/2023)
Transfer learning has been widely adopted for few-shot classification. R...

Residual Likelihood Forests (11/04/2020)
This paper presents a novel ensemble learning approach called Residual L...

Shepherd Pre-trained Language Models to Develop a Train of Thought: An Iterative Prompting Approach (03/16/2022)
While Pre-trained Language Models (PLMs) internalize a great amount of w...
