The False Promise of Imitating Proprietary LLMs

05/25/2023
by Arnav Gudibande, et al.

An emerging method for cheaply improving a weaker language model is to finetune it on outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self-Instruct, and others). This approach aims to imitate the proprietary model's capabilities using a weaker open-source model. In this work, we critically analyze this approach. We first finetune a series of LMs that imitate ChatGPT, varying the base model size (1.5B–13B), the data sources, and the amount of imitation data (0.3M–150M tokens). We then evaluate the models using crowd raters and canonical NLP benchmarks. Initially, we were surprised by the output quality of our imitation models: they appear far better at following instructions, and crowd workers rate their outputs as competitive with ChatGPT's. However, when conducting more targeted automatic evaluations, we find that imitation models close little to none of the gap from the base LM to ChatGPT on tasks that are not heavily supported in the imitation data. We show that these performance discrepancies may slip past human raters because imitation models are adept at mimicking ChatGPT's style but not its factuality. Overall, we conclude that model imitation is a false promise: there exists a substantial capabilities gap between open and closed LMs that, with current methods, can be bridged only with an unwieldy amount of imitation data or with more capable base LMs. In turn, we argue that the highest-leverage action for improving open-source models is to tackle the difficult challenge of developing better base LMs, rather than taking the shortcut of imitating proprietary systems.
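The recipe the abstract describes is ordinary supervised finetuning on (instruction, response) pairs collected from the stronger model. The sketch below shows that recipe using the Hugging Face transformers Trainer; it is an illustration under our own assumptions, not the authors' pipeline. In particular, the "gpt2" checkpoint and the tiny in-memory imitation_data list are hypothetical stand-ins for the paper's 1.5B–13B base models and its 0.3M–150M-token corpora of ChatGPT outputs.

```python
# Minimal sketch of imitation finetuning, not the authors' exact pipeline.
# Hypothetical stand-ins (not from the paper): the "gpt2" checkpoint in
# place of the 1.5B-13B base models, and a tiny in-memory imitation_data
# list in place of the 0.3M-150M-token corpora of ChatGPT outputs.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# (instruction, response) pairs harvested from the stronger model.
imitation_data = [
    {"instruction": "Explain photosynthesis in one sentence.",
     "response": "Photosynthesis is the process by which plants turn "
                 "light, water, and carbon dioxide into sugar and oxygen."},
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

def format_and_tokenize(example):
    # Concatenate prompt and response into one causal-LM training sequence,
    # so the model learns to continue an instruction with an imitation reply.
    text = (f"Instruction: {example['instruction']}\n"
            f"Response: {example['response']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)

train_dataset = Dataset.from_list(imitation_data).map(
    format_and_tokenize, remove_columns=["instruction", "response"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="imitation-model",
        num_train_epochs=3,
        per_device_train_batch_size=1,
        learning_rate=2e-5,
    ),
    train_dataset=train_dataset,
    # mlm=False yields standard next-token (causal) language-modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Scaling this sketch toward the paper's regime is largely a matter of swapping in a larger base checkpoint and a real imitation corpus; the abstract's central finding is that doing so improves the model's style far more than its factuality.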
