Multi-Method Self-Training: Improving Code Generation With Text, And Vice Versa

07/20/2023
by   Shriyash K. Upadhyay, et al.

Large Language Models have many methods for solving the same problem. This introduces novel strengths (different methods may work well for different problems) and weaknesses (it may be difficult for users to know which method to use). In this paper, we introduce Multi-Method Self-Training (MMST), in which one method is trained on the filtered outputs of another, allowing us to augment the strengths and ameliorate the weaknesses of each method. Using a 176B parameter model trained on both language and code, we show that MMST can 1) improve the less performant method (up to 30%), 2) improve the more performant method (up to 32.2%), making the model more performant, and 3) improve the performance of related but distinct tasks (up to 10.3%). We then conduct ablation analyses to explore why MMST works. We show that MMST generates more data than traditional self-training, but the improvement in performance is driven by the use of multiple methods. We also analyze prompt-engineering and anti-correlated performance between methods as means of making MMST more effective. We hope the evidence from our paper motivates machine learning researchers to explore ways in which advances in language models allow for new forms of training.

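Concretely, the MMST loop described in the abstract amounts to: sample candidate solutions to a set of tasks with one method, keep only the candidates that pass an automatic correctness check, fine-tune the other method on the surviving (task, solution) pairs, and repeat in the opposite direction. The following is a minimal sketch of that loop in Python; the function names (generate_with_method, is_correct, fine_tune) and the default "text"/"code" method pair are hypothetical placeholders for whatever model and evaluation harness is actually used, not the paper's training code.

```python
# Minimal sketch of Multi-Method Self-Training (MMST), assuming two methods
# for solving the same tasks (e.g., natural-language reasoning vs. code
# generation) and an automatic correctness filter (e.g., unit tests or
# exact-match answers). All callables below are hypothetical placeholders.

from typing import Callable, List, Tuple

Task = str
Solution = str


def multi_method_self_train(
    tasks: List[Task],
    generate_with_method: Callable[[str, Task], List[Solution]],
    is_correct: Callable[[Task, Solution], bool],
    fine_tune: Callable[[str, List[Tuple[Task, Solution]]], None],
    methods: Tuple[str, str] = ("text", "code"),
    samples_per_task: int = 4,
) -> None:
    """Train each method on the filtered (correct) outputs of the other."""
    for source, target in [(methods[0], methods[1]), (methods[1], methods[0])]:
        # 1. Sample candidate solutions for every task with the source method.
        candidates = [
            (task, sol)
            for task in tasks
            for sol in generate_with_method(source, task)[:samples_per_task]
        ]
        # 2. Keep only the solutions that pass the correctness filter.
        filtered = [(task, sol) for task, sol in candidates if is_correct(task, sol)]
        # 3. Fine-tune the *other* method on the filtered data (in practice the
        #    solved tasks would be rendered in the target method's format
        #    before fine-tuning).
        fine_tune(target, filtered)
```

The point of the sketch is the cross-method transfer in step 3: unlike ordinary self-training, the data generated by one method is used to update the other, which is what lets each method inherit the other's strengths.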
