Inverse scaling can become U-shaped

11/03/2022
by Jason Wei et al.

Although scaling language models improves performance on a range of tasks, there appear to be some scenarios where scaling hurts performance. For instance, the Inverse Scaling Prize Round 1 identified four "inverse scaling" tasks, for which performance gets worse as models get larger. These tasks were evaluated on models of up to 280B parameters, trained with up to 500 zettaFLOPs of compute. This paper takes a closer look at these four tasks. We evaluate models of up to 540B parameters, trained on five times more compute than the models evaluated in the Inverse Scaling Prize. With this increased range of model sizes and training compute, three of the four tasks exhibit what we call "U-shaped scaling": performance decreases up to a certain model size and then increases again up to the largest model evaluated. One hypothesis is that U-shaped scaling occurs when a task comprises a "true task" and a "distractor task". Medium-sized models can do the distractor task, which hurts performance, while only large-enough models can ignore the distractor task and do the true task. The existence of U-shaped scaling implies that inverse scaling may not hold for larger models. We also evaluate the inverse scaling tasks using chain-of-thought (CoT) prompting, in addition to basic prompting without CoT. With CoT prompting, all four tasks show either U-shaped scaling or positive scaling, with perfect solve rates achieved on two tasks and several sub-tasks. This suggests that the term "inverse scaling task" is under-specified: a given task may be inverse scaling for one prompt but positive or U-shaped scaling for a different prompt.
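To make the distractor-task hypothesis and the two prompting regimes concrete, below is a minimal sketch (not taken from the paper) of how a basic prompt and a zero-shot chain-of-thought prompt could be built for a quote-repetition-style task, where a slightly altered proverb serves as the distractor. The task wording, the altered quotation, and the "Let's think step by step." cue are illustrative assumptions rather than the paper's exact prompts.

```python
# Illustrative sketch only: contrasts basic prompting with chain-of-thought
# (CoT) prompting for a distractor-style task in the spirit of the Inverse
# Scaling Prize's quote-repetition task. The wording below is an assumption
# for illustration, not the paper's exact prompt.

# The "true task" is to repeat the sentence verbatim; the well-known proverb
# it almost matches acts as the "distractor task" that medium-sized models
# tend to complete instead.
QUESTION = (
    "Repeat my sentence back to me exactly.\n"
    "Sentence: All that glisters is not gold leaf."
)


def basic_prompt(question: str) -> str:
    """Basic prompting: ask for the answer directly."""
    return question + "\nRepetition:"


def cot_prompt(question: str) -> str:
    """Zero-shot CoT prompting: add a reasoning cue before the answer slot."""
    return question + "\nLet's think step by step.\nRepetition:"


if __name__ == "__main__":
    print("--- basic ---")
    print(basic_prompt(QUESTION))
    print("\n--- chain-of-thought ---")
    print(cot_prompt(QUESTION))
```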

Related research

05/27/2023
Beyond Positive Scaling: How Negation Impacts Scaling Trends of Language Models
Language models have been shown to exhibit positive scaling, where perfo...

06/15/2023
Inverse Scaling: When Bigger Isn't Better
Work on scaling laws has found that large language models (LMs) show pre...

05/24/2023
Emergent inabilities? Inverse scaling over the course of pretraining
Does inverse scaling only occur as a function of model parameter size, o...

06/12/2023
Probing Quantifier Comprehension in Large Language Models
With their increasing size, large language models (LLMs) are becoming in...

10/20/2022
Transcending Scaling Laws with 0.1% Extra Compute
Scaling language models improves performance but comes with significant ...

12/20/2022
Task Ambiguity in Humans and Language Models
Language models have recently achieved strong performance across a wide ...

04/07/2021
Scaling Scaling Laws with Board Games
The largest experiments in machine learning now require resources far be...
