PaLM: Scaling Language Modeling with Pathways

04/05/2022 ∙ by Aakanksha Chowdhery, et al.

Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated Transformer language model, which we call the Pathways Language Model (PaLM). We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state of the art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis of bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and potential mitigation strategies.
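To make the few-shot setup described above concrete, the sketch below builds a prompt from a handful of worked examples and leaves the final answer for the model to complete. This is a minimal sketch, not the paper's actual evaluation code: the build_few_shot_prompt helper and the generate() call are hypothetical stand-ins for an LLM inference API, and the exemplar is written in the step-by-step reasoning style the paper uses for multi-step reasoning tasks.

# Minimal sketch of few-shot prompting: the model adapts from a few
# in-context exemplars rather than task-specific finetuning.
# `generate` is a hypothetical stand-in for an LLM inference API.

def build_few_shot_prompt(exemplars, query, instruction=""):
    """Concatenate solved (question, answer) examples ahead of the query."""
    parts = [instruction] if instruction else []
    for question, answer in exemplars:
        parts.append(f"Q: {question}\nA: {answer}")
    parts.append(f"Q: {query}\nA:")  # the model completes this answer
    return "\n\n".join(parts)

exemplars = [
    ("Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls "
     "each. How many tennis balls does he have now?",
     "Roger started with 5 balls. 2 cans of 3 is 6 balls. 5 + 6 = 11. "
     "The answer is 11."),
]
prompt = build_few_shot_prompt(
    exemplars,
    "A juggler can juggle 16 balls. Half of the balls are golf balls. "
    "How many golf balls are there?",
)
print(prompt)
# answer = generate(model="palm-540b", prompt=prompt)  # hypothetical API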
